Unbiased Analysis of Today's Healthcare Issues

Bootstrapping

Written By: Jason Shafrin - May 24, 2007

One of the biggest advances in statistical modeling in the last 30 years has been the use of the bootstrap. For those interested in learning about the bootstrap in more detail, a good place to start is an article by UCSD math professor Dimitris N. Politis, which I will summarize here. For more detailed information, one may want to look at An Introduction to the Bootstrap by Efron and Tibshirani.

Set-up

Suppose we have N observations of a random variable X. We can group these as a vector, so that X=(X1,…,XN), where each Xi is iid with distribution F. If we want to estimate a parameter θ(F) from the data, we can use a statistic T(X) as an approximation. If we assume that F is Normal, we can use traditional statistics to estimate T(X) as well as the confidence interval around θ(F). If we do not know the distribution of F (which a researcher probably does not in reality), then classical statistical theory may be less reliable and a bootstrap methodology may be more robust. The bootstrap methodology allows the researcher to better estimate F, especially if there is significant skewness in the F distribution.

The bootstrap procedure creates a new sample by randomly drawing observations from X with replacement until we have a new vector with N observations. We repeat this B times to create our bootstrap data set. Let's look at an example.
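As a minimal sketch of this resampling step in Python (the data values and the `bootstrap_samples` helper below are illustrative, not from the article):

```python
import random

def bootstrap_samples(data, B, seed=None):
    """Draw B bootstrap samples, each built by sampling the original
    data with replacement until it has the same length as the data."""
    rng = random.Random(seed)
    n = len(data)
    return [[rng.choice(data) for _ in range(n)] for _ in range(B)]

# Hypothetical data for illustration
data = [22, 18, 14, 35, 24]
samples = bootstrap_samples(data, B=10, seed=1)
```

Each resample can only contain values already present in the original data, which is why individual observations often repeat within a bootstrap sample.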

Example

Pretend we have data on how many push-ups I have completed each day over the past 15 days. I want to estimate the median number of push-ups I do each day. In this sample, N=15, and since we will create ten bootstrap samples, B=10.

Obs.     Data   B1   B2   B3   B4   B5   B6   B7   B8   B9   B10
1          22   18   25   29   21   21   22   18   31   24   14
2          18   24   14   25   14   21   19   35   25   21   19
3          14   25   24   31   25   21   21   14   21   30   21
4          35   19   18   26   19   25   19   31   24   24   14
5          22   29   29   31   26   30   26   21   22   19   26
6          24   31   24   31   22   30   19   30   31   26   19
7          26   25   19   22   21   25   25   26   22   30   18
8          29   30   22   14   22   22   19   18   31   35   29
9          19   31   21   14   14   21   14   26   18   22   18
10         31   25   24   35   29   22   19   14   31   26   25
11         30   22   25   22   29   14   19   35   19   22   22
12         19   22   24   19   18   35   29   26   21   19   35
13         22   22   22   24   25   24   30   19   35   25   29
14         21   31   25   22   25   14   14   22   31   19   18
15         25   22   19   30   22   35   24   19   19   31   26
Mean     23.8 25.1 22.3 25.0 22.1 24.0 21.3 23.6 25.4 24.9 22.2
Median     22   25   24   25   22   22   19   22   24   24   21

The median of the actual data we have is 22. But we can also estimate the median using the bootstrap methodology. We first randomly choose one of the data points and use it as the first data point of B1 (bootstrap sample number 1); we then resample with replacement and use another draw as the 2nd observation of sample B1, and so on. We can see that data points often repeat. In B1, for instance, the value of observation X10 (31) appears three times. The median varies across the 10 bootstrap samples, but the average value of the median across the bootstrap samples is 22.8.
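This calculation can be reproduced with the push-up data above; a sketch follows, though the resampled medians will differ from the table's, since the draws are random:

```python
import random
import statistics

# The "Data" column from the table above
data = [22, 18, 14, 35, 22, 24, 26, 29, 19, 31, 30, 19, 22, 21, 25]
B = 10
rng = random.Random(0)

# Median of each of B bootstrap resamples of the data
boot_medians = [
    statistics.median([rng.choice(data) for _ in range(len(data))])
    for _ in range(B)
]

sample_median = statistics.median(data)   # median of the actual data: 22
boot_estimate = sum(boot_medians) / B     # bootstrap estimate of the median
```

With only B=10 resamples the bootstrap estimate is noisy; in practice one would use hundreds or thousands of resamples.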

We can also calculate the bootstrap variance (3.36) and standard deviation (1.83). These are calculated according to the formulas:

  • Variance = (1/B) Σᵢ T(X*ᵢ)² – [(1/B) Σᵢ T(X*ᵢ)]²
  • S.D. = (Variance)^(1/2)

Here, T(X*ᵢ) is the median of bootstrap sample i; since there are 10 bootstrap samples, i = 1,…,10. To calculate the variance, one simply averages the squared medians over the 10 bootstrap samples and then subtracts the square of the average median of the 10 samples.
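Plugging the ten medians from the table into these formulas reproduces the reported values:

```python
import math

# Medians of the 10 bootstrap samples, taken from the table above
medians = [25, 24, 25, 22, 22, 19, 22, 24, 24, 21]
B = len(medians)

mean_of_squares = sum(m ** 2 for m in medians) / B   # (1/B) * sum of T(X*_i)^2
square_of_mean = (sum(medians) / B) ** 2             # [(1/B) * sum of T(X*_i)]^2
variance = mean_of_squares - square_of_mean          # 3.36
sd = math.sqrt(variance)                             # ~1.83
```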
