Bootstrapping populations

Bootstrapping populations in statistics and mathematics starts with a sample $\{x_1,\ldots,x_m\}$ observed from a random variable.

When X has a given distribution law with a set of non-fixed parameters, which we denote by a vector $\boldsymbol{\theta}$, a parametric inference problem consists of computing suitable values – call them estimates – of these parameters precisely on the basis of the sample. An estimate is suitable if replacing the unknown parameter with it does not cause major damage in subsequent computations. In Algorithmic inference, the suitability of an estimate reads in terms of compatibility with the observed sample.

In this framework, resampling methods are aimed at generating a set of candidate values to replace the unknown parameters, values that we read as compatible replicas of them. They represent a population of specifications of a random vector $\boldsymbol{\Theta}$[1] compatible with an observed sample, where the compatibility of its values has the properties of a probability distribution. By plugging parameters into the expression of the questioned distribution law, we bootstrap entire populations of random variables compatible with the observed sample.

The rationale of the algorithms computing the replicas, which we denote population bootstrap procedures, is to identify a set of statistics $\{s_1,\ldots,s_k\}$ exhibiting specific properties, denoting a well behavior, w.r.t. the unknown parameters. By definition, the statistics are expressed as functions of the observed values $\{x_1,\ldots,x_m\}$. In turn, each $x_i$ may be expressed as a function of the unknown parameters and of a random seed specification $z_i$ through the sampling mechanism $(g_{\boldsymbol{\theta}}, Z)$. Then, by plugging the second expression into the former, we obtain expressions of the $s_j$ as functions of seeds and parameters – the master equations – that we invert to find values of the latter as a function of: i) the statistics, whose values in turn are fixed at the observed ones; and ii) the seeds, which are random according to their own distribution. Hence from a set of seed samples we obtain a set of parameter replicas.

Method

Given a sample $\boldsymbol{x} = \{x_1,\ldots,x_m\}$ of a random variable X and a sampling mechanism $(g_{\boldsymbol{\theta}}, Z)$ for X, the realization $\boldsymbol{x}$ is given by $\boldsymbol{x} = \{g_{\boldsymbol{\theta}}(z_1),\ldots,g_{\boldsymbol{\theta}}(z_m)\}$, with $\boldsymbol{\theta} = (\theta_1,\ldots,\theta_k)$. Focusing on well-behaved statistics,

$s_1 = h_1(x_1,\ldots,x_m),$
$\vdots$
$s_k = h_k(x_1,\ldots,x_m),$

for their parameters, the master equations read

$s_1 = h_1(g_{\boldsymbol{\theta}}(z_1),\ldots,g_{\boldsymbol{\theta}}(z_m)) = \rho_1(\boldsymbol{\theta}; z_1,\ldots,z_m)$
$\vdots$    (1)
$s_k = h_k(g_{\boldsymbol{\theta}}(z_1),\ldots,g_{\boldsymbol{\theta}}(z_m)) = \rho_k(\boldsymbol{\theta}; z_1,\ldots,z_m).$

For each sample seed $\{z_1,\ldots,z_m\}$ a vector of parameters $\boldsymbol{\theta}$ is obtained from the solution of the above system with the $s_i$ fixed to the observed values. Having computed a huge set of compatible vectors, say N, the empirical marginal distribution of $\Theta_j$ is obtained by:

$\widehat{F}_{\Theta_j}(\theta) = \sum_{i=1}^{N} \frac{1}{N}\, I_{(-\infty,\theta]}(\breve{\theta}_{j,i})$    (2)

where $\breve{\theta}_{j,i}$ is the j-th component of the generic solution of (1) and where $I_{(-\infty,\theta]}(\breve{\theta}_{j,i})$ is the indicator function of $\breve{\theta}_{j,i}$ in the interval $(-\infty,\theta]$. Some indeterminacies remain if X is discrete, and these will be considered shortly. The whole procedure may be summed up in the form of the following algorithm, where the index $\boldsymbol{\Theta}$ of $\boldsymbol{s}_{\boldsymbol{\Theta}}$ denotes the parameter vector from which the statistics vector is derived.

Algorithm

Generating parameter populations through a bootstrap
Given a sample $\{x_1,\ldots,x_m\}$ from a random variable with parameter vector $\boldsymbol{\theta}$ unknown,
  1. Identify a vector of well-behaved statistics $\boldsymbol{S}$ for $\boldsymbol{\Theta}$;
  2. compute a specification $\boldsymbol{s}_{\boldsymbol{\Theta}}$ of $\boldsymbol{S}$ from the sample;
  3. repeat for a satisfactory number N of iterations:
    • draw a sample seed $\breve{\boldsymbol{z}}_i$ of size m from the seed random variable;
    • get $\breve{\boldsymbol{\theta}}_i = \mathrm{Inv}(\boldsymbol{s}, \boldsymbol{z}_i)$ as a solution of (1) in $\boldsymbol{\theta}$ with $\boldsymbol{s} = \boldsymbol{s}_{\boldsymbol{\Theta}}$ and $\boldsymbol{z}_i = \{\breve{z}_1,\ldots,\breve{z}_m\}$;
    • add $\breve{\boldsymbol{\theta}}_i$ to the $\boldsymbol{\Theta}$ population.
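As a minimal sketch of this loop in Python (assuming NumPy and uniform seeds; the solver of the master equations (1) is problem-specific, so here it is passed in as a function inv, a name introduced only for this illustration):

    import numpy as np

    def bootstrap_population(s_obs, inv, m, n_replicas=5000, seed=None):
        """Generate a population of parameter replicas.

        s_obs      -- observed specification of the well-behaved statistics
        inv        -- solver of the master equations (1): maps (s_obs, seeds)
                      to one compatible parameter vector
        m          -- size of the observed sample, hence of each seed sample
        n_replicas -- the number N of iterations
        """
        rng = np.random.default_rng(seed)
        # Step 3: draw one seed sample of size m per iteration and invert
        # it into a parameter replica.
        return np.array([inv(s_obs, rng.uniform(size=m))
                         for _ in range(n_replicas)])

    def empirical_cdf(replicas, theta):
        """Formula (2): the fraction of replicas not exceeding theta."""
        return np.mean(replicas <= theta)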
[Figure: Cumulative distribution function of the parameter $\Lambda$ of an exponential random variable when the statistic $s_\Lambda = 6.36$.]
[Figure: Cumulative distribution function of the parameter $A$ of a uniform continuous random variable when the statistic $s_A = 9.91$.]

You may easily see from a table of sufficient statistics that we obtain the curve in the picture on the left by computing the empirical distribution (2) on the population obtained through the above algorithm when: i) X is an exponential random variable, ii) $s_\Lambda = \sum_{j=1}^m x_j$, and iii) $\mathrm{Inv}(s_\Lambda, \boldsymbol{u}_i) = \sum_{j=1}^m (-\log u_{ij})/s_\Lambda$;

and the curve in the picture on the right when: i) X is a uniform random variable on $[0,a]$, ii) $s_A = \max_{j=1,\ldots,m} x_j$, and iii) $\mathrm{Inv}(s_A, \boldsymbol{u}_i) = s_A / \max_{j=1,\ldots,m}\{u_{ij}\}$.
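For instance, reusing bootstrap_population and empirical_cdf from the sketch above, the two populations may be regenerated as follows (the sample size m = 10 is an assumption of this illustration; the captions fix only the statistic values):

    import numpy as np

    # Exponential: lambda = sum_j(-log u_ij) / s_Lambda, per iii) above.
    lam_pop = bootstrap_population(
        s_obs=6.36, inv=lambda s, u: np.sum(-np.log(u)) / s, m=10)

    # Uniform on [0, a]: a = s_A / max_j u_ij, per iii) above.
    a_pop = bootstrap_population(
        s_obs=9.91, inv=lambda s, u: s / np.max(u), m=10)

    # One point of the left curve: fraction of Lambda replicas <= 1.5.
    print(empirical_cdf(lam_pop, 1.5))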

Remark

Note that the accuracy with which a parameter distribution law of populations compatible with a sample is obtained is not a function of the sample size. Instead, it is a function of the number of seeds we draw. In turn, this number is purely a matter of computational time, but does not require any extension of the observed data. With other bootstrapping methods focusing on the generation of sample replicas (like those proposed by Efron & Tibshirani 1993), the accuracy of the estimate distributions depends on the sample size.

Example

For $\boldsymbol{x}$ expected to represent a Pareto distribution, whose specification requires values for the parameters a and k,[2] we have that the cumulative distribution function reads:

$F_X(x) = 1 - \left(\frac{k}{x}\right)^a.$

[Figure: Joint empirical cumulative distribution function of the parameters $(A,K)$ of a Pareto random variable when $m = 30$, $s_1 = 83.24$ and $s_2 = 8.37$, based on 5,000 replicas.]

A sampling mechanism $(g_{(a,k)}, U)$ has a $[0,1]$ uniform seed U and an explaining function $g_{(a,k)}$ described by:

$x = g_{(a,k)}(u) = (1-u)^{-\frac{1}{a}}\, k$

A relevant statistic $\boldsymbol{s}_{\boldsymbol{\Theta}}$ is constituted by the pair of joint sufficient statistics for $A$ and $K$, respectively $s_1 = \sum_{i=1}^m \log x_i$ and $s_2 = \min\{x_i\}$. The master equations read

$s_1 = \sum_{i=1}^m -\frac{1}{a}\log(1-u_i) + m\log k$
$s_2 = (1-u_{\min})^{-\frac{1}{a}}\, k$

with $u_{\min} = \min\{u_i\}$.

The figure on the right reports the three-dimensional plot of the empirical cumulative distribution function (2) of $(A,K)$.
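Eliminating $\log k$ between the two master equations gives the closed-form inversion $a = \left(\sum_{i=1}^m -\log(1-u_i) + m\log(1-u_{\min})\right) / \left(s_1 - m\log s_2\right)$ and then $k = s_2\,(1-u_{\min})^{1/a}$. As a minimal sketch (assuming NumPy), the replicas behind the figure may then be regenerated as:

    import numpy as np

    def pareto_inv(s1, s2, u):
        """Closed-form solution of the Pareto master equations for (a, k)."""
        m = len(u)
        lu = -np.log1p(-u)             # -log(1 - u_i)
        lu_min = lu.min()              # -log(1 - u_min): -log1p is increasing
        a = (lu.sum() - m * lu_min) / (s1 - m * np.log(s2))
        k = s2 * np.exp(-lu_min / a)   # k = s2 * (1 - u_min)^(1/a)
        return a, k

    rng = np.random.default_rng(0)
    m, s1, s2, n = 30, 83.24, 8.37, 5000   # values from the figure caption
    pop = np.array([pareto_inv(s1, s2, rng.uniform(size=m))
                    for _ in range(n)])
    # pop[:, 0] holds the A replicas, pop[:, 1] the K replicas; their joint
    # empirical CDF (2) yields the surface in the figure.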

Notes

  1. ^ By default, capital letters (such as U, X) will denote random variables and small letters (u, x) their corresponding realizations.
  2. ^ We denote here with symbols a and k the Pareto parameters elsewhere indicated through k and $x_{\min}$.

References

  • Efron, B. & Tibshirani, R. (1993). An Introduction to the Bootstrap. New York: Chapman and Hall.
  • Apolloni, B.; Malchiodi, D.; Gaito, S. (2006). Algorithmic Inference in Machine Learning. International Series on Advanced Intelligence. Vol. 5 (2nd ed.). Adelaide: Magill, Advanced Knowledge International.
  • Apolloni, B.; Bassis, S.; Gaito. S.; Malchiodi, D. (2007). "Appreciation of medical treatments by learning underlying functions with good confidence". Current Pharmaceutical Design. 13 (15): 1545–1570. doi:10.2174/138161207780765891. PMID 17504150.