2. BOOTSTRAP METHODS

The (Monte-Carlo approximation to the) bootstrap estimate of $\sigma_n(F)$ is
$$\sqrt{B^{-1}\sum_{j=1}^{B} \bigl[\hat\theta^*_j - \bar\theta^*\bigr]^2}.$$
Finally, the jackknife estimate of $\sigma_n(F)$ is
$$\sqrt{\frac{n-1}{n}\sum_{i=1}^{n} \bigl[\hat\theta_{(i)} - \hat\theta_{(\cdot)}\bigr]^2};$$
see the beginning of section 2 for the notation used here. We will discuss the jackknife further in sections 2 and 4.

Parametric Bootstrap Methods

Once the idea of nonparametric bootstrapping (sampling from the empirical measure $\mathbb{P}_n$) becomes clear, it seems natural to consider sampling from other estimators of the unknown $P$. For example, if we are quite confident that some parametric model holds, then it seems that we should consider bootstrapping by sampling from an estimator of $P$ based on the parametric model. Here is a formal description of this type of model-based bootstrap procedure.

Let $(\mathcal{X}, \mathcal{A})$ be a measurable space, and let $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$ be a model: parametric, semiparametric, or nonparametric. We do not insist that $\Theta$ be finite-dimensional. For example, in a parametric extreme case $\mathcal{P}$ could be the family of all normal (Gaussian) distributions on $(\mathcal{X}, \mathcal{A}) = (\mathbb{R}^d, \mathcal{B}^d)$. Or, to give a nonparametric example with only a smoothness restriction, $\mathcal{P}$ could be the family of all distributions on $(\mathcal{X}, \mathcal{A}) = (\mathbb{R}^d, \mathcal{B}^d)$ with a density with respect to Lebesgue measure which is uniformly continuous.

Let $X_1, \ldots, X_n, \ldots$ be i.i.d. with distribution $P_\theta \in \mathcal{P}$. We assume that there exists an estimator $\hat\theta_n = \hat\theta_n(X_1, \ldots, X_n)$ of $\theta$. Then Efron's parametric (or model-based) bootstrap proceeds by sampling from the estimated or fitted model $P_{\hat\theta_n(\omega)} \equiv \hat{P}^\omega_n$: suppose that $X^*_{n,1}, \ldots, X^*_{n,n}$ are independent and identically distributed with distribution $\hat{P}^\omega_n$ on $(\mathcal{X}, \mathcal{A})$, and let
$$\mathbb{P}^*_n \equiv n^{-1}\sum_{i=1}^{n} \delta_{X^*_{n,i}} \equiv \text{the parametric bootstrap empirical measure}. \qquad (1)$$

The key difference between this parametric bootstrap procedure and the nonparametric bootstrap discussed earlier in this section is that we are now sampling from the model-based estimator $\hat{P}_n = P_{\hat\theta_n}$ of $P$ rather than from the nonparametric estimator $\mathbb{P}_n$.
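As a concrete illustration (ours, not part of the original notes), here is a minimal sketch of the parametric bootstrap for a normal model, combined with the Monte-Carlo approximation $\sqrt{B^{-1}\sum_j [\hat\theta^*_j - \bar\theta^*]^2}$ to the bootstrap standard error. The function names, the seed, and the choice $B = 1000$ are arbitrary assumptions for the sketch.

```python
import math
import random

def fit_normal(xs):
    """Fit (mu, sigma^2); sigma^2 uses the unbiased divisor n - 1 (i.e. S_n^2)."""
    n = len(xs)
    mu = sum(xs) / n
    s2 = sum((x - mu) ** 2 for x in xs) / (n - 1)
    return mu, s2

def parametric_bootstrap_se(xs, estimator, B=1000, seed=0):
    """Monte-Carlo approximation sqrt(B^{-1} sum_j [theta*_j - theta*_bar]^2),
    with each bootstrap sample drawn from the fitted model N(mu_hat, s2_hat)
    rather than resampled from the data (that would be the nonparametric bootstrap)."""
    rng = random.Random(seed)
    n = len(xs)
    mu_hat, s2_hat = fit_normal(xs)
    sd_hat = math.sqrt(s2_hat)
    thetas = []
    for _ in range(B):
        star = [rng.gauss(mu_hat, sd_hat) for _ in range(n)]  # X*_{n,i} ~ P_{theta_hat}
        thetas.append(estimator(star))
    tbar = sum(thetas) / B
    return math.sqrt(sum((t - tbar) ** 2 for t in thetas) / B)

# Usage: bootstrap standard error of the sample mean; for the normal model
# this should be close to the textbook value sqrt(s2_hat / n).
_rng = random.Random(1)
data = [_rng.gauss(0.0, 1.0) for _ in range(50)]
se = parametric_bootstrap_se(data, lambda xs: sum(xs) / len(xs))
```

For the sample mean the bootstrap is unnecessary, of course; the point of the sketch is only to make the two displayed formulas and the sampling step concrete.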
Example 2.3 Suppose that $X_1, \ldots, X_n$ are i.i.d. $P_\theta = N(\mu, \sigma^2)$ where $\theta = (\mu, \sigma^2)$. Let $\hat\theta_n = (\hat\mu_n, \hat\sigma^2_n) = (\overline{X}_n, S^2_n)$, where $S^2_n$ is the usual unbiased estimator of $\sigma^2$, and hence
$$\frac{\sqrt{n}(\hat\mu_n - \mu)}{\hat\sigma_n} \sim t_{n-1}, \qquad \frac{(n-1)\hat\sigma^2_n}{\sigma^2} \sim \chi^2_{n-1}.$$
Now $P_{\hat\theta_n} = N(\hat\mu_n, \hat\sigma^2_n)$, and if $X^*_1, \ldots, X^*_n$ are i.i.d. $P_{\hat\theta_n}$, then the bootstrap estimators $\hat\theta^*_n = (\hat\mu^*_n, \hat\sigma^{*2}_n)$ satisfy, conditionally on $\mathcal{F}_n$,
$$\frac{\sqrt{n}(\hat\mu^*_n - \hat\mu_n)}{\hat\sigma^*_n} \sim t_{n-1}, \qquad \frac{(n-1)\hat\sigma^{*2}_n}{\hat\sigma^2_n} \sim \chi^2_{n-1}.$$
Thus the bootstrap estimators have exactly the same distributions as the original estimators in this case.
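The exact distributional identities in Example 2.3 can be checked by simulation. The following sketch (ours, not from the notes) draws bootstrap samples from the fitted model and compares the first moments of the two pivots with those of $t_{n-1}$ (mean $0$) and $\chi^2_{n-1}$ (mean $n-1$); the sample size $n = 30$, the replication count $B = 4000$, and the seed are arbitrary choices.

```python
import math
import random

rng = random.Random(42)
n, B = 30, 4000

# Original sample and fitted model N(mu_hat, s2_hat) as in Example 2.3.
xs = [rng.gauss(5.0, 2.0) for _ in range(n)]
mu_hat = sum(xs) / n
s2_hat = sum((x - mu_hat) ** 2 for x in xs) / (n - 1)

# Bootstrap replicates of the two pivots, conditionally on the data.
t_piv, chi_piv = [], []
for _ in range(B):
    star = [rng.gauss(mu_hat, math.sqrt(s2_hat)) for _ in range(n)]
    m = sum(star) / n
    s2 = sum((x - m) ** 2 for x in star) / (n - 1)
    t_piv.append(math.sqrt(n) * (m - mu_hat) / math.sqrt(s2))  # ~ t_{n-1}
    chi_piv.append((n - 1) * s2 / s2_hat)                      # ~ chi^2_{n-1}

# Moment checks: t_{n-1} has mean 0; chi^2_{n-1} has mean n - 1.
t_mean = sum(t_piv) / B
chi_mean = sum(chi_piv) / B
```

Because the pivots are exactly $t_{n-1}$- and $\chi^2_{n-1}$-distributed for every realization of the data, these checks succeed whatever original sample is drawn; only the Monte-Carlo error in the averages varies.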