Chapter 8 Bootstrap and Jackknife Estimation of Sampling Distributions

1 A General View of the Bootstrap

We begin with a general approach to bootstrap methods. The goal is to formulate the ideas in a context which is free of particular model assumptions.

Suppose that the data $X \sim P_\theta \in \mathcal{P} = \{P_\theta : \theta \in \Theta\}$. The parameter space $\Theta$ is allowed to be very general; it could be a subset of $\mathbb{R}^k$ (in which case the model $\mathcal{P}$ is a parametric model), or it could be the distributions of all i.i.d. sequences on some measurable space $(\mathcal{X}, \mathcal{A})$ (in which case the model $\mathcal{P}$ is the "nonparametric i.i.d." model).

Suppose that we have an estimator $\hat\theta$ of $\theta \in \Theta$, and thereby an estimator $P_{\hat\theta}$ of $P_\theta$. Consider estimation of:

A. the distribution of $\hat\theta$: e.g. $P_\theta(\hat\theta \in A) = P_\theta(\hat\theta(X) \in A)$ for a measurable subset $A$ of $\Theta$;

B. if $\Theta \subset \mathbb{R}^k$, $\mathrm{Var}_\theta(a^T \hat\theta(X))$ for a fixed vector $a \in \mathbb{R}^k$.

Natural (ideal) bootstrap estimators of these parameters are provided by:

A′. $P_{\hat\theta}(\hat\theta(X^*) \in A)$;

B′. $\mathrm{Var}_{\hat\theta}(a^T \hat\theta(X^*))$.

While these ideal bootstrap estimators are often difficult to compute exactly, we can often obtain Monte-Carlo estimates thereof by sampling from $P_{\hat\theta}$: let $X^*_1, \ldots, X^*_B$ be i.i.d. with common distribution $P_{\hat\theta}$, and calculate $\hat\theta(X^*_j)$ for $j = 1, \ldots, B$. Then Monte-Carlo approximations (or implementations) of the bootstrap estimators in A′ and B′ are given by:

A″. $B^{-1} \sum_{j=1}^{B} 1\{\hat\theta(X^*_j) \in A\}$;

B″. $B^{-1} \sum_{j=1}^{B} \bigl( a^T \hat\theta(X^*_j) - B^{-1} \sum_{j'=1}^{B} a^T \hat\theta(X^*_{j'}) \bigr)^2$.

If $\mathcal{P}$ is a parametric model, the above approach yields a parametric bootstrap. If $\mathcal{P}$ is a nonparametric model, then this yields a nonparametric bootstrap.
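To make A″ and B″ concrete, here is a minimal sketch of a parametric bootstrap in Python. The particular setting (a $N(\theta, 1)$ model with $\theta$ estimated by the sample mean, so that $P_{\hat\theta}$ is the $N(\hat\theta, 1)$ product law, the set $A = (-\infty, 2]$, and $a = 1$ with $k = 1$) is an illustrative assumption, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: X = (X_1, ..., X_n) i.i.d. N(theta, 1), with theta
# estimated by the sample mean, so P_theta-hat = N(theta_hat, 1)^n.
n, theta = 50, 2.0
x = rng.normal(theta, 1.0, size=n)   # observed data X
theta_hat = x.mean()                 # estimator theta-hat(X)

B = 2000                             # number of bootstrap replicates
# Draw X*_1, ..., X*_B i.i.d. from P_theta-hat; compute theta-hat(X*_j).
theta_star = np.array([rng.normal(theta_hat, 1.0, size=n).mean()
                       for _ in range(B)])

# A'': Monte-Carlo estimate of P_theta(theta-hat in A), here A = (-inf, 2].
prob_A = np.mean(theta_star <= 2.0)

# B'': Monte-Carlo estimate of Var_theta(theta-hat(X)), with a = 1, k = 1.
var_boot = np.mean((theta_star - theta_star.mean()) ** 2)

print(prob_A, var_boot)
```

In this toy model $\hat\theta = \bar{X}$ has exact variance $1/n$, so the B″ output can be checked against $1/n = 0.02$ as a sanity check on the Monte-Carlo implementation.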
In the following section, we try to make these ideas more concrete, first in the context of $X = (X_1, \ldots, X_n)$ i.i.d. $F$ or $P$ with $\mathcal{P}$ nonparametric, so that $P_\theta = F \times \cdots \times F$ and $P_{\hat\theta} = F_n \times \cdots \times F_n$. Or, if the basic underlying sample space for each $X_i$ is not $\mathbb{R}$, $P_\theta = P \times \cdots \times P$ and $P_{\hat\theta} = P_n \times \cdots \times P_n$.
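In the nonparametric case, sampling from $P_{\hat\theta} = F_n \times \cdots \times F_n$ amounts to resampling the observed data with replacement, since $F_n$ puts mass $1/n$ on each observation. A minimal sketch, assuming the parameter of interest is the median (an illustrative choice; the exponential data below merely stand in for an unknown $F$):

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.exponential(scale=1.0, size=100)  # observed data; F unknown in practice
theta_hat = np.median(x)                  # theta-hat(X), here the sample median

B = 2000
# Each bootstrap sample X*_j consists of n i.i.d. draws from F_n, i.e. a
# resample of the observed data with replacement.
theta_star = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                       for _ in range(B)])

# B'': Monte-Carlo nonparametric bootstrap estimate of Var(theta-hat).
var_boot = np.mean((theta_star - theta_star.mean()) ** 2)
print(theta_hat, var_boot)
```

Note that the only change from the parametric sketch is the sampler: `rng.choice(x, ..., replace=True)` draws from $F_n$ rather than from a fitted parametric law.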