CHAPTER 8. BOOTSTRAP AND JACKKNIFE ESTIMATION OF SAMPLING DISTRIBUTIONS

Example 2.4 Suppose that $X_1, \ldots, X_n$ are i.i.d. $P_\theta = \text{exponential}(1/\theta)$: $P_\theta(X_1 > t) = \exp(-t/\theta)$ for $t \ge 0$. Then $\hat{\theta}_n = \overline{X}_n$ and $n\hat{\theta}_n/\theta \sim \text{Gamma}(n, 1)$. Now $P_{\hat{\theta}_n} = \text{exponential}(1/\hat{\theta}_n)$, and if $X_1^*, \ldots, X_n^*$ are i.i.d. $P_{\hat{\theta}_n}$, then $\hat{\theta}_n^* = \overline{X}_n^*$ has $(n\hat{\theta}_n^*/\hat{\theta}_n \mid \mathbb{F}_n) \sim \text{Gamma}(n, 1)$, so the bootstrap distribution replicates the distribution of the original estimator exactly.
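The exact replication claimed in Example 2.4 is easy to check by simulation. The following sketch (not from the notes; it assumes `numpy` and `scipy` are available) draws an exponential sample, runs the parametric bootstrap, and tests the bootstrap pivot $n\hat{\theta}_n^*/\hat{\theta}_n$ against its claimed $\text{Gamma}(n,1)$ law:

```python
# Monte Carlo check of Example 2.4: for exponential data, the parametric
# bootstrap pivot n*theta*_n / theta^_n has exactly the same Gamma(n, 1)
# distribution as the original pivot n*theta^_n / theta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, theta, B = 10, 2.0, 20_000

x = rng.exponential(scale=theta, size=n)      # observed sample X_1, ..., X_n
theta_hat = x.mean()                           # theta^_n = Xbar_n

# Parametric bootstrap: resample i.i.d. from exponential(1/theta^_n).
xstar = rng.exponential(scale=theta_hat, size=(B, n))
theta_star = xstar.mean(axis=1)                # theta*_n for each replication
pivot = n * theta_star / theta_hat             # should be exactly Gamma(n, 1)

# Kolmogorov-Smirnov test against Gamma(n, 1): no detectable discrepancy.
ks = stats.kstest(pivot, "gamma", args=(n,))
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```

Note that the agreement here is exact (in distribution), not merely asymptotic: conditionally on the data, $\sum_i X_i^*/\hat{\theta}_n$ is a sum of $n$ i.i.d. standard exponentials.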
Example 2.5 (Bootstrapping from a "smoothed empirical measure"; or the "smoothed bootstrap"). Suppose that $\mathcal{P} = \{P \text{ on } (\mathbb{R}^d, \mathcal{B}^d) : p = dP/d\lambda \text{ exists and is uniformly continuous}\}$.
Then one way to estimate $P$ so that our estimator $\hat{P}_n \in \mathcal{P}$ is via a kernel estimator of the density $p$:
\[
\hat{p}_n(x) = \frac{1}{b_n^d} \int k\left(\frac{y - x}{b_n}\right) dP_n(y),
\]
where $k : \mathbb{R}^d \to \mathbb{R}$ is a uniformly continuous density function. Then $\hat{P}_n$ is defined for $C \in \mathcal{A}$ by
\[
\hat{P}_n(C) = \int_C \hat{p}_n(x)\, dx,
\]
and the model-based bootstrap proceeds by sampling from $\hat{P}_n$. There are many other examples of this type involving nonparametric or semiparametric models $\mathcal{P}$. For some work on "smoothed bootstrap" methods see e.g. Silverman and Young (1987) and Hall, DiCiccio, and Romano (1989).

Exchangeably-weighted and "Bayesian" bootstrap methods

In the course of Example 5.1 we introduced the vector $M$ of counts of how many times the bootstrap variables $X_i^*$ equal the observations $X_j(\omega)$ in the underlying sample. Thinking about the process of sampling at random (with replacement) from the population described by the empirical measure $P_n$, it becomes clear that we can think of the bootstrap empirical measure $P_n^*$ as the empirical measure with multinomial random weights:
\[
P_n^* = \frac{1}{n} \sum_{i=1}^n \delta_{X_i^*} = \frac{1}{n} \sum_{i=1}^n M_i \, \delta_{X_i(\omega)}.
\]
This view of Efron's nonparametric bootstrap as the empirical measure with random weights suggests that we could obtain other random measures which would behave much the same way as Efron's nonparametric bootstrap, but without the same random sampling interpretation, by replacing the vector of multinomial weights by some other random vector $W$. One of the possible deficiencies of the nonparametric bootstrap involves its "discreteness" via missing observations in the original sample: note that the number of points of the original sample which are missed (or not given any bootstrap weight) is
\[
N_n \equiv \#\{j \le n : M_j = 0\} = \sum_{j=1}^n 1\{M_j = 0\};
\]
hence the proportion of observations missed by the bootstrap is $n^{-1} N_n$, and the expected proportion of missed observations is
\[
E(n^{-1} N_n) = P(M_1 = 0) = (1 - 1/n)^n \to e^{-1} \doteq 0.36787\ldots
\]
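The multinomial-weight representation of $P_n^*$, and the limiting missed fraction $e^{-1}$, can both be checked directly by simulation. A minimal sketch (assuming `numpy`; the variable names are illustrative):

```python
# Efron's bootstrap via its multinomial-weight representation: the weight
# vector M ~ Multinomial(n; 1/n, ..., 1/n) gives P*_n = (1/n) sum_i M_i d_{X_i}.
# We check that the average fraction of sample points receiving zero weight
# is close to (1 - 1/n)^n, which tends to e^{-1} = 0.36787...
import numpy as np

rng = np.random.default_rng(1)
n, B = 100, 5_000

# One row of M per bootstrap replication; each row sums to n.
M = rng.multinomial(n, np.full(n, 1.0 / n), size=B)
assert np.allclose(M.sum(axis=1), n)

# Fraction of indices j with M_j = 0, averaged over the B replications.
missed = (M == 0).mean()
print(f"simulated E(N_n / n) = {missed:.4f}")
print(f"(1 - 1/n)^n          = {(1 - 1/n) ** n:.4f}")
print(f"e^(-1)               = {np.exp(-1):.4f}")
```

Resampling `M` and weighting the original points is distributionally equivalent to drawing $X_1^*, \ldots, X_n^*$ with replacement, which is why replacing `M/n` by another exchangeable weight vector `W` yields the generalizations discussed above.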