2. BOOTSTRAP METHODS

be $B$ independent samples of size $n$ drawn with replacement from $F_n$ (or $\mathbb{P}_n$); let
$$F^*_{j,n}(x) \equiv n^{-1}\sum_{i=1}^n 1_{[X^*_{j,i} \le x]}$$
be the empirical d.f. of the $j$-th sample, and let $T^*_{j,n} \equiv T(F^*_{j,n})$, $j = 1, \ldots, B$. Then approximations of A′–F′ are given by:

A″. $b^*_{n,B} \equiv n\bigl\{\frac{1}{B}\sum_{j=1}^B T^*_{j,n} - T_n\bigr\}$.
B″. $n\sigma^{*2}_{n,B} \equiv n\,\frac{1}{B}\sum_{j=1}^B (T^*_{j,n} - \overline{T}^*_n)^2$.
C″. $\kappa^*_{3,n,B} \equiv \frac{1}{B}\sum_{j=1}^B (T^*_{j,n} - \overline{T}^*_n)^3/\sigma^{*3}_{n,B}$.
D″. $H^*_{n,B}(x) \equiv \frac{1}{B}\sum_{j=1}^B 1\{\sqrt{n}\,(T^*_{j,n} - T_n) \le x\}$.
E″. $K^*_{n,B}(x) \equiv \frac{1}{B}\sum_{j=1}^B 1\{\sqrt{n}\,\|F^*_{j,n} - F_n\|_\infty \le x\}$.
F″. $L^*_{n,B}(x) \equiv \frac{1}{B}\sum_{j=1}^B 1\{\sqrt{n}\,\|\mathbb{P}^*_{j,n} - \mathbb{P}_n\|_{\mathcal{F}} \le x\}$.

For fixed sample size $n$ and data $F_n$, it follows from the Glivenko–Cantelli theorem (applied to the bootstrap sampling) that
$$\sup_x |H^*_{n,B}(x) - H_n(x, F_n)| \rightarrow_{a.s.} 0 \quad \text{as } B \to \infty,$$
and, by Donsker's theorem,
$$\sqrt{B}\,\bigl(H^*_{n,B}(x) - H_n(x, F_n)\bigr) \Rightarrow \mathbb{U}^{**}(H_n(x, F_n)) \quad \text{as } B \to \infty.$$
Moreover, by the Dvoretzky, Kiefer, Wolfowitz (1956) inequality ($P(\|\mathbb{U}_n\| \ge \lambda) \le 2\exp(-2\lambda^2)$ for all $n$ and $\lambda > 0$, where the constant 2 in front of the exponential comes via Massart (1990)),
$$P\bigl(\sup_x |H^*_{n,B}(x) - H_n(x, F_n)| \ge \epsilon\bigr) \le 2\exp(-2B\epsilon^2).$$
For a given $\epsilon > 0$ we can make this probability as small as we please by choosing $B$ (over which we have complete control, given sufficient computing power) sufficiently large. Since the deviations of $H^*_{n,B}$ from $H_n(x, F_n)$ are so well understood and controlled, much of our discussion below will focus on the differences between $H_n(x, F_n)$ and $H_n(x, F)$.

Sometimes it is possible to compute the distribution of the bootstrap estimator explicitly without resort to Monte Carlo; here is an example of this kind.

Example 2.1 (The distribution of the bootstrap estimator of the median). Suppose that $T(F) = F^{-1}(1/2)$. Then $T(F_n) = F_n^{-1}(1/2) = X_{([n+1]/2)}$ and $T(F^*_n) = F_n^{*-1}(1/2) = X^*_{([n+1]/2)}$.
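To make the Monte Carlo scheme A″–F″ concrete, here is a minimal computational sketch (added for illustration; it is not part of the original notes) of A″, B″ and D″, with $T$ taken to be the sample median of Example 2.1. The function name, the default $B = 2000$, and the simulated data are assumptions made purely for this example.

```python
import numpy as np

# Sketch of the Monte Carlo scheme behind A''-D'' (illustration only, not code
# from the notes): draw B samples of size n with replacement from F_n, recompute
# T on each, and form the bias, variance, and distribution estimates.

rng = np.random.default_rng(0)

def bootstrap_estimates(x, T, B=2000):
    n = len(x)
    T_n = T(x)
    T_star = np.empty(B)
    for j in range(B):
        # X*_{j,1}, ..., X*_{j,n} i.i.d. from F_n: resample indices with replacement
        T_star[j] = T(x[rng.integers(0, n, size=n)])
    bias = n * (T_star.mean() - T_n)                    # A'': b*_{n,B}
    nvar = n * np.mean((T_star - T_star.mean()) ** 2)   # B'': n * sigma*^2_{n,B}
    jumps = np.sort(np.sqrt(n) * (T_star - T_n))        # jump points of H*_{n,B} (D'')
    H_star = lambda t: np.mean(jumps <= t)              # H*_{n,B}(t)
    return bias, nvar, H_star

x = rng.standard_normal(101)            # n = 101 (odd), so T(F_n) = X_((n+1)/2)
bias, nvar, H_star = bootstrap_estimates(x, np.median)
print(bias, nvar, H_star(0.0))
```

Replacing np.median by any other plug-in statistic $T(F_n)$ leaves the scheme unchanged; only the resampling and the re-evaluation of $T$ matter.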
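As for how large $B$ must be in such a computation, an illustrative calculation (not from the notes) with the Massart form of the bound above: requiring the right-hand side to be at most a prescribed $\delta > 0$ gives
$$2\exp(-2B\epsilon^2) \le \delta \iff B \ge \frac{\log(2/\delta)}{2\epsilon^2};
\qquad \text{e.g. } \epsilon = 0.01,\ \delta = 0.05 \ \Rightarrow\ B \ge \frac{\log 40}{2(0.01)^2} \approx 1.8 \times 10^4.$$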