We say that $x$ is outside the interior region $[a+h,\,b-h]$ if $x \in [a,\,a+h)$ or $x \in (b-h,\,b]$. These two regions are called the boundary regions of the support. Their sizes are equal to $h$ and so shrink to zero as the sample size $T$ increases (since $h \to 0$ as $T \to \infty$). Suppose $x = a + \lambda h \in [a,\,a+h)$, where $\lambda \in [0,1)$. We shall call $x$ a point in the left boundary region of the support $[a,b]$. Then

$$
\begin{aligned}
E[\hat g(x)] - g(x) &= E[K_h(x - X_t)] - g(x) \\
&= \frac{1}{h}\int_a^b K\!\left(\frac{x-y}{h}\right) g(y)\,dy - g(x) \\
&= \int_{(a-x)/h}^{(b-x)/h} K(u)\,g(x+hu)\,du - g(x) \\
&= \int_{-\lambda}^{1} K(u)\,g(x+hu)\,du - g(x) \\
&= g(x)\int_{-\lambda}^{1} K(u)\,du - g(x) + h\int_{-\lambda}^{1} uK(u)\,g'(x+hu)\,du \\
&= g(x)\left[\int_{-\lambda}^{1} K(u)\,du - 1\right] + O(h) \\
&= O(1).
\end{aligned}
$$

Here the third equality substitutes $u = (y-x)/h$ and uses the symmetry of $K$; the fourth uses the fact that $K$ is supported on $[-1,1]$, so that the limits become $(a-x)/h = -\lambda$ and $(b-x)/h \geq 1$ for $h$ small; and the fifth is a first-order Taylor expansion of $g(x+hu)$ around $x$. The last line is $O(1)$ if $g(x)$ is bounded away from zero, that is, if $g(x) \geq \varepsilon > 0$ for all $x \in [a,b]$ for some small but fixed constant $\varepsilon$. Note that the $O(1)$ term arises since $\int_{-\lambda}^{1} K(u)\,du < 1$ for any $\lambda < 1$.

Thus, if $x \in [a,\,a+h)$ or $x \in (b-h,\,b]$, the bias $E[\hat g(x)] - g(x)$ may never vanish even as $h \to 0$. This is due to the fact that there is no symmetric coverage of observations in the boundary regions $[a,\,a+h)$ and $(b-h,\,b]$. This phenomenon is called the boundary effect or boundary problem of kernel estimation.

There have been several solutions proposed in the smoothed nonparametric literature. These include the following methods.

• Trimming Observations: Do not use the estimate $\hat g(x)$ when $x$ is in the boundary regions. That is, only estimate and use the densities for points in the interior region $[a+h,\,b-h]$.

This approach has a drawback. Namely, valuable information may be lost, because $\hat g(x)$ in the boundary regions contains the information on the tail distribution of $X_t$.
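As a quick numerical check of the derivation above, the following sketch (not part of the original notes; the Epanechnikov kernel, the bandwidth rule $h = T^{-1/5}$, the Uniform$[0,1]$ design, and the helper names `epanechnikov` and `kde` are all illustrative assumptions) compares the kernel density estimate at an interior point with the estimate at the left endpoint. At $x = a$ we have $\lambda = 0$, and for a symmetric kernel $\int_0^1 K(u)\,du = 1/2$, so the leading bias is roughly $-g(a)/2$; for Uniform$[0,1]$ data this means $\hat g(0) \approx 0.5$ instead of $1$.

```python
# Minimal sketch of the boundary effect in kernel density estimation.
# Assumptions (not from the original notes): Epanechnikov kernel,
# bandwidth h = T^(-1/5), i.i.d. Uniform[0,1] data with true density g(x) = 1.
import numpy as np

rng = np.random.default_rng(0)

def epanechnikov(u):
    """Epanechnikov kernel, supported on [-1, 1]."""
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def kde(x, sample, h):
    """Standard kernel density estimate: (1/(T*h)) * sum_t K((x - X_t)/h)."""
    return epanechnikov((x - sample[:, None]) / h).mean(axis=0) / h

T = 5000                               # sample size
h = T ** (-1 / 5)                      # a conventional bandwidth rate, h -> 0 as T grows
sample = rng.uniform(0.0, 1.0, size=T) # true density g(x) = 1 on [0, 1]

x_boundary = np.array([0.0])           # left endpoint a (lambda = 0)
x_interior = np.array([0.5])           # an interior point

print("h =", h)
print("g_hat(0.0) =", kde(x_boundary, sample, h)[0], "(true value 1; bias about -0.5)")
print("g_hat(0.5) =", kde(x_interior, sample, h)[0], "(true value 1; bias about 0)")
```

Under the trimming approach described above, one would simply discard the estimate at points such as $x = 0$ and report $\hat g(x)$ only for $x \in [a+h,\,b-h]$, at the cost of losing exactly the boundary information discussed in the last paragraph.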