By convention, the derivatives of $g(\cdot)$ at the boundary points $a$ and $b$ are
$$
g'(a) = \lim_{x \to 0^+} \frac{g(a+x) - g(a)}{x}, \qquad
g'(b) = \lim_{x \to 0^-} \frac{g(b+x) - g(b)}{x}.
$$
Similarly for the second derivatives $g''(a)$ and $g''(b)$ at the boundary points of the support $[a, b]$. For convenience, we further impose an additional condition on the kernel $K(\cdot)$, which will actually be maintained throughout this chapter.

Assumption 3.2 [Second Order Kernel with Bounded Support]: $K(u)$ is a positive kernel function with a bounded support on $[-1, 1]$.

This bounded support assumption is not necessary, but it simplifies the asymptotic analysis and interpretation.

2.1.2 Asymptotic Bias and Boundary Effect

Our purpose is to show that $\hat{g}(x)$ is a consistent estimator for $g(x)$ for a given point $x$ in the support. Now we decompose
$$
\hat{g}(x) - g(x) = \left[ E\hat{g}(x) - g(x) \right] + \left[ \hat{g}(x) - E\hat{g}(x) \right].
$$
It follows that the mean squared error of the kernel density estimator $\hat{g}(x)$ is given by
$$
\mathrm{MSE}(\hat{g}(x)) = \left[ E\hat{g}(x) - g(x) \right]^2 + E\left[ \hat{g}(x) - E\hat{g}(x) \right]^2
= \mathrm{Bias}^2[\hat{g}(x)] + \mathrm{var}[\hat{g}(x)].
$$
The first term is the squared bias of the estimator $\hat{g}(x)$, which is nonstochastic, and the second term is the variance of $\hat{g}(x)$ at the point $x$. We shall show that under suitable regularity conditions, both the bias and the variance of $\hat{g}(x)$ vanish to zero as the sample size $T$ goes to infinity.

We first consider the bias. For any given point $x$ in the interior region $[a + h, b - h]$
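The bias-variance decomposition of the MSE above can be checked numerically. The following is a minimal Python sketch, not part of the original text: it uses the Epanechnikov kernel as a concrete positive second-order kernel with support on $[-1, 1]$, a Beta(2, 2) target density on $[0, 1]$, and illustrative values of $T$, $h$, and the interior point $x$; all of these choices are assumptions made for the example.

```python
# Minimal sketch: Monte Carlo check that MSE(ghat(x)) = Bias^2 + var for a
# kernel density estimator.  Kernel, target density, T, h, x0 and the number
# of replications are illustrative assumptions, not taken from the chapter.
import numpy as np

def epanechnikov(u):
    """Positive second-order kernel with bounded support on [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kde(x, data, h):
    """Kernel density estimator ghat(x) = (1/(T*h)) * sum_t K((x - X_t)/h)."""
    return epanechnikov((x - data) / h).mean() / h

rng = np.random.default_rng(0)
T, h, x0 = 500, 0.1, 0.5      # sample size, bandwidth, interior point of [0, 1]
n_rep = 2000                  # Monte Carlo replications

# Target density g is Beta(2, 2) on [0, 1], so g(x0) = 6 * x0 * (1 - x0).
g_true = 6.0 * x0 * (1.0 - x0)

estimates = np.array([kde(x0, rng.beta(2.0, 2.0, size=T), h)
                      for _ in range(n_rep)])

bias2 = (estimates.mean() - g_true) ** 2    # squared bias [E ghat(x0) - g(x0)]^2
var = estimates.var()                       # variance E[ghat(x0) - E ghat(x0)]^2
mse = ((estimates - g_true) ** 2).mean()    # direct MSE; equals bias2 + var

print(f"bias^2 = {bias2:.2e}, var = {var:.2e}, "
      f"bias^2 + var = {bias2 + var:.2e}, MSE = {mse:.2e}")
```

Rerunning the sketch with a larger $T$ and a smaller $h$ (such that $Th \to \infty$) makes both components shrink, which is the consistency property the text sets out to establish.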