We see immediately that the well-known histogram is a special case of the kernel density estimator $\hat g(x)$ with the choice of a uniform kernel.

Example 1 [Histogram]: If $K(u) = \frac{1}{2}\mathbf{1}(|u| \le 1)$, then
$$
\hat g(x) = \frac{1}{2hT}\sum_{t=1}^{T} \mathbf{1}(|x - X_t| \le h).
$$
Intuitively, with the choice of a uniform kernel, the kernel density estimator $\hat g(x)$ is the relative sample frequency of the observations on the interval $[x-h, x+h]$, which centers at point $x$ and has a size of $2h$. Here, $2hT$ is approximately the sample size of the small interval $[x-h, x+h]$ when the size $2h$ is small enough. Alternatively, $T^{-1}\sum_{t=1}^{T}\mathbf{1}(|x - X_t| \le h)$ is the relative sample frequency of the observations falling into the small interval $[x-h, x+h]$, which, by the law of large numbers, is approximately equal to the probability
$$
E\left[\mathbf{1}(|x - X_t| \le h)\right] = P(x-h \le X_t \le x+h) = \int_{x-h}^{x+h} g(y)\,dy \approx 2h\,g(x)
$$
if $h$ is small enough and $g(x)$ is continuous around the point $x$. Thus, the histogram is a reasonable estimator for $g(x)$, and indeed it is a consistent estimator of $g(x)$ if $h$ vanishes to zero but at a slower rate than the sample size $T$ goes to infinity.

Question: Under what conditions will the density estimator $\hat g(x)$ be consistent for the unknown density function $g(x)$?

We impose an assumption on the data generating process and the unknown PDF $g(x)$.

Assumption 3.1 [Smoothness of PDF]: (i) $\{X_t\}$ is a strictly stationary process with marginal PDF $g(x)$; (ii) $g(x)$ has a bounded support on $[a,b]$, and is continuously twice differentiable on $[a,b]$, with $g''(\cdot)$ being Lipschitz-continuous in the sense that $|g''(x_1) - g''(x_2)| \le C|x_1 - x_2|$ for all $x_1, x_2 \in [a,b]$, where $a$, $b$ and $C$ are finite constants.

Question: How to define the derivatives at the boundary points?
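The uniform-kernel estimator of Example 1 is easy to verify numerically. The following is a minimal sketch (not from the original notes): the helper `uniform_kernel_density` and the choice of a Uniform[0,1] sample are illustrative assumptions. For data drawn from Uniform[0,1], the true density satisfies $g(x) = 1$ on the interior of the support, so the estimate at $x = 0.5$ should be close to 1 when $T$ is large and $h$ is small.

```python
import numpy as np

def uniform_kernel_density(x, data, h):
    """Uniform-kernel density estimate:
    g_hat(x) = (1 / (2*h*T)) * sum_t 1(|x - X_t| <= h),
    i.e., the fraction of observations in [x-h, x+h], scaled by the
    interval length 2h.  (Illustrative helper, not from the notes.)
    """
    T = len(data)
    count = np.sum(np.abs(x - data) <= h)  # observations in [x-h, x+h]
    return count / (2.0 * h * T)

# Draw a strictly stationary (here: i.i.d.) sample from Uniform[0, 1].
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=10_000)

# Estimate the density at an interior point; true value is g(0.5) = 1.
g_hat = uniform_kernel_density(0.5, data, h=0.05)
print(g_hat)
```

By the law-of-large-numbers argument in the text, the relative frequency in the interval is close to $2h\,g(x)$, so dividing by $2h$ recovers the density; shrinking $h$ reduces the smoothing bias but enlarges the variance, which is why consistency requires $h \to 0$ with $hT \to \infty$.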