In general, we do not assume that the functional form of $r_1(x)$ is known, except that we still maintain the assumption that $r_1(x)$ is a square-integrable function. Because $r_1(x)$ is square-integrable, it admits an orthonormal series expansion $r_1(x) = \sum_{j=0}^{\infty} \alpha_j \psi_j(x)$, and we have
\begin{align*}
\int_{-\infty}^{\infty} r_1^2(x)\,dx
&= \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \alpha_j \alpha_k \int_{-\infty}^{\infty} \psi_j(x)\psi_k(x)\,dx \\
&= \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \alpha_j \alpha_k \,\delta_{j,k} \qquad \text{by orthonormality} \\
&= \sum_{j=0}^{\infty} \alpha_j^2 < \infty,
\end{align*}
where $\delta_{j,k}$ is the Kronecker delta function: $\delta_{j,k} = 1$ if $j = k$ and $0$ otherwise.

The square summability implies $\alpha_j \to 0$ as $j \to \infty$; that is, $\alpha_j$ becomes less important as the order $j \to \infty$. This suggests that a truncated sum
\[
r_{1p}(x) = \sum_{j=0}^{p} \alpha_j \psi_j(x)
\]
can be used to approximate $r_1(x)$ arbitrarily well if $p$ is sufficiently large. The approximation error, or the bias,
\[
b_p(x) \equiv r_1(x) - r_{1p}(x) = \sum_{j=p+1}^{\infty} \alpha_j \psi_j(x) \to 0 \quad \text{as } p \to \infty.
\]
However, the coefficients $\alpha_j$ are unknown.
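The shrinking bias of the truncated sum can be checked numerically. The sketch below uses the orthonormal cosine basis on $[0,1]$, $\psi_0(x) = 1$ and $\psi_j(x) = \sqrt{2}\cos(j\pi x)$, and an arbitrary square-integrable target $r(x) = x(1-x)$; both the basis and the target are illustrative assumptions, since the text leaves $r_1$ and $\{\psi_j\}$ abstract.

```python
import numpy as np

# Illustrate the truncated orthonormal-series approximation.
# Basis (assumed for this sketch): orthonormal cosine system on [0, 1],
#   psi_0(x) = 1,  psi_j(x) = sqrt(2) * cos(j * pi * x) for j >= 1.
# Target (assumed): r(x) = x * (1 - x), a simple square-integrable function.

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
r = x * (1.0 - x)

def integrate(values):
    """Trapezoidal rule on the uniform grid x."""
    return dx * (values.sum() - 0.5 * (values[0] + values[-1]))

def psi(j):
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(j * np.pi * x)

def l2_bias(p):
    """L2 norm of the bias b_p(x) = r(x) - r_{1p}(x)."""
    approx = np.zeros_like(x)
    for j in range(p + 1):
        alpha_j = integrate(r * psi(j))   # alpha_j = <r, psi_j>
        approx += alpha_j * psi(j)
    return np.sqrt(integrate((r - approx) ** 2))

for p in (2, 8, 32):
    print(f"p = {p:2d}: ||b_p|| = {l2_bias(p):.6f}")
```

The printed $L^2$ norms of the bias shrink as $p$ grows, matching $b_p(x) \to 0$ as $p \to \infty$.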
To obtain a feasible estimator for $r_1(x)$, we consider the following sequence of truncated regression models:
\[
X_t = \sum_{j=0}^{p} \beta_j \psi_j(X_{t-1}) + \varepsilon_{pt},
\]
where $p \equiv p(T) \to \infty$ is the number of series terms, which depends on the sample size $T$. We require $p/T \to 0$ as $T \to \infty$, i.e., the number of terms $p$ grows much more slowly than the sample size $T$. Note that the regression error $\varepsilon_{pt}$ is not the same as the true innovation $\varepsilon_t$ for each given $p$. Instead, it contains both the true innovation $\varepsilon_t$ and the bias $b_p(X_{t-1})$.
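The truncated regression can be estimated by ordinary least squares on the basis terms. The sketch below simulates a nonlinear AR(1) with $r_1(u) = \sin(u)$ (an illustrative assumption) and uses the power-series basis $\psi_j(u) = u^j$ for clarity, rather than an orthonormal system; the fitted models for increasing $p$ are nested, so the in-sample fit can only improve with $p$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a nonlinear AR(1): X_t = r1(X_{t-1}) + eps_t.
# r1(u) = sin(u) and the noise scale 0.3 are assumptions for illustration;
# in the text, r1 is unknown.
T = 2000
X = np.zeros(T + 1)
for t in range(1, T + 1):
    X[t] = np.sin(X[t - 1]) + 0.3 * rng.standard_normal()

y, lagged = X[1:], X[:-1]          # regress X_t on functions of X_{t-1}

def series_fit(p):
    """OLS fit of the truncated series model with p + 1 terms."""
    # Regressor matrix with power-series terms psi_j(u) = u^j, j = 0..p
    # (a simple non-orthonormal basis, used here only for clarity).
    Z = np.vander(lagged, p + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ beta

for p in (1, 3, 5):
    mse = np.mean((y - series_fit(p)) ** 2)
    print(f"p = {p}: in-sample MSE = {mse:.4f}")
```

Here $p \ll T$ ($p \le 5$ against $T = 2000$), consistent with the requirement $p/T \to 0$; the residual for each fixed $p$ mixes the innovation $\varepsilon_t$ with the truncation bias $b_p(X_{t-1})$.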