To estimate $g(x;\theta)$, it suffices to estimate two unknown parameters $\mu$ and $\sigma^2$. Based on the random sample $\{X_t\}_{t=1}^T$, we can obtain the maximum likelihood estimators (MLE),
$$
\hat\mu = \frac{1}{T}\sum_{t=1}^T X_t, \qquad
\hat\sigma^2 = \frac{1}{T}\sum_{t=1}^T (X_t - \hat\mu)^2.
$$
The approach taken here is called a parametric approach; that is, it assumes that the unknown PDF is a known functional form up to some unknown parameters. It can be shown that the parameter estimator $\hat\theta$ converges to the unknown parameter value $\theta_0$ at a root-$T$ convergence rate in the sense that
$$
\sqrt{T}\,(\hat\theta - \theta_0) = O_P(1), \quad \text{or} \quad \hat\theta - \theta_0 = O_P(T^{-1/2}),
$$
where $\hat\theta = (\hat\mu, \hat\sigma^2)'$, $\theta_0 = (\mu_0, \sigma_0^2)'$, and $O_P(1)$ denotes boundedness in probability. The root-$T$ convergence rate is called the parametric convergence rate for $\hat\theta$ and $g(x,\hat\theta)$. As we will see below, nonparametric density estimators have a slower convergence rate.

Question: What is the definition of $O_P(\delta_T)$?

Let $\{\delta_T, T \geq 1\}$ be a sequence of positive numbers. A random variable $Y_T$ is said to be at most of order $\delta_T$ in probability, written $Y_T = O_P(\delta_T)$, if the sequence $\{Y_T/\delta_T, T \geq 1\}$ is tight; that is, if
$$
\lim_{\lambda\to\infty} \limsup_{T\to\infty} P\left(|Y_T/\delta_T| > \lambda\right) = 0.
$$
Tightness is usually indicated by writing $Y_T/\delta_T = O_P(1)$.

Question: What is the advantage of the parametric approach?

By the mean-value theorem, we obtain
$$
\begin{aligned}
g(x,\hat\theta) - g(x)
&= g(x,\theta_0) - g(x) + \frac{\partial}{\partial\theta} g(x,\bar\theta)'(\hat\theta - \theta_0) \\
&= 0 + \frac{1}{\sqrt{T}}\,\frac{\partial}{\partial\theta} g(x,\bar\theta)'\,\sqrt{T}\,(\hat\theta - \theta_0) \\
&= 0 + O_P(T^{-1/2}) \\
&= O_P(T^{-1/2}),
\end{aligned}
$$
where $\bar\theta$ lies between $\hat\theta$ and $\theta_0$. Intuitively, the first term, $g(x,\theta_0) - g(x)$, is the bias of the density estimator $g(x,\hat\theta)$, which is zero if the assumption of correct model specification holds. The second term, $\frac{\partial}{\partial\theta} g(x,\bar\theta)'(\hat\theta - \theta_0)$, is due to the sampling error of the estimator $\hat\theta$, which is unavoidable no matter whether the model is correctly specified or not.
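To make the root-$T$ rate concrete, below is a minimal Monte Carlo sketch in Python, assuming i.i.d. draws from $N(\mu_0, \sigma_0^2)$ so that the model is correctly specified and the bias term vanishes. The specific values of mu0, sigma0, x_eval, and n_reps are illustrative choices, not part of the notes.

```python
import numpy as np

# Sketch: check the root-T convergence of the MLE (mu_hat, sigma2_hat) and of
# the plug-in density estimate g(x, theta_hat) under a correctly specified
# N(mu0, sigma0^2) model.  All constants below are illustrative assumptions.

rng = np.random.default_rng(0)
mu0, sigma0 = 1.0, 2.0   # true parameter values theta_0 = (mu0, sigma0^2)'
x_eval = 0.5             # evaluation point x for g(x, theta)
n_reps = 2000            # Monte Carlo replications per sample size

def gaussian_pdf(x, mu, sigma2):
    """Parametric density g(x, theta) with theta = (mu, sigma^2)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

for T in (100, 400, 1600, 6400):
    X = rng.normal(mu0, sigma0, size=(n_reps, T))
    mu_hat = X.mean(axis=1)                                  # MLE of mu
    sigma2_hat = ((X - mu_hat[:, None]) ** 2).mean(axis=1)   # MLE of sigma^2
    # Estimation error g(x, theta_hat) - g(x, theta_0): the bias is zero under
    # correct specification, so the error is pure sampling error.
    err = gaussian_pdf(x_eval, mu_hat, sigma2_hat) - gaussian_pdf(x_eval, mu0, sigma0 ** 2)
    # If err = O_P(T^{-1/2}), then sqrt(T) * err should have a stable spread.
    print(f"T={T:5d}  sd(mu_hat - mu0)={np.std(mu_hat - mu0):.4f}  "
          f"sd(sqrt(T)*err)={np.std(np.sqrt(T) * err):.4f}")
```

If the rate is indeed $O_P(T^{-1/2})$, each quadrupling of $T$ should roughly halve the spread of $\hat\mu - \mu_0$, while the spread of $\sqrt{T}\,[g(x,\hat\theta) - g(x,\theta_0)]$ stays roughly constant across $T$.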