then the EL attains its maximum L_n(\hat\beta) = n^{-n} at \hat\beta. In the parametric regression we are considering, the number of parameters and the number of equations in (10) are the same. Hence, (10) has a solution \hat\beta with probability approaching one in large samples. There are inference situations where the number of estimating equations is larger than the number of parameters (strictly speaking, the dimension of the parameter space), for instance the Generalized Method of Moments in econometrics (Hansen, 1982). There, more model information is accounted for by imposing more moment restrictions, leading to more estimating equations than parameters in the model. In statistics, such additional equations typically arise from extra model information. In these so-called over-identified situations, the maximum EL, still denoted L_n(\hat\beta), may differ from n^{-n}. See Qin and Lawless (1994) for a discussion of this issue.

Following the convention of the standard parametric likelihood, we can define from (8) the log EL ratio

    r_n(\beta) = -2\log\{L_n(\beta)/L_n(\hat\beta)\}
               = 2\sum_{i=1}^{n} \log\Bigl\{1 + \lambda^{T}\,\frac{\partial m(X_i;\beta)}{\partial\beta}\,\bigl(Y_i - m(X_i;\beta)\bigr)\Bigr\}.    (11)
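As a concrete computational illustration (a minimal sketch, not from the paper: the helper name el_log_ratio and the simulated data are ours), the Python code below evaluates r_n(\beta) at a candidate \beta. It exploits the fact that \lambda maximizes the concave function \sum_i \log\{1 + \lambda^T g_i(\beta)\} with g_i(\beta) = (\partial m(X_i;\beta)/\partial\beta)(Y_i - m(X_i;\beta)), so a damped Newton iteration recovers \lambda and hence r_n(\beta) from (11); the demo then compares the result with the \chi^2_p calibration anticipated in (13) below.

    import numpy as np
    from scipy.stats import chi2

    def el_log_ratio(g, tol=1e-10, max_iter=50):
        """r_n(beta) = 2 * sum_i log(1 + lam' g_i), where lam solves
        sum_i g_i / (1 + lam' g_i) = 0, i.e. lam maximizes the concave
        dual h(lam) = sum_i log(1 + lam' g_i).  g: (n, p) array of g_i(beta)."""
        n, p = g.shape
        lam = np.zeros(p)
        for _ in range(max_iter):
            w = 1.0 + g @ lam                        # 1 + lam' g_i, length n
            grad = (g / w[:, None]).sum(axis=0)      # h'(lam) = sum g_i / w_i
            if np.linalg.norm(grad) < tol:
                break
            hess = -(g / w[:, None] ** 2).T @ g      # h''(lam) = -sum g_i g_i'/w_i^2
            step = np.linalg.solve(hess, -grad)      # Newton ascent direction
            t = 1.0
            while np.any(1.0 + g @ (lam + t * step) <= 1.0 / n):
                t *= 0.5                             # damp: keep EL weights in (0, 1)
            lam = lam + t * step
        return 2.0 * np.sum(np.log(1.0 + g @ lam))

    # Toy check of the chi^2_p calibration: linear model m(x; beta) = x'beta,
    # so g_i(beta) = x_i (y_i - x_i'beta).  The error law is left unspecified.
    rng = np.random.default_rng(0)
    n, p = 200, 2
    beta0 = np.array([1.0, -0.5])
    X = rng.normal(size=(n, p))
    Y = X @ beta0 + rng.standard_t(df=5, size=n)     # heavy-tailed errors are fine
    g = X * (Y - X @ beta0)[:, None]
    r = el_log_ratio(g)
    print(r, r <= chi2.ppf(0.95, df=p))              # True with prob. ~0.95 by (13)

Inverting this computation over a grid of candidate values yields the EL confidence region \{\beta : r_n(\beta) \le \chi^2_{p,1-\alpha}\}, calibrated by the limit in (13) below without estimating any variance.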
Wilks' theorem (Wilks, 1938) is a key property of the parametric likelihood ratio. If we replace the EL L_n(\beta) by the corresponding parametric likelihood, say L_{pn}(\beta), and use r_{pn}(\beta) to denote the parametric likelihood ratio, then according to Wilks' theorem, under certain regularity conditions,

    r_{pn}(\beta_0) \xrightarrow{d} \chi^{2}_{p} \quad \text{as } n \to \infty.    (12)

This property is maintained by the EL, as demonstrated in Owen (1990) for the mean parameter, Owen (1991) for linear regression, and many other situations (Qin and Lawless, 1994; Molanes López, Van Keilegom and Veraverbeke, 2009). In the context of parametric regression,

    r_n(\beta_0) \xrightarrow{d} \chi^{2}_{p} \quad \text{as } n \to \infty.    (13)

This can be viewed as a nonparametric version of Wilks' theorem, and it is quite remarkable that the empirical likelihood attains such a property in a nonparametric setting with far weaker distributional assumptions. We call this sharing of Wilks' theorem the first-order analogue between the parametric and the empirical likelihood.

To appreciate why the nonparametric version of Wilks' theorem holds, we present a few steps of the derivation that offer some insight into the nonparametric likelihood. Typically, the first step in a study of EL is to consider an expansion of \lambda, defined in (7), at \beta_0, the true value of \beta, and to determine its order of magnitude. It can be shown that, for the current parametric regression,

    \lambda = O_p(n^{-1/2}).    (14)

Such a rate for \lambda is obtained in the original papers of Owen (1988, 1990) for the mean parameter (which can be treated as a trivial case of regression without covariates), in Owen (1991) for linear regression, and in Qin and Lawless (1994) and Molanes López, Van Keilegom and Veraverbeke (2009) for the more general case of estimating equations.
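To see where (14) comes from, the following heuristic sketch conveys the essential idea (suppressing the regularity conditions treated rigorously in the papers cited above, and assuming, as is standard, that \lambda in (7) solves \sum_i g_i/(1 + \lambda^T g_i) = 0 with g_i = (\partial m(X_i;\beta_0)/\partial\beta)(Y_i - m(X_i;\beta_0))). Linearizing the defining equation for \lambda gives

    0 \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{g_i}{1+\lambda^{T}g_i}
      \;=\; \bar g \;-\; S_n\,\lambda \;+\; O_p(\|\lambda\|^{2}),
    \qquad \bar g = \frac{1}{n}\sum_{i=1}^{n} g_i,
    \quad S_n = \frac{1}{n}\sum_{i=1}^{n} g_i g_i^{T},

so that \lambda \approx S_n^{-1}\bar g. Since E\{g_i(\beta_0)\} = 0 under the model, \bar g = O_p(n^{-1/2}) by the central limit theorem, while S_n converges in probability to a positive definite matrix; hence \lambda = O_p(n^{-1/2}), which is (14). Substituting this \lambda back into (11) and expanding the logarithm to second order yields

    r_n(\beta_0) \;=\; n\,\bar g^{\,T} S_n^{-1}\,\bar g \;+\; o_p(1) \;\xrightarrow{d}\; \chi^{2}_{p},

which is exactly the Wilks-type limit (13).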