inference. The regression models considered in this review cover parametric, nonparametric and semiparametric regression models. In addition to the case of completely observed data, we also accommodate missing and censored data in this review.

The EL method (Owen, 1988, 1990) owes its broad usage and fast research development to a number of important advantages. Generally speaking, it combines the reliability of nonparametric methods with the effectiveness of the likelihood approach.
It yields confidence regions that respect the boundaries of the support of the target parameter. The regions are invariant under transformations and often behave better than confidence regions based on asymptotic normality when the sample size is small. Moreover, they are of natural shape and orientation, since the regions are obtained by contouring a log-likelihood ratio, and they often do not require the estimation of the variance, as the studentization is carried out internally via the optimization procedure. The EL method is appealing not only for constructing confidence regions, but it also has unique attractions in parameter estimation and in formulating goodness-of-fit tests.

2 Parametric regression

Suppose that we observe a sample of independent observations $\{(X_i^T, Y_i)^T\}_{i=1}^n$, where each $Y_i$ is regarded as the response of a $d$-dimensional design (covariate) variable $X_i$. The primary interest here is in the conditional mean function (regression function) of $Y_i$ given $X_i$. One distinguishes between the design $X_i$ being either fixed or random. Although regression is conventionally associated with fixed designs, for ease of presentation we will concentrate on random designs. The empirical likelihood analysis for fixed designs can usually be extended by regularizing the random designs.

Consider first the following parametric regression model:
$$Y_i = m(X_i; \beta) + \varepsilon_i, \quad \text{for } i = 1, \ldots, n, \qquad (1)$$
where $m(x; \beta)$ is the known regression function with an unknown $p$-dimensional ($p < n$) parameter $\beta \in \mathbb{R}^p$, and the errors $\varepsilon_i$ are independent random variables such that $E(\varepsilon_i \mid X_i) = 0$ and $\operatorname{Var}(\varepsilon_i \mid X_i) = \sigma^2(X_i)$ for some function $\sigma(\cdot)$. Hence, the errors can be heteroscedastic. We require, as in all empirical likelihood formulations, that the errors $\varepsilon_i$ have finite conditional variance, which is a minimum condition needed by the empirical likelihood method to ensure a limiting chi-square distribution for the empirical likelihood ratio.

The parametric regression function includes as special cases (i) the linear regression with $m(x; \beta) = x^T \beta$; and (ii) the generalized linear model (McCullagh and Nelder, 1989) with $m(x; \beta) = G(x^T \beta)$ and $\sigma^2(x) = \sigma_0^2 \, V\{G(x^T \beta)\}$ for a known link function $G$, a known variance function $V(\cdot)$, and an unknown constant $\sigma_0^2 > 0$. Note that for these two special cases $p = d$.

In the absence of model information on the conditional variance, the least squares (LS) regression estimator of $\beta$ is obtained by minimizing the sum of squares
$$S_n(\beta) = \sum_{i=1}^n \{Y_i - m(X_i; \beta)\}^2.$$
The LS estimator of $\beta$ is $\hat{\beta}_{ls} = \arg\inf_{\beta} S_n(\beta)$. When the regression function $m(x; \beta)$ is smooth enough with respect to $\beta$, $\hat{\beta}_{ls}$ will be a solution of the following estimating equation:
$$\sum_{i=1}^n \frac{\partial m(X_i; \beta)}{\partial \beta} \{Y_i - m(X_i; \beta)\} = 0.$$
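To make the setup concrete, here is a minimal Python sketch (not from the review; the exponential mean function, the variance function and all names are illustrative assumptions) that simulates data from model (1) with heteroscedastic errors and computes the LS estimator by numerically minimizing $S_n(\beta)$:

```python
# Minimal sketch: simulate model (1) with a nonlinear mean function and
# heteroscedastic errors, then minimize S_n(beta) to get the LS estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def m(x, beta):
    # Illustrative regression function m(x; beta) with p = 2, d = 1.
    return beta[0] * np.exp(beta[1] * x)

n = 200
X = rng.uniform(0.0, 2.0, size=n)
sigma = 0.5 * (1.0 + X)                      # sigma(x) varies with x
beta_true = np.array([1.0, 0.8])
Y = m(X, beta_true) + sigma * rng.standard_normal(n)

def S_n(beta):
    # S_n(beta) = sum_i {Y_i - m(X_i; beta)}^2
    return np.sum((Y - m(X, beta)) ** 2)

beta_ls = minimize(S_n, x0=np.array([0.5, 0.5]), method="Nelder-Mead").x
print("beta_ls =", beta_ls)                  # close to beta_true
```

Note that no variance model enters the fit, in line with the text: the LS criterion involves only the conditional mean. A derivative-free optimizer is used here; when $m(x; \beta)$ is smooth, one could equally solve the estimating equation above with a root finder or with `scipy.optimize.least_squares`.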
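The generalized linear model of special case (ii) fits the same template. The following hypothetical Poisson sketch (log link $G(u) = e^u$, variance function $V(\mu) = \mu$, $\sigma_0^2 = 1$; all names are assumptions) satisfies $\sigma^2(x) = V\{G(x^T \beta)\}$ by construction, and the same LS machinery applies since it requires no variance model:

```python
# Hypothetical sketch of special case (ii): Poisson responses with a
# log link, so m(x; beta) = G(x^T beta) = exp(x^T beta) and
# Var(Y | X = x) = V{G(x^T beta)} with V(mu) = mu and sigma_0^2 = 1.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

n, d = 300, 2                        # here p = d, as noted in the text
X = rng.normal(size=(n, d))
beta_true = np.array([0.3, -0.5])
mu = np.exp(X @ beta_true)           # conditional mean G(x^T beta)
Y = rng.poisson(mu)                  # conditional variance equals mu

S_n = lambda b: np.sum((Y - np.exp(X @ b)) ** 2)
beta_ls = minimize(S_n, x0=np.zeros(d), method="Nelder-Mead").x
print("beta_ls =", beta_ls)          # close to beta_true for moderate n
```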