and define $\hat{\alpha}^*$ as the solution to
$$\left.\frac{\partial \ln L^*}{\partial \alpha}\right|_{\hat{\alpha}^*} = 0. \tag{12}$$
We call $L^*(\alpha)$ the concentrated likelihood function of $\alpha$. It can be shown that the MLE of $\alpha$ obtained from (8) and (9) simultaneously, $\hat{\alpha}$, and the one obtained from the concentrated likelihood (12), $\hat{\alpha}^*$, are identical and have the same limiting distribution.

3.1.2 Calculate Auxiliary Regressions

The first step involves concentrating the likelihood function. This means taking $\Omega$ and $\xi_0$ as given and maximizing (7) with respect to $(c, \xi_1, \xi_2, \ldots, \xi_{p-1})$.
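The concentration step can be illustrated on a much simpler model. The sketch below, a toy Gaussian example rather than the VECM likelihood of this section, fixes the parameter of interest $\mu$, maximizes over the nuisance variance analytically, and then maximizes the resulting concentrated log-likelihood; the maximizer agrees with the joint MLE, as the text asserts for $\hat{\alpha}$ and $\hat{\alpha}^*$:

```python
# Illustrative sketch (not the VECM likelihood from the text):
# for y_i ~ N(mu, s2), fixing mu and maximizing over the nuisance
# parameter s2 gives s2*(mu) = mean((y - mu)^2); substituting back
# yields the concentrated log-likelihood L*(mu).
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=500)
n = y.size

def concentrated_loglik(mu):
    # s2*(mu) maximizes the joint log-likelihood for fixed mu
    s2_star = np.mean((y - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2_star) + 1.0)

# Maximize L*(mu) over a fine grid; the maximizer should match the
# joint MLE, which for this model is the sample mean.
grid = np.linspace(y.mean() - 1, y.mean() + 1, 20001)
mu_star = grid[np.argmax([concentrated_loglik(m) for m in grid])]

print(abs(mu_star - y.mean()) < 1e-3)  # concentrated MLE agrees with joint MLE
```

The same logic underlies the VECM case: for each fixed $\xi_0$, the remaining coefficients are maximized out, leaving a function of $\xi_0$ (and ultimately $\alpha$) alone.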
This restricted maximization problem takes the form of a seemingly unrelated regression of the elements of the $(k \times 1)$ vector $\Delta y_t - \xi_0 y_{t-1}$ on a constant and the explanatory variables $(\Delta y_{t-1}, \Delta y_{t-2}, \ldots, \Delta y_{t-p+1})$. Since each of the $k$ regressions in this system has identical explanatory variables, the estimates of $(c, \xi_1, \xi_2, \ldots, \xi_{p-1})$ can be obtained from an OLS regression of each element of $\Delta y_t - \xi_0 y_{t-1}$ on a constant and $(\Delta y_{t-1}, \Delta y_{t-2}, \ldots, \Delta y_{t-p+1})$. Denote the values of $(c, \xi_1, \xi_2, \ldots, \xi_{p-1})$ that maximize (7) for a given value of $\xi_0$ (and $\Omega$, although $\Omega$ does not matter, by the properties of the SURE model) by
$$\left[\hat{c}^*(\xi_0),\ \hat{\xi}_1^*(\xi_0),\ \hat{\xi}_2^*(\xi_0),\ \ldots,\ \hat{\xi}_{p-1}^*(\xi_0)\right].$$
These values are characterized by the condition that the following residual vector must have sample mean zero and be orthogonal to $(\Delta y_{t-1}, \Delta y_{t-2}, \ldots, \Delta y_{t-p+1})$:
$$\left[\Delta y_t - \xi_0 y_{t-1}\right] - \left\{\hat{c}^*(\xi_0) + \hat{\xi}_1^*(\xi_0)\Delta y_{t-1} + \hat{\xi}_2^*(\xi_0)\Delta y_{t-2} + \cdots + \hat{\xi}_{p-1}^*(\xi_0)\Delta y_{t-p+1}\right\}. \tag{13}$$
To obtain (13) with $\xi_0$ unknown (although we assume it is known at this stage to form the concentrated log-likelihood function), we may form two auxiliary regressions and estimate them by OLS to get
$$\Delta y_t = \hat{\pi}_0 + \hat{\Pi}_1 \Delta y_{t-1} + \hat{\Pi}_2 \Delta y_{t-2} + \cdots + \hat{\Pi}_{p-1} \Delta y_{t-p+1} + \hat{u}_t \tag{14}$$
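The first auxiliary regression (14) can be sketched directly in code. The example below uses simulated data (the VAR(1) generating process is a stand-in, not the model from the text) and exploits the point made above: because all $k$ equations share the same regressors, the SUR estimates coincide with equation-by-equation OLS, so a single matrix least-squares call suffices. The residuals then satisfy exactly the condition stated for (13): sample mean zero and orthogonality to the lagged differences.

```python
# Sketch of auxiliary regression (14): OLS of Δy_t on a constant and
# (Δy_{t-1}, ..., Δy_{t-p+1}), all k equations at once. Simulated data;
# the persistent VAR(1) below is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(1)
k, p, T = 2, 2, 300                      # k series, p lags in levels
y = np.zeros((T, k))
for t in range(1, T):                    # simple persistent VAR(1) sample path
    y[t] = 0.9 * y[t - 1] + rng.normal(size=k)

dy = np.diff(y, axis=0)                  # Δy_t, shape (T-1, k)
idx = np.arange(p - 1, dy.shape[0])      # usable observations
# Regressor matrix: constant plus the p-1 lagged differences
X = np.column_stack([np.ones(idx.size)] + [dy[idx - j] for j in range(1, p)])
Y = dy[idx]                              # left-hand side Δy_t

B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # rows: π̂0, then Π̂1, ..., Π̂_{p-1}
u_hat = Y - X @ B                        # residuals û_t

# Normal equations: residuals have sample mean zero and are orthogonal
# to the regressors -- the characterization given for (13).
print(np.allclose(X.T @ u_hat, 0.0, atol=1e-8))
```

With $\xi_0$ unknown, the second auxiliary regression of the procedure treats $y_{t-1}$ as the dependent variable in the same way; only the first is shown here, matching equation (14).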