3 MLE of a Gaussian MA(1) Process

This section discusses a Gaussian MA(1) process,

\[
Y_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1}, \tag{12}
\]

with \(\varepsilon_t \sim\) i.i.d. \(N(0, \sigma^2)\). In this case, the vector of population parameters to be estimated is \(\boldsymbol{\theta} = (\mu, \theta, \sigma^2)'\).

3.1 Evaluating the Likelihood Function

Collect the observations in the sample \((Y_1, Y_2, \ldots, Y_T)\) in a \((T \times 1)\) vector \(\mathbf{y}\), which has mean vector \(\boldsymbol{\mu}\) (each element equal to \(\mu\)) and variance-covariance matrix given by

\[
\Omega = E(\mathbf{y} - \boldsymbol{\mu})(\mathbf{y} - \boldsymbol{\mu})' =
\sigma^2
\begin{bmatrix}
(1+\theta^2) & \theta & 0 & \cdots & 0 \\
\theta & (1+\theta^2) & \theta & \cdots & 0 \\
0 & \theta & (1+\theta^2) & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & (1+\theta^2)
\end{bmatrix}.
\]

The likelihood function is then

\[
f_{Y_T, Y_{T-1}, \ldots, Y_1}(y_T, y_{T-1}, \ldots, y_1; \boldsymbol{\theta})
= (2\pi)^{-T/2}\, |\Omega|^{-1/2}
\exp\!\left[-\tfrac{1}{2}(\mathbf{y} - \boldsymbol{\mu})' \Omega^{-1} (\mathbf{y} - \boldsymbol{\mu})\right].
\]

Using the triangular factorization of the variance-covariance matrix, the likelihood function can be written

\[
f_{Y_T, Y_{T-1}, \ldots, Y_1}(y_T, y_{T-1}, \ldots, y_1; \boldsymbol{\theta})
= (2\pi)^{-T/2} \left[\prod_{t=1}^{T} d_{tt}\right]^{-1/2}
\exp\!\left[-\tfrac{1}{2} \sum_{t=1}^{T} \frac{\tilde{y}_t^2}{d_{tt}}\right],
\]

and the log-likelihood is therefore

\[
\mathcal{L}(\boldsymbol{\theta}) = \log f_{Y_T, Y_{T-1}, \ldots, Y_1}(y_T, y_{T-1}, \ldots, y_1; \boldsymbol{\theta})
= -\frac{T}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{T} \log d_{tt} - \frac{1}{2}\sum_{t=1}^{T} \frac{\tilde{y}_t^2}{d_{tt}}.
\]
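Although the excerpt above does not spell out the factorization \(\Omega = \mathbf{A}\mathbf{D}\mathbf{A}'\), for this tridiagonal \(\Omega\) the terms \(d_{tt}\) and \(\tilde{y}_t\) take a simple recursive form (a standard consequence of the factorization, added here as a supplement rather than quoted from the text):

\[
d_{tt} = \sigma^2\,\frac{1 + \theta^2 + \theta^4 + \cdots + \theta^{2t}}{1 + \theta^2 + \theta^4 + \cdots + \theta^{2(t-1)}},
\qquad
\tilde{y}_t = (y_t - \mu) - \frac{\theta\left[1 + \theta^2 + \cdots + \theta^{2(t-2)}\right]}{1 + \theta^2 + \cdots + \theta^{2(t-1)}}\,\tilde{y}_{t-1},
\]

with \(\tilde{y}_1 = y_1 - \mu\). Only the diagonal and first off-diagonal of \(\Omega\) enter these recursions, which is what makes the factorized form of the likelihood cheap to evaluate.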
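To make the computation concrete, the following sketch evaluates the exact log-likelihood both ways: directly from \(\Omega\) and via a triangular factorization recovered from a Cholesky decomposition. It is a minimal illustration assuming NumPy and SciPy are available; the parameter values, sample size, and variable names are arbitrary choices for the example, not taken from the text.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Hypothetical parameter values and sample size (illustration only)
mu, theta, sigma2 = 0.5, 0.4, 1.0
T = 200

# Simulate Y_t = mu + eps_t + theta * eps_{t-1}, with eps_t ~ i.i.d. N(0, sigma2)
rng = np.random.default_rng(0)
eps = rng.normal(0.0, np.sqrt(sigma2), T + 1)
y = mu + eps[1:] + theta * eps[:-1]
resid = y - mu

# (T x T) covariance matrix: (1 + theta^2) on the diagonal, theta on the
# first off-diagonals, zeros elsewhere
Omega = sigma2 * ((1.0 + theta**2) * np.eye(T)
                  + theta * np.eye(T, k=1)
                  + theta * np.eye(T, k=-1))

# Direct evaluation:
# -T/2 log(2 pi) - 1/2 log|Omega| - 1/2 (y - mu)' Omega^{-1} (y - mu)
_, logdet = np.linalg.slogdet(Omega)
quad = resid @ np.linalg.solve(Omega, resid)
loglik_direct = -0.5 * T * np.log(2.0 * np.pi) - 0.5 * logdet - 0.5 * quad

# Triangular factorization Omega = A D A', recovered from the Cholesky factor
# L = A D^{1/2}: d_tt = L[t, t]^2 and y_tilde = A^{-1} (y - mu)
L = cholesky(Omega, lower=True)
d = np.diag(L) ** 2
y_tilde = np.sqrt(d) * solve_triangular(L, resid, lower=True)

loglik_fact = (-0.5 * T * np.log(2.0 * np.pi)
               - 0.5 * np.sum(np.log(d))
               - 0.5 * np.sum(y_tilde**2 / d))

print(loglik_direct, loglik_fact)              # the two evaluations agree
assert np.isclose(loglik_direct, loglik_fact)
```

In practice one would exploit the band structure of \(\Omega\) rather than form and factor the full \((T \times T)\) matrix: the recursion for \(d_{tt}\) and \(\tilde{y}_t\) touches only the diagonal and first off-diagonal, so the log-likelihood can be evaluated in \(O(T)\) operations instead of the \(O(T^3)\) cost of a dense factorization or inverse.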