Ch. 17 Maximum Likelihood Estimation

The identification process having led to a tentative formulation for the model, we then need to obtain efficient estimates of the parameters. After the parameters have been estimated, the fitted model will be subjected to diagnostic checks. This chapter contains a general account of the likelihood method for estimating the parameters in the stochastic model.

Consider an ARMA(p, q) model (from model identification) of the form

Yt = c + φ1 Yt−1 + φ2 Yt−2 + … + φp Yt−p + εt + θ1 εt−1 + θ2 εt−2 + … + θq εt−q,

with εt white noise:

E(εt) = 0,
E(εt ετ) = σ² for t = τ, and 0 otherwise.
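To make the model concrete, here is a minimal sketch of how one might simulate the ARMA(p, q) recursion above with Gaussian white noise. The function name `simulate_arma` and the burn-in device are illustrative choices, not part of the text; pre-sample values of Y and ε are simply set to zero and an initial stretch of draws is discarded so the retained sample is close to the stationary distribution.

```python
import numpy as np

def simulate_arma(c, phi, theta, sigma, T, burn=200, seed=0):
    """Simulate Y_t = c + sum_i phi_i Y_{t-i} + eps_t + sum_j theta_j eps_{t-j},
    with eps_t ~ i.i.d. N(0, sigma^2).  Pre-sample terms are treated as zero,
    and the first `burn` observations are discarded (illustrative convention)."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    eps = rng.normal(0.0, sigma, T + burn)
    y = np.zeros(T + burn)
    for t in range(T + burn):
        # autoregressive part: phi_1 Y_{t-1} + ... + phi_p Y_{t-p}
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        # moving-average part: theta_1 eps_{t-1} + ... + theta_q eps_{t-q}
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        y[t] = c + ar + eps[t] + ma
    return y[burn:]

# Example: ARMA(1,1) with c = 1, phi_1 = 0.5, theta_1 = 0.3, sigma = 1.
# The stationary mean is c / (1 - phi_1) = 2.
y = simulate_arma(c=1.0, phi=[0.5], theta=[0.3], sigma=1.0, T=500)
```

The sample mean of a long simulated path should be close to the stationary mean c/(1 − φ1), which gives a quick sanity check on the recursion.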
This chapter explores how to estimate the values of (c, φ1, …, φp, θ1, …, θq, σ²) on the basis of observations on Y. The primary principle on which estimation will be based is maximum likelihood estimation. Let θ = (c, φ1, …, φp, θ1, …, θq, σ²)′ denote the vector of population parameters. Suppose we have observed a sample of size T: (y1, y2, …, yT). The approach will be to calculate the joint probability density

fYT,YT−1,…,Y1 (yT, yT−1, …, y1; θ),   (1)

which might loosely be viewed as the probability of having observed this particular sample. The maximum likelihood estimate (MLE) of θ is the value for which this sample is most likely to have been observed; that is, it is the value of θ that maximizes (1). This approach requires specifying a particular distribution for the white noise process εt. Typically we will assume that εt is Gaussian white noise:

εt ∼ i.i.d. N(0, σ²).