
National Sun Yat-sen University: Econometrics (English), Chapter 18: Vector Time Series


Ch. 18 Vector Time Series

1 Introduction

In dealing with economic variables, the value of one variable is often related not only to its own predecessors in time but also to past values of other variables. This naturally extends the concept of a univariate stochastic process to vector time series analysis. This chapter describes the dynamic interactions among a set of variables collected in a $(k \times 1)$ vector $y_t$.

Definition:
Let $(S, \mathcal{F}, P)$ be a probability space and $T$ an index set of real numbers, and define the $k$-dimensional vector function $y(\cdot,\cdot)$ by $y(\cdot,\cdot): S \times T \rightarrow \mathbb{R}^k$. The ordered sequence of random vectors $\{y(\cdot,t),\ t \in T\}$ is called a vector stochastic process.

1.1 First Two Moments of a Stationary Vector Time Series

From now on in this chapter we follow convention and write $y_t$ instead of $y(\cdot,t)$ to indicate that we are considering a discrete vector time series. The first two moments of a vector time series $y_t$ are

$$E(y_t) = \mu_t \quad \text{and} \quad \Gamma_{t,j} = E[(y_t - \mu_t)(y_{t-j} - \mu_{t-j})'] \quad \text{for all } t \in T.$$

If neither $\mu_t$ nor $\Gamma_{t,j}$ is a function of $t$, that is, $\mu_t = \mu$ and $\Gamma_{t,j} = \Gamma_j$, then we say that $y_t$ is a covariance-stationary vector process. Note that although $\gamma_j = \gamma_{-j}$ for a scalar stationary process, the same is not true of a vector process: $\Gamma_j \neq \Gamma_{-j}$. Instead, the correct relation is

$$\Gamma_j' = \Gamma_{-j},$$

since

$$\Gamma_j = E[(y_{t+j} - \mu)(y_{(t+j)-j} - \mu)'] = E[(y_{t+j} - \mu)(y_t - \mu)'],$$

and, taking transposes,

$$\Gamma_j' = E[(y_t - \mu)(y_{t+j} - \mu)'] = E[(y_t - \mu)(y_{t-(-j)} - \mu)'] = \Gamma_{-j}.$$
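To see this asymmetry concretely, the following Python sketch simulates a simple bivariate process in which the first variable depends on the lagged second variable, and computes the sample counterparts of $\Gamma_1$ and $\Gamma_{-1}$. The coefficient matrix `A`, the sample size, and the seed are arbitrary illustrative choices, not values from these notes. The two sample matrices differ from each other, while the transpose of the first equals the second, illustrating $\Gamma_j' = \Gamma_{-j}$.

```python
import numpy as np

def sample_autocov(y, j):
    """Sample version of Gamma_j = E[(y_t - mu)(y_{t-j} - mu)'] for a (T x k) data array."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean(axis=0)
    T = y.shape[0]
    if j >= 0:
        return d[j:].T @ d[:T - j] / T   # pairs (y_t, y_{t-j}), t = j, ..., T-1
    return d[:T + j].T @ d[-j:] / T      # j < 0: pairs (y_t, y_{t+|j|})

rng = np.random.default_rng(0)
T = 5000
A = np.array([[0.5, 0.3],                # illustrative cross-lagged dependence:
              [0.0, 0.2]])               # variable 1 responds to lagged variable 2, not vice versa
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

G1, Gm1 = sample_autocov(y, 1), sample_autocov(y, -1)
print(np.round(G1, 3))                   # sample Gamma_1
print(np.round(Gm1, 3))                  # sample Gamma_{-1}: differs from Gamma_1 ...
print(np.round(G1.T - Gm1, 3))           # ... but Gamma_1' = Gamma_{-1} holds (zero matrix)
```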


1.2 Vector White Noise Process

Definition:
A $k \times 1$ vector process $\{\varepsilon_t,\ t \in T\}$ is said to be a white-noise process if

(i). $E(\varepsilon_t) = 0$;
(ii). $E(\varepsilon_t \varepsilon_\tau') = \Omega$ if $t = \tau$, and $E(\varepsilon_t \varepsilon_\tau') = 0$ if $t \neq \tau$,

where $\Omega$ is a $(k \times k)$ symmetric positive definite matrix. It is important to note that in general $\Omega$ is not necessarily a diagonal matrix; it is precisely this contemporaneous correlation among the variables that creates the need for vector time series analysis.

1.3 Vector MA(q) Process

A vector moving average process of order $q$ takes the form

$$y_t = \mu + \varepsilon_t + \Theta_1 \varepsilon_{t-1} + \Theta_2 \varepsilon_{t-2} + \cdots + \Theta_q \varepsilon_{t-q},$$

where $\varepsilon_t$ is a vector white noise process and $\Theta_j$ denotes a $(k \times k)$ matrix of MA coefficients for $j = 1, 2, \ldots, q$. The mean of $y_t$ is $\mu$, and the variance is

$$\Gamma_0 = E[(y_t - \mu)(y_t - \mu)'] = E[\varepsilon_t \varepsilon_t'] + \Theta_1 E[\varepsilon_{t-1}\varepsilon_{t-1}']\Theta_1' + \Theta_2 E[\varepsilon_{t-2}\varepsilon_{t-2}']\Theta_2' + \cdots + \Theta_q E[\varepsilon_{t-q}\varepsilon_{t-q}']\Theta_q' = \Omega + \Theta_1\Omega\Theta_1' + \Theta_2\Omega\Theta_2' + \cdots + \Theta_q\Omega\Theta_q',$$

with autocovariances (compare with $\gamma_j$ of Ch. 14 on p. 3)

$$\Gamma_j = \begin{cases} \Theta_j\Omega + \Theta_{j+1}\Omega\Theta_1' + \Theta_{j+2}\Omega\Theta_2' + \cdots + \Theta_q\Omega\Theta_{q-j}' & \text{for } j = 1, 2, \ldots, q, \\ \Omega\Theta_{-j}' + \Theta_1\Omega\Theta_{-j+1}' + \Theta_2\Omega\Theta_{-j+2}' + \cdots + \Theta_{q+j}\Omega\Theta_q' & \text{for } j = -1, -2, \ldots, -q, \\ 0 & \text{for } |j| > q, \end{cases}$$

where $\Theta_0 = I_k$. Thus any vector MA(q) process is covariance-stationary.
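As a computational sketch (not part of the derivation above), the helper below evaluates $\Gamma_j$ for a vector MA(q) directly from these formulas, with $\Theta_0 = I_k$ implicit. The MA(1) coefficient matrix and innovation covariance in the example are arbitrary illustrative values.

```python
import numpy as np

def ma_autocov(Theta, Omega, j):
    """Gamma_j for y_t = mu + eps_t + Theta_1 eps_{t-1} + ... + Theta_q eps_{t-q}.
    Theta is the list [Theta_1, ..., Theta_q]; Theta_0 = I_k is implicit."""
    k = Omega.shape[0]
    q = len(Theta)
    Th = [np.eye(k)] + [np.asarray(T) for T in Theta]          # Theta_0, ..., Theta_q
    if abs(j) > q:
        return np.zeros((k, k))                                # Gamma_j = 0 for |j| > q
    if j < 0:
        return ma_autocov(Theta, Omega, -j).T                  # Gamma_{-j} = Gamma_j'
    return sum(Th[j + v] @ Omega @ Th[v].T for v in range(q - j + 1))

Theta1 = np.array([[0.6, 0.2], [-0.1, 0.4]])                   # illustrative MA(1) coefficients
Omega  = np.array([[1.0, 0.3], [0.3, 2.0]])                    # illustrative innovation covariance
print(ma_autocov([Theta1], Omega, 0))                          # Omega + Theta_1 Omega Theta_1'
print(ma_autocov([Theta1], Omega, 1))                          # Theta_1 Omega
print(ma_autocov([Theta1], Omega, 2))                          # zero matrix
```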


1.4 Vector MA(∞) Process

The vector MA(∞) process is written

$$y_t = \mu + \varepsilon_t + \Psi_1\varepsilon_{t-1} + \Psi_2\varepsilon_{t-2} + \cdots,$$

where $\varepsilon_t$ is a vector white noise process and $\Psi_j$ denotes a $(k \times k)$ matrix of MA coefficients.

Definition:
For an $(n \times m)$ matrix $H$, the sequence of matrices $\{H_s\}_{s=-\infty}^{\infty}$ is absolutely summable if each of its elements forms an absolutely summable scalar sequence.

Example:
If $\psi_{ij}^{(s)}$ denotes the row $i$, column $j$ element of the moving average parameter matrix $\Psi_s$ associated with lag $s$, then the sequence $\{\Psi_s\}_{s=0}^{\infty}$ is absolutely summable if

$$\sum_{s=0}^{\infty} \left|\psi_{ij}^{(s)}\right| < \infty \quad \text{for } i = 1, 2, \ldots, k \text{ and } j = 1, 2, \ldots, k.$$

Theorem:
Let

$$y_t = \mu + \varepsilon_t + \Psi_1\varepsilon_{t-1} + \Psi_2\varepsilon_{t-2} + \cdots,$$

where $\varepsilon_t$ is a vector white noise process and $\{\Psi_l\}_{l=0}^{\infty}$ is absolutely summable. Let $y_{it}$ denote the $i$th element of $y_t$, and let $\mu_i$ denote the $i$th element of $\mu$. Then

(a). the autocovariance between the $i$th variable at time $t$ and the $j$th variable $s$ periods earlier, $E(y_{it} - \mu_i)(y_{j,t-s} - \mu_j)$, exists and is given by the row $i$, column $j$ element of

$$\Gamma_s = \sum_{v=0}^{\infty} \Psi_{s+v}\,\Omega\,\Psi_v' \quad \text{for } s = 0, 1, 2, \ldots;$$


(b). the sequence of matrices $\{\Gamma_s\}_{s=0}^{\infty}$ is absolutely summable.

Proof:
(a). By definition, $\Gamma_s = E(y_t - \mu)(y_{t-s} - \mu)'$, or

$$\Gamma_s = E\left\{[\varepsilon_t + \Psi_1\varepsilon_{t-1} + \Psi_2\varepsilon_{t-2} + \cdots + \Psi_s\varepsilon_{t-s} + \Psi_{s+1}\varepsilon_{t-s-1} + \cdots]\,[\varepsilon_{t-s} + \Psi_1\varepsilon_{t-s-1} + \Psi_2\varepsilon_{t-s-2} + \cdots]'\right\} = \Psi_s\Omega\Psi_0' + \Psi_{s+1}\Omega\Psi_1' + \Psi_{s+2}\Omega\Psi_2' + \cdots = \sum_{v=0}^{\infty} \Psi_{s+v}\,\Omega\,\Psi_v' \quad \text{for } s = 0, 1, 2, \ldots$$

The row $i$, column $j$ element of $\Gamma_s$ is therefore the autocovariance between the $i$th variable at time $t$ and the $j$th variable $s$ periods earlier, $E(y_{it} - \mu_i)(y_{j,t-s} - \mu_j)$.

(b).
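A rough numerical check of the formula in part (a) can be made by choosing a particular absolutely summable sequence, say $\Psi_s = A^s$ with the eigenvalues of $A$ inside the unit circle, and truncating the infinite sum at a large horizon. The matrices `A` and `Omega` below are arbitrary illustrative values, and the truncation point is purely a practical device, not part of the theory.

```python
import numpy as np

def ma_inf_autocov(Psi, Omega, s, truncate=200):
    """Approximate Gamma_s = sum_{v>=0} Psi_{s+v} Omega Psi_v' by truncating the infinite sum."""
    k = Omega.shape[0]
    G = np.zeros((k, k))
    for v in range(truncate):
        G += Psi(s + v) @ Omega @ Psi(v).T
    return G

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])                        # illustrative matrix with eigenvalues inside the unit circle
Psi = lambda j: np.linalg.matrix_power(A, j)      # Psi_j = A^j: an absolutely summable sequence
Omega = np.array([[1.0, 0.2],
                  [0.2, 1.0]])
print(np.round(ma_inf_autocov(Psi, Omega, 0), 4)) # approximate Gamma_0
print(np.round(ma_inf_autocov(Psi, Omega, 2), 4)) # approximate Gamma_2
```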


2 Vector Autoregressive Process, VAR

A $p$th-order vector autoregression, denoted VAR(p), is written as

$$y_t = c + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} + \cdots + \Phi_p y_{t-p} + \varepsilon_t, \tag{1}$$

where $c$ denotes a $(k \times 1)$ vector of constants, $\Phi_j$ a $(k \times k)$ matrix of autoregressive coefficients for $j = 1, 2, \ldots, p$, and $\varepsilon_t$ is a vector white noise process.

2.1 Population Characteristics

Let $c_i$ denote the $i$th element of the vector $c$ and let $\phi_{ij}^{(s)}$ denote the row $i$, column $j$ element of the matrix $\Phi_s$; then the first row of the vector system in (1) specifies that

$$y_{1t} = c_1 + \phi_{11}^{(1)} y_{1,t-1} + \phi_{12}^{(1)} y_{2,t-1} + \cdots + \phi_{1k}^{(1)} y_{k,t-1} + \phi_{11}^{(2)} y_{1,t-2} + \phi_{12}^{(2)} y_{2,t-2} + \cdots + \phi_{1k}^{(2)} y_{k,t-2} + \cdots + \phi_{11}^{(p)} y_{1,t-p} + \phi_{12}^{(p)} y_{2,t-p} + \cdots + \phi_{1k}^{(p)} y_{k,t-p} + \varepsilon_{1t}.$$

Thus, a vector autoregression is a system in which each variable is regressed on a constant and $p$ of its own lags as well as on $p$ lags of each of the other $(k - 1)$ variables in the VAR. Note that each regression has the same explanatory variables.

Using lag operator notation, (1) can be written in the form

$$[I_k - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p]\, y_t = c + \varepsilon_t,$$

or

$$\Phi(L)\, y_t = c + \varepsilon_t. \tag{2}$$

Here $\Phi(L)$ indicates a $k \times k$ matrix polynomial in the lag operator $L$. The row $i$, column $j$ element of $\Phi(L)$ is a scalar polynomial in $L$:

$$\Phi(L)_{ij} = \left[\delta_{ij} - \phi_{ij}^{(1)} L^1 - \phi_{ij}^{(2)} L^2 - \cdots - \phi_{ij}^{(p)} L^p\right],$$

where $\delta_{ij}$ is unity if $i = j$ and zero otherwise.
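A brief simulation sketch of equation (1) may help fix ideas: it draws $\varepsilon_t \sim N(0, \Omega)$ and applies the recursion directly. All numerical values below (`c`, `Phi1`, `Phi2`, `Omega`, the sample size) are illustrative, and the pre-sample values are simply set to zero.

```python
import numpy as np

def simulate_var(c, Phi, Omega, T, rng):
    """Simulate y_t = c + Phi_1 y_{t-1} + ... + Phi_p y_{t-p} + eps_t with eps_t ~ N(0, Omega).
    Phi is the list [Phi_1, ..., Phi_p]."""
    k, p = len(c), len(Phi)
    y = np.zeros((T + p, k))                       # first p rows are zero start-up values
    chol = np.linalg.cholesky(Omega)
    for t in range(p, T + p):
        eps = chol @ rng.standard_normal(k)
        y[t] = c + sum(Phi[j] @ y[t - j - 1] for j in range(p)) + eps
    return y[p:]

rng = np.random.default_rng(1)
c     = np.array([0.1, -0.2])
Phi1  = np.array([[0.5, 0.1], [0.3, 0.2]])         # illustrative coefficient matrices
Phi2  = np.array([[0.1, 0.0], [0.0, 0.1]])
Omega = np.array([[1.0, 0.4], [0.4, 1.5]])
y = simulate_var(c, [Phi1, Phi2], Omega, T=500, rng=rng)
print(y.shape, y.mean(axis=0))
```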


If the VAR(p) process is stationary, we can take expectations of both sides of (1) to calculate the mean $\mu$ of the process:

$$\mu = c + \Phi_1\mu + \Phi_2\mu + \cdots + \Phi_p\mu,$$

or

$$\mu = (I_k - \Phi_1 - \Phi_2 - \cdots - \Phi_p)^{-1} c.$$

Equation (1) can then be written in terms of deviations from the mean as

$$(y_t - \mu) = \Phi_1(y_{t-1} - \mu) + \Phi_2(y_{t-2} - \mu) + \cdots + \Phi_p(y_{t-p} - \mu) + \varepsilon_t. \tag{3}$$

2.1.1 Conditions for Stationarity

As in the case of the univariate AR(p) process, it is helpful to rewrite (3) in terms of a VAR(1) process. Toward this end, define

$$\xi_t = \begin{bmatrix} y_t - \mu \\ y_{t-1} - \mu \\ \vdots \\ y_{t-p+1} - \mu \end{bmatrix}_{(kp \times 1)}, \tag{4}$$

$$F = \begin{bmatrix} \Phi_1 & \Phi_2 & \Phi_3 & \cdots & \Phi_{p-1} & \Phi_p \\ I_k & 0 & 0 & \cdots & 0 & 0 \\ 0 & I_k & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & I_k & 0 \end{bmatrix}_{(kp \times kp)}, \tag{5}$$

and

$$v_t = \begin{bmatrix} \varepsilon_t \\ 0 \\ \vdots \\ 0 \end{bmatrix}_{(kp \times 1)}.$$


The VAR(p) in (3) can then be rewritten as the following VAR(1):

$$\xi_t = F\xi_{t-1} + v_t, \tag{6}$$

which implies

$$\xi_{t+s} = v_{t+s} + F v_{t+s-1} + F^2 v_{t+s-2} + \cdots + F^{s-1} v_{t+1} + F^s \xi_t, \tag{7}$$

where

$$E(v_t v_s') = \begin{cases} Q & \text{for } t = s \\ 0 & \text{otherwise} \end{cases}
\qquad \text{and} \qquad
Q = \begin{bmatrix} \Omega & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}.$$

In order for the process to be covariance-stationary, the consequence of any given $\varepsilon_t$ must eventually die out. If the eigenvalues of $F$ all lie inside the unit circle, then the VAR turns out to be covariance-stationary.

Proposition:
The eigenvalues of the matrix $F$ in (5) satisfy

$$\left|I_k\lambda^p - \Phi_1\lambda^{p-1} - \Phi_2\lambda^{p-2} - \cdots - \Phi_p\right| = 0. \tag{8}$$

Hence, a VAR(p) is covariance-stationary as long as $|\lambda| < 1$ for all values of $\lambda$ satisfying (8). Equivalently, the VAR is stationary if all values of $z$ satisfying

$$\left|I_k - \Phi_1 z - \Phi_2 z^2 - \cdots - \Phi_p z^p\right| = 0$$

lie outside the unit circle.
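In practice this condition is easy to check numerically: stack $\Phi_1, \ldots, \Phi_p$ into the companion matrix $F$ of (5) and compute the moduli of its eigenvalues. The sketch below does this for an illustrative (and, as it happens, stationary) VAR(2); the coefficient matrices are arbitrary.

```python
import numpy as np

def companion(Phi):
    """Stack [Phi_1, ..., Phi_p] into the (kp x kp) companion matrix F of equation (5)."""
    p = len(Phi)
    k = Phi[0].shape[0]
    F = np.zeros((k * p, k * p))
    F[:k, :] = np.hstack(Phi)               # top block row: Phi_1, ..., Phi_p
    F[k:, :-k] = np.eye(k * (p - 1))        # identity blocks below the diagonal
    return F

Phi1 = np.array([[0.5, 0.1], [0.3, 0.2]])   # illustrative VAR(2) coefficients
Phi2 = np.array([[0.1, 0.0], [0.0, 0.1]])
F = companion([Phi1, Phi2])
print(np.abs(np.linalg.eigvals(F)))         # all moduli < 1 here, so this VAR(2) is covariance-stationary
```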


2.1.2 Vector MA(∞) Representation

The first $k$ rows of the vector system represented in (7) constitute a vector system:

$$y_{t+s} = \mu + \varepsilon_{t+s} + \Psi_1\varepsilon_{t+s-1} + \Psi_2\varepsilon_{t+s-2} + \cdots + \Psi_{s-1}\varepsilon_{t+1} + F_{11}^{(s)}(y_t - \mu) + F_{12}^{(s)}(y_{t-1} - \mu) + \cdots + F_{1p}^{(s)}(y_{t-p+1} - \mu).$$

Here $\Psi_j = F_{11}^{(j)}$, where $F_{11}^{(j)}$ denotes the upper-left $(k \times k)$ block of $F^j$, the matrix $F$ raised to the $j$th power.

If the eigenvalues of $F$ all lie inside the unit circle, then $F^s \to 0$ as $s \to \infty$, and $y_t$ can be expressed as a convergent sum of the history of $\varepsilon$:

$$y_t = \mu + \varepsilon_t + \Psi_1\varepsilon_{t-1} + \Psi_2\varepsilon_{t-2} + \Psi_3\varepsilon_{t-3} + \cdots = \mu + \Psi(L)\varepsilon_t. \tag{9}$$

The moving average matrices $\Psi_j$ can equivalently be calculated as follows. The operators $\Phi(L)$ $(= I_k - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p)$ in (2) and $\Psi(L)$ in (9) are related by

$$\Psi(L) = [\Phi(L)]^{-1},$$

requiring that

$$[I_k - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p][I_k + \Psi_1 L + \Psi_2 L^2 + \cdots] = I_k.$$

Setting the coefficient on $L^1$ equal to the zero matrix produces

$$\Psi_1 - \Phi_1 = 0.$$

Similarly, setting the coefficient on $L^2$ equal to zero gives

$$\Psi_2 = \Phi_1\Psi_1 + \Phi_2,$$

and in general, for $L^s$,

$$\Psi_s = \Phi_1\Psi_{s-1} + \Phi_2\Psi_{s-2} + \cdots + \Phi_p\Psi_{s-p} \quad \text{for } s = 1, 2, \ldots, \tag{10}$$

with $\Psi_0 = I_k$ and $\Psi_s = 0$ for $s < 0$.

Note that the innovation in the MA(∞) representation in (9) is $\varepsilon_t$, the fundamental innovation for $y_t$.
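The recursion (10) translates directly into code and is the standard way the $\Psi_s$ matrices are computed in practice, for example when tracing out the effect of an innovation on future values of $y$. The sketch below implements it for an illustrative VAR(2) with arbitrary coefficient matrices.

```python
import numpy as np

def ma_matrices(Phi, horizon):
    """Compute Psi_0, ..., Psi_horizon from recursion (10):
    Psi_s = Phi_1 Psi_{s-1} + ... + Phi_p Psi_{s-p}, with Psi_0 = I and Psi_s = 0 for s < 0."""
    p = len(Phi)
    k = Phi[0].shape[0]
    Psi = [np.eye(k)]
    for s in range(1, horizon + 1):
        Psi.append(sum(Phi[j] @ Psi[s - j - 1] for j in range(p) if s - j - 1 >= 0))
    return Psi

Phi1 = np.array([[0.5, 0.1], [0.3, 0.2]])   # illustrative coefficient matrices
Phi2 = np.array([[0.1, 0.0], [0.0, 0.1]])
Psi = ma_matrices([Phi1, Phi2], horizon=4)
print(Psi[1])                               # equals Phi_1
print(Psi[2])                               # equals Phi_1 Psi_1 + Phi_2
```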


There are alternative moving average representations based on a vector white noise process other than $\varepsilon_t$. Let $H$ denote a nonsingular $(k \times k)$ matrix, and define

$$u_t = H\varepsilon_t.$$

Then certainly $u_t$ is white noise:

$$E(u_t) = 0 \qquad \text{and} \qquad E(u_t u_\tau') = \begin{cases} H\Omega H' & \text{for } t = \tau \\ 0 & \text{for } t \neq \tau. \end{cases}$$

Moreover, from (9) we could write

$$y_t = \mu + H^{-1}H\varepsilon_t + \Psi_1 H^{-1}H\varepsilon_{t-1} + \Psi_2 H^{-1}H\varepsilon_{t-2} + \Psi_3 H^{-1}H\varepsilon_{t-3} + \cdots = \mu + J_0 u_t + J_1 u_{t-1} + J_2 u_{t-2} + J_3 u_{t-3} + \cdots,$$

where

$$J_s = \Psi_s H^{-1}. \tag{11}$$

For example, $H$ could be any matrix that diagonalizes $\Omega$,

$$H\Omega H' = D,$$

with $D$ a diagonal matrix. For such a choice of $H$, the elements of $u_t$ are uncorrelated with one another:

$$E(u_t u_t') = H\Omega H' = D.$$

Thus, it is always possible to write a stationary VAR(p) process as an infinite moving average of a white noise vector $u_t$ whose elements are mutually uncorrelated.
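One concrete way to construct such an $H$ (a sketch of one choice, not the only one) is through the Cholesky factorization $\Omega = PP'$: taking $H = P^{-1}$ gives $H\Omega H' = I_k$, which is diagonal, so the elements of $u_t = H\varepsilon_t$ are uncorrelated, and by (11) the moving average weights become $J_s = \Psi_s H^{-1} = \Psi_s P$. The numerical values below are illustrative.

```python
import numpy as np

Omega = np.array([[1.0, 0.4],
                  [0.4, 1.5]])                 # illustrative innovation covariance with contemporaneous correlation
P = np.linalg.cholesky(Omega)                  # Omega = P P'
H = np.linalg.inv(P)                           # one valid choice of H
print(np.round(H @ Omega @ H.T, 10))           # H Omega H' = I_k, a diagonal matrix

# Weights on the orthogonalized innovations u_t = H eps_t, using an illustrative Psi_1:
Psi1 = np.array([[0.5, 0.1],
                 [0.3, 0.2]])
J1 = Psi1 @ P                                  # J_s = Psi_s H^{-1} = Psi_s P
print(np.round(J1, 3))
```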


2.1.3 Computation of Autocovariances of a Stationary VAR(p) Process

We now consider how to express the second moments of $y_t$ when $y_t$ follows a VAR(p). Recall that, as in the univariate AR(p) process, the Yule-Walker equations are obtained by postmultiplying (3) by $(y_{t-j} - \mu)'$ and taking expectations. For $j = 0$, using $\Gamma_j = \Gamma_{-j}'$,

$$\Gamma_0 = E(y_t - \mu)(y_t - \mu)' = \Phi_1 E(y_{t-1} - \mu)(y_t - \mu)' + \Phi_2 E(y_{t-2} - \mu)(y_t - \mu)' + \cdots + \Phi_p E(y_{t-p} - \mu)(y_t - \mu)' + E\,\varepsilon_t(y_t - \mu)' = \Phi_1\Gamma_{-1} + \Phi_2\Gamma_{-2} + \cdots + \Phi_p\Gamma_{-p} + \Omega = \Phi_1\Gamma_1' + \Phi_2\Gamma_2' + \cdots + \Phi_p\Gamma_p' + \Omega,$$

and for $j > 0$,

$$\Gamma_j = \Phi_1\Gamma_{j-1} + \Phi_2\Gamma_{j-2} + \cdots + \Phi_p\Gamma_{j-p}. \tag{12}$$

These equations may be used to compute the $\Gamma_j$ recursively for $j \geq p$ if $\Phi_1, \ldots, \Phi_p$ and $\Gamma_{p-1}, \ldots, \Gamma_0$ are known.

Let $\xi_t$ be as defined in (4) and let $\Sigma$ denote the variance of $\xi_t$:

$$\Sigma = E(\xi_t\xi_t') = E\left\{\begin{bmatrix} y_t - \mu \\ y_{t-1} - \mu \\ \vdots \\ y_{t-p+1} - \mu \end{bmatrix}\begin{bmatrix} (y_t - \mu)' & (y_{t-1} - \mu)' & \cdots & (y_{t-p+1} - \mu)' \end{bmatrix}\right\} = \begin{bmatrix} \Gamma_0 & \Gamma_1 & \cdots & \Gamma_{p-1} \\ \Gamma_1' & \Gamma_0 & \cdots & \Gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \Gamma_{p-1}' & \Gamma_{p-2}' & \cdots & \Gamma_0 \end{bmatrix}.$$

Postmultiplying (6) by its own transpose and taking expectations gives

$$E[\xi_t\xi_t'] = E[(F\xi_{t-1} + v_t)(F\xi_{t-1} + v_t)'] = F E(\xi_{t-1}\xi_{t-1}')F' + E(v_t v_t'),$$

or

$$\Sigma = F\Sigma F' + Q. \tag{13}$$
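Equation (13) is a discrete Lyapunov equation in $\Sigma$. One standard way to solve it is via the vec operator, $\mathrm{vec}(\Sigma) = (I_{(kp)^2} - F \otimes F)^{-1}\mathrm{vec}(Q)$, after which $\Gamma_0, \ldots, \Gamma_{p-1}$ can be read off the first block row of $\Sigma$ and the recursion (12) extends the sequence to higher lags. The sketch below implements this for an illustrative VAR(2); all numerical values are arbitrary.

```python
import numpy as np

def var_autocovariances(Phi, Omega, n_lags):
    """Solve Sigma = F Sigma F' + Q (equation (13)) for the stacked autocovariance matrix Sigma,
    read Gamma_0, ..., Gamma_{p-1} off its first block row, then extend with recursion (12)."""
    p = len(Phi)
    k = Phi[0].shape[0]
    kp = k * p
    F = np.zeros((kp, kp))
    F[:k, :] = np.hstack(Phi)                                     # companion matrix of equation (5)
    F[k:, :-k] = np.eye(kp - k)
    Q = np.zeros((kp, kp))
    Q[:k, :k] = Omega
    # vec(Sigma) = (I - F kron F)^{-1} vec(Q), using column-major vec
    vec_sigma = np.linalg.solve(np.eye(kp * kp) - np.kron(F, F), Q.flatten(order="F"))
    Sigma = vec_sigma.reshape((kp, kp), order="F")
    Gammas = [Sigma[:k, j * k:(j + 1) * k] for j in range(p)]     # Gamma_0, ..., Gamma_{p-1}
    for j in range(p, n_lags + 1):                                # recursion (12) for j >= p
        Gammas.append(sum(Phi[i] @ Gammas[j - i - 1] for i in range(p)))
    return Gammas[: n_lags + 1]

Phi1  = np.array([[0.5, 0.1], [0.3, 0.2]])   # illustrative stationary VAR(2)
Phi2  = np.array([[0.1, 0.0], [0.0, 0.1]])
Omega = np.array([[1.0, 0.4], [0.4, 1.5]])
G = var_autocovariances([Phi1, Phi2], Omega, n_lags=3)
print(np.round(G[0], 3))                     # Gamma_0
print(np.round(G[1], 3))                     # Gamma_1
```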
