National Sun Yat-sen University, Econometrics (English): Chapter 19 Models of Nonstationary Time Series

Ch. 19 Models of Nonstationary Time Series

In time series analysis we do not confine ourselves to stationary time series. In fact, most of the time series we encounter are nonstationary. How to deal with nonstationary data, and how to use what we have learned from stationary models, are the main subjects of this chapter.

1 Integrated Process

Consider the following two processes:

X_t = φX_{t−1} + u_t,  |φ| < 1;
Y_t = Y_{t−1} + v_t,

where u_t and v_t are mutually uncorrelated white noise processes with variances σ_u^2 and σ_v^2, respectively. Both X_t and Y_t are AR(1) processes. The difference between the two models is that Y_t is the special case of the X_t process with φ = 1; it is called a random walk. It is also referred to as an AR(1) model with a unit root, since the root of the AR(1) polynomial is 1. When we examine the statistical behavior of the two processes through the mean (the first moment) and the variance and autocovariances (the second moments), they are completely different. Although the two processes belong to the same AR(1) class, X_t is a stationary process while Y_t is a nonstationary process.

Assume that t ∈ T*, T* = {0, 1, 2, ...}.[1] The two stochastic processes can then be expressed as

X_t = φ^t X_0 + Σ_{i=0}^{t−1} φ^i u_{t−i}.

Similarly, in the unit root case,

Y_t = Y_0 + Σ_{i=0}^{t−1} v_{t−i}.

Suppose that the initial observations are zero, X_0 = 0 and Y_0 = 0. The means of the two processes are then E(X_t) = 0 and E(Y_t) = 0.

[1] This assumption is required to derive the convergence of an integrated process to a standard Brownian motion; a standard Brownian motion is defined on t ∈ [0, 1].
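To make the contrast concrete, the following minimal simulation sketch generates one path of each process under illustrative parameter values (φ = 0.8 and unit-variance noise are assumptions for the example, not values from the text) and compares the spread of the early and late parts of each path.

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi = 1000, 0.8            # illustrative values (not from the text)
u = rng.normal(0.0, 1.0, T)   # white noise for X_t, sigma_u = 1
v = rng.normal(0.0, 1.0, T)   # white noise for Y_t, sigma_v = 1

X = np.zeros(T)               # stationary AR(1): X_t = phi*X_{t-1} + u_t
Y = np.zeros(T)               # random walk:      Y_t = Y_{t-1} + v_t
for t in range(1, T):
    X[t] = phi * X[t - 1] + u[t]
    Y[t] = Y[t - 1] + v[t]

# The AR(1) path fluctuates around 0 with roughly constant spread,
# while the random walk wanders and its spread keeps growing with t.
print("var of X, first/second half:", X[:500].var(), X[500:].var())
print("var of Y, first/second half:", Y[:500].var(), Y[500:].var())
```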

The variances are

Var(X_t) = Σ_{i=0}^{t−1} φ^{2i} Var(u_{t−i}) → σ_u^2 / (1 − φ^2)

and

Var(Y_t) = Σ_{i=0}^{t−1} Var(v_{t−i}) = t·σ_v^2.

The autocovariances of the two series are

γ_τ^X = E(X_t X_{t−τ})
      = E[(Σ_{i=0}^{t−1} φ^i u_{t−i})(Σ_{i=0}^{t−τ−1} φ^i u_{t−τ−i})]
      = E[(u_t + φu_{t−1} + ... + φ^τ u_{t−τ} + ... + φ^{t−1} u_1)(u_{t−τ} + φu_{t−τ−1} + ... + φ^{t−τ−1} u_1)]
      = Σ_{i=0}^{t−τ−1} φ^i φ^{τ+i} σ_u^2
      = σ_u^2 φ^τ Σ_{i=0}^{t−τ−1} φ^{2i}
      → φ^τ σ_u^2 / (1 − φ^2) = φ^τ γ_0^X

and

γ_τ^Y = E(Y_t Y_{t−τ})
      = E[(Σ_{i=0}^{t−1} v_{t−i})(Σ_{i=0}^{t−τ−1} v_{t−τ−i})]
      = E[(v_t + v_{t−1} + ... + v_{t−τ} + v_{t−τ−1} + ... + v_1)(v_{t−τ} + v_{t−τ−1} + ... + v_1)]
      = (t − τ)σ_v^2.

We may therefore expect the autocorrelation functions to be

r_τ^X = γ_τ^X / γ_0^X = φ^τ → 0

and

r_τ^Y = γ_τ^Y / γ_0^Y = (t − τ)/t → 1  for all τ.

The means of X_t and Y_t are the same, but the variances (including the autocovariances) are different.
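A small Monte Carlo sketch (again with illustrative parameter values, not values from the text) makes the two variance formulas visible: across many simulated replications, the sample variance of X_t settles near σ_u^2/(1 − φ^2), while the sample variance of Y_t grows linearly in t.

```python
import numpy as np

rng = np.random.default_rng(1)
R, T, phi, sigma = 5000, 200, 0.8, 1.0   # illustrative values

u = rng.normal(0.0, sigma, (R, T))
v = rng.normal(0.0, sigma, (R, T))

X = np.zeros((R, T))
for t in range(1, T):
    X[:, t] = phi * X[:, t - 1] + u[:, t]
Y = np.cumsum(v, axis=1)                 # Y_t = v_1 + ... + v_t (Y_0 = 0)

for t in [50, 100, 200]:
    print(f"t={t}: Var(X_t)≈{X[:, t-1].var():.2f} (limit {sigma**2/(1-phi**2):.2f}), "
          f"Var(Y_t)≈{Y[:, t-1].var():.2f} (theory {t*sigma**2:.0f})")
```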

The important point to note is that the variances and autocovariances of Y_t are functions of t, while those of X_t converge to constants asymptotically. Thus as t increases the variance of Y_t increases, while the variance of X_t converges to a constant.

If we add a constant to the AR(1) process, then the means of the two processes also behave differently. Consider the AR(1) process with a constant (or drift):

X_t = α + φX_{t−1} + u_t,  |φ| < 1,

and

Y_t = α + Y_{t−1} + v_t.

Successive substitution yields

X_t = φ^t X_0 + α Σ_{i=0}^{t−1} φ^i + Σ_{i=0}^{t−1} φ^i u_{t−i}

and

Y_t = Y_0 + αt + Σ_{i=0}^{t−1} v_{t−i}.    (1)

Note that Y_t contains a (deterministic) trend t. If the initial observations are zero, X_0 = 0 and Y_0 = 0, then the means of the two processes are

E(X_t) → α / (1 − φ)   and   E(Y_t) = αt,

but the variances and autocovariances are the same as those derived from the AR(1) models without the constant. Thus, once a constant is added to the AR(1) processes, the two processes differ in their means as well as in their variances: both the mean and the variance of Y_t are time varying, while those of X_t are (asymptotically) constant.

Since the variance (the second moment) and even the mean (the first moment) of the nonstationary series are not constant over time, the conventional asymptotic theory cannot be applied to these series (recall the moment condition in the CLT on p. 22 of Ch. 4).
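The following sketch illustrates the drift case under assumed values (α = 0.2, φ = 0.8, unit-variance noise; none of these numbers come from the text): the path of X_t fluctuates around α/(1 − φ) = 1, while Y_t climbs along the deterministic trend αt.

```python
import numpy as np

rng = np.random.default_rng(2)
T, alpha, phi = 500, 0.2, 0.8            # illustrative values
u = rng.normal(size=T)
v = rng.normal(size=T)

X = np.zeros(T)                          # X_t = alpha + phi*X_{t-1} + u_t
Y = np.zeros(T)                          # Y_t = alpha + Y_{t-1} + v_t
for t in range(1, T):
    X[t] = alpha + phi * X[t - 1] + u[t]
    Y[t] = alpha + Y[t - 1] + v[t]

print("mean of X over last 100 obs:", X[-100:].mean(),
      "(limit alpha/(1-phi) =", alpha / (1 - phi), ")")
print("Y at the end of the sample:", Y[-1],
      "vs deterministic trend alpha*(T-1) =", alpha * (T - 1))
```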

2 Deterministic Trend and Stochastic Trend

Many economic and financial time series trend upward over time (GNP, M2, stock indices, etc.; see the plots in Hamilton, p. 436). For a long time each trending (nonstationary) economic time series was decomposed into a deterministic trend and a stationary process. In recent years the idea of a stochastic trend has emerged and enriched the framework for analyzing economic time series.

2.1 Detrending Methods

2.1.1 Differencing-Stationary

One of the easiest ways to analyze nonstationary trending series is to make them stationary by differencing. In our example, the random walk with drift Y_t can be transformed into a stationary series by differencing once:

ΔY_t = Y_t − Y_{t−1} = (1 − L)Y_t = α + v_t.

Since v_t is assumed to be a white noise process, the first difference of Y_t is stationary, and the variance of ΔY_t is constant over the sample period. In the I(1) process

Y_t = Y_0 + αt + Σ_{i=0}^{t−1} v_{t−i},    (2)

αt is a deterministic trend while Σ_{i=0}^{t−1} v_{t−i} is a stochastic trend.

When a nonstationary series can be transformed into a stationary series by differencing once, the series is said to be integrated of order 1, denoted I(1), and is commonly called a unit root process. If the series needs to be differenced d times to become stationary, it is said to be I(d). An I(d) series (d ≠ 0) is also called a differencing-stationary process (DSP).
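As a quick illustration of differencing (with assumed parameter values, not values from the text), the sketch below generates a random walk with drift and checks that its first difference behaves like the stationary series α + v_t: the variance of the level grows across subsamples, while the variance of the first difference does not.

```python
import numpy as np

rng = np.random.default_rng(3)
T, alpha = 2000, 0.1                     # illustrative values
v = rng.normal(size=T)
Y = np.cumsum(alpha + v)                 # random walk with drift
dY = np.diff(Y)                          # first difference: alpha + v_t

print("levels Y : var of first/last quarter:", Y[:500].var(), Y[-500:].var())
print("diffs dY : var of first/last quarter:", dY[:500].var(), dY[-500:].var())
print("mean of dY (should be near alpha):", dY.mean())
```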

When (1 − L)^d Y_t is a stationary and invertible series that can be represented by an ARMA(p, q) model, i.e.

(1 − φ_1 L − φ_2 L^2 − ... − φ_p L^p)(1 − L)^d Y_t = α + (1 + θ_1 L + θ_2 L^2 + ... + θ_q L^q) ε_t    (3)

or

φ(L) Δ^d Y_t = α + θ(L) ε_t,

where all the roots of φ(L) = 0 and θ(L) = 0 lie outside the unit circle, we say that Y_t is an autoregressive integrated moving-average, ARIMA(p, d, q), process. In particular, a unit root process has d = 1, so an ARIMA(p, 1, q) process is

φ(L) ΔY_t = α + θ(L) ε_t

or

(1 − L)Y_t = α + ψ(L) ε_t,    (4)

where ψ(L) = φ^{−1}(L) θ(L) is absolutely summable. Successive substitution yields

Y_t = Y_0 + αt + ψ(L) Σ_{i=0}^{t−1} ε_{t−i}.    (5)

2.1.2 Trend-Stationary

Another important class is the trend-stationary process (TSP). Consider the series

X_t = μ + αt + ψ(L) ε_t,    (6)

where the coefficients of ψ(L) are absolutely summable. The mean of X_t is E(X_t) = μ + αt, which is not constant over time, while the variance of X_t is Var(X_t) = (1 + ψ_1^2 + ψ_2^2 + ...)σ^2, which is constant. Although the mean of X_t is not constant over the period, it can be forecast perfectly whenever we know the value of t and the parameters μ and α. In this sense X_t is stationary around the deterministic trend t, and it can be transformed to stationarity by regressing it on time. Note that both the DSP model, equation (5), and the TSP model, equation (6), exhibit a linear trend, but the appropriate method of eliminating the trend differs. (It can also be seen from the definition of a TSP that the DSP is trend-nonstationary.)
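To make the detrending idea for the trend-stationary case concrete, here is a minimal sketch (illustrative parameters, with the stationary component taken to be simply white noise, ψ(L) = 1): regressing the series on a constant and time and keeping the residuals removes the deterministic trend and leaves a stationary series.

```python
import numpy as np

rng = np.random.default_rng(4)
T, mu, alpha = 500, 1.0, 0.05            # illustrative values
t = np.arange(T)
X = mu + alpha * t + rng.normal(size=T)  # TSP with psi(L) = 1

b, a = np.polyfit(t, X, 1)               # OLS of X_t on a constant and t
resid = X - (a + b * t)                  # detrended series

print("estimated trend slope:", b, "(true alpha =", alpha, ")")
print("residual variance, first/second half:", resid[:250].var(), resid[250:].var())
```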

Most economic analysis is based on the variances and covariances among the variables. For example, the OLS estimator from a regression of Y_t on X_t is the ratio of the covariance between Y_t and X_t to the variance of X_t. Thus, if the variances of the variables behave differently, the conventional asymptotic theory is not applicable. When the orders of integration differ, the variances of the processes behave differently. For example, if Y_t is an I(0) variable and X_t is I(1), the OLS estimator from the regression of Y_t on X_t converges to zero asymptotically, since the denominator of the OLS estimator, the variance of X_t, increases as t increases and thus dominates the numerator, the covariance between X_t and Y_t. That is, the OLS estimator does not have an asymptotic distribution. (It is degenerate under the conventional normalization of √T; see Ch. 21 for details.)

2.2 Comparison of Trend-Stationary and Differencing-Stationary Processes

The best way to understand the meaning of stochastic and deterministic trends is to compare their time series properties. This section compares a trend-stationary process (6) with a unit root process (4) in terms of forecasts of the series, the variance of the forecast error, the dynamic multiplier, and the transformations needed to achieve stationarity.

2.2.1 Returning to a Central Line?

The TSP model (6) has a central line μ + αt around which X_t oscillates. Even if a shock lets X_t deviate temporarily from the line, a force brings it back to the line. On the other hand, the unit root process (5) has no such central line. One might wonder about a deterministic trend combined with a random walk: the discrepancy between Y_t and the line Y_0 + αt becomes unbounded as t → ∞.

2.2.2 Forecast Error

The TSP and unit root specifications are also very different in their implications for the variance of the forecast error.

For the trend-stationary process (6), the s-period-ahead forecast is

X̂_{t+s|t} = μ + α(t + s) + ψ_s ε_t + ψ_{s+1} ε_{t−1} + ψ_{s+2} ε_{t−2} + ...,

with associated forecast error

X_{t+s} − X̂_{t+s|t} = {μ + α(t + s) + ε_{t+s} + ψ_1 ε_{t+s−1} + ψ_2 ε_{t+s−2} + ... + ψ_{s−1} ε_{t+1} + ψ_s ε_t + ψ_{s+1} ε_{t−1} + ...}
                      − {μ + α(t + s) + ψ_s ε_t + ψ_{s+1} ε_{t−1} + ...}
                    = ε_{t+s} + ψ_1 ε_{t+s−1} + ψ_2 ε_{t+s−2} + ... + ψ_{s−1} ε_{t+1}.

The MSE of this forecast is

E[X_{t+s} − X̂_{t+s|t}]^2 = {1 + ψ_1^2 + ψ_2^2 + ... + ψ_{s−1}^2} σ^2.

The MSE increases with the forecasting horizon s, though as s becomes large the added uncertainty from forecasting farther into the future becomes negligible:

lim_{s→∞} E[X_{t+s} − X̂_{t+s|t}]^2 = {1 + ψ_1^2 + ψ_2^2 + ...} σ^2.

Note that the limiting MSE is just the unconditional variance of the stationary component ψ(L)ε_t.
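To illustrate, take an assumed example (not one from the text) in which the stationary component is an AR(1) with coefficient 0.7, so that ψ_j = 0.7^j. The sketch below computes the s-period-ahead forecast MSE {1 + ψ_1^2 + ... + ψ_{s−1}^2}σ^2 and shows it leveling off at the unconditional variance σ^2/(1 − 0.7^2).

```python
import numpy as np

rho, sigma2 = 0.7, 1.0                        # illustrative values
psi = rho ** np.arange(50)                    # psi_j = rho^j, psi_0 = 1

# MSE(s) = (psi_0^2 + ... + psi_{s-1}^2) * sigma^2
mse = np.cumsum(psi ** 2) * sigma2
for s in [1, 2, 5, 10, 25, 50]:
    print(f"s={s:2d}  MSE={mse[s - 1]:.3f}")
print("limit = sigma^2/(1-rho^2) =", sigma2 / (1 - rho ** 2))
```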

To forecast the unit root process (4), recall that the change ΔY_t is a stationary process that can be forecast with the standard formula:

ΔŶ_{t+s|t} = Ê[(Y_{t+s} − Y_{t+s−1}) | Y_t, Y_{t−1}, ...] = α + ψ_s ε_t + ψ_{s+1} ε_{t−1} + ψ_{s+2} ε_{t−2} + ....

The level of the variable at date t + s is simply the sum of the changes between t and t + s:

Y_{t+s} = (Y_{t+s} − Y_{t+s−1}) + (Y_{t+s−1} − Y_{t+s−2}) + ... + (Y_{t+1} − Y_t) + Y_t    (7)
        = ΔY_{t+s} + ΔY_{t+s−1} + ... + ΔY_{t+1} + Y_t.    (8)

Therefore the s-period-ahead forecast error for the unit root process is

Y_{t+s} − Ŷ_{t+s|t} = {ΔY_{t+s} + ΔY_{t+s−1} + ... + ΔY_{t+1} + Y_t} − {ΔŶ_{t+s|t} + ΔŶ_{t+s−1|t} + ... + ΔŶ_{t+1|t} + Y_t}
                    = {ε_{t+s} + ψ_1 ε_{t+s−1} + ... + ψ_{s−1} ε_{t+1}} + {ε_{t+s−1} + ψ_1 ε_{t+s−2} + ... + ψ_{s−2} ε_{t+1}} + ... + {ε_{t+1}}
                    = ε_{t+s} + [1 + ψ_1] ε_{t+s−1} + [1 + ψ_1 + ψ_2] ε_{t+s−2} + ... + [1 + ψ_1 + ψ_2 + ... + ψ_{s−1}] ε_{t+1},

with MSE

E[Y_{t+s} − Ŷ_{t+s|t}]^2 = {1 + [1 + ψ_1]^2 + [1 + ψ_1 + ψ_2]^2 + ... + [1 + ψ_1 + ψ_2 + ... + ψ_{s−1}]^2} σ^2.

The MSE again increases with the length of the forecasting horizon s, but in contrast to the trend-stationary case it does not converge to any fixed value as s goes to infinity. See Figure 15.2 on p. 441 of Hamilton.

The TSP model and the DSP model thus embody totally different views about how the world evolves in the future. In the former the forecast error is bounded even at the infinite horizon, while in the latter the error becomes unbounded as the horizon extends.

One result is very important for understanding the asymptotic statistical properties presented in the subsequent chapter. The (deterministic) trend introduced by a nonzero drift α (αt is O(T)) asymptotically dominates the increasing variability arising over time from the unit root component (Σ_{i=0}^{t−1} ε_{t−i} is O(T^{1/2})). This means that data from a unit root process with positive drift are certain to exhibit an upward trend if observed for a sufficiently long period.

2.2.3 Impulse Response

Another difference between a TSP and a unit root process is the persistence of innovations. Consider the consequences for X_{t+s} if ε_t were to increase by one unit, with the ε's for all other dates unaffected. For the TSP process (6), this impulse response is given by

∂X_{t+s} / ∂ε_t = ψ_s.

For a trend-stationary process, then, the effect of any stochastic disturbance eventually wears off:

lim_{s→∞} ∂X_{t+s} / ∂ε_t = 0.

By contrast, for a unit root process, the effect of ε_t on Y_{t+s} is seen from (8) and (4) to be

∂Y_{t+s}/∂ε_t = ∂ΔY_{t+s}/∂ε_t + ∂ΔY_{t+s−1}/∂ε_t + ... + ∂ΔY_{t+1}/∂ε_t + ∂Y_t/∂ε_t
             = ψ_s + ψ_{s−1} + ... + ψ_1 + 1

(since ∂ΔY_{t+s}/∂ε_t = ψ_s from (4)). An innovation ε_t has a permanent effect on the level of Y that is captured by

lim_{s→∞} ∂Y_{t+s}/∂ε_t = 1 + ψ_1 + ψ_2 + ... = ψ(1).

Example: The following ARIMA(4, 1, 0) model was estimated for Y_t:

ΔY_t = 0.555 + 0.312 ΔY_{t−1} + 0.122 ΔY_{t−2} − 0.116 ΔY_{t−3} − 0.081 ΔY_{t−4} + ε̂_t.

For this specification, the permanent effect of a one-unit change in ε_t on the level of Y_t is estimated to be

ψ(1) = 1/φ(1) = 1 / (1 − 0.312 − 0.122 + 0.116 + 0.081) = 1.31.

2.2.4 Transformations to Achieve Stationarity

A final difference between trend-stationary and unit root processes that deserves comment is the transformation of the data needed to generate a stationary time series. If the process is really trend stationary as in (6), the appropriate treatment is to subtract αt from X_t to produce a stationary representation. By contrast, if the data were really generated by the unit root process (5), subtracting αt from Y_t would remove the time-dependence of the mean but not of the variance, as seen in (5).
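The last point can be illustrated with a short simulation under assumed parameters: fitting and subtracting a linear time trend from a random walk with drift removes the trend in the mean, but the variance of the detrended series still grows with the sample size, unlike the detrended trend-stationary series.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, reps = 0.05, 200                        # illustrative values

def detrended_var(z, t):
    """Variance of the residuals from an OLS-fitted linear trend a + b*t."""
    b, a = np.polyfit(t, z, 1)
    return (z - (a + b * t)).var()

for T in [200, 800, 3200]:
    t = np.arange(T)
    # average over replications so the pattern is not obscured by sampling noise
    rw_var = np.mean([detrended_var(np.cumsum(alpha + rng.normal(size=T)), t)
                      for _ in range(reps)])   # unit root with drift
    ts_var = np.mean([detrended_var(alpha * t + rng.normal(size=T), t)
                      for _ in range(reps)])   # TSP with the same slope
    print(f"T={T:4d}  detrended random walk var: {rw_var:7.1f}   detrended TSP var: {ts_var:4.2f}")
```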

There have been several papers that study the consequences of overdifferencing and underdifferencing:

1. If the process is really a TSP as in (6), differencing it yields

ΔX_t = μ + αt − μ − α(t − 1) + ψ(L)(1 − L) ε_t = α + ψ*(L) ε_t.    (9)

In this representation ΔX_t looks like a DSP; however, a unit root has been introduced into the moving-average polynomial ψ*(L), which violates the definition of an I(d) process as in (4). This is the case of overdifferencing.

2. If the process is really a DSP as in (5) and we treat it as a TSP, we have a case of underdifferencing.
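A brief sketch of the overdifferencing point, under an illustrative setup with ψ(L) = 1 so that the differenced TSP is α + ε_t − ε_{t−1}: the first-order sample autocorrelation of the differenced series is close to −0.5, the signature of a noninvertible MA(1) with a unit root in the moving-average polynomial.

```python
import numpy as np

rng = np.random.default_rng(6)
T, mu, alpha = 20000, 1.0, 0.05           # illustrative values
t = np.arange(T)
X = mu + alpha * t + rng.normal(size=T)   # TSP with psi(L) = 1
dX = np.diff(X)                           # overdifferenced series: alpha + e_t - e_{t-1}

d = dX - dX.mean()
rho1 = (d[1:] * d[:-1]).sum() / (d ** 2).sum()
print("lag-1 autocorrelation of the differenced TSP:", round(rho1, 3))  # about -0.5
```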
