
National Sun Yat-sen University, Econometrics (English edition): Chapter 20, Processes with Deterministic Trends


Ch. 20 Processes with Deterministic Trends

1 Traditional Asymptotic Results of OLS

Suppose a linear regression model with stochastic regressors is given by

$$Y_t = x_t'\beta + \varepsilon_t, \qquad t = 1, 2, \ldots, T, \quad \beta \in \mathbb{R}^k, \tag{1}$$

or in matrix form,

$$y = X\beta + \varepsilon.$$

We are interested in the asymptotic properties, such as consistency and the limiting distribution, of the OLS estimator $\hat{\beta} = (X'X)^{-1}X'y$ as $T \to \infty$, under simple traditional assumptions.

1.1 Independent Identically Distributed Observations

1.1.1 Consistency

To prove consistency of $\hat{\beta}$, we use Kolmogorov's law of large numbers from Ch. 4. Rewriting

$$\hat{\beta} - \beta = (X'X)^{-1}X'\varepsilon = \left(\frac{X'X}{T}\right)^{-1}\left(\frac{X'\varepsilon}{T}\right) = \left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right)^{-1}\left(\frac{\sum_{t=1}^T x_t \varepsilon_t}{T}\right),$$

we have the following result.


Theorem:
In addition to (1), suppose that
(1). the $(k+1)\times 1$ vectors $\{(x_t', \varepsilon_t)'\}$ form an i.i.d. sequence;
(2). (a) $E(x_t\varepsilon_t) = 0$;
(b) $E|X_{ti}\varepsilon_t| < \infty$, $i = 1, 2, \ldots, k$;
(3). (a) $E|X_{ti}|^2 < \infty$, $i = 1, 2, \ldots, k$;
(b) $M \equiv E(x_t x_t')$ is positive definite.
Then $\hat{\beta} \xrightarrow{a.s.} \beta$.

Remark:
1. Assumption (2a) specifies the mean of the i.i.d. sequence $(X_{ti}\varepsilon_t,\ i = 1, 2, \ldots, k)$ (see Proposition 3.3 of White, 2001, p. 32), and (2b) ensures that its first moment exists.
2. Assumption (3a) guarantees that the first moment of $(X_{ti}X_{tj})$ exists, by the Cauchy-Schwarz inequality, and (3b) specifies the mean of the i.i.d. sequence $(X_{ti}X_{tj},\ i = 1, 2, \ldots, k;\ j = 1, 2, \ldots, k)$. Existence of the first moment is what is needed for the LLN for i.i.d. sequences; see p. 15 of Ch. 4.

Proof:
From these assumptions we have

$$\left(\frac{X'\varepsilon}{T}\right) = \frac{\sum_{t=1}^T x_t\varepsilon_t}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t\varepsilon_t}{T}\right) = 0$$

and

$$\left(\frac{X'X}{T}\right) = \frac{\sum_{t=1}^T x_t x_t'}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right) = M. \tag{2}$$

Therefore $\hat{\beta} - \beta \xrightarrow{a.s.} M^{-1} \cdot 0 = 0$, or $\hat{\beta} \xrightarrow{a.s.} \beta$.
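The following Monte Carlo sketch (not part of the original notes; the design with standard normal regressors and errors is an illustrative assumption) shows the consistency result in action: with i.i.d. data satisfying $E(x_t\varepsilon_t) = 0$, the OLS estimate drifts toward the true $\beta$ as $T$ grows.

```python
import numpy as np

# Minimal sketch of the consistency theorem above, assuming an
# illustrative i.i.d. design: x_t ~ N(0, I_2), eps_t ~ N(0, 1).
rng = np.random.default_rng(0)
beta = np.array([1.0, -0.5])                 # illustrative true beta, k = 2

for T in (100, 10_000, 1_000_000):
    X = rng.normal(size=(T, 2))              # i.i.d. regressors, M = E(x_t x_t') = I
    eps = rng.normal(size=T)                 # i.i.d. errors, E(x_t eps_t) = 0
    y = X @ beta + eps
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
    print(T, beta_hat)                       # approaches (1.0, -0.5)
```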


1.1.2 Asymptotic Normality

To prove asymptotic normality of $\hat{\beta}$, we use Kolmogorov's LLN and the Lindeberg-Lévy central limit theorem from Ch. 4. Rewriting

$$\sqrt{T}(\hat{\beta} - \beta) = \left(\frac{X'X}{T}\right)^{-1}\sqrt{T}\left(\frac{X'\varepsilon}{T}\right) = \left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right)^{-1}\sqrt{T}\left(\frac{\sum_{t=1}^T x_t\varepsilon_t}{T}\right),$$

we have the following result.

Theorem:
In addition to (1), suppose
(i). $\{(x_t', \varepsilon_t)'\}$ is an i.i.d. sequence;
(ii). (a) $E(x_t\varepsilon_t) = 0$;
(b) $E|X_{ti}\varepsilon_t|^2 < \infty$, $i = 1, 2, \ldots, k$;
(c) $V_T \equiv \mathrm{Var}(T^{-1/2}X'\varepsilon) = V$ is positive definite;
(iii). (a) $M \equiv E(x_t x_t')$ is positive definite;
(b) $E|X_{ti}|^2 < \infty$, $i = 1, 2, \ldots, k$.
Then $D^{-1/2}\sqrt{T}(\hat{\beta} - \beta) \xrightarrow{L} N(0, I)$, where $D \equiv M^{-1}VM^{-1}$.

Remark:
1. Assumption (ii.a) specifies the mean of the i.i.d. sequence $(X_{ti}\varepsilon_t,\ i = 1, 2, \ldots, k)$; (ii.b) ensures that its second moment exists, which is needed to apply the Lindeberg-Lévy central limit theorem (see p. 22 of Ch. 4); and (ii.c) standardizes the random vector $T^{-1/2}(X'\varepsilon)$ so that the asymptotic distribution is unit multivariate normal.
2. Assumption (iii.a) specifies the mean of the i.i.d. sequence $(X_{ti}X_{tj},\ i = 1, 2, \ldots, k;\ j = 1, 2, \ldots, k)$, and (iii.b) guarantees that its first moment exists, by the Cauchy-Schwarz inequality. Existence of the first moment is what is needed for the LLN for i.i.d. sequences; see p. 15 of Ch. 4.

Proof:


From these assumptions we have

$$\sqrt{T}\left(\frac{X'\varepsilon}{T}\right) = T^{-1/2}X'\varepsilon \xrightarrow{L} N\!\left(0, \mathrm{Var}(T^{-1/2}X'\varepsilon)\right) \equiv N(0, V)$$

and

$$\left(\frac{X'X}{T}\right) = \frac{\sum_{t=1}^T x_t x_t'}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right) = M.$$

Therefore

$$\sqrt{T}(\hat{\beta} - \beta) \xrightarrow{L} M^{-1} \cdot N(0, V) \equiv N(0, M^{-1}VM^{-1}),$$

or

$$(M^{-1}VM^{-1})^{-1/2}\sqrt{T}(\hat{\beta} - \beta) \xrightarrow{L} N(0, I).$$
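A quick simulation sketch of this theorem (again under an illustrative assumed design, not from the notes): with $x_t \sim N(0, I_2)$ and $\varepsilon_t \sim N(0, \sigma^2)$ independent of $x_t$, we have $M = I$ and $V = \sigma^2 I$, so $D = M^{-1}VM^{-1} = \sigma^2 I$ and $D^{-1/2} = I/\sigma$. The sample covariance of the standardized statistic should then approach the identity.

```python
import numpy as np

# Sketch: sample covariance of D^{-1/2} sqrt(T)(beta_hat - beta)
# across replications should be close to I_2 for this design.
rng = np.random.default_rng(1)
beta, sigma, T, reps = np.array([1.0, -0.5]), 2.0, 500, 2000

draws = []
for _ in range(reps):
    X = rng.normal(size=(T, 2))
    y = X @ beta + sigma * rng.normal(size=T)
    b = np.linalg.solve(X.T @ X, X.T @ y)
    draws.append((np.sqrt(T) / sigma) * (b - beta))  # D^{-1/2} sqrt(T)(b - beta)

print(np.cov(np.array(draws).T))             # close to the 2x2 identity
```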


1.2 Independent Heterogeneously Distributed Observations

1.2.1 Consistency

To prove consistency of $\hat{\beta}$, we use the revised Markov law of large numbers from Ch. 4. Rewriting

$$\hat{\beta} - \beta = (X'X)^{-1}X'\varepsilon = \left(\frac{X'X}{T}\right)^{-1}\left(\frac{X'\varepsilon}{T}\right) = \left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right)^{-1}\left(\frac{\sum_{t=1}^T x_t\varepsilon_t}{T}\right),$$

we have the following result.

Theorem:
In addition to (1), suppose
(i). $\{(x_t', \varepsilon_t)'\}$ is an independent sequence;
(ii). (a) $E(x_t\varepsilon_t) = 0$;
(b) $E|X_{ti}\varepsilon_t|^{1+\delta} < \infty$ for some $\delta > 0$, $i = 1, 2, \ldots, k$;
(iii). (a) $M_T \equiv E(X'X/T)$ is positive definite;
(b) $E|X_{ti}^2|^{1+\delta} < \infty$ for some $\delta > 0$, $i = 1, 2, \ldots, k$.
Then $\hat{\beta} \xrightarrow{a.s.} \beta$.

Remark:
1. Assumption (ii.a) specifies the mean of the independent sequence $(X_{ti}\varepsilon_t,\ i = 1, 2, \ldots, k)$, and (ii.b) ensures that its $(1+\delta)$-th moment exists.
2. Assumption (iii.a) specifies the limit of almost sure convergence of $\frac{X'X}{T}$, and (iii.b) guarantees that the $(1+\delta)$-th moment of $(X_{ti}X_{tj},\ i = 1, 2, \ldots, k;\ j = 1, 2, \ldots, k)$ exists, by the Cauchy-Schwarz inequality. Existence of the $(1+\delta)$-th moment is what is needed for the LLN for independent sequences; see p. 15 of Ch. 4.

Proof:
From these assumptions we have

$$\left(\frac{X'\varepsilon}{T}\right) = \frac{\sum_{t=1}^T x_t\varepsilon_t}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t\varepsilon_t}{T}\right) = 0$$

and

$$\left(\frac{X'X}{T}\right) = \frac{\sum_{t=1}^T x_t x_t'}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right) = M_T.$$

Therefore $\hat{\beta} - \beta \xrightarrow{a.s.} M_T^{-1} \cdot 0 = 0$, or $\hat{\beta} \xrightarrow{a.s.} \beta$.
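A brief sketch of this independent-but-heterogeneous case (the cycling error variances are an illustrative assumption, not from the notes): the observations are no longer identically distributed, yet the bounded moments keep the LLN, and hence consistency, intact.

```python
import numpy as np

# Sketch: heteroskedastic error scales cycle across t, so the data
# are independent but not identically distributed; OLS stays consistent.
rng = np.random.default_rng(2)
beta = np.array([1.0, -0.5])

for T in (100, 10_000, 1_000_000):
    X = rng.normal(size=(T, 2))
    scale = 1.0 + (np.arange(T) % 5)          # bounded, varying error scales
    y = X @ beta + scale * rng.normal(size=T)
    print(T, np.linalg.solve(X.T @ X, X.T @ y))   # approaches (1.0, -0.5)
```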


1.2.2 Asymptotic Normality

To prove asymptotic normality of $\hat{\beta}$, we use the revised Markov LLN and the Liapounov and Lindeberg-Feller central limit theorems from Ch. 4. Rewriting

$$\sqrt{T}(\hat{\beta} - \beta) = \left(\frac{X'X}{T}\right)^{-1}\sqrt{T}\left(\frac{X'\varepsilon}{T}\right) = \left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right)^{-1}\sqrt{T}\left(\frac{\sum_{t=1}^T x_t\varepsilon_t}{T}\right),$$

we have the following result.

Theorem:
In addition to (1), suppose
(i). $\{(x_t', \varepsilon_t)'\}$ is an independent sequence;
(ii). (a) $E(x_t\varepsilon_t) = 0$;
(b) $E|X_{ti}\varepsilon_t|^{2+\delta} < \infty$ for some $\delta > 0$, $i = 1, 2, \ldots, k$;
(c) $V_T \equiv \mathrm{Var}(T^{-1/2}X'\varepsilon)$ is positive definite;
(iii). (a) $M_T \equiv E(X'X/T)$ is positive definite;
(b) $E|X_{ti}^2|^{1+\delta} < \infty$ for some $\delta > 0$, $i = 1, 2, \ldots, k$.
Then $D_T^{-1/2}\sqrt{T}(\hat{\beta} - \beta) \xrightarrow{L} N(0, I)$, where $D_T \equiv M_T^{-1}V_T M_T^{-1}$.

Remark:
1. Assumption (ii.a) specifies the mean of the independent sequence $(X_{ti}\varepsilon_t,\ i = 1, 2, \ldots, k)$; (ii.b) ensures that its $(2+\delta)$-th moment exists, which is needed to apply Liapounov's central limit theorem (see p. 23 of Ch. 4); and (ii.c) standardizes the random vector $T^{-1/2}(X'\varepsilon)$ so that the asymptotic distribution is unit multivariate normal.
2. Assumption (iii.a) specifies the limit of almost sure convergence of $\frac{X'X}{T}$, and (iii.b) guarantees that the $(1+\delta)$-th moment of $(X_{ti}X_{tj},\ i = 1, 2, \ldots, k;\ j = 1, 2, \ldots, k)$ exists, by the Cauchy-Schwarz inequality. Existence of the $(1+\delta)$-th moment is what is needed for the LLN for independent sequences; see p. 15 of Ch. 4.

Proof:


From these assumptions we have

$$\sqrt{T}\left(\frac{X'\varepsilon}{T}\right) = T^{-1/2}X'\varepsilon \xrightarrow{L} N\!\left(0, \mathrm{Var}(T^{-1/2}X'\varepsilon)\right) \equiv N(0, V_T)$$

and

$$\left(\frac{X'X}{T}\right) = \frac{\sum_{t=1}^T x_t x_t'}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right) = M_T.$$

Therefore

$$\sqrt{T}(\hat{\beta} - \beta) \xrightarrow{L} M_T^{-1} \cdot N(0, V_T) \equiv N(0, M_T^{-1}V_T M_T^{-1}),$$

or

$$(M_T^{-1}V_T M_T^{-1})^{-1/2}\sqrt{T}(\hat{\beta} - \beta) \xrightarrow{L} N(0, I).$$
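In practice $D_T = M_T^{-1}V_T M_T^{-1}$ must be estimated. The notes do not specify an estimator of $V_T$; the sketch below assumes White's heteroskedasticity-consistent choice $\hat{V}_T = (1/T)\sum_t \hat{\varepsilon}_t^2 x_t x_t'$ built from OLS residuals, which is one standard feasible version of the sandwich form in the theorem above.

```python
import numpy as np

# Sketch of a feasible sandwich estimate of D_T = M_T^{-1} V_T M_T^{-1}.
# V_hat below is White's heteroskedasticity-consistent estimator,
# assumed here for illustration; the design is also illustrative.
rng = np.random.default_rng(3)
T = 1000
X = np.column_stack([np.ones(T), rng.normal(size=T)])
scale = 0.5 + np.abs(X[:, 1])                  # heteroskedastic error scale
y = X @ np.array([1.0, -0.5]) + scale * rng.normal(size=T)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b                                   # OLS residuals
M_hat = X.T @ X / T                             # estimates M_T
V_hat = (X * (e**2)[:, None]).T @ X / T         # sum_t e_t^2 x_t x_t' / T
D_hat = np.linalg.inv(M_hat) @ V_hat @ np.linalg.inv(M_hat)
print(np.sqrt(np.diag(D_hat / T)))              # robust standard errors for b
```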


From the theorems above, the asymptotic normality of the OLS estimator depends crucially on the existence of at least the second moments of the regressors $X_{ti}$, from which we obtain an LLN such that

$$\frac{X'X}{T} \xrightarrow{a.s.} E\left(\frac{\sum_{t=1}^T x_t x_t'}{T}\right) = M_T = O(1).$$

As we saw in the last chapter, an I(1) variable does not have finite second moments; therefore, when the regressor is a unit root process, the traditional asymptotic results for the OLS estimator do not apply. However, there is a case in which the regressor is not stochastic but violates the condition $\frac{X'X}{T} \xrightarrow{a.s.} M_T = O(1)$; as we will see in the following, asymptotic normality is still valid, although the rate of convergence to normality changes.

2 Processes with Deterministic Time Trends

The coefficients of regression models involving unit roots or deterministic time trends are typically estimated by OLS. However, the asymptotic distributions of the coefficient estimates cannot be calculated in the same way as those for regression models involving stationary variables. Among other difficulties, the estimates of different parameters will in general have different asymptotic rates of convergence.

2.1 Asymptotic Distribution of OLS Estimators of the Simple Time Trend Model

Consider the OLS estimation of the parameters of a simple time trend,

$$Y_t = \alpha + \delta t + \varepsilon_t, \tag{3}$$

for $\varepsilon_t$ a white noise process. If $\varepsilon_t \sim N(0, \sigma^2)$, the model (3) satisfies the classical assumptions and the standard OLS $t$ or $F$ statistics have exact small-sample $t$ or $F$ distributions. On the other hand, if $\varepsilon_t$ is non-Gaussian, then a slightly different technique from that employed in the last section must be used to find the asymptotic distribution of the OLS estimates of $\alpha$ and $\delta$.

Write (3) in the form of the standard regression model, $Y_t = x_t'\beta + \varepsilon_t$, where

$$x_t' \equiv \begin{bmatrix} 1 & t \end{bmatrix}, \qquad \beta \equiv \begin{bmatrix} \alpha \\ \delta \end{bmatrix}.$$

Let $\hat{\beta}_T$ denote the OLS estimate of $\beta$ based on a sample of size $T$; the deviation of $\hat{\beta}_T$ from the true value can be expressed as

$$(\hat{\beta}_T - \beta) = \left[\sum_{t=1}^T x_t x_t'\right]^{-1}\left[\sum_{t=1}^T x_t\varepsilon_t\right]. \tag{4}$$
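A small sketch of model (3) and estimator (4) (parameter values are illustrative assumptions): generate a trending series with white-noise errors and fit it by OLS with regressor $x_t' = [1\ \ t]$.

```python
import numpy as np

# Sketch: simulate Y_t = alpha + delta * t + eps_t and compute the
# OLS estimate (X'X)^{-1} X'y with x_t' = [1, t].
rng = np.random.default_rng(4)
T, alpha, delta = 200, 1.0, 0.05
t = np.arange(1, T + 1)
y = alpha + delta * t + rng.normal(size=T)     # white-noise errors

X = np.column_stack([np.ones(T), t])           # rows are x_t' = [1, t]
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                                # near (1.0, 0.05)
```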


To find the asymptotic distribution of $(\hat{\beta}_T - \beta)$, the approach in the last section was to multiply (4) by $\sqrt{T}$, resulting in

$$\sqrt{T}(\hat{\beta}_T - \beta) = \left[(1/T)\sum_{t=1}^T x_t x_t'\right]^{-1}\left[(1/\sqrt{T})\sum_{t=1}^T x_t\varepsilon_t\right]. \tag{5}$$

The usual assumption was that $(1/T)\sum_{t=1}^T x_t x_t'$ converges in probability to a nonsingular matrix $M$ while $(1/\sqrt{T})\sum_{t=1}^T x_t\varepsilon_t$ converges in distribution to a $N(0, V)$ random variable, implying that $\sqrt{T}(\hat{\beta}_T - \beta) \xrightarrow{L} N(0, M^{-1}VM^{-1})$.

For $x_t$ given above, we note that

$$\frac{1}{T^{v+1}}\sum_{t=1}^T t^v \to \frac{1}{v+1}, \tag{6}$$

implying that

$$\sum_{t=1}^T x_t x_t' = \begin{bmatrix} \sum 1 & \sum t \\ \sum t & \sum t^2 \end{bmatrix} = \begin{bmatrix} T & T(T+1)/2 \\ T(T+1)/2 & T(T+1)(2T+1)/6 \end{bmatrix} \equiv \begin{bmatrix} O(T) & O(T^2) \\ O(T^2) & O(T^3) \end{bmatrix}. \tag{7}$$

In contrast to the usual result (2), the matrix $(1/T)\sum_{t=1}^T x_t x_t'$ in (5) diverges. To obtain a convergent and nondegenerate limiting distribution, we can think of premultiplying and postmultiplying $\left[\sum_{t=1}^T x_t x_t'\right]$ by the matrix

$$\Upsilon_T^{-1} = \begin{bmatrix} T^{1/2} & 0 \\ 0 & T^{3/2} \end{bmatrix}^{-1},$$

which gives

$$\Upsilon_T^{-1}\left[\sum_{t=1}^T x_t x_t'\right]\Upsilon_T^{-1} = \begin{bmatrix} T^{-1/2} & 0 \\ 0 & T^{-3/2} \end{bmatrix}\begin{bmatrix} \sum 1 & \sum t \\ \sum t & \sum t^2 \end{bmatrix}\begin{bmatrix} T^{-1/2} & 0 \\ 0 & T^{-3/2} \end{bmatrix} = \begin{bmatrix} T^{-1}\sum 1 & T^{-2}\sum t \\ T^{-2}\sum t & T^{-3}\sum t^2 \end{bmatrix} \to Q,$$

where

$$Q = \begin{bmatrix} 1 & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{3} \end{bmatrix} \tag{8}$$

according to (6). Turning next to the second term in (4) and premultiplying it by $\Upsilon_T^{-1}$,

$$\Upsilon_T^{-1}\left[\sum_{t=1}^T x_t\varepsilon_t\right] = \begin{bmatrix} T^{-1/2} & 0 \\ 0 & T^{-3/2} \end{bmatrix}\begin{bmatrix} \sum \varepsilon_t \\ \sum t\varepsilon_t \end{bmatrix} = \begin{bmatrix} (1/\sqrt{T})\sum \varepsilon_t \\ (1/\sqrt{T})\sum (t/T)\varepsilon_t \end{bmatrix}. \tag{9}$$
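The scaling $\Upsilon_T$ can be checked by simulation (illustrative parameters, not from the notes): $\sqrt{T}(\hat{\alpha}_T - \alpha)$ and $T^{3/2}(\hat{\delta}_T - \delta)$ should both remain $O_p(1)$ as $T$ grows, confirming that the intercept and the trend coefficient converge at different rates.

```python
import numpy as np

# Sketch: standard deviations of sqrt(T)(alpha_hat - alpha) and
# T^{3/2}(delta_hat - delta) stabilize as T grows (near 2 and sqrt(12)
# for sigma = 1, the diagonal of sigma^2 Q^{-1}).
rng = np.random.default_rng(5)
alpha, delta, reps = 1.0, 0.2, 500

for T in (100, 1000, 10_000):
    t = np.arange(1, T + 1)
    X = np.column_stack([np.ones(T), t])
    devs = []
    for _ in range(reps):
        y = alpha + delta * t + rng.normal(size=T)
        b = np.linalg.solve(X.T @ X, X.T @ y)
        devs.append([np.sqrt(T) * (b[0] - alpha), T**1.5 * (b[1] - delta)])
    print(T, np.std(devs, axis=0))             # both columns stay O(1)
```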


We now prove the asymptotic normality of (9) under standard assumptions about $\varepsilon_t$. Suppose that $\varepsilon_t$ is i.i.d. with mean zero, variance $\sigma^2$, and finite fourth moment. Then the first element of the vector in (9) satisfies

$$(1/\sqrt{T})\sum \varepsilon_t \xrightarrow{L} N(0, \sigma^2)$$

by the Lindeberg-Lévy CLT. For the second element of the vector in (9), observe that $\{(t/T)\varepsilon_t\}$ is a martingale difference sequence that satisfies the definition on p. 13 of Ch. 4. Specifically, its variance is

$$\sigma_t^2 = E[(t/T)\varepsilon_t]^2 = \sigma^2 \cdot (t^2/T^2),$$

where

$$(1/T)\sum_{t=1}^T \sigma_t^2 = \sigma^2(1/T^3)\sum_{t=1}^T t^2 \to \sigma^2/3.$$

Furthermore, to apply the CLT for a martingale difference sequence, we need to show that $(1/T)\sum_{t=1}^T [(t/T)\varepsilon_t]^2 \xrightarrow{p} \sigma^2/3$, as in condition (iii) on p. 26 of Ch. 4. To prove this, notice that

$$E\left((1/T)\sum_{t=1}^T [(t/T)\varepsilon_t]^2 - (1/T)\sum_{t=1}^T \sigma_t^2\right)^2 = E\left((1/T)\sum_{t=1}^T [(t/T)\varepsilon_t]^2 - (1/T)\sum_{t=1}^T (t/T)^2\sigma^2\right)^2$$

$$= E\left((1/T)\sum_{t=1}^T (t/T)^2(\varepsilon_t^2 - \sigma^2)\right)^2 = (1/T)^2\sum_{t=1}^T (t/T)^4 E(\varepsilon_t^2 - \sigma^2)^2 = E(\varepsilon_t^2 - \sigma^2)^2\left(1/T^6\sum_{t=1}^T t^4\right) \to 0,$$

where the cross terms vanish because the $(\varepsilon_t^2 - \sigma^2)$ are independent with mean zero, the fourth moment of $\varepsilon_t$ exists by assumption, and the limit follows from (6). This implies that

$$(1/T)\sum_{t=1}^T [(t/T)\varepsilon_t]^2 - (1/T)\sum_{t=1}^T \sigma_t^2 \xrightarrow{m.s.} 0,$$

which in turn implies that

$$(1/T)\sum_{t=1}^T [(t/T)\varepsilon_t]^2 \xrightarrow{p} \sigma^2/3.$$
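A numeric check of this last convergence (with $\sigma^2 = 4$ as an illustrative choice): the sample average $(1/T)\sum_t [(t/T)\varepsilon_t]^2$ should approach $\sigma^2/3 = 4/3$.

```python
import numpy as np

# Sketch: (1/T) sum_t [(t/T) eps_t]^2 approaches sigma^2 / 3.
rng = np.random.default_rng(6)
sigma = 2.0                                    # so sigma^2 / 3 = 4/3
for T in (100, 10_000, 1_000_000):
    t = np.arange(1, T + 1)
    eps = sigma * rng.normal(size=T)
    print(T, np.mean((t / T * eps) ** 2))      # approaches 4/3 = 1.333...
```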
