Chapter 6  Large Sample Inference and Prediction

6.1 Large sample inference

Consider the null hypothesis
\[
H_0 : R\beta = r,
\]
where $R$ is a $J \times K$ matrix. The Wald test statistic for this null hypothesis is
\[
W = (Rb - r)' \left[ \hat{\sigma}^2 R (X'X)^{-1} R' \right]^{-1} (Rb - r),
\]
and as $n \to \infty$, $W \xrightarrow{d} \chi^2(J)$. This result does NOT require a normality assumption. It follows from

1. $\sqrt{n}\,(Rb - r) \xrightarrow{d} N\left(0, \sigma^2 R Q^{-1} R'\right)$,
2. $\hat{\sigma}^2 \xrightarrow{p} \sigma^2$,
3. $\left( X'X/n \right)^{-1} \xrightarrow{p} Q^{-1}$.

Writing
\[
W = \sqrt{n}\,(Rb - r)' \left[ \hat{\sigma}^2 R \left( \tfrac{X'X}{n} \right)^{-1} R' \right]^{-1} \sqrt{n}\,(Rb - r)
\]
and applying these results, we have
\[
W \xrightarrow{d} N\left(0, \sigma^2 R Q^{-1} R'\right)' \left[ \sigma^2 R Q^{-1} R' \right]^{-1} N\left(0, \sigma^2 R Q^{-1} R'\right) = N(0, I_J)'\, N(0, I_J) = \chi^2(J).
\]

Consider $H_0 : \beta_k = \beta_k^0$. For this null hypothesis, we have
\[
t = \frac{b_k - \beta_k^0}{\sqrt{s^2 \left[(X'X)^{-1}\right]_{kk}}}.
\]
Writing
\[
t = \frac{\sqrt{n}\,(b_k - \beta_k^0)}{\sqrt{s^2 \left[\left( \tfrac{X'X}{n} \right)^{-1}\right]_{kk}}},
\]
we find that
\[
t \xrightarrow{d} \frac{N\left(0, \sigma^2 Q^{-1}_{kk}\right)}{\sqrt{\sigma^2 Q^{-1}_{kk}}} = N(0, 1).
\]
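A minimal numerical sketch of the Wald test may help fix ideas. Everything below (the simulated design, the non-normal errors, and the particular restriction $R\beta = r$) is an illustrative assumption, not part of the text; the statistic itself is computed exactly as defined above, with $s^2$ playing the role of $\hat{\sigma}^2$.

```python
# Sketch: Wald test of H0: R beta = r on simulated data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, K = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 0.5, 0.0])
eps = rng.exponential(1.0, size=n) - 1.0    # deliberately non-normal errors
y = X @ beta + eps

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                       # OLS estimator
resid = y - X @ b
s2 = resid @ resid / (n - K)                # consistent estimate of sigma^2

# J = 2 restrictions: beta_2 = 0.5 and beta_3 = 0 (true in this simulation)
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
r = np.array([0.5, 0.0])
diff = R @ b - r
W = diff @ np.linalg.inv(s2 * R @ XtX_inv @ R.T) @ diff
p_value = stats.chi2.sf(W, df=R.shape[0])   # W -> chi^2(J) as n grows
print(f"W = {W:.3f}, p-value = {p_value:.3f}")
```

The errors are drawn from a centered exponential distribution precisely to illustrate the point above: the $\chi^2(J)$ limit does not rely on normality.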
6.2 Testing nonlinear restrictions

Consider $H_0 : c(\beta) = q$. Since
\[
c(\hat{\beta}) \simeq c(\beta) + \frac{\partial c(\beta)}{\partial \beta'} \left( \hat{\beta} - \beta \right),
\qquad
\mathrm{Var}\left[ c(\hat{\beta}) \right] \simeq \frac{\partial c(\beta)}{\partial \beta'}\, \mathrm{Var}\left[ \hat{\beta} \right] \frac{\partial c(\beta)}{\partial \beta},
\]
the test statistic we use is
\[
Z = \frac{c(\hat{\beta}) - q}{\sqrt{\left. \dfrac{\partial c(\beta)}{\partial \beta'} \right|_{\beta=\hat{\beta}} \mathrm{Var}\left[ \hat{\beta} \right] \left. \dfrac{\partial c(\beta)}{\partial \beta} \right|_{\beta=\hat{\beta}}}}.
\]
As $n \to \infty$, $Z \xrightarrow{d} N(0, 1)$ under $H_0$.

Example 1 (A Long-Run MPC)
\[
\ln C_t = \underset{(0.01055)}{0.003142} + \underset{(0.02873)}{0.07495}\, \ln Y_t + \underset{(0.02859)}{0.92460}\, \ln C_{t-1},
\qquad R^2 = 0.999712, \quad s = 0.00874,
\]
with standard errors in parentheses, $a = 0.003142$, $b = 0.07495$, $c = 0.92460$, and estimated asymptotic $\mathrm{Cov}[b, c] = -0.0003298$. The long-run MPC is $\delta = \beta/(1 - \gamma)$, and the hypothesis of interest is $H_0 : \delta = 1$. The point estimate is
\[
d = \frac{b}{1 - c} = \frac{0.07495}{1 - 0.9246} = 0.99403,
\]
with derivatives
\[
g_b = \frac{\partial d}{\partial b} = \frac{1}{1 - c} = 13.2626,
\qquad
g_c = \frac{\partial d}{\partial c} = \frac{b}{(1 - c)^2} = 13.1834.
\]
The estimated asymptotic variance of $d$ is
\[
g_b^2\, \mathrm{Est.Asy.Var}[b] + g_c^2\, \mathrm{Est.Asy.Var}[c] + 2 g_b g_c\, \mathrm{Est.Asy.Cov}[b, c]
\]
\[
= 13.2626^2 \times 0.02873^2 + 13.1834^2 \times 0.02859^2 + 2\,(13.2626)(13.1834)(-0.0003298) = 0.17192.
\]
Thus,
\[
Z = \frac{0.99403 - 1}{\sqrt{0.17192}} = -0.0144.
\]
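The delta-method arithmetic in Example 1 is easy to verify directly. This sketch only plugs the estimates and (co)variances reported above into the variance formula; nothing new is estimated, and the variable names are ours.

```python
# Check the delta-method calculation for the long-run MPC in Example 1.
import math

b, c = 0.07495, 0.92460            # coefficients on ln Y_t and ln C_{t-1}
var_b, var_c = 0.02873**2, 0.02859**2
cov_bc = -0.0003298                # estimated asymptotic covariance of (b, c)

d = b / (1 - c)                    # long-run MPC estimate
g_b = 1 / (1 - c)                  # dd/db
g_c = b / (1 - c) ** 2             # dd/dc

var_d = g_b**2 * var_b + g_c**2 * var_c + 2 * g_b * g_c * cov_bc
z = (d - 1) / math.sqrt(var_d)     # test of H0: long-run MPC = 1
print(f"d = {d:.5f}, Est.Asy.Var[d] = {var_d:.5f}, Z = {z:.4f}")
# Expected output: d = 0.99403, Est.Asy.Var[d] = 0.17192, Z = -0.0144
```

Since $|Z| = 0.0144$ is far below any conventional critical value, the hypothesis that the long-run MPC equals one is not rejected.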
6.3 Prediction

Suppose that we want to predict
\[
y^0 = X^{0\prime} \beta + \varepsilon^0.
\]
The minimum variance linear unbiased estimator of $E(y^0 \mid X^0) = X^{0\prime}\beta$ is
\[
\hat{y}^0 = X^{0\prime} b.
\]
The forecast error is
\[
e^0 = y^0 - \hat{y}^0 = (\beta - b)' X^0 + \varepsilon^0,
\]
and the prediction variance is
\[
\mathrm{Var}\left[ e^0 \mid X, X^0 \right] = \sigma^2 + \mathrm{Var}\left[ (\beta - b)' X^0 \mid X, X^0 \right] = \sigma^2 + X^{0\prime} \sigma^2 (X'X)^{-1} X^0.
\]
The prediction variance can be estimated by using $s^2$ in place of $\sigma^2$. A confidence interval for $y^0$ would be formed using
\[
\hat{y}^0 \pm t_{\lambda/2}\, \mathrm{se}\left( e^0 \right).
\]
This formula is based on the assumption of normality for the regression errors.

Measures for assessing the predictive accuracy of forecasting models are

1. Root mean squared error:
\[
\mathrm{RMSE} = \sqrt{ \frac{1}{n^0} \sum_i (y_i - \hat{y}_i)^2 },
\]
where $n^0$ is the number of periods being forecasted.

2. Mean absolute error:
\[
\mathrm{MAE} = \frac{1}{n^0} \sum_i |y_i - \hat{y}_i|.
\]

3. Theil $U$-statistic:
\[
U = \sqrt{ \frac{ \frac{1}{n^0} \sum_i (y_i - \hat{y}_i)^2 }{ \frac{1}{n^0} \sum_i y_i^2 } },
\qquad
U_{\Delta} = \sqrt{ \frac{ \frac{1}{n^0} \sum_i (\Delta y_i - \Delta \hat{y}_i)^2 }{ \frac{1}{n^0} \sum_i (\Delta y_i)^2 } }.
\]
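To illustrate the prediction-variance formula, the sketch below computes a point forecast and a 95% interval for one new observation from simulated data. The design, sample size, and confidence level are assumptions made for this example; the interval uses $s^2 + X^{0\prime} s^2 (X'X)^{-1} X^0$ and a $t$ critical value exactly as above.

```python
# Sketch: point forecast and prediction interval at a new regressor vector x0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, K = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
resid = y - X @ b
s2 = resid @ resid / (n - K)

x0 = np.array([1.0, 0.5])                     # regressors for the new observation
y0_hat = x0 @ b                               # point forecast X0'b
pred_var = s2 + x0 @ (s2 * XtX_inv) @ x0      # estimated prediction variance
se = np.sqrt(pred_var)
t_crit = stats.t.ppf(0.975, df=n - K)         # normal-errors interval, 95% level
print(f"forecast = {y0_hat:.3f}, "
      f"interval = [{y0_hat - t_crit * se:.3f}, {y0_hat + t_crit * se:.3f}]")
```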
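The three accuracy measures translate into short helper functions. The arrays below are made-up placeholder values, and in $U_{\Delta}$ the forecast change $\Delta\hat{y}_i$ is taken to be the period-to-period change in the forecasts, which is one reading of the formula; other conventions exist.

```python
# Sketch: forecast-accuracy measures for a hold-out sample of n0 periods.
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def theil_u(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2) / np.mean(y ** 2))

def theil_u_delta(y, y_hat):
    # Change-based version: Delta y_i = y_i - y_{i-1} (and likewise for the forecasts).
    dy, dy_hat = np.diff(y), np.diff(y_hat)
    return np.sqrt(np.mean((dy - dy_hat) ** 2) / np.mean(dy ** 2))

# Placeholder actual and forecast values, for illustration only.
y = np.array([10.2, 10.8, 11.1, 11.9, 12.4])
y_hat = np.array([10.0, 10.9, 11.3, 11.7, 12.6])
print(rmse(y, y_hat), mae(y, y_hat), theil_u(y, y_hat), theil_u_delta(y, y_hat))
```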