Economics 528 (In Choi)
HKUST, Spring 2004

Review Problems for Midterm

1 Least Squares

1. In the linear regression model y = Xβ + ε, the unit of measurement of the dependent variable y needs to be changed, so y* = cy (c is a constant) is now used as the dependent variable.

   (a) Does this practice change R²?
   (b) What happens to R² if the unit of measurement is changed only for the regressor?

2. Consider the linear regression model y_i = α + β′x_i + ε_i, ε_i ∼ iid(µ, σ²), µ ≠ 0.

   (a) Is the OLS estimator of β affected by the nonzero mean of ε_i?
   (b) Can the least squares estimator of α estimate it accurately?

3. Prove that the OLS estimators of β_1 in the following two linear models are identical:

   y_t = x_t β_1 + t β_2 + ε_t,
   y*_t = x*_t β_1 + ε_t,

   where y*_t and x*_t are the de-trended y_t and x_t, obtained by regressing y_t and x_t on t and setting y*_t and x*_t equal to the respective residuals.

4. Discuss the validity of the following statements.

   (a) The sum of the residuals is always zero.
   (b) If a regression produces an R² greater than 0.5, the regression is a reliable one.
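A quick numerical check of Problems 1 and 3 above, sketched in Python with NumPy. The sample size, coefficients, and data-generating process are arbitrary illustrative choices, not part of the problems; the point is that the detrended regression reproduces the coefficient on x_t exactly (the Frisch-Waugh-Lovell result Problem 3 asks you to prove), and that R² is unchanged when y is rescaled.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(1.0, n + 1)
x = 0.5 * t + rng.normal(size=n)             # regressor with a trend component
y = 2.0 * x + 0.3 * t + rng.normal(size=n)   # illustrative DGP: y = 2x + 0.3t + e

def ols(X, y):
    """Return OLS coefficients and residuals from regressing y on the columns of X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# Problem 3: joint regression of y_t on (x_t, t) ...
b_joint, _ = ols(np.column_stack([x, t]), y)

# ... versus regressing the de-trended y*_t on the de-trended x*_t
T = t.reshape(-1, 1)
_, y_star = ols(T, y)   # residuals from regressing y on t
_, x_star = ols(T, x)   # residuals from regressing x on t
b_fwl, _ = ols(x_star.reshape(-1, 1), y_star)

print(b_joint[0], b_fwl[0])  # identical up to rounding error

# Problem 1: R^2 is invariant to rescaling the dependent variable, y* = cy
def r2(X, y):
    _, e = ols(X, y)
    yc = y - y.mean()
    return 1.0 - (e @ e) / (yc @ yc)

Xc = np.column_stack([np.ones(n), x])        # model with an intercept
print(np.isclose(r2(Xc, y), r2(Xc, 5.0 * y)))  # True
```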
   (c) In the regression model y_i = αx_i + ε_i, switching the dependent and independent variables and running least squares provides a valid estimator of 1/α.
   (d) Adjusted R² (R̄²) tends to favor larger models.

5. Instead of the regressor matrix X, its rotation Z = XA, defined with a nonsingular K × K matrix A, is used as the regressor. Show that the residuals from this regression are the same as those from the regression using X.

2 Finite sample properties of the OLS estimator

1. Suppose that the regression model is y_i = α + βx_i + ε_i, where every ε_i has density f(x) = exp(−x/λ)/λ, x ≥ 0. Note that E(ε_i) = λ for all i. Show that the least squares estimator of β is unbiased but that the LSE of α is not.

2. Suppose that you unnecessarily included a constant term in a bivariate linear regression model. In other words, the true model is y_i = βx_i + ε_i, whereas you estimated the model y_i = α + βx_i + ε_i.

   (a) Is the OLS estimator of β unbiased?
   (b) Is the OLS estimator of β more efficient than that from the true model?

3. (Extension of 2) The true model is y_i = β′x_i + ε_i, but the model y_i = α′z_i + β′x_i + ε_i was estimated.
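A Monte Carlo sketch of Problem 1 of this section (the values of α, β, and λ are arbitrary illustrative assumptions): with exponential errors of mean λ, the slope estimator remains unbiased, while the intercept estimator centers on α + λ rather than α, since the constant term absorbs the nonzero error mean.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, lam = 1.0, 2.0, 3.0      # illustrative true values; E[e_i] = lam
n, reps = 100, 5000
x = np.linspace(0.0, 10.0, n)         # fixed regressor
X = np.column_stack([np.ones(n), x])

est = np.empty((reps, 2))
for r in range(reps):
    e = rng.exponential(scale=lam, size=n)  # density exp(-x/lam)/lam, x >= 0
    y = alpha + beta * x + e
    est[r] = np.linalg.lstsq(X, y, rcond=None)[0]

a_mean, b_mean = est.mean(axis=0)
print(b_mean)   # close to beta = 2.0: the slope estimator is unbiased
print(a_mean)   # close to alpha + lam = 4.0: the intercept is biased by lam
```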
   (a) Is the OLS estimator of β from this model more efficient than that from the true model?
   (b) Is the OLS estimator of α unbiased?

4. Consider the simple linear regression model ln(y_t) = α + βt + ε_t, ε_t ∼ iid(0, σ²), t = 1, …, n. One considers estimating β with

   β̄ = [ln(y_n) − ln(y_1)] / (n − 1).

   (a) Is β̄ linear and unbiased?
   (b) Is β̄ more efficient than the OLS estimator of β?
   (c) Does the variance of β̄ decrease as n increases?
   (d) Can β̄ be interpreted as the growth rate of y_t?

5. In Example 4.3 of our text, the earnings equation is estimated as

   ln earnings = 3.24 + 0.20 age − 0.823 age² + 0.07 edu − 0.35 kids,
                (1.76)  (0.08)    (0.001)      (0.03)     (0.15)

   R² = 0.04; n = 428,

   where the numbers in parentheses are standard errors.

   (a) Are all the coefficients statistically significant at the 5% level? Assume a normal distribution for the errors.
   (b) Construct 95% confidence intervals for the coefficients.
   (c) Are the coefficients jointly significant at the 5% level? Assume normality for the errors. (Hint: the F-test statistic is F = [R²/(K − 1)] / [(1 − R²)/(n − K)].)

6. Prove the independence of the OLS estimator and the unbiased estimator of the error variance in a standard linear regression model with normally distributed errors.
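A simulation sketch for Problem 4 of this section (α, β, σ, n, and the replication count are arbitrary illustrative choices). It compares β̄ = [ln(y_n) − ln(y_1)]/(n − 1), which uses only the two endpoint observations, with the OLS slope; both are linear and unbiased, but by the Gauss-Markov theorem OLS has the smaller variance.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, sigma = 0.1, 0.05, 0.2   # illustrative values
n, reps = 50, 20000
t = np.arange(1.0, n + 1)
T = np.column_stack([np.ones(n), t])

b_ols = np.empty(reps)
b_bar = np.empty(reps)
for r in range(reps):
    lny = alpha + beta * t + sigma * rng.normal(size=n)
    b_ols[r] = np.linalg.lstsq(T, lny, rcond=None)[0][1]
    b_bar[r] = (lny[-1] - lny[0]) / (n - 1)   # endpoint estimator

print(b_ols.mean(), b_bar.mean())   # both close to beta = 0.05 (unbiased)
print(b_ols.var() < b_bar.var())    # True: OLS is more efficient
# Theoretical variances: sigma^2 / sum((t - t.mean())**2)  vs  2*sigma^2/(n-1)^2
```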
3 Large sample properties of the LSE

1. Consider the linear regression model y_t = βt + ε_t, ε_t ∼ iid(0, σ²), t = 1, …, n.

   (a) Show that the least squares estimator of β, b, is consistent.
   (b) Derive the asymptotic distribution of b.

2. For the linear regression model y_t = βx_t + ε_t, ε_t ∼ iid(0, σ²), t = 1, …, n, the estimator

   b̄ = (Σ_{t=1}^n y_t) / (Σ_{t=1}^n x_t)

   is considered. Assume {x_t} is a sequence of constants.

   (a) What assumptions are required for the consistency of b̄?
   (b) Derive the asymptotic distribution of b̄.

3. Suppose that the X_i have a binomial distribution b(m, p) and that X_1, …, X_n are independent.

   (a) What is the probability limit of X̄ = (1/n) Σ_{i=1}^n X_i?
   (b) What is the limiting distribution of √n (X̄ − mp)?

4. Show that mean square convergence implies convergence in probability.

5. Consider the linear regression model y_i = βx_i + ε_i, ε_i ∼ iid(0, σ²), i = 1, …, n, where {x_i} is a sequence of constants. What assumptions are required for the consistency of the LSE of β? Are these assumptions reasonable for empirical analysis?

6. Consider the linear regression model y_t = βt + ε_t, ε_t ∼ iid(0, σ²), t = 1, …, n. Is the OLS estimator of β consistent?
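A simulation sketch for Problem 1 of this section (β, σ, the sample sizes, and the replication count are arbitrary illustrative choices). In y_t = βt + ε_t the OLS estimator is b = Σ t·y_t / Σ t², so Var(b) = σ² / Σ t² with Σ t² = n(n+1)(2n+1)/6 of order n³; the spread of b around β collapses rapidly as n grows, illustrating consistency at the fast rate n^{3/2}.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, sigma, reps = 1.0, 1.0, 4000

def ols_trend(n):
    """OLS slope estimates of beta in y_t = beta*t + e_t, over `reps` replications."""
    t = np.arange(1.0, n + 1)
    e = sigma * rng.normal(size=(reps, n))
    y = beta * t + e
    return (y @ t) / (t @ t)   # b = sum(t * y_t) / sum(t^2)

for n in (25, 100, 400):
    b = ols_trend(n)
    sd_theory = sigma / np.sqrt(n * (n + 1) * (2 * n + 1) / 6)
    # empirical std matches sigma / sqrt(sum t^2) and shrinks like n^(-3/2)
    print(n, b.std(), sd_theory)
```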