Implementing Statistical Criteria to Select Return Forecasting Models

model selection criteria were employed: the adjusted R2, Akaike's information criterion [AIC; Akaike (1974)], Schwarz's criterion [a Bayesian information criterion, BIC; Schwarz (1978)], the Fisher information criterion [FIC; Wei (1992)], the posterior information criterion [PIC; Phillips and Ploberger (1996)], Rissanen's predictive least squares criterion [PLS; Rissanen (1986a)],2 and our adjustment to correct well-known biases of the latter, PLS-MDC.

Appendix A provides formal definitions of each of these criteria. The adjusted R2, AIC, and BIC were chosen on the basis of their popularity; FIC, PIC, PLS, and PLS-MDC were chosen because of their robustness in the face of unit-root nonstationarities.

PLS-MDC is new and hence needs to be motivated further. It is based on a technique to estimate the dimension of the state vector in Markov models, referred to as the Markov dimension criterion (MDC). MDC chooses the dimension of the state vector by investigating the out-of-sample mean square prediction error of rolling regressions that are run on the basis of various subsets of past information.

In conjunction with PLS, MDC provides a correction for small-sample biases. PLS chooses models on the basis of the out-of-sample mean square prediction error of one rolling regression that uses all past observations. Rissanen (1986a) suggested this selection criterion, but observed that it is biased in small samples in favor of picking the model with the least possible variables [see also the evidence in Wei (1992)]. The underfitting is due to the noise introduced by the error in the predictions of early observations in the sample. These predictions are unreliable because they are based on very few prior observations (remember that model estimates in PLS are computed only from prior observations). The fewer parameters to be estimated, however, the lower the prediction noise of those early observations.
Because of the lower noise level, PLS tends to prefer models with fewer parameters, that is, with fewer explanatory variables.

In PLS-MDC, we consider the performance of the same models where parameter estimates are based not only on all previous observations, but on different subsamples as well, in which we drop observations that reach a certain age. In other words, while PLS is based on expanding-window estimation, PLS-MDC also considers estimates based on windows of fixed size. PLS-MDC effectively penalizes models where excluded variables are still heavily correlated with future prediction errors, indicating that the prediction vector was chosen to be too small. Since a formal discussion of PLS-MDC distracts from the main points of this article, it is delegated to an appendix. The interested reader can consult Appendix B.

2 PLS is based on Rissanen's earlier idea of minimum descriptive length [Rissanen (1986b)]; see also Kavalieris (1989).
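The predictive logic behind the two criteria can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical helper names, not the article's implementation: PLS is the sum of squared one-step-ahead errors from a single expanding-window OLS regression, while the exact PLS-MDC penalty is defined in Appendix B, so the fixed-window combination below (taking the worst predictive error across window configurations) is only a stylized stand-in for the idea of also estimating on subsamples of fixed size.

```python
import numpy as np

def predictive_sse(y, X, t0, window=None):
    """Sum of squared one-step-ahead OLS prediction errors.

    window=None reproduces the expanding-window scheme underlying PLS:
    the forecast of y[t] uses parameters estimated from all prior
    observations 0..t-1.  A finite `window` keeps only the most recent
    observations, the extra ingredient considered by PLS-MDC.
    """
    T = len(y)
    sse = 0.0
    for t in range(t0, T):
        lo = 0 if window is None else max(0, t - window)
        # Estimate OLS on prior observations only, then predict y[t].
        beta, *_ = np.linalg.lstsq(X[lo:t], y[lo:t], rcond=None)
        sse += (y[t] - X[t] @ beta) ** 2
    return sse

def pls(y, X, t0=10):
    # PLS: out-of-sample SSE of one expanding-window rolling regression.
    return predictive_sse(y, X, t0)

def pls_mdc(y, X, t0=10, windows=(20, 40)):
    # Stylized stand-in for PLS-MDC: also evaluate fixed-size windows
    # and take the worst predictive SSE, so a model whose excluded
    # variables still predict future errors cannot hide behind the
    # low noise of a small parameter count.
    return max(predictive_sse(y, X, t0, w) for w in [None, *windows])
```

Among candidate predictor subsets X, one would select the subset minimizing the criterion. Note how a model with fewer columns in X produces less noisy forecasts for the early observations, which is precisely the small-sample bias toward underfitting that motivates the PLS-MDC adjustment.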