Quantitative macroeconomic modeling fell out of favor during the 1970s for two related reasons. First, some of the existing models, like the Wharton Econometric Model and the Brookings Model, failed spectacularly to forecast the stagflation of the 1970s. Second, leading macroeconomists leveled harsh criticisms at these frameworks. Lucas (1976) and Sargent (1981), for example, argued that the absence of an optimization-based approach to the development of the structural equations meant that the estimated model coefficients were likely not invariant to shifts in policy regimes or other types of structural changes. Similarly, Sims (1980) argued that the absence of convincing identifying assumptions to sort out the vast simultaneity among macroeconomic variables meant that one could have little confidence that the parameter estimates would be stable across different regimes. These powerful critiques made clear why econometric models fit largely on statistical relationships from a previous era did not survive the structural changes of the 1970s.

In the 1980s and 1990s, many central banks continued to use reduced-form statistical models to produce forecasts of the economy that presumed no structural change, but they did so knowing that these models could not be used with any degree of confidence to generate forecasts of the results of policy changes. Thus, monetary policy-makers turned to a combination of instinct, judgment, and raw hunches to assess the implications of different policy paths for the economy.

Within the last decade, however, quantitative macroeconomic frameworks for monetary policy evaluation have made a comeback. What facilitated the development of these frameworks were two independent literatures that