Press "Enter" to skip to content

What do we know?

Here is an excerpt from a December 5th blog post, “Evidence in Macroeconomics,” by Paul Krugman. It’s a nice summary of our readings on how to test the size of the multiplier, the challenge posed by endogeneity, and hence the value of “natural experiments.”

Anyway, where Noah [Smith, link] goes too far is in asserting that this kind of thing means that we basically know nothing [in critiquing David Beckworth here]. Um, no — good economists have been aware of this problem [endogeneity] for a long time, and serious work on both monetary and fiscal policy takes it into account. How? By looking for natural experiments – cases of large changes in policy (so that policy is the dominant factor in what happens) that are clearly not a response to the state of the business cycle.

That’s why Milton Friedman and Anna Schwartz’s monetary history wasn’t just about correlations; it relied on a narrative method to attempt to show that the monetary movements it stressed were more or less exogenous. (You can quarrel with some of their judgements, but the method was sound). It’s why Romer and Romer, in their classic paper on the real effects of monetary policy, relied on a study of Fed minutes to identify major changes in policy.

In the same vein, this is why serious analysis of fiscal policy relies a lot on wartime booms and busts in government spending, which are clearly not responses to unemployment.

And it’s why, if you want a read on the effects of fiscal policy in recent years, you want to look not at the fairly small events here but at austerity in Europe. The austerity programs have two great virtues from an economic research point of view (they are, of course, terrible from a human point of view). First, they are huge — in Greece, we’re looking at austerity measures amounting to 16 percent of GDP, the equivalent for the US of $2.5 trillion every year. Second … they’re relatively exogenous. Only relatively; you do have to worry that austerity ends up being imposed only in economies that would be in trouble anyway….
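(For scale: 16 percent of US GDP, roughly $15-16 trillion at the time, works out to about $2.5 trillion a year, which appears to be where Krugman’s figure comes from.)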

I still like the approach of Giancarlo Corsetti, Andre Meier and Gernot J. Müller in their IMF working paper, “What Determines Government Spending Multipliers?” As you may recall, they use panel data in a VAR framework to predict the [endogenous] component of fiscal policy, akin to evaluating monetary policy against an (empirical) Taylor Rule. They then use the deviation from that predicted value as a measure of unanticipated policy, such as the ARRA (the 2009 stimulus legislation); it is the econometric counterpart to the more fastidious narrative methods of Friedman & Schwartz and Romer & Romer.
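To make the mechanics concrete, here is a minimal Python sketch of that identification idea using simulated data. It is my own simplification, not the authors’ panel-VAR specification: fit a simple “fiscal rule” that predicts government spending from its own lag and lagged output, treat the residual as the unanticipated spending shock, and then regress output growth on that shock.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a toy single-country series (purely illustrative data) ---
T = 200
g = np.zeros(T)   # government spending growth
y = np.zeros(T)   # output growth
for t in range(1, T):
    shock = rng.normal()                                       # the "unanticipated" fiscal shock
    g[t] = 0.5 * g[t-1] + 0.2 * y[t-1] + shock                 # systematic (endogenous) fiscal rule + shock
    y[t] = 0.3 * y[t-1] + 0.8 * shock + rng.normal(scale=0.5)  # true effect of the shock on output is 0.8

# Step 1: estimate the fiscal rule, i.e. the predicted (endogenous) part of policy
X = np.column_stack([np.ones(T-1), g[:-1], y[:-1]])            # constant, lagged g, lagged y
beta, *_ = np.linalg.lstsq(X, g[1:], rcond=None)

# Step 2: the residual is the measure of unanticipated policy
g_shock = g[1:] - X @ beta

# Step 3: regress output growth on the identified shock to get a multiplier-style coefficient
Z = np.column_stack([np.ones(T-1), g_shock])
gamma, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
print(f"estimated effect of an unanticipated spending shock: {gamma[1]:.2f}")  # should be near 0.8
```

The point of Step 1 is exactly the endogeneity problem Krugman describes: only the part of spending that the rule cannot predict is treated as a policy “experiment.”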

Unlike natural experiments, this approach provides a lot more data and makes it possible to test the impact of policy in different portions of the business cycle, e.g., expansionary vs contractionary policy. Dummy variables still allow separate treatment of exceptional crises (Japan from 1991), Krugman’s “big” events. However, the Corsetti et al. paper convinced me that analyzing “small” events still provides real value.
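For what a dummy-variable treatment might look like, here is a similarly hedged toy sketch (again my own illustration, not the paper’s specification): an interaction between the identified shock and a downturn indicator lets the estimated effect differ across phases of the cycle, while a separate dummy absorbs an exceptional episode.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300

# Toy inputs: identified fiscal shocks, a downturn indicator, and one exceptional episode
shock = rng.normal(size=T)                          # "unanticipated" policy shocks
slump = (rng.uniform(size=T) < 0.3).astype(float)   # 1 in downturns, 0 otherwise
crisis = np.zeros(T)
crisis[200:220] = 1.0                               # dummy for an exceptional crisis (think Japan from 1991)

# Simulated output growth: shocks matter more when the economy is slack
y = 0.4 * shock + 0.8 * slump * shock - 0.5 * crisis + rng.normal(scale=0.5, size=T)

# Interaction regression: b[1] = effect in normal times, b[2] = extra effect in downturns
X = np.column_stack([np.ones(T), shock, slump * shock, crisis])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"effect in normal times:      {b[1]:.2f}")   # ~0.4
print(f"additional effect in slumps: {b[2]:.2f}")   # ~0.8
print(f"crisis-dummy level shift:    {b[3]:.2f}")   # ~-0.5
```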

Of course, even with multicountry, multiyear data, degrees of freedom remain limited, and there is no escaping the econometric issues of an autoregressive framework. Yet the “small” events approach gives sensible answers, and it is much more open to replication as a check on robustness.

Anyway, I trust that the class now has a better understanding of the hurdles that confront empirical work in macroeconomics, as well as the tensions between very different visions of how to formally model an economy.

Mike Smitka, aka the prof