Fantastic and well-deserved news this morning with the Clark Medal being awarded to Emi Nakamura, who has recently moved from Columbia to Berkeley. Incredibly, Nakamura’s award is the first Clark to go to a macroeconomist in the 21st century. The Great Recession, the massive changes in global trade patterns, the rise of monetary areas like the Eurozone, the “savings glut” and its effect on interest rates, the change in openness to hot financial flows: it has been a wild two decades for the macroeconomy since Andrei Shleifer won the Clark. It’s hard to imagine what could be more important for an economist to understand than these patterns.
Something unusual has happened in macroeconomics over the past twenty years: it has become more like Industrial Organization! A brief history may be useful. The term macroeconomics is due to Ragnar Frisch, in his 1933 article on the propagation of economic shocks. He writes,
“The macro-dynamic analysis…tries to give an account of the whole economic system taken in its entirety. Obviously in this case it is impossible to carry through the analysis in great detail. Of course, it is always possible to give even a macro-dynamic analysis in detail if we confine ourselves to a purely formal theory. Indeed, it is always possible by a suitable system of subscripts and superscripts, etc., to introduce practically all factors which we may imagine…Such a theory, however, would have only a rather limited interest. It would hardly be possible to study such fundamental problems as the exact time shape of the solution, [etc.]. These latter problems are just the essential problems in business cycle analysis. In order to attack these problems on a macro-dynamic basis…we must deliberately disregard a considerable amount of the details of the picture.”
And so we did. The Keynesians collapsed the microfoundations of the macroeconomy into a handful of relevant market-wide parameters. The Lucas Critique argued that we can collapse some things – many agents into a representative agent, for instance – but we ought always to begin our analysis with the fundamental parameters of tastes, constraints, and technologies. The neoclassical synthesis combined these raw parameters with nominal rigidities – sticky prices, limited information, and so on. But Frisch’s main point nonetheless held strong: of what use are these deeper theoretical parameters if we cannot estimate their value and their effect on the macroeconomy? As Einstein taught us, the goal of the scientist should be to make things as simple as possible, but no simpler.
What has changed recently in macroeconomics is twofold. First, computational power now makes it possible to estimate or calibrate very complex dynamic and stochastic models, with forward looking agents, with price paths in and out of equilibrium, with multiple frictions – it is in this way that macro begins to look like industrial organization, with microeconomic parameters at the base. But second, and again analogous to IO, the amount of data available to the researcher has grown enormously. We now have price scanner data that tells us exactly when and how prices change, how those changes propagate across supply chains and countries, how they interact with taxes, and so on. Frisch’s problem has in some sense been solved: we no longer have the same trade-off between usefulness and depth when studying the macroeconomy.
Nakamura is best known for using this deep combination of data and theory to understand how exactly firms set prices. Price rigidities play a particularly important role in theories of the macroeconomy that potentially involve inefficiency. Consider a (somewhat bowdlerized) version of real business cycle theory. Here, shocks hit the economy: for instance, an oil cartel withholds supply for political reasons. Firms must react to this “real” supply-side shock by reorganizing economic activity. The real shock then propagates across industries. The role of monetary policy in such a world is limited: a recession simply reflects industries reacting to real change in the economic environment.
When prices are “sticky”, however, that is no longer true. The speed with which real shocks propagate, and the distortion sticky prices introduce, can be affected by monetary policy, since firms will react to changes in expected inflation by changing the frequency with which they update prices. Famously, Golosov and Lucas in the JPE argued, theoretically and empirically, that the welfare effects of “sticky prices” or “menu costs” are not terribly large. Extracting these welfare effects is quite sensitive to a number of features in the data and in the theory. To what extent is there short-term price dispersion rather than an exogenous chance for all firms in an industry to change their prices? Note that price dispersion is difficult to maintain unless we have consumer search costs – otherwise, everyone buys from the cheapest vendor – so price dispersion adds a non-trivial technical challenge. How much do prices actually change – do we want to sweep out short-term sales, for example? When inflation is higher, do firms adjust prices equally often but with bigger price jumps (consider the famous doubling of the price of Coca-Cola), or do they adjust prices more often, keeping the percentage change similar to low-inflation environments? How much heterogeneity is there in price-setting practices across industries, and to what extent do these differences affect the welfare consequences of price rigidity given the links across industries?
Nakamura has pushed us very far into answering these questions. She has built insane price datasets, come up with clever identification strategies to separate pricing models, and used these tools to vastly increase our understanding of the interaction between price rigidities and the business cycle. Her “Five Facts” paper uses BLS microdata to show that sales were roughly half of the “price changes” earlier researchers had found, that prices change more rapidly when inflation is higher, and that there is huge heterogeneity across industries in price change behavior. Taking that data back to the 1970s, Nakamura and coauthors also show that high inflation environments do not cause more price dispersion: rather, firms update their prices more often. Bob Lucas in his Macroeconomic Priorities made a compelling argument that business cycle welfare costs are much smaller than the costs of inflation, and that inflation costs are themselves much smaller than the costs of tax distortions. As Nakamura points out, if you believe this, no wonder you prioritize price stability and tax policy! (Many have quibbled with Lucas’ basic argument, but even adding heterogeneous agents, it is tough to get business cycles to have large economic consequences; see, e.g., Krusell et al RED 2009.) Understanding better the true costs of inflation, via the feedback of monetary expansion on price-setting, goes a great deal toward helping policymakers calibrate the costs and benefits of price stability vis-a-vis other macroeconomic goals.
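The frequency-versus-size logic is easy to see in a toy (S,s) menu-cost simulation – a bare-bones illustration with made-up numbers, not a calibration to the BLS microdata. A firm’s real price erodes at the inflation rate, and it pays the menu cost to reset only when the gap from its desired price reaches a fixed band:

```python
# Toy (S,s) menu-cost sketch: inflation erodes the firm's real (log) price;
# the firm resets whenever the gap from its target hits a band of width `band`.
# All parameter values are illustrative, not estimates.

def simulate(inflation, band=0.10, periods=1000):
    gap = 0.0     # log deviation of the firm's price from its desired price
    changes = []  # size (in log points) of each nominal price change
    for _ in range(periods):
        gap -= inflation          # inflation erodes the real price each period
        if gap <= -band + 1e-9:   # lower trigger hit: pay the menu cost
            changes.append(-gap)  # reset the price back to the target
            gap = 0.0
    freq = len(changes) / periods          # fraction of periods with a change
    size = sum(changes) / len(changes)     # average size of a change
    return freq, size

low_freq, low_size = simulate(inflation=0.02)    # low-inflation regime
high_freq, high_size = simulate(inflation=0.05)  # high-inflation regime
print(low_freq, high_freq)   # changes are far more frequent under high inflation
print(low_size, high_size)   # ...but each change is about the same size
```

Holding the band fixed is a simplification (in full menu-cost models the band itself is chosen optimally), but it captures the pattern in the data: higher inflation shows up as more frequent price changes of roughly similar size, not as larger jumps.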
Though generally known as an empirical macroeconomist, Nakamura also has a number of papers, many with her husband Jon Steinsson, on the theory of price setting. For example, why are prices sticky yet also subject to periodic sales? In a clever paper in the JME, Nakamura and Steinsson model a firm pricing to habit-forming consumers. If the firm does not constrain itself, it has the incentive to raise prices once consumers form their habit for a given product (as a Cheez-It fan, I understand the model well – my willingness to pay for a box shipped up from the US to the Cheez-It-free land of Canada is absurdly high). To avoid this time inconsistency problem, firms would like to commit to a price path with some flexibility to respond to changes in demand. An equilibrium in this relational contract-type model involves a price cap with sales when demand falls: rigid prices plus sales, as we see in the data! In a second theoretical paper with Steinsson and Alisdair McKay, Nakamura looks into how much communication about future nominal interest rates can affect behavior. In principle, a ton: if you tell me the Fed will keep the real interest rate low for many years (low rates in the future raise consumption in the future, which raises inflation in the future, which lowers real rates today), I will borrow away. Adding borrowing constraints and income risk, however, means that I will never borrow too much money: I might get a bad shock tomorrow and wind up on the street. Giving five years of forward guidance about interest rates rather than a year, therefore, doesn’t really affect my behavior that much: the desire to have precautionary savings is what limits my borrowing, not the interest rate.
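The precautionary mechanism can be seen in a toy two-period consumer problem – a deliberately crude sketch with made-up parameters, not the McKay–Nakamura–Steinsson model itself. The consumer chooses borrowing b today (negative b means saving), consuming y1 + b now and y2 − R·b later, where second-period income y2 may be risky:

```python
# Toy two-period consumption-borrowing problem with log utility, illustrating
# the precautionary-saving motive. Parameters are made up for illustration.
import math

def optimal_borrowing(y2_outcomes, beta=0.95, R=1.0, y1=1.0, grid=2000):
    """Grid-search the borrowing level b that maximizes expected utility."""
    best_b, best_u = None, -float("inf")
    for i in range(grid):
        b = -0.4 + 0.8 * i / grid   # search b on [-0.4, 0.4)
        # skip plans that risk non-positive consumption in some state
        if y1 + b <= 0 or any(y2 - R * b <= 0 for y2 in y2_outcomes):
            continue
        u = math.log(y1 + b) + beta * sum(
            math.log(y2 - R * b) for y2 in y2_outcomes) / len(y2_outcomes)
        if u > best_u:
            best_b, best_u = b, u
    return best_b

b_certain = optimal_borrowing([1.0])       # future income known for sure
b_risky = optimal_borrowing([0.5, 1.5])    # same mean income, but risky
print(b_certain, b_risky)
```

With certain future income the consumer borrows a little; with mean-preserving income risk the same consumer saves instead. Since the precautionary motive, not the interest rate, is what pins down borrowing here, promising low rates further into the future moves behavior much less than a frictionless model would predict.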
Nakamura’s prize is a well-deserved award, going to a leader in macro’s shift toward a style that is more empirical and more deeply “microeconomic” in its theory. Her focus is keenly targeted toward some of the key puzzles relevant to macroeconomic policymakers. There is no way to cover such a broad field in one post – this is not one of those awards given for a single paper – but luckily Nakamura has two great, easily readable summaries of her core work. First, in the Annual Review of Economics, she lays out the new empirical facts on price changes, the attempts to identify the link between monetary policy and price changes, and the implications for business cycle theory. Second, in the Journal of Economic Perspectives, she discusses how macroeconomists have attempted to more credibly identify theoretical parameters. In particular, external validity is so concerning in macro – remember the Lucas Critique! – that the essence of the problem involves combining empirical variation for identification with theory mapping that variation into broader policy guidance. I hesitate to stop here since Nakamura has so many influential papers, but let us take just two more quick tasters that are well worth your deeper exploration. First, on the government spending side, she uses local spending shocks and a serious model to figure out the national fiscal multiplier of government spending. Second, she has recently argued that the end of the large-scale movement of women from home production into the labor force has caused recessions to last longer.
Great piece! Just a spelling fix: Andrei Shleifer not Schleifer.
“But second, and again analogous to IO, the amount of data available to the researcher has grown enormously. We now have price scanner data that tells us exactly when and how prices change”
This data has been available for ages. It was certainly available to Lucas. That it has not been analyzed before is both a tribute to Nakamura and a comment on the disgraceful state of the economics profession’s preference for ideology over science