Category Archives: Macroeconomics

“Wall Street and Silicon Valley: A Delicate Interaction,” G.-M. Angeletos, G. Lorenzoni & A. Pavan (2012)

The Keynesian Beauty Contest – is there any better example of an “old” concept in economics that, when read in its original form, is just screaming out for a modern analysis? You’ve got coordination problems, higher-order beliefs, signal extraction about underlying fundamentals, and optimal policy response by a planner who is herself informationally constrained: all problems that have consumed micro theorists over the past few decades. The trouble with “irrational exuberance,” though, is that once we start to model things formally, it turns out to be very difficult to generate irrational-looking actions by rational, forward-looking agents. Angeletos et al. have a very nice model that can generate irrational-looking asset price movements even when all agents are perfectly rational, based on the idea of information frictions between the real and financial sectors.

Here is the basic plot. Entrepreneurs get an individual signal and a correlated signal about the “real” state of the economy (the correlation in errors about fundamentals may be a reduced-form measure of previous herding, for instance). The entrepreneurs then make a costly investment. In the next period, some percentage of the entrepreneurs have to sell their assets on a competitive market. This may represent, say, idiosyncratic liquidity shocks, but really it is just in the model to abstract away from the finance sector learning about entrepreneur signals based on the extensive margin choice of whether to sell or not. The price paid for the asset depends on the financial sector’s beliefs about the real state of the economy, which come from a public noisy signal and the traders’ observations of how much investment was made by entrepreneurs. Note that the price traders pay is partially a function of trader beliefs about the state of the economy derived from the total investment made by entrepreneurs, and the total investment made is partially a function of the price at which entrepreneurs expect to be able to sell capital should a liquidity crisis hit a given firm. That is, higher-order beliefs of both the traders and entrepreneurs about what the other aggregate class will do determine equilibrium investment and prices.
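To fix ideas, here is a stylized two-period version of that structure (my notation, not the paper’s). Entrepreneur i observes a private signal x_i = θ + ε_i and a correlated signal y = θ + η, with the error η common across entrepreneurs, and chooses investment

k_i = E_i[θ] + λE_i[p], λ > 0,

while traders, who observe aggregate investment K and a public signal z = θ + u, pay

p = E[θ | K, z].

The fixed point between these two rules is exactly where the higher-order beliefs bite: K depends on what entrepreneurs expect p to be, and p depends on what traders infer about θ from K.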

What does this imply? Capital investment is higher in the first stage if either the state of the world is believed to be good by entrepreneurs, or if the price paid in the following period for assets is expected to be high. Traders will pay a high price for an asset if the state of the world is believed to be good. These traders look at capital investment and essentially see another noisy signal about the state of the world. When an entrepreneur sees a correlated signal that is higher than his private signal, he increases investment due to a rational belief that the state of the world is better, but then increases it even more because of an endogenous strategic complementarity among the entrepreneurs, all of whom prefer higher investment by the class as a whole since that leads to more positive beliefs by traders and hence higher asset prices tomorrow. Of course, traders understand this effect, but a fixed point argument shows that even accounting for the aggregate strategic increase in investment when the correlated signal is high, aggregate capital can be read by traders precisely as a noisy signal of the actual state of the world. This means that when entrepreneurs invest partially on the basis of a signal correlated across their class (i.e., there are information spillovers), investment is based too heavily on noise. An overweighting of public signals in a type of coordination game is right along the lines of the lesson in Morris and Shin (2002). Note that the individual signals for entrepreneurs are necessary to keep the traders from being able to completely invert the information contained in capital production.
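The Morris-Shin logic can be stated in one formula. In their linear-quadratic beauty contest, agent i sees a private signal x_i = θ + ε_i with precision β and a public signal y = θ + η with precision α, and best-responds a_i = (1 − r)E_i[θ] + rE_i[ā], where r ∈ (0,1) measures the strategic complementarity. The unique linear equilibrium is

a_i = κy + (1 − κ)x_i, with κ = α/(α + β(1 − r)),

so the public signal’s weight κ strictly exceeds its Bayesian weight α/(α + β) whenever r > 0: the correlated signal is overweighted precisely because everyone knows everyone else sees it.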

What can a planner who doesn’t observe these signals do? Consider taxing investment as a function of asset prices, where high taxes appear when the market gets particularly frothy. This is good on the one hand: entrepreneurs build too much capital following a high correlated signal because other entrepreneurs will be doing the same and therefore traders will infer the state of the world is high and pay high prices for the asset. Taxing high asset prices lowers the incentive for entrepreneurs to shade capital production up when the correlated signal is good. But this tax will also lower the incentive to produce more capital when the actual state of the world, and not just the correlated signal, is good. The authors discuss how taxing capital and the financial sector separately can help alleviate that concern.

Proving all of this formally, it should be noted, is quite a challenge. And the formality is really a blessing, because we can see what is necessary and what is not if a beauty contest story is to explain excess aggregate volatility. First, we require some correlation in signals in the real sector to get the Morris-Shin effect operating. Second, we do not require the correlation to be on a signal about the real world; it could instead be correlation about a higher-order belief held by the financial sector! The correlation merely allows entrepreneurs to figure something out about how much capital they as a class will produce, and hence about what traders in the next period will infer about the state of the world from that aggregate capital production. Instead of a signal that correlates entrepreneur beliefs about the state of the world, then, we could have a correlated signal about higher-order beliefs, say, how traders will interpret how entrepreneurs interpret how traders interpret capital production. The basic mechanism will remain: traders essentially read from aggregate actions of entrepreneurs a noisy signal about the true state of the world. And all this beauty contest logic holds in an otherwise perfectly standard New Keynesian rational expectations model!

2012 working paper (IDEAS version). This paper used to go by the title “Beauty Contests and Irrational Exuberance”; I prefer the old name!

Dale Mortensen as Micro Theorist

Northwestern’s sole Nobel Laureate in economics, Dale Mortensen, passed away overnight; he remained active as a teacher and researcher over the past few years, though I’d been hearing word through the grapevine about his declining health over the past few months. Surely everyone knows Mortensen the macroeconomist for his work on search models in the labor market. There is something odd here, though: Northwestern has really never been known as a hotbed of labor research. To the extent that researchers rely on their coworkers to generate and work through ideas, how exactly did Mortensen become such a productive and influential researcher?

Here’s an interpretation: Mortensen’s critical contribution to economics is as the vector by which important ideas in micro theory entered real world macro; his first well-known paper is literally published in a 1970 book called “Microeconomic Foundations of Employment and Inflation Theory.” Mortensen had the good fortune to be a labor economist working in the 1970s and 1980s at a school with a frankly incredible collection of microeconomic theorists; during those two decades, Myerson, Milgrom, Loury, Schwartz, Kamien, Judd, Matt Jackson, Kalai, Wolinsky, Satterthwaite, Reinganum and many others were associated with Northwestern. And this was a rare condition! Game theory is everywhere today, and the pioneers of the field (von Neumann, Nash, Blackwell, etc.) were active in the middle of the century. Nonetheless, by the early 1970s, game theory in the social sciences was close to dead. Paul Samuelson, the great theorist, wrote essentially nothing using game theory between the early 1950s and the 1990s. Quickly scanning the American Economic Review from 1970-1974, I find, at best, one article per year that can be called game-theoretic.

What is the link between Mortensen’s work and developments in microeconomic theory? The essential labor market insight of search models (an insight which predates Mortensen) is that the number of hires and layoffs is substantial even in the depth of a recession. That is, the rise in the unemployment rate cannot simply be because the marginal revenue product of potential workers falls below the cost of employing them, since huge numbers of the unemployed are hired during recessions (even as others are fired). Therefore, a model which explains changes in churn rather than just changes in the aggregate rate seems qualitatively important if we are to develop policies to address unemployment. This suggests that there might be some use in a model where workers and firms search for each other, perhaps with costs or other frictions. Early models along this line by Mortensen and others were generally one-sided and hence non-strategic: they had the flavor of optimal stopping problems.

Unfortunately, Diamond, in a 1971 JET article, pointed out that Nash equilibrium in two-sided search leads to the conclusion that all workers are paid their reservation wage: all employers pay the reservation wage, workers believe this to be true and hence do not engage in costly search to switch jobs, hence the belief is accurate and nobody can profitably deviate. Getting around the “Diamond Paradox” involved enriching the model of who searches when and the extent to which old offers can be recovered; Mortensen’s work with Burdett is a nice example. One also might ask whether laissez-faire search is efficient or not: given the contemporaneous work of micro theorists like Glenn Loury on mathematically similar problems like the patent race, you might imagine that efficient search is unlikely.
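The deviation argument behind the paradox fits in one inequality (a textbook rendering, not Diamond’s original notation). If every firm pays wage w and an extra search costs the worker c > 0, a firm that cuts its wage to w − ε loses no workers so long as

ε < c,

since the most a worker can gain from one additional search is ε. Every firm therefore shades its wage down, and the only wage at which nobody wants to deviate is the workers’ reservation wage, no matter how small c is.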

Beyond the efficiency of matches themselves is the question of how to split surplus. Consider a labor market. In the absence of search frictions, Shapley (first with Gale, later with Shubik) had shown in the 1960s and early 1970s the existence of stable two-sided matches even when “wages” are included. It turns out these stable matches are tightly linked to the cooperative idea of a core. But what if this matching is dynamic? Firms and workers meet with some probability over time. A match generates surplus. Who gets this surplus? Surely you might imagine that the firm should have to pay a higher wage (more of the surplus) to workers who expect to get good future offers if they do not accept the job today. Now we have something that sounds familiar from non-cooperative game theory: wage is based on the endogenous outside options of the two parties. It turns out that noncooperative game theory had very little to say about bargaining until Rubinstein’s famous bargaining game in 1982 and the powerful extensions by Wolinsky and his coauthors. Mortensen’s dynamic search models were a natural fit for those theoretic developments.
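For the record, the canonical result runs as follows. In Rubinstein’s alternating-offers game over a surplus of size 1 with common discount factor δ, the unique subgame perfect equilibrium gives the proposer

1/(1 + δ),

with immediate agreement, and the responder δ/(1 + δ). In the Binmore-Rubinstein-Wolinsky extension, an outside option matters only if it exceeds a player’s equilibrium share (the “outside option principle”), which is exactly the object a dynamic search model needs: the wage moves with the value of continued search.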

I imagine that when people hear “microfoundations”, they have in mind esoteric calibrated rational expectations models. But microfoundations in the style of Mortensen’s work are much more straightforward: we simply cannot understand even the qualitative nature of counterfactual policy in the absence of models that account for strategic behavior. Hence the role for even high micro theory, which investigates the nature and uniqueness of strategic outcomes (game theory) and the potential for a planner to improve welfare through alternative rules (mechanism design). Powerful tools indeed, and well used by Mortensen.

“The Great Diversification and Its Unraveling,” V. Carvalho and X. Gabaix (2013)

I rarely post about macro papers here, but I came across this interesting result by Carvalho and Gabaix in the new AER. Particularly in the mid-2000s, it was fashionable to talk about a “Great Moderation” – many measures of economic volatility fell sharply right around 1983 and stayed low. Many authors studied the potential causes. Was it a result of better monetary policy (as seemed to be the general belief when I was working at the Fed) or merely good luck? Ben Bernanke summarized the rough outline of this debate in a 2004 speech.

The last few years have been disheartening for promoters of good policy, since many measures of economic volatility have again soared since the start of the financial crisis. So now we have two facts to explain: why did volatility decline, and why did it rise again? Dupor, among others, has pointed out the difficulty of generating aggregate fluctuations from sectoral shocks: as the number of sectors grows large, independent sectoral shocks average out and generate little aggregate volatility. Recent work by the authors of the present paper gets around that concern by noting the granularity of important sectors (Gabaix) or the network structure of economic linkages (Carvalho).
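Dupor’s diversification point is just the law of large numbers: with N equally sized sectors hit by independent shocks of volatility σ, aggregate volatility is roughly

σ_GDP ≈ σ/√N,

which vanishes as N grows. Fat-tailed (“granular”) sector sizes and input-output linkages are two ways to break the √N washing-out.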

In this paper, the authors show theoretically that a measure of “fundamental volatility” in total factor productivity should be linked to volatility in GDP, and that this measure is essentially composed of three factors: the ratio of sectoral gross output to value added; a diversification effect, whereby volatility declines when value-added shares are spread across more sectors; and a compositional effect, whereby volatility declines when the economy contains fewer sectors with high ratios of gross output to GDP. They then compare their measure of fundamental volatility, constructed using data on 88 sectors, to overall GDP volatility, and find that it fits the data well, both in the US and overseas.
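In rough form (simplified here; see the paper for the exact decomposition), fundamental volatility aggregates sectoral TFP volatilities using Domar weights, the ratios of gross output to GDP:

σ^F = [Σ_i (S_i/GDP)² σ_i²]^(1/2),

where S_i is sector i’s gross sales and σ_i its TFP volatility. A shift of value added toward a few high-Domar-weight, high-σ_i sectors raises σ^F mechanically, which is the channel behind both the decline and the rise described next.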

What, then, caused the shifts in fundamental volatility? The decline in the early 1980s appears to be heavily driven by a declining share of the economy in machinery, primary metals and the like. These sectors are heavily integrated into other areas of the economy as users and producers of intermediate inputs, so it is no surprise that a decline in their share of the economy will reduce overall volatility. The rise in the 2000s appears to result almost wholly from the increasing importance of the financial sector, an individually volatile sector. Given that the importance of this sector had been rising since the late 1990s, a measure of fundamental volatility (or, better, a firm-level measure, which is difficult to do given currently existing data) could have provided an “early warning” that the Great Moderation would soon come to an end.

August 2012 working paper (IDEAS version). Final paper in AER 103(5), 2013.

Financial Crisis Reading Lists

The Journal of Economic Literature is really a great service to economists. It is a journal that publishes up-to-date literature reviews and ideas for future research in many small subfields that would otherwise be impenetrable. A recent issue published two articles trying to help us non-macro and finance guys catch up on the “facts” of the 2007 financial crisis. The first is Gorton and Metrick, “Getting up to Speed on the Global Financial Crisis: A One Weekend Reader’s Guide,” and the second is Andrew Lo’s “Reading About the Financial Crisis: A Twenty-One Book Review.”

There is a lot of popular confusion about what caused the financial crisis, what amplified it, and what the responses have been. Roughly, we can all agree that first, there was a massive rise in house prices, not only in the US; second, there were new, enormous pools of institutional investment looking for safe returns, many of them operated by risk-averse Asian governments; third, house prices peaked in the US and elsewhere in 2006; fourth, in August 2007, problems with mortgage-related bonds led to interbank repo funding problems, requiring massive liquidity help from central banks; fifth, in September 2008, Lehman Brothers filed for bankruptcy, leading a money market fund to “break the buck” and causing massive flight away from assets related to investment banks or assets not explicitly backed by strong governments; and sixth, a sovereign debt problem has arisen in a number of periphery countries since, particularly in Europe.

Looking through the summaries provided in these two reading lists, I only see four really firm additional facts. First, as Andrew Lo has pointed out many times, leverage at investment banks was not terribly high by global standards. Second, arguments that the crisis was caused by investment banks packaging worthless securities and then fooling buyers, while containing a grain of truth, do not explain the crisis: indeed, the bigger problem was how many of these worthless securities were still on banks’ own balance sheets in 2008, explicitly or implicitly through CDOs and other instruments. Third, rising total leverage across an economy as a whole is strongly related to banking crises, a point made best in Reinhart and Rogoff’s work, but also in a new AER by Schularick and Taylor. Fourth, the crisis in the financial sector transmitted to the real economy principally via restrictions on credit to real economy firms. Campello, Graham and Harvey, in a 2010 JFE, used a large-scale 2008 survey of global CFOs to show that firms who were credit constrained before the financial crisis were much more likely to have to cut back on hiring and investment spending, regardless of their profitability or the usefulness of their investment opportunities. That is, savings fled to safety because of the uncertain health of banking intermediaries, which led banks to cut back commercial lending, which led to a recession in the real economy.

What’s interesting is how little the mortgage market, per se, had to do with the crisis. I was at the Fed before the crisis, and remember coauthoring in early 2007 an internal memo about the economic effects of a downturn in the housing sector. The bubble in housing prices was obvious to (almost) everyone at the Fed. But the size of the mortgage market (in terms of wealth) and the construction and home-improvement sector (in terms of employment) was simply not that big; certainly, the massive stock losses after the dot-com bubble had larger real effects through declines in total wealth. I can only imagine that everyone on Wall Street also knew this. What was unexpected was the way in which these particular losses in wealth harmed the financial health of banks; in particular, because of the huge number of novel derivatives, the location of losses was really opaque. And we know, both then because of the very-popular-at-the-Fed theoretical work of Diamond and Dybvig, and now because of the empirical work of Reinhart and Rogoff, that bank runs, whether in the proper or the “shadow” banking system, have real effects that are very difficult to contain.

If you’ve got some free time this weekend, particularly if you’re not a macroeconomist, it’s worth looking through the references in the Lo and Gorton/Metrick papers.

Lo’s “Twenty-One Book Review” (2012) (IDEAS version). Gorton and Metrick’s “Getting Up to Speed” (IDEAS version).

“The Credit Crisis as a Problem in the Sociology of Knowledge,” D. Mackenzie (2011)

(Tip of the hat to Dan Hirschman for pointing out Mackenzie’s article)

The financial crisis, it is quite clear by now, will be the worst worldwide economic catastrophe since the Great Depression. There are many explanations involving mistaken or misused economic theory, rapaciousness, political decisions, ignorance, and much more; two interesting examples here are Alp Simsek’s job market paper from a couple years ago on the impact of overly optimistic potential buyers who need to get loans from sedate lenders (one takeaway for me was that financial problems can’t be driven by the ignorant masses, as they have no money), and Coval, Jurek and Stafford’s brilliant 2009 AER on catastrophe bonds (summary here), which points out how ridiculous it is to legally define risk in terms of default probability alone, since we have known for decades in theory that Arrow-Debreu securities’ values depend both on the payoffs in future states and on the relative prices of those states. A bond whose defaults occur in catastrophic states ought to be much cheaper (that is, pay a much higher yield) than an otherwise identical bond whose defaults are negatively correlated with background risk.
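The Arrow-Debreu point takes one line. With state prices q_s = π_s·m_s, where π_s is the probability of state s and m_s the (normalized) marginal utility of wealth in that state, a bond paying x_s is worth

p = Σ_s π_s m_s x_s.

Two bonds with identical default probabilities can thus have very different values: the one that defaults exactly when m_s is high must be cheaper and pay a higher yield. Rating both by default probability alone treats them as identical.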

But the catastrophe also involves a sociological component. Markets are made: they don’t arise from thin air. Certain markets don’t exist for reasons of repulsion, as Al Roth has mentioned in the context of organ sales. Other markets don’t exist because the value of the proposed good in that market is not clear. Removing uncertainty and clarifying the nature of a good is an important precondition, and one that economic sociologists, including Donald Mackenzie, have discussed at great length in their work. The evaluation of new products, perhaps not surprisingly, depends both on analogies to forms a firm has seen before, and on which particular parts of the firm handle the evaluation.

Consider the ABS CDO – a collateralized debt obligation whose underlying debt consists of asset-backed securities, most commonly securitized mortgages. The ABS CDO market grew enormously in the 2000s, yet was not understood at nearly the same level as traditional CDO or ABS evaluation, topics on which there are hundreds of research papers. ABS and CDO teams tended to be quite separate within investment banks and ratings agencies, with the CDO teams generally well trained in derivatives and the highly quantitative evaluation procedures of such products. For ABSs, particularly US mortgages, the implicit government guarantee against default meant that prepayment risk was the most important factor when pricing such securities. CDO teams, used to corporate debt, treated default correlation among the various names in a given CDO as the most important metric.

Mackenzie gives exhaustive individual detail, but roughly, he does not blame the massive default rates of even AAA-rated ABS CDOs on greed or malfeasance. Rather, he describes how evaluation of ABS CDOs by ratings agencies used to dealing with either an ABS or a CDO, but not both, could lead to an utter misunderstanding of risk. While it is perfectly possible to “drill down” a complex derivative into its constituent parts, then subject the individual components to a stress test against some macroeconomic hypothetical, this was rarely done, particularly by individual investors. Mackenzie also gives a brief account of why these assets, revealed in 2008 to be spectacularly high risk, were being held by the banks at all instead of sold off to hedge funds and pension funds. Apparently, the assets held were generally ones with very low return and very low perceived risk which were created as a byproduct of the bundling that created the ABS CDOs. That is, an arbitrage was created when individual ABSs were bundled into an ABS CDO; the mezzanine and other tranches aside from the most senior AAA tranche were sold off, and the “basically risk-free” senior tranches were held by the bank, as they would have been difficult to sell directly. The evaluation of that risk, of course, was mistaken.
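To see why the correlation assumption dominates senior-tranche risk, a standard one-factor sketch suffices (the Vasicek/Gaussian-copula setup common in practice, not Mackenzie’s own formalism). Say mortgage i defaults when √ρ·Z + √(1 − ρ)·ε_i < Φ⁻¹(PD), with Z a common factor; in a large pool, the loss fraction conditional on Z is

L(Z) = Φ( (Φ⁻¹(PD) − √ρ·Z) / √(1 − ρ) ),

so whether pool losses ever breach a senior attachment point is driven almost entirely by ρ. With ρ near zero, L(Z) is pinned near PD and the senior tranche is nearly riskless; with ρ high, bad draws of Z impair the whole capital structure at once. Calibrate ρ as if mortgages behaved like diversified corporate names, rather than like regionally correlated house prices, and a “basically risk-free” senior tranche looks far safer than it is.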

This is a very interesting descriptive presentation of what happened in 2007 and 2008.

http://www.socialwork.ed.ac.uk/__data/assets/pdf_file/0019/36082/CrisisRevised.pdf (Final version from the May 2011 American Journal of Sociology)
