Angus Deaton, 2015 Nobel Winner: A Prize for Structural Analysis?

Angus Deaton, the Scottish-born, Cambridge-trained Princeton economist best known for his careful work measuring changes in the wellbeing of the world’s poor, has won the 2015 Nobel Prize in economics. His data-collection work is fairly easy to understand, so I will leave the larger discussion of exactly what he has found to the general news media; Deaton’s book “The Great Escape” provides a very nice summary as well. I think a fair reading of his development preferences is that he much prefers the currently en vogue idea of simply giving cash to the poor and letting them spend it as they wish.

Essentially, when one carefully measures consumption, health, and other characteristics of wellbeing, there has been tremendous improvement indeed in the state of the world’s poor. National statistics do not measure these ideas well, because developing countries do not tend to track data at the level of the individual. Indeed, even in the United States, we have only recently begun work on localized measures of the price level and hence the poverty rate. Deaton claims, as in his 2010 AEA Presidential Address (previously discussed briefly on two occasions on AFT), that many of the measures of global inequality and poverty used by the press are fundamentally flawed, largely because of the weak theoretical justification for how they link prices across regions and countries. Careful non-aggregate measures of consumption, health, and wellbeing, like those generated by Deaton, Tony Atkinson, Alwyn Young, Thomas Piketty and Emmanuel Saez, are essential for understanding how human welfare has changed over time and space, and this work is a deserving rationale for a Nobel.

The surprising thing about Deaton, however, is that despite his great data-collection work and his interest in development, he is famously hostile to the “randomista” trend which proposes that randomized controlled trials (RCTs) or other suitable tools for internally valid causal inference are the best way of learning how to improve the lives of the world’s poor. This mode is most closely associated with the enormously influential J-PAL lab at MIT, and there is no field in economics where you are less likely to see traditional price-theoretic ideas than modern studies of development. Deaton is very clear on his opinion: “Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”… [T]he analysis of projects needs to be refocused towards the investigation of potentially generalizable mechanisms that explain why and in what contexts projects can be expected to work.” I would argue that Deaton’s work is much closer to more traditional economic studies of development than to RCTs.

To understand this point of view, we need to go back to Deaton’s earliest work. Among Deaton’s most famous early papers was his development of the Almost Ideal Demand System (AIDS) in 1980 with Muellbauer, a paper chosen as one of the 20 best published in the first 100 years of the AER. It has long been known that individual demand equations which come from utility maximization must satisfy certain properties. For example, a rational consumer’s demand for food should not depend on whether the consumer’s equivalent real salary is paid in American or Canadian dollars. These restrictions turn out to be useful: if you want to know how demand for various products depends on changes in income, among many other questions, the restrictions of utility theory simplify estimation greatly by reducing the number of free parameters. The problem lies in specifying a form for aggregate demand, such as how demand for cars depends on the incomes of all consumers and the prices of other goods. It turns out that, in general, aggregate demand generated by utility-maximizing households does not satisfy the same restrictions as individual demand; you can’t simply assume that there is a “representative consumer” with some utility function whose demand function mirrors that of each individual agent. What form should we write for aggregate demand, and how congruent is that form with economic theory? Surely an important question if we want to estimate how a shift in taxes on some commodity, or a policy of giving some agricultural input to some farmers, is going to affect demand for output, its price, and hence welfare!

Let q(j)=D(p,c,e) say that the aggregate quantity of good j consumed is a function of the prices of all goods p and total (or average) consumption c, plus perhaps some random error e. This can be tough to estimate: if D(p,c,e)=Ap+e, where demand is just a linear function of relative prices, then we have a k-by-k matrix to estimate, where k is the number of goods. Worse, that demand function also imposes an enormous restriction on what individual demand functions, and hence utility functions, look like, in a way that theory does not necessarily support. The AIDS of Deaton and Muellbauer combines two facts: that Taylor expansions approximately linearize nonlinear functions, and that individual demand can be aggregated even when heterogeneous across individuals provided the restrictions of Muellbauer’s PIGLOG papers are satisfied. From these they derive a functional form for aggregate demand D which is consistent with aggregated individual rational behavior and which can sometimes be estimated via OLS. They use British data to argue that aggregate demand violates testable assumptions of the model, and hence that factors like credit constraints or price expectations are fundamental in explaining aggregate consumption.
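For the record, the AIDS budget-share equations themselves are worth writing down (this is the published Deaton–Muellbauer system; the Stone-index shortcut at the end is the standard linear approximation used in applied work):

```latex
% AIDS budget share for good i, with total expenditure x and price index P:
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{x}{P}\right),
\qquad
\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k
      + \tfrac{1}{2}\sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j.

% Utility theory imposes testable restrictions on the parameters:
% adding up:   \sum_i \alpha_i = 1,\quad \sum_i \beta_i = 0,\quad \sum_i \gamma_{ij} = 0
% homogeneity: \sum_j \gamma_{ij} = 0
% symmetry:    \gamma_{ij} = \gamma_{ji}

% Replacing \ln P with the Stone index \ln P^{*} = \sum_k w_k \ln p_k makes
% the system linear in its parameters, which is why it can sometimes be
% estimated by OLS.
```

The homogeneity and symmetry restrictions are exactly the “testable assumptions” that Deaton and Muellbauer reject on British data.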

This exercise brings up a number of first-order questions for a development economist. First, it shows clearly the problem with estimating aggregate demand as a purely linear function of prices and income, as if society were a single consumer. Second, it shows the importance of how we measure the overall price level when figuring out the effects of taxes and other policies. Third, it combines theory and data to convincingly suggest that models which estimate demand solely as a function of current prices and current income are necessarily going to give misleading results, even when demand is allowed to take on very general forms as in the AIDS model. A huge body of research since 1980 has investigated how we can better model demand in order to credibly evaluate demand-affecting policy. All of this is very different from how a certain strand of development economist today might investigate something like a subsidy. Rather than using observational data, these economists might look for a random or quasirandom experiment where such a subsidy was introduced, and estimate the “effect” of that subsidy directly on some quantity of interest, without concern for how exactly that subsidy generated the effect.

To see the difference between randomization and more structural approaches like AIDS, consider the following example from Deaton. You are asked to evaluate whether China should invest more in building railway stations if it wishes to reduce poverty. Many economists trained in a manner influenced by the randomization movement would say, well, we can’t just regress the existence of a railway on a measure of city-by-city poverty. The existence of a railway station depends on both things we can control for (the population of a given city) and things we can’t control for (the subjective belief that a town is “growing” when the railway is plopped there). Let’s find something that is correlated with rail station building but uncorrelated with the random component of how rail station building affects poverty: for instance, a city may lie on a geographically-accepted path between two large cities. If certain assumptions hold, it turns out that a two-stage “instrumental variable” approach can use that “quasi-experiment” to generate the LATE, or local average treatment effect. This effect is the average benefit of a railway station on poverty reduction, at the local margin of cities which are just induced by the instrument to build a railway station. Similar techniques, like difference-in-differences and randomized controlled trials, can generate credible LATEs under slightly different assumptions. In development work today, it is very common to see a paper where large portions are devoted to showing that the (often untestable) assumptions of a given causal inference model are likely to hold in a given setting, before finally claiming that the treatment effect of X on Y is Z. That LATEs can be identified outside of purely randomized contexts is incredibly important and valuable, and the economists and statisticians who did the heavy statistical lifting on this so-called Rubin model will absolutely and justly win an Economics Nobel sometime soon.
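To make the mechanics concrete, here is a stylized simulation of the railway example (all numbers and the data-generating process are invented for illustration; this is not from Deaton or any actual railway study). An unobserved confounder biases the naive treated-vs-untreated comparison, while the Wald/IV estimator, which scales the reduced-form effect of the instrument on the outcome by its first-stage effect on treatment, recovers the true effect:

```python
import numpy as np

# Invented setup: Z = "city lies on a natural route" (instrument),
# U = unobserved "belief the town is growing" (confounder),
# D = railway built, Y = poverty-reduction outcome.
rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0

U = rng.normal(size=n)                  # confounder, affects both D and Y
Z = rng.integers(0, 2, size=n)          # instrument, independent of U
D = (0.8 * Z + U + rng.normal(size=n) > 0.5).astype(float)  # railway decision
Y = true_effect * D + 1.5 * U + rng.normal(size=n)          # outcome

# Naive comparison of cities with and without railways is biased by U:
naive = Y[D == 1].mean() - Y[D == 0].mean()

# Wald / IV estimator: reduced form over first stage.  Under the LATE
# assumptions this recovers the effect for cities whose railway decision
# is swayed by the instrument.
wald = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (
    D[Z == 1].mean() - D[Z == 0].mean()
)

print(f"naive: {naive:.2f}, IV/Wald: {wald:.2f}, truth: {true_effect}")
```

Note what the simulation does not tell you: even with the effect correctly estimated, nothing in the output says *why* railways reduce poverty, which is exactly Deaton’s complaint.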

However, this use of instrumental variables would surely seem strange to the old Cowles Commission folks: Deaton is correct that “econometric analysis has changed its focus over the years, away from the analysis of models derived from theory towards much looser specifications that are statistical representations of program evaluation. With this shift, instrumental variables have moved from being solutions to a well-defined problem of inference to being devices that induce quasi-randomization.” The traditional use of instrumental variables was that after writing down a theoretically justified model of behavior or aggregates, certain parameters – not treatment effects, but parameters of a model – are not identified. For instance, price and quantity transacted are determined by the intersection of aggregate supply and aggregate demand. Knowing, say, that price and quantity were (a,b) today, and are (c,d) tomorrow, does not let me figure out the shape of either the supply or demand curve. If price and quantity both rise, it may be that demand alone has increased, pushing the demand curve to the right, or that demand has increased while the supply curve has also shifted to the right a small amount, or many other outcomes. An instrument that increases supply without changing demand, or vice versa, can be used to “identify” the supply and demand curves: an exogenous change in the price of oil will shift the supply of gasoline without much effect on its demand curve, and hence we can examine price and quantity transacted before and after the oil supply shock to find the slopes of supply and demand.
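The textbook identification argument can be written out in a few lines (generic notation, mine rather than Deaton’s):

```latex
% Linear demand and supply with an exogenous supply shifter z (e.g. the
% oil price); u and v are demand and supply shocks uncorrelated with z:
q = a - b p + u \quad \text{(demand)}, \qquad
q = c + d p + f z + v \quad \text{(supply)}.

% Equating the two gives the equilibrium price:
p = \frac{a - c - f z + u - v}{b + d}
\quad\Rightarrow\quad
\operatorname{Cov}(z, p) = -\frac{f\,\sigma_z^2}{b + d}.

% Substituting the demand equation and using Cov(z,u) = 0:
\frac{\operatorname{Cov}(z, q)}{\operatorname{Cov}(z, p)}
  = \frac{-b\,\operatorname{Cov}(z, p)}{\operatorname{Cov}(z, p)} = -b.
```

The IV ratio recovers the slope of the demand curve precisely because the instrument moves the supply curve along a fixed demand curve: the parameter being identified is a piece of an explicit structural model, not a treatment effect.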

Note the difference between the supply-and-demand case and the treatment-effects use of instrumental variables. In the former, we have a well-specified system of supply and demand, based on economic theory. Once the supply and demand curves are estimated, we can then perform all sorts of counterfactual and welfare analysis. In the latter, we generate a treatment effect (really, a LATE), but we do not really know why we got the treatment effect we got. Are rail stations useful because they reduce price variance across cities, because they allow increasing returns to scale in industry to be exploited, or for some other reason? Once we know the “why”, we can ask questions like: is there a cheaper way to generate the same benefit? Is heterogeneity in the benefit important? Ought I expect the results from my quasi-experiment in place A and time B to still operate in place C and time D (a famous example being the drug Opren, which was very successful in RCTs but turned out to be particularly deadly when used widely by the elderly)? Worse, the whole idea of LATE is backwards. We traditionally choose a parameter of interest, which may or may not be a treatment effect, and then choose an estimation technique that can credibly estimate that parameter. Quasirandom techniques instead start by specifying the estimation technique and then hunt for a quasirandom setting, or randomize appropriately by “dosing” some subjects and not others, in order to fit the assumptions necessary to generate a LATE. It is often the case that even policymakers do not care principally about the LATE; rather, they care about some measure of welfare impact which is rarely immediately interpretable even if the LATE is credibly known!

Given these problems, why are random and quasirandom techniques so heavily endorsed by the dominant branch of development? Again, let’s turn to Deaton: “There has also been frustration with the World Bank’s apparent failure to learn from its own projects, and its inability to provide a convincing argument that its past activities have enhanced economic growth and poverty reduction. Past development practice is seen as a succession of fads, with one supposed magic bullet replacing another—from planning to infrastructure to human capital to structural adjustment to health and social capital to the environment and back to infrastructure—a process that seems not to be guided by progressive learning.” This is to say, the conditions necessary to estimate theoretical models are so stringent that development economists have been writing noncredible models, estimating them, generating some fad of programs that is used in development for a few years until it turns out not to be a silver bullet, then abandoning the fad for some new technique. Better, the randomistas argue, to forget about external validity for now, and instead just evaluate the LATEs on a program-by-program basis, iterating the types of programs we evaluate until we have a suitable list of interventions that we feel confident work. That is, development should operate like medicine.

We have something of an impasse here. Everyone agrees that on many questions theory is ambiguous in the absence of particular types of data, hence more and better data collection is important. Everyone agrees that many parameters of interest for policymaking require certain assumptions, some more justifiable than others. Deaton’s position is that the parameters of interest to economists by and large are not LATEs, and cannot be generated in a straightforward way from LATEs. Thus, following Nancy Cartwright’s delightful phrasing, if we are to “use” causes rather than just “hunt” for what they are, we have no choice but to specify the minimal economic model which is able to generate the parameters we care about from the data. Glen Weyl’s attempt to rehabilitate price theory and Raj Chetty’s sufficient statistics approach are both attempts to combine the credibility of random and quasirandom inference with the benefits of external validity and counterfactual analysis that model-based structural designs permit.

One way to read Deaton’s prize, then, is as an award for the idea that effective development requires theory if we even hope to compare welfare across space and time, or to understand why policies like infrastructure improvements matter for welfare and hence whether their beneficial effects will remain when moved to a new context. It is a prize which argues against the idea that all theory does is propose hypotheses. For Deaton, going all the way back to his work on the AIDS model, theory serves three roles: proposing hypotheses, suggesting which data is worthwhile to collect, and permitting inference on the basis of that data. A secondary implication, very clear in Deaton’s writing, is that even though the “great escape” from poverty and want is real and continuing, that escape is almost entirely driven by effects which are unrelated to aid and which are uninfluenced by the type of small-bore, partial equilibrium policies for which randomization is generally suitable. And, indeed, the best development economists very much understand this point. The problem is that the media, and less technically capable young economists, still hold the mistaken belief that they can infer everything they want to infer about “what works” solely using the “scientific” methods of random- and quasirandomization. For Deaton, results that are easy to understand and communicate, like the “dollar-a-day” poverty standard or an average treatment effect, are less virtuous than results which carefully situate numbers in the role most amenable to answering an exact policy question.

Let me leave you with three side notes and some links to Deaton’s work. First, I can’t help but laugh at Deaton’s description of his early career in one of his famous “Notes from America”. Deaton, despite being a student of the 1984 Nobel laureate Richard Stone, graduated from Cambridge essentially unaware of how one ought to publish in the big “American” journals like Econometrica and the AER. Cambridge had gone from being the absolute center of economic thought to something of a disconnected backwater, and Deaton, despite having written a paper that would win a prize as one of the best papers published in Econometrica in the late 1970s, had essentially no understanding of the norms of publishing in such a journal! When the history of modern economics is written, the rise of a handful of European programs and their role in reintegrating economics on both sides of the Atlantic will be fundamental. Second, Deaton’s prize should be seen as something of a callback to the prizes of Stone (in 1984) and Meade (in 1977), two of the least known Nobel laureates. I don’t think it is an exaggeration to say that the majority of new PhDs from even the very best programs will have no idea who those two men are, or what they did. But as Deaton mentions, Stone in particular was one of the early “structural modelers”: he was interested in estimating the so-called “deep” or behavioral parameters of economic models in a way that is absolutely universal today, and he was a pioneer in the creation and collection of novel economic statistics whose value was proposed on the basis of economic theory. Quite a modern research program! Third, of the 19 papers in the AER “Top 20 of all time” whose authors were alive during the era of the economics Nobel, 14 have had at least one author win the prize. Should this be a cause for hope for the living outliers, Anne Krueger, Harold Demsetz, Stephen Ross, John Harris, Michael Todaro and Dale Jorgenson?

For those interested in Deaton’s work beyond this short essay, his methodological essay, quoted often in this post, is here. The Nobel Prize technical summary, always a great and well-written read, can be found here.

“Inventing Prizes: A Historical Perspective on Innovation Awards and Technology Policy,” B. Z. Khan (2015)

B. Zorina Khan is an excellent and underrated historian of innovation policy. In her new working paper, she questions the shift toward prizes as an innovation inducement mechanism. The basic problems economists have been grappling with are that patents are costly in terms of litigation, largely due to their uncertainty; that patents impose deadweight loss by granting inventors market power (as noted at least as far back as Nordhaus 1969); and that patent rights can lead to an anticommons which in some cases harms follow-on innovation (see Scotchmer and Green and Bessen and Maskin for the theory, and papers like Heidi Williams’ genome paper for empirics).

There are three main alternatives to patents, as I see them. First, you can give prizes, determined ex-ante or ex-post. Second, you can fund R&D directly with government money, as the NIH does for huge portions of medical research. Third, you can rely on inventors accruing rents to cover their R&D without any government action, for example by keeping their inventions secret, relying on first-mover advantage, or holding market power in complementary goods. We have quite a bit of evidence that the second, in biotech, and the third, in almost every other field, are the primary drivers of innovative activity.

Prizes, however, are becoming more and more common. There are X-Prizes for space and AI breakthroughs, advanced market commitments for new drugs with major third-world benefits, Kremer’s “patent buyout” plan, and many others. Setting the prize amount correctly is of course a challenging problem (one that Kremer’s idea partially solves), and in this sense prizes run “less automatically” than the patent system. What Khan notes is that prizes have been used frequently in the history of innovation, and were in fact common in the late 18th and 19th centuries. How useful were they?

Unfortunately, prizes seem to have suffered many problems. Khan has an entire book on The Democratization of Invention in the 19th century. Foreign observers, and not just Tocqueville, frequently noted how many American inventors came from humble backgrounds, and how many “ordinary people” were dreaming up new products and improvements. This frenzy was often, at the time, credited to the uniquely low-cost and comprehensive U.S. patent system. Patents were simple and inexpensive enough to file that credit for, and rights to, inventions pretty regularly flowed to people who were not politically well connected, and for inventions that were not “popular”.

Prizes, as opposed to patents, often incentivized the wrong projects and rewarded the wrong people. First, prizes were often too small to be economically meaningful; when the well-named Hippolyte Mège-Mouriès made his advances in margarine and garnered the prize offered by Napoleon III, the value of that prize was far less than the value of the product itself. In order to shift effort with prizes, the prize designer needs to know both enough about the social value of the proposed invention to set the prize amount high enough, and enough about the value of alternatives so that the prize doesn’t distort effort away from other inventions that would be created anyway under trade secrecy and first-mover advantage (I discuss this point in much greater depth in my Direction of Innovation paper with Jorge Lemus). Providing prizes for only some inventions may either generate no change in behavior at all, because the prize is too small compared with the other benefits of inventing, or cause inefficient distortions in behavior. Even though, say, a malaria vaccine would be very useful, an enormous prize for a malaria vaccine will distort health researchers’ effort away from other projects in a way that is tough to calculate ex-ante without a huge amount of prize-designer knowledge.

There is a more serious problem with prizes. Because the cutoff for a prize is less clear-cut, there is more room for discretion, and hence a role for wasteful lobbying and for personal connection to trump “democratic invention”. Khan notes that even though the French buyout of Daguerre’s camera patent is cited as a classic example of a patent buyout in the famous Kremer QJE article, it turns out that Daguerre never held any French patent at all! What actually happened was that Daguerre lobbied the government for a sinecure in exchange for making his invention public, but then patented it abroad anyway! There are many other political examples, such as the failure of the uneducated clockmaker John Harrison to be granted a prize for his work on longitude, due partially to the machinations of more upper-class competitors who captured the ear of the prize committee. Examining a database of great inventors on both sides of the Atlantic, Khan found that prizes were often linked to factors like overcoming hardship, having an elite education, or regional ties. That is, the subjectivity of prizes may be stronger than the subjectivity of patents.

So then, we have three problems: prize designers don’t know enough about the relative import of various ideas to set prize amounts optimally, prizes in practice are often too small to have much effect, and prizes lead to more lobbying and biased rewards than patents. We shouldn’t go too far here; prizes may still be an important part of the innovation policy toolkit. But the history Khan lays out certainly makes me more skeptical that they are a panacea.

One final point. I am unconvinced that patents really help the individual or small inventor very much either. I did a bit of hunting: as far as I can tell, there is not a single billionaire who got that way primarily by selling their invention. Many people developed their invention in a firm, but non-entrepreneurial invention, for which the fact that patents create a market for knowledge is supposedly paramount, doesn’t seem to be making anyone super rich. This is even though there are surely a huge number of inventions each worth billions. A good defense of patents as our main innovation policy should really grapple better with this fact.

July 2015 NBER Working Paper (RePEc IDEAS). I’m afraid the paper is gated if you don’t have an NBER subscription, and I was unable to find an ungated copy.

“Buying Locally,” G. J. Mailath, A. Postlewaite & L. Samuelson (2015)

Arrangements where agents commit to buy only from selected vendors, even when more preferred products are available at better prices from other vendors, are common. Consider local currencies like “Ithaca Hours”, which can only be used at other participating stores and which are not generally convertible, or trading circles among co-ethnics even when trust or unobserved product quality is not important. The intuition people have for “buying locally” is, in some sense, to “keep the profits in the community”; that is, even if you don’t care at all about friendly local service or some other utility-enhancing aspect of the local store, you should still patronize it. The fruit vendor should buy from the local bookstore even when its selection is subpar, and the bookseller should in turn buy her fruit locally even when it is cheaper at the supermarket.

At first blush, this seems odd to an economist. Why would people voluntarily buy something they don’t prefer? What Mailath and his coauthors show is that, actually, the noneconomist intuition is at least partially correct when individuals are both sellers and buyers. Here’s the idea. Let there be a local fruit vendor, a supermarket, a local bookstore and a chain bookstore. Since the two markets are not perfectly competitive, firms earn a positive rent with each sale. Assume that, tomorrow, the fruit vendor, the local book merchant, and each of the chain managers draw a random preference. Each food seller is equally likely to need a book sold by either the local or chain store, and likewise each bookstore employee is equally likely to need a piece of fruit sold either by the local vendor or the supermarket; you might think of these preferences as reflecting prices, or geographical distance, or product variety, etc. In equilibrium, prices of each book and each fruit are set equally, and each vendor expects to accrue half the sales.

Now imagine that the local bookstore owner and fruit vendor commit in advance not to patronize the other stores, regardless of which preference is drawn tomorrow. Assume for now that they also commit not to raise prices because of this agreement (this assumption will not be important, it turns out). Now the local stores expect to make 3/4 of all sales, since they still get the purchases of the chain managers with probability .5. Since the markup does not change, and there is a constant profit on each sale, then profits improve. And here is the sustainability part: as long as the harm from buying the “wrong product” is not too large, the benefit for the vendor-as-producer of selling more products exceeds the harm to the vendor-as-consumer of buying a less-than-optimal product.
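The arithmetic in this example can be checked directly (the margin and utility-loss numbers below are invented parameters for illustration, not values from the paper):

```python
# Each local store has two potential customers (the other local vendor and
# a chain manager), earns margin m per sale, and a customer who ends up
# with a less-preferred product loses `loss` in utility.
m, loss = 1.0, 0.6   # assume the margin exceeds the utility loss

# Without the pact: each customer buys from the local store with prob 1/2.
sales_no_pact = 0.5 + 0.5        # expected sales: half of the two customers

# With the pact: the other local vendor always buys locally; the chain
# manager is unaffected and still buys locally with prob 1/2.
sales_pact = 1.0 + 0.5           # => 3/4 of the two potential sales

profit_gain = m * (sales_pact - sales_no_pact)   # vendor-as-producer gain
# Vendor-as-consumer cost: with prob 1/2 the preferred product was at the
# chain store, so the pact forces a suboptimal purchase.
expected_consumer_loss = 0.5 * loss

net = profit_gain - expected_consumer_loss
print(sales_pact / 2, net)   # local share of sales; pact pays iff m > loss
```

With these numbers the pact raises the local share of sales from 1/2 to 3/4 and is worth joining; shrink the margin (or raise the loss from buying the wrong product) and the sign of `net` flips, which is exactly the sustainability condition in the text.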

That tradeoff can be made explicit, but the implication is quite general: as the number of firms you can buy at grows large, the benefit to belonging to a buy local arrangement falls. The harm of having to buy from a local producer is big because it is very unlikely the local producer is your first choice, and the price firms set in equilibrium falls because competition is stronger, hence there is less to gain for the vendor-as-producer from belonging to the buy local agreement. You will only see “buy local” style arrangements, like Ithaca Hours, or social shaming, in communities where vendors-as-consumers already purchase most of what they want from vendors-as-producers in the same potential buy local group.

One thing that isn’t explicit in the paper, perhaps because it is too trivial despite its importance, is how buy local arrangements affect welfare. Two possibilities exist. First, if in-group and out-of-group sellers have the same production costs, then “buy local” arrangements simply replace the producer surplus of out-of-group sellers with deadweight loss and some, perhaps minor, surplus for in group members. They are privately beneficial yet socially harmful. However, an intriguing possibility is that “buy local” arrangements may not harm social welfare at all, even if they are beneficial to in-group members. How is that? In-group members are pricing above marginal cost due to market power. A “buy local” agreement increases the quantity of sales they make. If the in-group member has lower costs than out of group members, the total surplus generated by shifting transactions to the in-group seller may be positive, even though there is some deadweight loss created when consumers do not buy their first choice good (in particular, this is true whenever the average willingness-to-pay differential for people who switch to the in-group seller once the buy local group is formed exceeds the average marginal cost differential between in-group and out-of-group sellers.)
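That parenthetical condition can be written out explicitly (notation mine, not the paper’s). For a transaction that switches from an out-of-group to an in-group seller, with buyer valuations v and seller marginal costs c:

```latex
% Change in total surplus from one switched transaction:
\Delta W = (v_{\mathrm{in}} - c_{\mathrm{in}}) - (v_{\mathrm{out}} - c_{\mathrm{out}})
> 0
\quad\Longleftrightarrow\quad
v_{\mathrm{in}} - v_{\mathrm{out}} > c_{\mathrm{in}} - c_{\mathrm{out}}.
```

Equivalently, total surplus rises exactly when the valuation the switching buyer sacrifices, v_out − v_in, is smaller than the production cost saved, c_out − c_in; averaging over switchers gives the condition stated above.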

May 2015 working paper (RePEc IDEAS version)

On the economics of the Neolithic Revolution

The Industrial and Neolithic Revolutions are surely the two fundamental transitions in the economic history of mankind. The Neolithic involved permanent settlement of previously nomadic, or at best partially foraging, small bands. At least seven independent times, bands somewhere in the world adopted settled agriculture. The new settlements tended to see an increase in inequality, the beginning of privately held property, a number of new customs and social structures, and, most importantly, an absolute decrease in welfare as measured in terms of average height and an absolute increase in the length and toil of working life. Of course, in the long run, settlement led to cities which led to the great inventions that eventually pushed mankind past the Malthusian bounds into our wealthy present, but surely no nomad of ten thousand years ago could have projected that outcome.

Now this must sound strange to any economist, as we can’t help but think in terms of rational choice. Why would any band choose to settle when, as far as we can tell, settling made them worse off? There are only three types of answers compatible with rational choice: either the environment changed such that the nomads who adopted settlement would have been even worse off had they remained nomadic, settlement was a Pareto-dominated equilibrium, or our assumption that the nomads were maximizing something correlated with height is wrong. All might be possible: early 20th century scholars ascribed the initial move to settlement to humans being forced onto oases in the drying post-Ice Age Middle East, evolutionary game theorists are well aware that fitness competitions can generate inefficient Prisoner’s Dilemmas, and humans surely care about reproductive success more than they care about food intake per se.

So how can we separate these potential explanations, or provide greater clarity as to the underlying Neolithic transition mechanism? Two relatively new papers, Andrea Matranga’s “Climate-Driven Technical Change” and Kim Sterelny’s “Optimizing Engines: Rational Choice in the Neolithic”, discuss intriguing theories about what may have happened in the Neolithic.

Matranga writes a simple Malthusian model. The benefit of being nomadic is that you can move to places with better food supply. The benefit of being sedentary is that you use storage technology to insure yourself against lean times, even if that insurance comes at the cost of lower food intake overall. Nomadism, then, is better than settling when there are lots of nearby areas with uncorrelated food availability shocks (since otherwise why bother to move?) or when the potential shocks you might face across the whole area you travel are not that severe (in which case why bother to store food?). If fertility depends on constant access to food, then for Malthusian reasons the settled populations who store food will grow until everyone is just at subsistence, whereas the nomadic populations will eat a surplus during times when food is abundant.
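A toy simulation makes the comparative static concrete (all parameter values are invented, and this is my sketch of the tradeoff rather than Matranga’s actual model): two nearby sites share a common seasonal cycle of amplitude A plus site-specific shocks, and welfare is consumption in the lean season.

```python
import numpy as np

rng = np.random.default_rng(1)

def lean_season_consumption(A, mu=10.0, storage_loss=0.1, years=100_000):
    """Expected lean-season consumption for nomads vs settlers."""
    eps = rng.normal(size=(years, 2))   # idiosyncratic shocks at two sites
    # Nomads move to whichever site is better, but the lean season (-A)
    # hits both sites at once, so mobility cannot undo seasonality:
    nomad = mu - A + eps.max(axis=1)
    # Settlers store the year's average output, losing a fraction in storage:
    settled = (1 - storage_loss) * mu
    return nomad.mean(), settled

nomad_lo, settled = lean_season_consumption(A=0.5)   # mild seasonality
nomad_hi, _ = lean_season_consumption(A=3.0)         # sharp seasonality

print(nomad_lo, nomad_hi, settled)
```

With mild seasons, chasing the better site beats paying the storage cost; with sharp seasons the ordering flips and settlement-with-storage wins, which is the margin Matranga’s climate data exploit.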

It turns out that global “seasonality” – or the difference across the year in terms of temperature and rainfall – was extraordinarily high right around the time agriculture first popped up in the Fertile Crescent. Matranga uses some standard climatic datasets to show that six of the seven independent inventions of agriculture appear to have happened soon after increases in seasonality in their respective regions. This is driven by an increase in seasonality and not just an increase in rainfall or heat: agriculture appears in the cold Andes and in the hot Mideast and in the moderate Chinese heartland. Further, adoption of settlement once your neighbors are farming is most common when you live on relatively flat ground, with little opportunity to change elevation to pursue food sources as seasonality increases. Biological evidence (using something called “Harris lines” on your bones) appears to support the idea that nomads were both better fed yet more subject to seasonal shocks than settled peoples.

What’s nice is that Matranga’s hypothesis is consistent with agriculture appearing many times independently. Any thesis that relies on unique features of the immediate post-Ice Age – such as the decline in megafauna like the Woolly Mammoth due to increasing population, or the oasis theory – will have a tough time explaining the adoption of agriculture in regions like the Andes or China thousands of years after it appeared in the Fertile Crescent. Alain Testart and colleagues in the anthropology literature have made similar claims about the intersection of storage technology and seasonality being important for the gradual shift from nomadism to partial foraging to agriculture, but the Malthusian model and the empirical identification in Matranga will be much more comfortable for an economist reader.

Sterelny, writing in the journal Philosophy of Science, argues that rational choice is a useful framework to explain not only why backbreaking, calorie-reducing agriculture was adopted, but also why settled societies appeared willing to tolerate inequality which was much less common in nomadic bands, and why settled societies exerted so much effort building monuments like Gobekli Tepe, holding feasts, and participating in other seemingly wasteful activity.

Why might inequality have arisen? Settlements need to be defended from thieves, as they contain stored food. Hence settlement sizes may be larger than the size of nomadic bands. Standard repeated games with imperfect monitoring tell us that when repeated interactions become less common, cooperation norms become hard to sustain. Hence collective action can only be sustained through mechanisms other than dyadic future punishment; this is especially true if farmers have more private information about effort and productivity than a band of nomadic hunters. The rise of enforceable property rights, as Bowles and his coauthors have argued, is just such a mechanism.

What of wasteful monuments like Gobekli Tepe? Game theoretic deliberate choice provides two explanations for such seeming wastefulness. First, just as animals consume energy in ostentatious displays in order to signal their fitness (as the starving animal has no energy to generate such a display), societies may construct totems and temples in order to signal to potential thieves that they are strong and not worth trifling with. In the case of Gobekli Tepe, this doesn’t appear to be the case, as there isn’t much archaeological evidence of particular violence around the monument. A second game theoretic rationale, then, is commitment by members of a society. As Sterelny puts it, the reason a gang makes a member get a face tattoo is that, even if the member leaves the gang, the tattoo still puts that member at risk of being killed by the gang’s enemies. Hence the tattoo commits the member not to defect. Settlements around Gobekli Tepe may have contributed to its building in order to commit their members to a set of norms that the monument embodied, and hence permit trade and knowledge transfer within this in-group. I would much prefer to see a model of this hypothesis, but the general point doesn’t seem impossible. At least, Sterelny and Matranga together provide a reasonably complete possible explanation, based on rational behavior and nothing more, of the seemingly-strange transition away from nomadism that made our modern life possible.

Kim Sterelny, “Optimizing Engines: Rational Choice in the Neolithic?”, 2013 working paper. Final version published in the July 2015 issue of Philosophy of Science. Andrea Matranga, “Climate-driven Technical Change: Seasonality and the Invention of Agriculture”, February 2015 working paper, as yet unpublished. No RePEc IDEAS page is available for either paper.

“Bonus Culture: Competitive Pay, Screening and Multitasking,” R. Benabou & J. Tirole (2014)

Empirically, bonus pay as a component of overall remuneration has become more common over time, especially in highly competitive industries which involve high levels of human capital; think of something like management of Fortune 500 firms, where the managers now have their salary determined globally rather than locally. This doesn’t strike most economists as a bad thing at first glance: as long as we are measuring productivity correctly, workers who are compensated based on their actual output will both exert the right amount of effort and have the incentive to improve their human capital.

In an intriguing new theoretical paper, however, Benabou and Tirole point out that many jobs involve multitasking, where workers can take hard-to-measure actions for intrinsic reasons (e.g., I put effort into teaching because I intrinsically care, not because academic promotion really hinges on being a good teacher) or take easy-to-measure actions for which there might be some kind of bonus pay. Many jobs also involve screening: I don’t know who is high quality and who is low quality, and although I would optimally pay people a bonus exactly equal to their cost of effort, I am unable to do so since I don’t know what that cost is. Multitasking and worker screening interact among competitive firms in a really interesting way, since how other firms incentivize their workers affects how workers will respond to my contract offers. Benabou and Tirole show that this interaction means that more competition in a sector, especially when there is a big gap between the quality of different workers, can actually harm social welfare even in the absence of any other sort of externality.

Here is the intuition. For multitasking reasons, when different things workers can do are substitutes, I don’t want to give big bonus payments for the observable output, since if I do the worker will put in too little effort on the intrinsically valuable task: if you pay a trader big bonuses for financial returns, she will not put as much effort into ensuring all the laws and regulations are followed. If there are other finance firms, though, they will make it known that, hey, we pay huge bonuses for high returns. As a result, workers will sort, with all of the high quality traders moving to the high bonus firm and leaving only the low quality traders at the firm with low bonuses. Bonuses are used not only to motivate workers, but also to differentially attract high quality workers when quality is otherwise tough to observe. There is a tradeoff, then: you can either have only low productivity workers but get the balance between hard-to-measure tasks and easy-to-measure tasks right, or you can retain some high quality workers with large bonuses that make those workers exert too little effort on hard-to-measure tasks. When the latter is more profitable, all firms inefficiently begin offering large, effort-distorting bonuses, something they wouldn’t do if they didn’t have to compete for workers.
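The crowding-out effect in that first sentence can be sketched with a toy two-task effort problem. This is not Benabou and Tirole’s actual model – the quadratic cost function and the substitution parameter k are invented – but it shows how a bigger bonus on the measured task pulls effort away from the unmeasured one:

```python
def effort_choice(bonus, intrinsic=1.0, k=0.5):
    """Worker chooses effort x on the bonused task and y on the intrinsic task
    to maximize bonus*x + intrinsic*y - (x**2 + y**2)/2 - k*x*y.
    With k > 0 the tasks are substitutes; the first-order conditions
    (bonus = x + k*y, intrinsic = y + k*x) solve to:"""
    x = (bonus - k * intrinsic) / (1 - k ** 2)
    y = (intrinsic - k * bonus) / (1 - k ** 2)
    return x, y

for b in (0.5, 1.0, 1.5):
    x, y = effort_choice(b)
    print(f"bonus={b}: measured-task effort={x:.2f}, intrinsic-task effort={y:.2f}")
```

Raising the bonus mechanically raises effort on the measured task and, because of the substitution term, lowers effort on the intrinsically motivated task.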

How can we fix things? One easy method is with a bonus cap: if the bonus is capped at the monopsony optimal bonus, then no one can try to screen high quality workers away from other firms with a higher bonus. This isn’t as good as it sounds, however, because there are other ways to screen high quality workers (such as offering lower clawbacks if things go wrong) which introduce even worse distortions, hence bonus caps may simply cause less efficient methods to perform the same screening and same overincentivization of the easy-to-measure output.

When the individual rationality or incentive compatibility constraints in a mechanism design problem are determined in equilibrium, based on the mechanisms chosen by other firms, we sometimes call this a “competing mechanism”. It seems to me that there are quite a number of open questions concerning how to make these sorts of problems tractable; a talented young theorist looking for a fun summer project might find it profitable to investigate this as-yet small literature.

Beyond the theoretical result on screening plus multitasking, Tirole and Benabou also show that their results hold for market competition more general than just perfect competition versus monopsony. They do this through a generalized version of the Hotelling line which appears to have some nice analytic properties, at least compared to the usual search-theoretic models which you might want to use when discussing imperfect labor market competition.

Final copy (RePEc IDEAS version), forthcoming in the JPE.

The Economics of John Nash

I’m in the midst of a four week string of conferences and travel, and terribly backed up with posts on some great new papers, but I can’t let the tragic passing today of John Nash go by without comment. When nonacademics ask what I do, I often say that I work in a branch of applied math called game theory; if you say you are an economist, the man on the street expects you to know when unemployment will go down, or which stocks they should buy, or whether monetary expansion will lead to inflation, questions which the applied theorist has little ability to answer in a satisfying way. But then, if you mention game theory, without question the most common way your interlocutor knows the field is via Russell Crowe’s John Nash character in A Beautiful Mind, so surely, and rightfully, no game theorist has greater popular name recognition.

Now Nash’s contributions to economics are few in number, though enormously influential. He was a pure mathematician who took only one course in economics in his studies; more on this fortuitous course shortly. The contributions are simple to state: Nash founded the theory of non-cooperative games, and he instigated an important, though ultimately unsuccessful, literature on bargaining. Nash essentially wrote only two short papers on each topic, each of which is easy to follow for a modern reader, so I will generally discuss some background on the work rather than the well-known results directly.

First, non-cooperative games. Robert Leonard has a very interesting intellectual history of the early days of game theory, the formal study of strategic interaction, which begins well before Nash. Many like to cite von Neumann’s “Zur Theorie der Gesellschaftsspiele” (“On the Theory of Parlor Games”), from whence we have the minimax theorem, but Emile Borel in the early 1920’s, and Ernst Zermelo with his eponymous theorem a decade earlier, surely form relevant prehistory as well. These earlier attempts, including von Neumann’s book with Morgenstern, did not allow general investigation of what we now call noncooperative games, or strategic situations where players do not attempt to collude. The most famous situation of this type is the Prisoner’s Dilemma, a simple example, yet a shocking one: competing agents, be they individuals, firms or countries, may (in a sense) rationally find themselves taking actions which both parties agree are worse than some alternative. Given the U.S. government interest in how a joint nuclear world with the Soviets would play out, analyzing situations of that type was not simply a “Gesellschaftsspiele” in the late 1940s; Nash himself was funded by the Atomic Energy Commission, and RAND, site of a huge amount of important early game theory research, was linked to the military.

Nash’s insight was, in retrospect, very simple. Consider a soccer penalty kick, where the only options are to kick left or right for the shooter, and to simultaneously dive left or right for the goalie. Now at first glance, it seems like there can be no equilibrium: if the shooter will kick left, then the goalie will jump to that side, in which case the shooter would prefer to shoot right, in which case the goalie would prefer to switch as well, and so on. In real life, then, what do we expect to happen? Well, surely we expect that the shooter will sometimes shoot left and sometimes right, and likewise the goalie will mix which way she dives. That is, instead of two strategies for each player, we have a continuum of mixed strategies, where a mixed strategy is simply a probability distribution over the strategies “Left, Right”. This idea of mixed strategies “convexifies” the strategy space so that we can use fixed point theorems to guarantee that an equilibrium exists in every finite-strategy noncooperative game under expected utility (Kakutani’s Fixed Point in the initial one-page paper in PNAS which Nash wrote in his very first year of graduate school, and Brouwer’s Fixed Point in the Annals of Math article which more rigorously lays out Nash’s noncooperative theory). Because of Nash, we are able to analyze essentially whatever strategic situation we want under what seems to be a reasonable solution concept (I optimize given my beliefs about what others will do, and my beliefs are in the end correct). More importantly, the fixed point theorems Nash used to generate his equilibria are now so broadly applied that no respectable economist should now get a PhD without understanding how they work.
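For concreteness, the mixed equilibrium of the penalty kick can be computed directly from the indifference conditions: each player’s mixture must leave the other player indifferent between his two pure strategies. The scoring probabilities below are made up for illustration:

```python
# Probability the shooter scores, indexed [kick][dive]; numbers are invented.
score = {('L', 'L'): 0.30, ('L', 'R'): 0.90,
         ('R', 'L'): 0.95, ('R', 'R'): 0.40}

# Shooter kicks Left with probability p chosen so the goalie is indifferent
# between diving L and R: p*S[L,L] + (1-p)*S[R,L] = p*S[L,R] + (1-p)*S[R,R].
p = (score[('R', 'L')] - score[('R', 'R')]) / (
    score[('R', 'L')] - score[('R', 'R')] + score[('L', 'R')] - score[('L', 'L')])

# Goalie dives Left with probability q chosen so the shooter is indifferent
# between kicking L and R: q*S[L,L] + (1-q)*S[L,R] = q*S[R,L] + (1-q)*S[R,R].
q = (score[('L', 'R')] - score[('R', 'R')]) / (
    score[('L', 'R')] - score[('R', 'R')] + score[('R', 'L')] - score[('L', 'L')])

print(f"shooter kicks Left with p={p:.3f}, goalie dives Left with q={q:.3f}")
```

Note the equilibrium logic: my mixing probability is pinned down by your payoffs, not mine, since its whole job is to make you willing to randomize.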

(A quick aside: it is quite interesting to me that game theory, as opposed to Walrasian/Marshallian economics, does not derive from physics or other natural sciences, but rather from a program at the intersection of formal logic and mathematics, primarily in Germany, primarily in the early 20th century. I still have a mind to write a proper paper on this intellectual history at some point, but there is a very real sense in which economics post-Samuelson, von Neumann and Nash forms a rather continuous methodology with earlier social science in the sense of qualitative deduction, whereas it is our sister social sciences which, for a variety of reasons, go on writing papers without the powerful tools of modern logic and the mathematics which followed Hilbert. When Nash makes claims about the existence of equilibria due to Brouwer, the mathematics is merely the structure holding up and extending ideas concerning the interaction of agents in noncooperative systems that would have been totally familiar to earlier generations of economists who simply didn’t have access to tools like the fixed point theorems, in the same way that Samuelson and Houthakker’s ideas on utility are no great break from earlier work aside from their explicit incorporation of deduction on the basis of relational logic, a tool unknown to economists in the 19th century. That is, I claim the mathematization of economics in the mid 20th century represents no major methodological break, nor an attempt to ape the natural sciences. Back to Nash’s work in the direct sense.)

Nash only applies his theory to one game: a simplified version of poker called Kuhn Poker, due to his Princeton colleague Harold Kuhn. It turned out that the noncooperative solution was not immediately applicable, at least to the types of applied situations where it is now commonplace, without a handful of modifications. In my read of the intellectual history, noncooperative games were a bit of a failure outside the realm of pure math in their first 25 years because we still needed Harsanyi’s purification theorem and Bayesian equilibria to understand what exactly was going on with mixed strategies, Reinhard Selten’s idea of subgame perfection to reasonably analyze games with multiple stages, and the idea of mechanism design of Gibbard, Vickrey, Myerson, Maskin, and Satterthwaite (among others) to make it possible to discuss how institutions affect outcomes which are determined in equilibrium. It is not simply economists that Nash influenced; among many others, his work directly leads to the evolutionary games of Maynard Smith and Price in biology and linguistics, the upper and lower values of his 1953 results have been used to prove other mathematical results and to discuss what is meant as truth in philosophy, and Nash equilibrium is widespread in the analysis of voting behavior in political science and international relations.

The bargaining solution is a trickier legacy. Recall Nash’s sole economics course, which he took as an undergraduate. In that course, he wrote a term paper, eventually to appear in Econometrica, where he attempted to axiomatize what will happen when two parties bargain over some outcome. The idea is simple. Whatever the bargaining outcome is, we want it to satisfy a handful of reasonable assumptions. First, since expected utility functions are only defined up to positive affine transformations, the bargaining outcome should not be affected by such rescalings of either player’s utility function. Second, the outcome should be Pareto optimal: the players would have to be mighty spiteful to throw away part of the pie rather than give it to at least one of them. Third, given their utility functions, players should be treated symmetrically. Fourth (and a bit controversially, as we will see), Nash insisted on Independence of Irrelevant Alternatives, meaning that if f(T) is the set of “fair bargains” when T is the set of all potential bargains, then if the potential set of bargains is smaller yet still contains f(T), say S strictly contained by T where f(T) is in S, then f(T) must remain the bargaining outcome. It turns out that under these assumptions, there is a unique outcome which maximizes (u(x)-u(d))*(v(x)-v(d)), where u and v are each player’s utility functions, x is the vector of payoffs under the eventual bargain, and d the “status-quo” payoff if no bargain is made. This is natural in many cases. For instance, if two identical agents are splitting a dollar, then 50-50 is the only Nash outcome. Uniqueness is not at all obvious: recall the Edgeworth box and you will see that individual rationality and Pareto optimality alone leave many potential equilibria. Nash’s result is elegant and surprising, and it is no surprise that Nash’s grad school recommendation letter famously was only one sentence long: “This man is a genius.”
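In the simplest possible case – linear utilities over money – the Nash product can be maximized numerically. The grid search below is just a sanity check of the formula, not anything from Nash’s paper:

```python
def nash_split(total=1.0, d=(0.0, 0.0), steps=10_000):
    """Grid-search maximizer of the Nash product (x1-d1)*(x2-d2) over splits
    x1 + x2 = total, assuming linear (risk-neutral) utilities, where d is
    each player's disagreement payoff."""
    best, best_x1 = -1.0, None
    for i in range(steps + 1):
        x1 = total * i / steps
        x2 = total - x1
        prod = (x1 - d[0]) * (x2 - d[1])
        if prod > best:
            best, best_x1 = prod, x1
    return best_x1, total - best_x1

print(nash_split())               # identical agents splitting a dollar: 50-50
print(nash_split(d=(0.30, 0.0)))  # one agent already holds thirty cents
```

The second call shows how a better disagreement payoff for one player shifts the split in that player’s favor, a property that drives the complaint about threats below.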

There is one problem with Nash bargaining, however. Nash famously suffered from mental illness in real life, and there is an analogous split personality between the idea of Nash equilibrium and the idea of Nash bargaining: where exactly are threats in Nash’s bargaining theory? That is, Nash bargaining as an idea completely follows from the cooperative theory of von Neumann and Morgenstern. Consider two identical agents splitting a dollar once more. Imagine that one of the agents already has 30 cents, so that only 70 of the cents are actually in the middle of the table. The Nash solution is that the person who starts with the thirty cents eventually winds up with 65 cents, and the other person with 35. But play this out in your head.

Player 1: “I, already having the 30 cents, should get half of what remains. It is only fair, and if you don’t give me 65 I will walk away from this table and we will each get nothing more.”

Player 2: “What does that have to do with it? The fair outcome is 50 cents each, which leaves you with more than your original thirty, so you can take your threat and go jump off a bridge.”

That is, 50/50 might be a reasonable solution here, right? This might make even more sense if we take a more concrete example: bargaining over wages. Imagine the prevailing wage for CEOs in your industry is $250,000. Two identical CEOs will generate $500,000 in value for the firm if hired. CEO Candidate One has no other job offer. CEO Candidate Two has an offer from a job with similar prestige and benefits, paying $175,000. Surely we can’t believe that the second CEO will wind up with higher pay, right? It is a completely noncredible threat to take the $175,000 offer, hence it shouldn’t affect the bargaining outcome. A pet peeve of mine is that many applied economists are still using Nash bargaining – often in the context of the labor market! – despite this well-known problem.

Nash was quite aware of this, as can be seen by his 1953 Econometrica, where he attempts to give a noncooperative bargaining game that reaches the earlier axiomatic outcome. Indeed, this paper inspired an enormous research agenda called the Nash Program devoted to finding noncooperative games which generate well-known or reasonable-sounding cooperative solution outcomes. In some sense, the idea of “implementation” in mechanism design, where we investigate whether there exists a game which can generate socially or coalitionally preferred outcomes noncooperatively, can be thought of as a successful modern branch of the Nash program. Nash’s ’53 noncooperative game simply involves adding a bit of noise into the set of possible outcomes. Consider splitting a dollar again. Let a third party tell each player to name how many cents they want. If the joint requests are feasible, then the dollar is split (with any remainder thrown away), else each player gets nothing. Clearly every split of the dollar on the Pareto frontier is a Nash equilibrium, as is each player requesting the full dollar and getting nothing. However, if there is a tiny bit of noise about whether the pie is exactly one dollar, or $0.99, or $1.01, etc., then when deciding whether to ask for more money, I will have to weigh the higher payoff if the joint demand is feasible against the payoff of zero if my increased demand makes the split impossible and hence neither of us earns anything. In a rough sense, Nash shows that as the distribution of noise becomes degenerate around the true bargaining frontier, players will demand exactly their Nash bargaining outcome.
Of course it is interesting that there exists some bargaining game that generates the Nash solution, and the idea that we should study noncooperative games which implement cooperative solution concepts is without a doubt seminal, but this particular game seems very strange to me, as I don’t understand what the source of the noise is, why it becomes degenerate, etc.
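A rough numerical version of the smoothed demand game illustrates the degenerate-noise limit. The parametrization below – normal noise around a one-dollar pie, linear utilities, and damped best-response iteration to find the symmetric equilibrium – is my own sketch, not Nash’s construction:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_response(other, sigma, grid=2000):
    """Demanding x pays x * P(x + other <= 1 + eps), eps ~ N(0, sigma):
    you get your demand if the noisy pie covers both demands, else nothing."""
    best, best_x = -1.0, 0.0
    for i in range(grid + 1):
        x = i / grid
        payoff = x * (1.0 - Phi((x + other - 1.0) / sigma))
        if payoff > best:
            best, best_x = payoff, x
    return best_x

def symmetric_demand(sigma, iters=100):
    """Damped best-response iteration (naive best responses overshoot and
    cycle around the fixed point, so we average with the previous demand)."""
    d = 0.6  # arbitrary starting demand
    for _ in range(iters):
        d = 0.5 * d + 0.5 * best_response(d, sigma)
    return d

for sigma in (0.10, 0.01):
    print(f"noise sigma={sigma}: symmetric demand = {symmetric_demand(sigma):.3f}")
```

As sigma shrinks, the symmetric demand climbs toward the 50-cent Nash bargaining split, which is the flavor of Nash’s limiting result.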

On the shoulders of Nash, however, bargaining progressed a huge amount. Three papers in particular are worth your time, although hopefully you have seen these before: Kalai and Smorodinsky 1975, who retain the axiomatic approach but drop IIA; Rubinstein’s famous 1982 Econometrica on noncooperative bargaining with alternating offers; and Binmore, Rubinstein and Wolinsky on implementation of bargaining solutions, which deals with the idea of threats as I did above.

You can read all four Nash papers in the original literally during your lunch hour; this seems to me a worthy way to tip your cap toward a man who helped make modern economics possible.

“The Power of Communication,” D. Rahman (2014)

(Before getting to Rahman’s paper, a quick note on today’s Clark Medal, which went to Roland Fryer, an economist at Harvard who is best known for his work on the economics of education. Fryer is no question a superstar, and is unusual in leaving academia temporarily while still quite young to work for the city of New York on improving their education policy. His work is a bit outside my interests, so I will leave more competent commentary to better informed writers.

The one caveat I have, however, is the same one I gave last year: the AEA is making a huge mistake in essentially changing this prize from “Best Economist Under 40” to “Best Applied Microeconomist Under 40”. Of the past seven winners, the only one who isn’t obviously an applied microeconomist is Levin, and yet even he describes himself as “an applied economist with interests in industrial organization, market design and the economics of technology.” It’s not that Saez, Duflo, Levin, Finkelstein, Chetty, Gentzkow and Fryer are doing bad work – their research is all of very high quality and by no means “cute-onomics” – but simply that the type of research they do is a very small subset of what economists work on. This style of work is particularly associated with the two Cambridge schools, and it’s no surprise that all of the past seven winners either did their PhD or postdoc in Cambridge. Where are the macroeconomists, when Europe is facing unemployment rates upwards of 30% in some regions? Where are the finance and monetary folks, when we just suffered the worst global recession since the 1930s? Where are the growth economists, when we have just seen 20 years of incredible economic growth in the third world? Where are the historians? Where are the theorists, microeconomic and econometric, on whose backs the applied work winning the prizes are built? Something needs to change.)

Enough bellyaching. Let’s take a look at Rahman’s clever paper, which might be thought of as “when mediators are bad for society”; I’ll give you another paper shortly about “when mediators are good”. Rahman’s question is simple: can firms maintain collusion without observing what other firms produce? You might think this would be tricky if the realized price only imperfectly reflects total production. Let the market price p be a function of total industry production q plus an epsilon term. Optimally, we would jointly produce the monopoly quantity and split the rents. However, the epsilon term means that simply observing the market price doesn’t tell my firm whether the other firm cheated and produced too much.

What can be done? Green and Porter (1984), along with Abreu, Pearce and Stacchetti two years later, answered that collusion can be sustained: just let the equilibrium involve a price war if the market price drops below a threshold. Sannikov and Skrzypacz provided an important corollary, however: if prices can be monitored continuously, then collusion unravels. Essentially, if actions to increase production can be taken continuously, the price wars required to prevent cheating must be so frequent that the joint profit from sometimes colluding and sometimes fighting price wars is worse than the joint profit from just playing static Cournot.
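A quick simulation conveys the Green-Porter trigger logic in discrete time. All demand and noise parameters here are invented, and the code makes no attempt to verify the deviation incentives; it only shows that the trigger scheme sustains average per-firm profits between static Cournot and the monopoly share:

```python
import random

def green_porter_profit(threshold=4.0, punish_len=3, periods=200_000, seed=0):
    """Toy trigger-strategy simulation with inverse demand P = 10 - Q + eps,
    eps ~ N(0, 1), zero costs. The two firms split the monopoly quantity
    Q = 5 (profit 12.5 each) but revert to Cournot (q = 10/3 each, profit
    100/9 ~ 11.11) for punish_len periods whenever the observed price falls
    below the threshold, even though no one has actually cheated."""
    rng = random.Random(seed)
    collude_Q, cournot_Q = 5.0, 20.0 / 3.0
    punishing, total = 0, 0.0
    for _ in range(periods):
        Q = cournot_Q if punishing else collude_Q
        price = 10.0 - Q + rng.gauss(0.0, 1.0)
        total += price * (Q / 2.0)  # each firm's profit this period
        if punishing:
            punishing -= 1
        elif price < threshold:
            punishing = punish_len  # bad luck triggers a price war anyway
    return total / periods

avg = green_porter_profit()
print(f"average per-firm profit: {avg:.2f} (Cournot ~11.11, monopoly share 12.50)")
```

The occasional on-path price wars are the cost of keeping cheating unprofitable; the average still beats repeated static Cournot, which is why the scheme is worth running.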

Rahman’s trick saves collusion even when, as is surely realistic, cheaters can act in continuous time. Here is how it works. Let there be a mediator – an industry organization or similar – who can talk privately to each firm. Colluding firms alternate who is producing at any given time, with the one producing firm selling the monopoly level of output. The firms who are not supposed to produce at time t obviously have an incentive to cheat and produce a little bit anyway. Once in a while, however, the mediator secretly tells the firm who is meant to produce at time t to produce a very large amount. If the price then turns out high, the mediator gives that firm less time in the future to act as the monopolist, whereas if the price turns out low, the mediator gives that firm more monopolist time in the future. The latter condition is required to incentivize the producing firm to actually ramp up production when told to do so. Either a capacity constraint, or a condition on the demand function, is required to keep the producing firm from increasing production too much.

Note that if you are a nonproducing firm and cheat by producing during periods when you were meant to produce zero, and the mediator happens to secretly ask the temporary monopolist to produce a large amount, you are just increasing the probability that the other firm gets to act as the monopolist in the future while you get to produce nothing. Even better, since the mediator only occasionally asks the producing firm to overproduce, and other firms don’t know when this time might be, the nonproducing firms are always wary of cheating. That is, the mediator’s ability to make private recommendations permits more scope for collusion than is available to firms whose only option is to punish based on continuously-changing public prices, because there are only rare yet unknown times when cheating could be detected. What’s worse for policymakers, the equilibrium here, which involves occasional overproduction, shows that such overproduction is being used to help maintain collusion, not to deviate from it; add overproduction to Green-Porter price wars as phenomena which look like collusion breaking down but are instead collusion being maintained.
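A toy Monte Carlo makes the detection logic visible. The linear demand, the output levels, and the “low price” threshold are all invented; the point is only that a secretly cheating rival shifts the price distribution down exactly in the periods when the mediator has instructed overproduction:

```python
import random

def low_price_freq(cheat_extra, trials=100_000, seed=1):
    """Frequency of a 'low' price in secret audit periods (toy numbers).

    In an audit period the mediator tells the designated producer to flood
    the market with q = 6; price is 10 - q + eps with eps ~ N(0, 1), and
    'low' means below 4, the expected flooded price. A cheating rival
    secretly adds cheat_extra units of output on top."""
    rng = random.Random(seed)
    low = 0
    for _ in range(trials):
        q = 6.0 + cheat_extra
        price = 10.0 - q + rng.gauss(0.0, 1.0)
        if price < 4.0:
            low += 1
    return low / trials

honest = low_price_freq(cheat_extra=0.0)
cheating = low_price_freq(cheat_extra=1.0)
print(f"P(low price in audit period) honest: {honest:.3f}, cheating: {cheating:.3f}")
```

Since a low price in an audit period awards more future monopoly time to the instructed firm, a cheater is transferring future rents to its rival, and since audit periods are secret it can never cheat safely.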

Final working paper (RePEc IDEAS). Final version published in AER 2014. If you don’t care about proof details, the paper is actually a very quick read. Perhaps no surprise, but the results in this paper are very much related to those in Rahman’s excellent “Who will Monitor the Monitor?” which was discussed on this site four years ago.

“Editor’s Introduction to The New Economic History and the Industrial Revolution,” J. Mokyr (1998)

I taught a fun three hours on the Industrial Revolution in my innovation PhD course this week. The absolutely incredible change in the condition of mankind that began in a tiny corner of Europe in an otherwise unremarkable 70-or-so years is totally fascinating. Indeed, the Industrial Revolution and its aftermath are so important to human history that I find it strange that we give people PhDs in social science without requiring at least some study of what happened.

My post today draws heavily on Joel Mokyr’s lovely, if lengthy, summary of what we know about the period. You really should read the whole thing, but if you know nothing about the IR, there are really five facts of great importance which you should be aware of.

1) The world was absurdly poor from the dawn of mankind until the late 1800s, everywhere.
Somewhere like Chad or Nepal today fares better on essentially any indicator of development than England, the wealthiest place in the world, in the early 1800s. This is hard to believe, I know. Life expectancy was in the 30s in England, infant mortality was about 150 per 1000 live births, literacy was minimal, and median wages were perhaps 3 to 4 times subsistence. Chad today has a life expectancy of 50, infant mortality of 90 per 1000, a literacy rate of 35%, and urban median wages of roughly 3 to 4 times subsistence. Nepal fares even better on all counts. The air from the “dark, Satanic mills” of William Blake would have made Beijing blush, “night soil” was generally just thrown on to the street, children as young as six regularly worked in mines, and 60 to 80 hours a week was a standard industrial schedule.

The richest places in the world were never more than 5x subsistence before the mid 1800s

Despite all of this, there was incredible voluntary urbanization: those dark, Satanic mills were preferable to the countryside. My own ancestors were among the Irish that fled the Potato famine. Mokyr’s earlier work on the famine, which happened in the British Isles after the Industrial Revolution, suggests 1.1 to 1.5 million people died from a population of about 7 million. This is similar to the lower end of the range for the percentage killed during the Cambodian genocide, and similar to the median estimates of the death percentage during the Rwandan genocide. That is, even in the British Isles, famines that would shock the world today were not unheard of. And even if you wanted to leave the countryside, it may have been difficult to do so. After Napoleon, serfdom remained widespread east of the Elbe river in Europe, passes like the “Wanderbücher” were required if one wanted to travel, and coercive labor institutions that tied workers to specific employers were common. This is all to say that the material state of mankind before and during the Industrial Revolution, essentially anywhere in the world, would be seen as outrageous deprivation to us today; palaces like Versailles are not representative, as should be obvious, of how most people lived. Remember also that we are talking about Europe in the early 1800s; estimates of wages in other “rich” societies of the past are even closer to subsistence.

2) The average person did not become richer, nor was overall economic growth particularly spectacular, during the Industrial Revolution; indeed, wages may have fallen between 1760 and 1830.

The standard dating of the Industrial Revolution is 1760 to 1830. You might think: factories! The railroad! The steam engine! High Britannia! How on Earth could people have become poorer? And yet it is true. Brad DeLong has an old post showing Bob Allen’s wage reconstructions: Allen found that British wages in 1860 were still below their 1720 level! John Stuart Mill, in his 1870 textbook, is still unsure whether all of the great technological achievements of the Industrial Revolution would ever meaningfully improve the state of the mass of mankind. And Mill wasn’t the only one who noticed: a couple of German friends, whom you may know, were writing about the wretched state of the Working Class in Britain in the 1840s as well.

3) Major macro inventions, and growth, of the type seen in England in the late 1700s and early 1800s happened many times in human history.

The Iron Bridge in Shropshire, 1781, proving strength of British iron

The Industrial Revolution must surely be “industrial”, right? The dating of the IR’s beginning to 1760 is at least partially due to the three great inventions of that decade: the Watt engine, Arkwright’s water frame, and the spinning jenny. Two decades later came Cort’s famous puddling process for making strong iron. The industries affected by those inventions, cotton and iron, are the prototypical industries of England’s industrial height.

But if big macro-inventions, and a period of urbanization, are “all” that defines the Industrial Revolution, then there is nothing unique about the British experience. The Song Dynasty in China saw the gun, movable type, a primitive Bessemer process, a modern canal lock system, the steel curved moldboard plow, and a huge increase in arable land following public works projects. The Netherlands in the late 16th and early 17th centuries grew faster, and eventually became richer, than Britain ever did during the Industrial Revolution. We have many other examples of short-lived periods of growth and urbanization: ancient Rome, Muslim Spain, the peak of the Caliphate following Harun ar-Rashid, etc.

We care about England’s growth and invention because of what followed 1830, not what happened between 1760 and 1830. England was able to take its inventions and set itself on a path to break the Malthusian bounds – I find Galor and Weil’s model the best for understanding what is necessary to move from a Malthusian world of limited long-run growth to a modern world of ever-increasing human capital and economic bounty. Mokyr puts it this way: “Examining British economic history in the period 1760-1830 is a bit like studying the history of Jewish dissenters between 50 B.C. and 50 A.D. At first provincial, localized, even bizarre, it was destined to change the life of every man and woman…beyond recognition.”

4) It is hard for us today to understand how revolutionary ideas like “experimentation” or “probability” were.

In his two most famous books, The Gifts of Athena and The Lever of Riches, Mokyr has provided exhaustive evidence about the importance of “tinkerers” in Britain. That is, there were probably something on the order of tens of thousands of folks in industry, many not terribly well educated, who avidly followed new scientific breakthroughs, who were aware of the scientific method, who believed in the existence of regularities which could be taken advantage of by man, and who used systematic processes of experimentation to learn what works and what doesn’t (the development of English porter is a great case study). It is impossible to overstate how unusual this was. In Germany and France, science was devoted mainly to the state, or to thought for thought’s sake, rather than to industry. The idea of everyday, uneducated people using scientific methods somewhere like ar-Rashid’s Baghdad is inconceivable. Indeed, as Ian Hacking has shown, it wasn’t just that fundamental concepts like “probabilistic regularities” were difficult to understand: the whole concept of discovering something based on probabilistic output would not have made sense to all but the very cleverest person before the Enlightenment.

The existence of tinkerers with access to a scientific mentality was critical because it allowed big inventions or ideas to be refined until they proved useful. England did not just invent the Newcomen engine, put it to work in mines, and then give up. Rather, England developed that Newcomen engine, a boisterous monstrosity, until it could profitably be used to drive trains and ships. In Gifts of Athena, Mokyr writes that fortune may sometimes favor the unprepared mind with a great idea; however, it is the development of that idea which really matters, and to develop macroinventions you need a small but not tiny cohort of clever, mechanically gifted, curious citizens. Some have given credit to the political system, or to the patent system, for the widespread tinkering, but the qualitative historical evidence I am aware of appears to lean most strongly toward cultural explanations. One great piece of evidence is that contemporaries often wrote about the pattern whereby a Frenchman would invent something of scientific importance, yet the idea would diffuse and be refined in Britain. Any explanation of British uniqueness must depend on Britain’s ability to refine inventions.

5) The best explanations for “why England? why in the late 1700s? why did growth continue?” do not involve colonialism, slavery, or famous inventions.

First, we should dispose of colonialism and slavery. Exports to India were not particularly important compared to exports to non-colonial regions, slavery was a tiny portion of British GDP and savings, and many other countries were equally well-disposed to profit from slavery and colonialism as of the mid-1700s, yet the IR was limited to England. Expanding beyond Europe, Deirdre McCloskey notes that “thrifty self-discipline and violent expropriation have been too common in human history to explain a revolution utterly unprecedented in scale and unique to Europe around 1800.” As for famous inventions, we have already noted how common bursts of cleverness were in the historical record, and there is nothing to suggest that England was particularly unique in its macroinventions.

To my mind, this leaves two big, competing explanations: Mokyr’s argument that tinkerers and a scientific mentality allowed Britain to adapt and diffuse its big inventions rapidly enough to push the country over the Malthusian hump and into a period of declining population growth after 1870, and Bob Allen’s argument that British wages were historically unique. Essentially, Allen argues that British wages were high relative to British capital costs from the Black Death forward. This means that labor-saving inventions were worthwhile to adopt in Britain even when they weren’t worthwhile in other countries (e.g., his computations on the spinning jenny). If it is worthwhile to adopt certain inventions, then inventors will be able to sell something, hence it is worthwhile to invent those inventions. Once adopted, Britain refined these inventions as it crawled down the learning curve, and eventually it became worthwhile for other countries to adopt the tools of the Industrial Revolution. There is a great deal of debate about who has the upper hand, or indeed whether the two views are even in conflict. I do, however, buy the argument, made by Mokyr and others, that it is not at all obvious that inventors in the 1700s were targeting their inventions toward labor-saving tasks (although at the margin we know there was some directed technical change in the 1860s), nor is it even clear that invention overall during the IR was labor saving (total working hours increased, for instance).
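
Allen’s adoption logic can be sketched as a toy calculation (the numbers below are invented for illustration; they are not Allen’s actual estimates for the jenny): a labor-saving machine is worth adopting when the wage bill it saves exceeds its annualized capital cost, so the identical machine can clear the bar in a high-wage economy and fail it in a low-wage one.

```python
# Toy adoption rule: adopt a labor-saving machine when the annual wage bill
# it saves exceeds the annualized cost of the capital it requires.
def adopt_labor_saving(wage, workers_saved, capital_cost, interest_rate, depreciation):
    annual_capital_cost = capital_cost * (interest_rate + depreciation)
    annual_labor_savings = wage * workers_saved
    return annual_labor_savings > annual_capital_cost

# Two hypothetical economies facing the same machine: the high-wage economy
# adopts (savings of 80 vs. cost of 75 per year), the low-wage one does not
# (savings of 30 vs. cost of 75 per year).
high_wage_economy = adopt_labor_saving(wage=40, workers_saved=2, capital_cost=500,
                                       interest_rate=0.05, depreciation=0.10)
low_wage_economy = adopt_labor_saving(wage=15, workers_saved=2, capital_cost=500,
                                      interest_rate=0.05, depreciation=0.10)
```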

Mokyr’s Editor’s Introduction to “The New Economic History and the Industrial Revolution” (no RePEc IDEAS page). He has a follow-up in the Journal of Economic History, 2005, examining further the role of an Enlightenment mentality in allowing for the rapid refinement and adoption of inventions in 18th century Britain, and hence the eventual exit from the Malthusian trap.

“The Contributions of the Economics of Information to Twentieth Century Economics,” J. Stiglitz (2000)

There have been three major methodological developments in economics since 1970. First, following the Lucas Critique we are reluctant to accept policy advice which is not the result of directed behavior on the part of individuals and firms. Second, developments in game theory have made it possible to reformulate questions like “why do firms exist?”, “what will result from regulating a particular industry in a particular way?”, “what can I infer about the state of the world from an offer to trade?”, among many others. Third, imperfect and asymmetric information was shown to be of first-order importance for analyzing economic problems.

Why is information so important? Prices, Hayek taught us, solve the problem of asymmetric information about scarcity. The price vector is a sufficient statistic for everything about production processes in every firm, as far as generating efficient behavior is concerned. The simple existence of asymmetric information, then, is not obviously a problem for economic efficiency. And if asymmetric information about big things like scarcity across society does not obviously matter, then how could imperfect information about minor things matter? A shopper, for instance, may not know exactly the price of every car at every dealership. But “Natura non facit saltum”, Marshall once claimed: nature does not make leaps. Tiny deviations from the assumptions of general equilibrium, on this view, should not have large consequences.

But Marshall was wrong: nature does make leaps when it comes to information. The search model of Peter Diamond, most famously, showed that arbitrarily small search costs lead to firms charging the monopoly price in equilibrium, hence a welfare loss completely out of proportion to the search costs. That is, information costs and asymmetries, even very small ones, can theoretically be very problematic for the Arrow-Debreu welfare properties.
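
The mechanics of Diamond’s result can be sketched in a few lines (a minimal sketch under made-up parameters, not Diamond’s model itself): if every rival charges p, a firm can raise its price to min(p + s, monopoly price) without losing buyers who face a search cost s > 0, and iterating this best response converges to the monopoly price no matter how small s is.

```python
# Minimal sketch of the Diamond search-cost logic: given that all rivals
# charge p, the best response is min(p + s, p_monopoly), since buyers with
# search cost s will not pay s to find a price less than s cheaper.
def diamond_equilibrium(search_cost, monopoly_price, start_price=0.0, tol=1e-9):
    p = start_price
    while True:
        p_next = min(p + search_cost, monopoly_price)
        if abs(p_next - p) < tol:  # fixed point reached
            return p_next
        p = p_next

# Even a tiny search cost pushes the equilibrium price all the way to the
# monopoly level, starting from the competitive (zero) price.
price = diamond_equilibrium(search_cost=0.01, monopoly_price=10.0)
```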

Even more interesting, we learned that prices are more powerful than we’d believed. They convey information about scarcity, yes, but also information about other people’s private information or effort. Consider, for instance, efficiency wages. A high wage is not merely a signal of scarcity for a particular type of labor, but is simultaneously an effort inducement mechanism. Given this dual role, it is perhaps not surprising that general equilibrium is no longer Pareto optimal, even if the planner is as constrained informationally as each agent.

How is this? Decentralized economies may, given information cost constraints, exert too much effort searching, or generate inefficient separating equilibria that unravel trades. The beautiful equity/efficiency separation of the Second Welfare Theorem does not hold in a world of imperfect information. A simple example on this point is that it is often useful to allow some agents suffering moral hazard worries to “buy the firm”, mitigating the incentive problem, but limited liability means this may not happen unless those particular agents begin with a large endowment. That is, a different endowment, where the agents suffering extreme moral hazard problems begin with more money and are able to “buy the firm”, leads to more efficient production (potentially in a Pareto sense) than an endowment where those workers must be provided with information rents in an economy-distorting manner.

It is a strange fact that many social scientists feel economics to some extent stopped progressing by the 1970s. All the important basic results were, in some sense, known. How untrue this is! Imagine labor economics without search models, trade without monopolistically competitive equilibria, IO or monetary policy without mechanism design, finance without formal models of price discovery and equilibrium noise trading: all would be impossible given the tools we had in 1970. The explanations that preceded modern game theoretic and information-laden explanations are quite extraordinary: Marshall observed that managers have interests different from owners, yet nonetheless are “well-behaved” in running firms in a way acceptable to the owner. His explanation was to credit British upbringing and morals! As Stiglitz notes, this is not an explanation we would accept today. Rather, firms have used a number of intriguing mechanisms to structure incentives in a way that limits agency problems, and we now possess the tools to analyze these mechanisms rigorously.

Final 2000 QJE (RePEc IDEAS)

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

How do the social returns to R&D differ from the private returns? We must believe there is a positive gap between the two given the widespread policies of subsidizing R&D investment. The problem is measuring the gap: theory gives us a number of reasons why firms may do more R&D than the social optimum. Most intuitively, a lot of R&D contains “business stealing” effects, where some of the profit you earn from your new computer chip comes from taking sales away from me, even if your chip is only slightly better than mine. Business stealing must be weighed against the fact that some of the benefits of knowledge a firm creates are captured by other firms working on similar problems, and the fact that consumers get surplus from new inventions as well.

My read of the literature is that we don’t know much about how aggregate social returns to research differ from private returns. The very best work is at the industry level, such as Trajtenberg’s fantastic paper on CAT scans, where he formally writes down a discrete choice demand system for new innovations in that product and compares R&D costs to social benefits. The problem with industry-level studies is that, almost by definition, they are studying the social return to R&D in ex-post successful new industries. At an aggregate level, you might think, well, just include the industry stock of R&D in a standard firm production regression. This will control for within-industry spillovers, and we can make some assumption about the steepness of the demand curve to translate private returns given spillovers into returns inclusive of consumer surplus.

There are two problems with that method. First, what is an “industry” anyway? Bloom et al point out in the present paper that even though Apple and Intel do very similar research, as measured by the technology classes they patent in, they don’t actually compete in the product market. This means that we want to include “within-similar-technology-space stock of knowledge” in the firm production function regression, not “within-product-space stock of knowledge”. Second, and more seriously, if we care about social returns, we want to subtract out from the private return to R&D any increase in firm revenue that just comes from business stealing with slightly-improved versions of existing products.

Bloom et al do both in a very interesting way. First, they write down a model where firms get spillovers from research in similar technology classes, then compete with product market rivals; technology space and product market space are correlated but not perfectly so, as in the Apple/Intel example. They estimate spillovers in technology space using measures of closeness in terms of patent classes, and measure closeness in product space based on the SIC industries that firms jointly compete in. The model is overidentified with respect to spillovers: if technological spillovers exist, then, conditional on the model, evidence should appear in firm market value, firm R&D totals, firm productivity, and firm patent activity. No big surprises, given your intuition: technological spillovers to other firms can be seen in every estimated equation, and business stealing R&D, though small in magnitude, is a real phenomenon.
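
Their technology-space metric is in the spirit of Jaffe’s closeness measure: the uncentered correlation (cosine similarity) of two firms’ patent-class share vectors. A minimal sketch, with invented share vectors standing in for real patent data:

```python
# Uncentered correlation (cosine similarity) of two firms' shares of
# patenting across technology classes: 1 means identical research mixes,
# 0 means completely disjoint ones.
import math

def tech_closeness(shares_a, shares_b):
    dot = sum(a * b for a, b in zip(shares_a, shares_b))
    norm_a = math.sqrt(sum(a * a for a in shares_a))
    norm_b = math.sqrt(sum(b * b for b in shares_b))
    return dot / (norm_a * norm_b)

# Firms patenting in similar classes score near 1 even if, like Apple and
# Intel, they rarely meet in the product market; disjoint portfolios score
# near 0. Share vectors here are made up for illustration.
close = tech_closeness([0.6, 0.3, 0.1], [0.5, 0.4, 0.1])
far = tech_closeness([0.9, 0.1, 0.0], [0.0, 0.1, 0.9])
```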

The really important estimate, though, is the level of aggregate social returns compared to private returns. The calculation is non-obvious, and shuttled to an online appendix, but essentially we want to know how increasing R&D by one dollar increases total output (the marginal social return) and how increasing R&D by one dollar increases firm revenue (the marginal private return). The former may exceed the latter if the benefits of R&D spill over to other firms, but the latter may exceed the former if lots of R&D just leads to business stealing. Note that any benefits in terms of consumer surplus are omitted. Bloom et al find aggregate marginal private returns on the order of 20%, and social returns on the order of 60% (a gap referred to as “29.2%” instead of “39.2%” in the paper; come on, referees, this is a pretty important thing to not notice!). If it weren’t for business stealing, the gap between social and private returns would be ten percentage points higher. I confess a little bit of skepticism here; do we really believe that for the average R&D performing firm, the marginal private return on R&D is 20%? Nonetheless, the estimate that social returns exceed private returns is important. Even more important is the insight that the gap between social and private returns depends on the size of the technology spillover. In Bloom et al’s data, large firms tend to do work in technology spaces with more spillovers, while small firms tend to work on fairly idiosyncratic R&D; to greatly simplify what is going on, large firms are doing more general R&D than the very product-specific R&D small firms do. This means that the gap between private and social return is larger for large firms, and hence the justification for subsidizing R&D might be highest for very large firms. Government policy in the U.S. used to implicitly recognize this intuition, shuttling R&D funds to the likes of Bell Labs.
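
The flavor of the appendix calculation can be sketched as a back-of-the-envelope decomposition (the spillover and business-stealing magnitudes below are illustrative placeholders chosen to be roughly consistent with the rounded figures quoted above, not the paper’s exact estimates):

```python
# Back-of-the-envelope: the marginal social return adds the spillover
# benefit accruing to other firms to the private return, and subtracts the
# part of private revenue that is pure business stealing.
def marginal_social_return(private_return, spillover_benefit, business_stealing):
    return private_return + spillover_benefit - business_stealing

private = 0.20    # marginal private return to R&D (~20% in the paper)
spillover = 0.50  # illustrative technology spillover to other firms
stealing = 0.10   # illustrative business-stealing component

social = marginal_social_return(private, spillover, stealing)  # ~0.60
gap = social - private  # ~0.40; absent business stealing it would be ~0.50
```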

All in all an important contribution, though this is by no means the last word on spillovers; I would love to see a paper asking why firms don’t do more R&D given the large private returns we see here (and in many other papers, for that matter). I am also curious how R&D spillovers compare to spillovers from other types of investments. For instance, an investment increasing demand for product X also increases demand for any complementary products, leads to increased revenue that is partially captured by suppliers with some degree of market power, etc. Is R&D really that special compared to other forms of investment? Not clear to me, especially if we are restricting to more applied, or more process-oriented, R&D. At the very least, I don’t know of any good evidence one way or the other.

Final version, Econometrica 2013 (RePEc IDEAS version); the paper essentially requires reading the Appendix in order to understand what is going on.

