
What Randomization Can and Cannot Do: The 2019 Nobel Prize

It is Nobel Prize season once again, a grand opportunity to dive into some of our field’s most influential papers and to consider their legacy. This year’s prize was inevitable: an award to Abhijit Banerjee, Esther Duflo, and Michael Kremer for popularizing the hugely influential experimental approach to development. It is only fitting that my writeup comes late this year, as the anti-government road blockades here in Ecuador delayed my return to the internet-enabled world – developing countries face many barriers to reaching prosperity, and rarely have I been so personally aware of the effects of place on productivity as I was this week!

The reason for the prize is straightforward: an entire branch of economics, development, looks entirely different from the way it did thirty years ago. Development used to be essentially a branch of economic growth. Researchers studied topics like the productivity of large versus small farms, the nature of “marketing” (or the nature of markets and how economically connected different regions in a country are), or the necessity of exports versus industrialization. Studies were almost wholly observational, deep data collections with throwaway references to old-school growth theory. Policy was largely driven by the subjective impressions of donors or program managers about projects that “worked”. To be a bit too honest – it was a dull field, and hence a backwater. And worse than dull, it was a field where scientific progress was seriously lacking.

Banerjee has a lovely description of the state of affairs back in the 1990s. Lots of probably-good ideas were funded, informed deeply by history, but with very little convincing evidence that highly-funded projects were achieving their stated aims. The World Bank Sourcebook recommended everything from scholarships for girls to vouchers for poor children to citizens’ report cards. Did these actually work? Banerjee quotes a description of a program providing computer terminals in rural Madhya Pradesh which explains that, due to a lack of electricity and poor connectivity, “only a few of the kiosks have proved to be commercially viable”, then notes, without irony, that “following the success of the initiative,” similar programs would be funded. Clearly this state of affairs is unsatisfactory. Surely we should be able to evaluate the projects we’ve funded already? And better, surely we should structure those evaluations to inform future projects? Banerjee again: “the most useful thing a development economist can do in this environment is stand up for hard evidence.”

And where do we get hard evidence? If by this we mean internal validity – that is, whether the effect we claim to have seen is actually caused by a particular policy in a particular setting – applied econometricians of the “credibility revolution” in labor in the 1980s and 1990s provided an answer. Either take advantage of natural variation with useful statistical properties, like the famed regression discontinuity, or else randomize treatment like a medical study. The idea here is that the assumptions needed to interpret a “treatment effect” are often less demanding than those needed to interpret the estimated parameter of an economic model, hence more likely to be “real”. The problem in development is that most of what we care about cannot be randomized. How are we, for instance, to randomize whether a country adopts import substitution industrialization or not, or randomize farm size under land reform – and at a scale large enough for statistical inference?

What Banerjee, Duflo, and Kremer noticed is that much of what development agencies do in practice has nothing to do with those large-scale interventions. The day-to-day work of development is making sure teachers show up to work, vaccines are distributed and taken up by children, corruption does not deter the creation of new businesses, and so on. By breaking down the work of development on the macro scale to evaluations of development at micro scale, we can at least say something credible about what works in these bite-size pieces. No longer should the World Bank Sourcebook give a list of recommended programs, based on handwaving. Rather, if we are to spend 100 million dollars sending computers to schools in a developing country, we should at least be able to say “when we spent 5 million on a pilot, we designed the pilot so as to learn that computers in that particular setting led to a 12% decrease in dropout rate, and hence a 34%-62% return on investment according to standard estimates of the link between human capital and productivity.” How to run those experiments? How should we set them up? Who can we get to pay for them? How do we deal with “piloting bias”, where the initial NGO we pilot with is more capable than the government we expect to act on evidence learned in the first study? How do we deal with spillovers from randomized experiments, econometrically? Banerjee, Duflo, and Kremer not only ran some of the famous early experiments, they also established the premier academic institution for running these experiments – J-PAL at MIT – and further wrote some of the best known practical guides to experiments in development.

Many of the experiments run by the three winners are now canonical. Let’s start with Michael Kremer’s paper on deworming, with Ted Miguel, in Econometrica. Everyone agreed that deworming kids infected with things like hookworm has large health benefits for the children directly treated. But since worms are spread by outdoor bathroom use and other poor hygiene practices, one infected kid can also harm nearby kids by spreading the disease. Kremer and Miguel suspected that one reason school attendance is so poor in some developing countries is the disease burden, and hence that reducing infections among one kid benefits the entire community, and neighboring ones as well, by reducing overall infection. By randomizing mass school-based deworming, and measuring school attendance both at the focal and at neighboring schools, they found that villages as far as 4km away saw higher school attendance (4km rather than the 6km reported in the original paper, due to the correction of an error in the analysis). Note the good economics here: a change from individual to school-based deworming helps identify spillovers across schools, and some care goes into handling the spatial econometric issue whereby density of nearby schools equals density of nearby population equals differential baseline infection rates at these schools. An extra year of school attendance could therefore be “bought” by a donor for $3.50, much cheaper than other interventions such as textbook programs or additional teachers. Organizations like GiveWell still rate deworming among the most cost-effective educational interventions in the world: in terms of short-run impact, surely this is one of the single most important pieces of applied economics of the 21st century.
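For intuition on how a spillover regression of this kind works, here is a stylized sketch on simulated data. The school locations, effect sizes, and the 4km band are invented for illustration and this is not Miguel and Kremer’s actual specification; it does, however, control for the total number of nearby schools, in the spirit of the density concern mentioned above.

```python
# A stylized spillover regression on simulated data (not Miguel-Kremer's actual
# specification or estimates): attendance is regressed on own treatment and on
# the number of treated schools within 4km, controlling for the total number of
# schools within 4km so that school density itself is not confounded with
# treatment density.
import numpy as np

rng = np.random.default_rng(0)
n = 500
xy = rng.uniform(0, 30, (n, 2))              # school coordinates in km (invented)
treated = rng.binomial(1, 0.5, n)            # school-level randomization

dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
np.fill_diagonal(dists, np.inf)
near = dists <= 4.0
n_near = near.sum(axis=1)                    # all other schools within 4km
n_near_treated = (near & (treated[None, :] == 1)).sum(axis=1)

# simulated truth: +5 points of attendance from own treatment, +1 per treated neighbor
attendance = 60 + 5 * treated + 1.0 * n_near_treated + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), treated, n_near_treated, n_near])
coef, *_ = np.linalg.lstsq(X, attendance, rcond=None)
print(f"own effect: {coef[1]:.2f}, spillover per treated school within 4km: {coef[2]:.2f}")
```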

The laureates have also used experimental design to learn that some previously highly-regarded programs are not as important to development as you might suspect. Banerjee, Duflo, Rachel Glennerster and Cynthia Kinnan studied microfinance rollout in Hyderabad, randomizing the neighborhoods which received access to a major first-gen microlender. These programs generally offer woman-focused, joint-responsibility, high-interest loans a la the Nobel Peace Prize-winning Grameen Bank. 2800 households across the city were initially surveyed about their family characteristics, lending behavior, consumption, and entrepreneurship, with followups a year after the microfinance rollout and again three years later. While women in treated areas were 8.8 percentage points more likely to take a microloan, and existing entrepreneurs did increase spending on their businesses, there was no long-run impact on education, health, or the likelihood that women make important family decisions, nor did businesses become more profitable. That is, credit constraints, at least in poor neighborhoods in Hyderabad, do not appear to be the main barrier to development; this is perhaps not very surprising, since higher-productivity firms in India in the 2000s already had access to reasonably well-developed credit markets, and surely they are the main driver of national income (followup work does find some benefits for very high talent, very poor entrepreneurs, but the key long-run result stands).

Let’s realize how wild this paper is: a literal Nobel Peace Prize was awarded for a form of lending that had not really been rigorously analyzed. This form of lending effectively did not exist in rich countries at the time they developed, so it is not a necessary condition for growth. And yet enormous amounts of money went into a somewhat-odd financial structure because donors were nonetheless convinced, on the basis of very flimsy evidence, that microlending was critical.

By replacing conjecture with evidence, and showing randomized trials can actually be run in many important development settings, the laureates’ reformation of economic development has been unquestionably positive. Or has it? Before returning to the (truly!) positive aspects of Banerjee, Duflo and Kremer’s research program, we must take a short negative turn. Because though Banerjee, Duflo, and Kremer are unquestionably the leaders of the field of development, and the most influential scholars for young economists working in that field, there is much more controversy about RCTs than you might suspect if all you’ve seen are the press accolades of the method. Donors love RCTs, as they help select the right projects. Journalists love RCTs, as they are simple to explain (Wired, in a typical example of this hyperbole: “But in the realm of human behavior, just as in the realm of medicine, there’s no better way to gain insight than to compare the effect of an intervention to the effect of doing nothing at all. That is: You need a randomized controlled trial.”) The “randomista” referees love RCTs – a tribe is a tribe, after all. But RCTs are not necessarily better for those who hope to understand economic development! The critiques are three-fold.

First, that while the method of random trials is great for impact or program evaluation, it is not great for understanding how similar but not exact replications will perform in different settings. That is, random trials have no specific claim to external validity, and indeed are worse than other methods on this count. Second, it is argued that development is much more than program evaluation, and that the reason real countries grow rich has essentially nothing to do with the kinds of policies studied in the papers we discussed above: the “economist as plumber” famously popularized by Duflo, who rigorously diagnoses small problems and proposes solutions, is a fine job for a World Bank staffer, but a crazy use of the intelligence of our otherwise-leading scholars in development. Third, even if we only care about internal validity, and only care about the internal validity of some effect that can in principle be studied experimentally, the optimal experimental design is generally not an RCT.

The external validity problem is often seen as one of scale: well-run partner NGOs are just better at implementing any given policy than, say, a government, so the benefit of scaled-up interventions may be much lower than that identified by an experiment. We call this “piloting bias”, but it isn’t really the core problem. The core problem is that the mapping from one environment or one time to the next depends on many factors, and by definition the experiment cannot replicate those factors. An internally valid labor market experiment in a high-unemployment country tells us little about a low-unemployment country, or a country with different outside options for urban laborers, or a country with an alternative social safety net or cultural traditions about income sharing within families. Worse, the mapping from a partial equilibrium to a general equilibrium world is not at all obvious, and experiments do not inform us about that mapping. Giving cash transfers to some villagers may make them better off, but giving cash transfers to all villagers may cause land prices to rise, or cause more rent extraction by corrupt governments, or cause all sorts of other changes in relative prices.

You can see this issue in the Scientific Summary of this year’s Nobel. Literally, the introductory justification for RCTs is that, “[t]o give just a few examples, theory cannot tell us whether temporarily employing additional contract teachers with a possibility of re-employment is a more cost-effective way to raise the quality of education than reducing class sizes. Neither can it tell us whether microfinance programs effectively boost entrepreneurship among the poor. Nor does it reveal the extent to which subsidized health-care products will raise poor people’s investment in their own health.”

Theory cannot tell us the answers to these questions, but an internally valid randomized control trial can? Surely the wage of the contract teacher vis-a-vis more regular teachers and hence smaller class sizes matters? Surely it matters how well-trained these contract teachers are? Surely it matters what the incentives for investment in human capital by students in the given location are? To put this another way: run literally whatever experiment you want to run on this question in, say, rural Zambia in grade 4 in 2019. Then predict the cost-benefit ratio of having additional contract teachers versus more regular teachers in Bihar in high school in 2039. Who would think there is a link? Actually, let’s be more precise: who would think there is a link between what you learned in Zambia and what will happen in Bihar which is not primarily theoretical? Having done no RCT, I can tell you that if the contract teachers are much cheaper per unit of human capital, we should use more of them. I can tell you that if the students speak two different languages, there is a greater benefit in having a teacher assistant to translate. I can tell you that if the government or other principal has the ability to undo outside incentives with a side contract, and hence is not committed to the mechanism, dynamic mechanisms will not perform as well as you expect. These types of statements are theoretical: good old-fashioned substitution effects due to relative prices, or a priori production function issues, or basic mechanism design.

Things are worse still. It is not simply that an internally valid estimate of a treatment effect often tells us nothing about how that effect generalizes, but that the important questions in development cannot be answered with RCTs. Everyone working in development has heard this critique. But just because a critique is oft-repeated does not mean it is wrong. As Lant Pritchett argues, national development is a social process involving markets, institutions, politics, and organizations. RCTs have focused on, in his reckoning, “topics that account for roughly zero of the observed variation in human development outcomes.” Again, this isn’t to say that RCTs cannot study anything. Improving the function of developing world schools, figuring out why malaria nets are not used, investigating how to reintegrate civil war fighters: these are not minor issues, and it’s good that folks like this year’s Nobelists and their followers provide solid evidence on these topics. The question is one of balance. Are we, as economists are famously wont to do, simply looking for keys underneath the spotlight when we focus our attention on questions which are amenable to a randomized study? Has the focus on internal validity diverted effort from topics that are much more fundamental to the wealth of nations?

But fine. Let us consider that our question of interest can be studied in a randomized fashion. And let us assume that we do not expect piloting bias or other external validity concerns to be first-order. We still have an issue: even on internal validity, randomized controlled trials are not perfect. They are certainly not a “gold standard”, and the econometricians who push back against this framing have good reason to do so. Two primary issues arise. First, to predict what will happen if I impose a policy, I am concerned that what I have learned in the past is biased (e.g., the people observed to use schooling subsidies are more diligent than those who would go to school if we made these subsidies universal). But I am also concerned about statistical inference: with small sample sizes, even an unbiased estimate will not predict very well. I recently talked with an organization doing recruitment that quasi-randomly recruited at a small number of colleges. On average, they attracted a handful of applicants at each college. They stopped recruiting at the colleges with two or fewer applicants after the first year. But of course random variation means the difference between two and four applicants is basically nil.
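To see how little information those counts carry, here is a quick back-of-the-envelope simulation; the applicant rate is invented purely for illustration.

```python
# If every college generated applicants at the same average rate (here, an
# arbitrary 3 per year), year-to-year noise alone would make "two or fewer"
# applicants extremely common -- cutting those colleges mostly cuts on noise.
import numpy as np

rng = np.random.default_rng(0)
draws = rng.poisson(lam=3.0, size=100_000)   # one year of applicants at many identical colleges
print(f"P(2 or fewer applicants) = {np.mean(draws <= 2):.2f}")   # roughly 0.42
print(f"P(4 or more applicants)  = {np.mean(draws >= 4):.2f}")   # roughly 0.35
```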

In this vein, randomized trials tend to have very small sample sizes compared to observational studies. When this is combined with high “leverage” of outlier observations when multiple treatment arms are evaluated, particularly for heterogeneous effects, randomized trials often predict poorly out of sample even when unbiased (see Alwyn Young in the QJE on this point). Observational studies allow larger sample sizes, and hence often predict better even when they are biased. The theoretical assumptions of a structural model permit parameters to be estimated even more tightly, as we use a priori theory to effectively restrict the nature of economic effects.
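A toy calculation makes the bias-variance point concrete; all numbers below are invented, and the size of the “observational bias” is simply assumed rather than derived.

```python
# Mean squared error, not bias alone, governs how well an estimate predicts:
# a small unbiased experiment can easily lose to a large, mildly biased
# observational estimate. Effect sizes, noise, and bias are made up.
import numpy as np

rng = np.random.default_rng(0)
true_effect, sd, reps = 1.0, 5.0, 20_000

rct = rng.normal(true_effect, sd / np.sqrt(200), reps)            # unbiased, n = 200
obs = rng.normal(true_effect + 0.15, sd / np.sqrt(20_000), reps)  # biased, n = 20,000

for name, est in [("RCT ", rct), ("Obs.", obs)]:
    rmse = np.sqrt(np.mean((est - true_effect) ** 2))
    print(f"{name}: bias {est.mean() - true_effect:+.3f}, RMSE {rmse:.3f}")
```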

We have thus far assumed the randomized trial is unbiased, but that is often suspect as well. Even if I randomly assign treatment, I have not necessarily randomly assigned spillovers in a balanced way, nor have I restricted untreated agents from rebalancing their effort or resources. A PhD student of ours on the market this year, Carlos Inoue, examined the effect of random allocation of a new coronary intervention in Brazilian hospitals. Following the arrival of this technology, good doctors moved to hospitals with the “randomized” technology. The estimated effect is therefore nothing like what would have been found had all hospitals adopted the intervention. This issue can be stated simply: randomizing treatment does not in practice hold all relevant covariates constant, and if your response is just “control for the covariates you worry about”, then we are back to the old setting of observational studies where we need a priori arguments about what these covariates are if we are to talk about the effects of a policy.

The irony is that Banerjee, Duflo and Kremer are often quite careful in how they motivate their work with traditional microeconomic theory. They rarely make grandiose claims of external validity when nothing of the sort can be shown by their experiment. Kremer is an ace theorist in his own right, Banerjee often relies on complex decision and game theory particularly in his early work, and no one can read the care with which Duflo handles issues of theory and external validity and think she is merely punting. Most of the complaints about their “randomista” followers do not fully apply to the work of the laureates themselves.

And none of the critiques above should be taken to mean that experiments cannot be incredibly useful to development. Indeed, the proof of the pudding is in the tasting: some of the small-scale interventions by Banerjee, Duflo, and Kremer have been successfully scaled up! To analogize to a firm, consider a plant manager interested in improving productivity. She could read books on operations research and try to implement ideas, but it surely is also useful to play around with experiments within her plant. Perhaps she will learn that it’s not incentives but rather lack of information that is the biggest reason workers are, say, applying car door hinges incorrectly. She may then redo training, and find fewer errors in cars produced at the plant over the next year. This evidence – not only the treatment effect, but also the rationale – can then be brought to other plants at the same company. All totally reasonable. Indeed, would we not find it insane for a manager to impose a huge change to incentives or training without first trying things out and making minor changes on the margin? And of course the same goes, or should go, when the World Bank or DFID or USAID spend tons of money trying to solve some development issue.

On that point, what would even a skeptic agree a development experiment can do? First, it is generally better than other methods at identifying internally valid treatment effects, though still subject to the caveats above.

Second, it can fine-tune interventions along margins where theory gives little guidance. For instance, do people not take AIDS drugs because they don’t believe they work, because they don’t have the money, or because they want to continue having sex and no one will sleep with them if they are seen picking up antiretrovirals? My colleague Laura Derksen suspected that people are often unaware that antiretrovirals prevent transmission, hence in locations with high rates of HIV, it may be safer to sleep with someone taking antiretrovirals than with the population at large. She shows that an informational intervention telling villagers about this property of antiretrovirals meaningfully increases takeup of medication. We learn from her study that it may be important in the case of AIDS prevention to correct this particular set of beliefs. Theory, of course, tells us little about how widespread these incorrect beliefs are, hence about the magnitude of this informational shift on drug takeup.

Third, experiments allow us to study policies that no one has yet implemented. Ignoring the problem of statistical identification in observational studies, there may be many policies we wish to implement which are wholly different in kind from those seen in the past. The negative income tax experiments of the 1970s are a classic example. Experiments give researchers more control. This additional control is of course balanced against the fact that we should expect the most meaningful interventions to have already occurred, and we may have to perform experiments at relatively low scale due to cost. We should not be too small-minded here. There are now experimental development papers on topics once thought to be outside the bounds of experiment. I’ve previously discussed on this site Kevin Donovan’s work randomizing the placement of roads and bridges connecting remote villages to urban centers. What could be “less amenable” to randomization than the literal construction of a road and bridge network?

So where do we stand? It is unquestionable that a lot of development work in practice was based on the flimsiest of evidence. It is unquestionable that the armies Banerjee, Duflo, and Kremer have sent into the world via J-PAL and similar institutions have brought much more rigor to program evaluation. Some of these interventions are now literally improving the lives of millions of people with clear, well-identified, nonobvious policy. That is an incredible achievement! And there is something likeable about the desire of the ivory tower to get into the weeds of day-to-day policy. Michael Kremer on this point: “The modern movement for RCTs in development economics…is about innovation, as well as evaluation. It’s a dynamic process of learning about a context through painstaking on-the-ground work, trying out different approaches, collecting good data with good causal identification, finding out that results do not fit pre-conceived theoretical ideas, working on a better theoretical understanding that fits the facts on the ground, and developing new ideas and approaches based on theory and then testing the new approaches.” No objection here.

That said, we cannot ignore that there are serious people who seriously object to the J-PAL style of development. Deaton, who won the Nobel Prize only four years ago, writes the following, in line with our discussion above: “Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”… [T]he analysis of projects needs to be refocused towards the investigation of potentially generalizable mechanisms that explain why and in what contexts projects can be expected to work.” Lant Pritchett argues that despite success in persuading donors and policymakers, the evidence that RCTs lead to better policies at the governmental level, and hence better outcomes for people, is far from established. The barrier to the adoption of better policy is bad incentives, not a lack of knowledge about how given policies will perform. I think these critiques are quite valid, and the randomization movement in development often badly overstates what it has learned, and what it could in principle learn. But let’s give the last word to Chris Blattman on the skeptic’s case for randomized trials in development: “if a little populist evangelism will get more evidence-based thinking in the world, and tip us marginally further from Great Leaps Forward, I have one thing to say: Hallelujah.” Indeed. No one, randomista or not, longs to go back to the days of unjustified advice on development, particularly “Great Leap Forward”-type programs without any real theoretical or empirical backing!

A few remaining bagatelles:

1) It is surprising how early this award was given. Though incredibly influential, the earliest published papers by any of the laureates mentioned in the Nobel scientific summary are from 2003 and 2004 (Miguel-Kremer on deworming, Duflo-Saez on retirement plans, Chattopadhyay and Duflo on female policymakers in India, Banerjee and Duflo on health in Rajasthan). This seems shockingly recent for a Nobel – I wonder if there are any other Nobel winners in economics who won entirely for work published so close to the prize announcement.

2) In my field, innovation, Kremer is most famous for his paper on patent buyouts (we discussed that paper on this site way back in 2010). How do we both incentivize new drug production and get these drugs sold at marginal cost once invented? We think the drugmakers have better knowledge about how to produce and test a new drug than some bureaucrat, so we can’t finance drugs directly. If we give a patent, then high-value drugs return more to the inventor, but at the cost of massive deadweight loss. What we want to do is offer inventors some large fraction of the social return to their invention ex post, in exchange for making production perfectly competitive. Kremer proposes patent buyout auctions: the auction reveals the patent’s private market value, the government then buys the patent at a multiple of that price and places the drug in the public domain, and a small fraction of patents are instead sold to the winning bidder so that bids remain honest. The multiple allows the government to account for consumer surplus and deadweight loss as well. There are many practical issues, but I have always found this an elegant, information-based attempt to solve the problem of innovation production, and it has been quite influential on those grounds.
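As a rough illustration of why a multiple is needed at all, here is a textbook linear-demand calculation with my own stylized numbers, not Kremer’s calibration: the monopoly profit that an auction would reveal understates the surplus created once the drug is priced at marginal cost.

```python
# Linear demand p = a - b*q with constant marginal cost c. The auction reveals
# monopoly profit (the private value); the buyout multiple should reflect the
# larger total surplus available once the drug is in the public domain.
def surplus(a=10.0, b=1.0, c=2.0):
    q_m = (a - c) / (2 * b)          # monopoly quantity
    p_m = a - b * q_m
    profit = (p_m - c) * q_m         # private value a patent auction would reveal
    cs_m = 0.5 * b * q_m ** 2        # consumer surplus under monopoly pricing
    q_c = (a - c) / b                # quantity at marginal-cost (public domain) pricing
    total_c = 0.5 * (a - c) * q_c    # total surplus in the public domain
    return profit, profit + cs_m, total_c

profit, total_monopoly, total_public = surplus()
print(f"monopoly profit (auction-revealed private value): {profit:.0f}")
print(f"total surplus under the patent:                   {total_monopoly:.0f}")
print(f"total surplus in the public domain:               {total_public:.0f}")
print(f"implied buyout markup over private value:         {total_public / profit:.1f}x")
```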

3) Somewhat ironically, Kremer also has a great 1990s growth paper with RCT-skeptics Pritchett, Easterly and Summers. The point is simple: growth rates by country vacillate wildly from decade to decade. Knowing the 2000s, you likely would not have predicted countries like Ethiopia and Myanmar as growth miracles of the 2010s. Yet things like education, political systems, and so on are quite constant within-country across any two-decade period. This necessarily means that shocks of some sort, whether from international demand, the political system, nonlinear cumulative effects, and so on, must be first-order for growth. A great, straightforward argument, well-explained.

4) There is some irony that two of Duflo’s most famous papers are not experiments at all. Her most cited paper by far is a piece of econometric theory on standard errors in difference-in-difference models, written with Marianne Bertrand. Her next most cited paper is a lovely study of the quasi-random school expansion policy in Indonesia, used to estimate the return on school construction and on education more generally. Nary a randomized experiment in sight in either paper.

5) I could go on all day about Michael Kremer’s 1990s essays. In addition to Patent Buyouts, two more of them appear on my class syllabi. The O-Ring theory is an elegant model of complementary inputs and labor market sorting, where slightly better “secretaries” earn much higher wages. The “One Million B.C.” paper notes that growth must have been low for most of human history, and that it was limited because low population density restricted the spread of nonrivalrous ideas. It is the classic Malthus plus endogenous growth paper, and always a hit among students.
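A stripped-down version of the O-ring production function shows why the sorting and wage results are so stark; the task count and quality levels below are arbitrary, and capital and firm size are ignored.

```python
# In the O-ring model, output is proportional to the product of the quality of
# every task. With 10 tasks, a roughly 9% gap in worker quality becomes a
# roughly 60% gap in output, which is why high-quality workers end up matched
# together and paid far more.
def oring_output(q, n_tasks=10, scale=1.0):
    """Output when all n_tasks are performed at quality q (probability of no error)."""
    return scale * q ** n_tasks

for q in (0.99, 0.95, 0.90):
    print(f"worker quality {q:.2f}: relative output {oring_output(q):.3f}")
# 0.99 -> 0.904, 0.95 -> 0.599, 0.90 -> 0.349
```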

6) Ok, one more for Kremer, since “Elephants” is my favorite title in economics. Theoretically, expected future scarcity increases prices today. When people think elephants will go extinct, the price of ivory therefore rises, making extinction more likely as poaching incentives go up. What to do? Hold a government stockpile of ivory and commit to selling it if the stock of living elephants falls below a certain point. Elegant. And I can’t help but think: how would one study this particular general equilibrium effect experimentally? I both believe the result and suspect that randomized trials are not a good way to understand it!


“Eliminating Uncertainty in Market Access: The Impact of New Bridges in Rural Nicaragua,” W. Brooks & K. Donovan (2018)

It’s NBER Summer Institute season, when every bar and restaurant in East Cambridge, from Helmand to Lord Hobo, is filled with our tribe. The air hums with discussions of Lagrangians and HANKs and robust estimators. And the number of great papers presented, discussed, or otherwise floating around is inspiring.

The paper we’re discussing today, by Wyatt Brooks at Notre Dame and Kevin Donovan at Yale SOM, uses a great combination of dynamic general equilibrium theory and a totally insane quasi-randomized experiment to help answer an old question: how beneficial is it for villages to be connected to the broader economy? The fundamental insight requires two ideas that are second nature for economists, but are incredibly controversial outside our profession.

First, going back to Nobel winner Arthur Lewis if not much earlier, economists have argued that “structural transformation”, the shift out of low-productivity agriculture to urban areas and non-ag sectors, is fundamental to economic growth. Recent work by Hicks et al is a bit more measured – the individuals who benefit from leaving agriculture generally already have, so Lenin-type forced industrialization is a bad idea! – but nonetheless barriers to that movement are still harmful to growth, even when those barriers are largely cultural, as in the forthcoming JPE by Melanie Morten and the well-named Gharad Bryan. What’s so bad about the ag sector? In the developing world, it tends to be small-plot, quite-inefficient, staple-crop production, unlike the growth-generating, positive-externality-filled, increasing-returns-type sectors (on this point, Romer 1990). There are zero examples of countries becoming rich without their labor force shifting dramatically out of agriculture. The intuition of many in the public, that Gandhi was right about the village economy and that structural transformation just means dreadful slums, is the intuition of people who lack respect for individual agency. The slums may be bad, but look how they fill up everywhere they exist! Ergo, how bad must the alternative be?

The second related misunderstanding of the public is that credit is unimportant. For folks near subsistence, the danger that economic shocks push them below that dangerous cutpoint is so fundamental that it leads to all sorts of otherwise odd behavior. Consider the response of my ancestors (and presumably the ancestors of today’s author, given that he is a Prof. Donovan) when potato blight hit. Potatoes are an input to growing more potatoes tomorrow, but near subsistence, you have no choice but to eat your “savings” away after bad shocks. This obviously causes problems in the future, prolonging the famine. But even worse, to avoid getting in a situation where you eat all your savings, you save more and invest less than you otherwise would. Empirically, Karlan et al QJE 2014 show large demand for savings instruments in Ghana, and Cynthia Kinnan shows why insurance markets in the developing world are incomplete despite large welfare gains. Indeed, many countries, including India, make it illegal to insure oneself against certain types of negative shocks, as Mobarak and Rosenzweig show. The need to save for low-probability, highly negative shocks may even lead people to invest in assets with highly negative annual returns; on this, see the wonderfully-titled Continued Existence of Cows Disproves Central Tenets of Capitalism? This is all to say: the rise of credit and insurance markets unlocks much more productive activity, especially in the developing world, and credit is not merely the den of exploitative lenders.

Ok, so insurance against bad shocks matters, and getting out of low-productivity agriculture may matter as well. Let’s imagine you live in a tiny village which is often separated from bigger towns, geographically. What would happen if you somehow lowered the cost of reaching those towns? Well, we’d expect goods trade to change radically – see the earlier post on Dave Donaldson’s work, or the nice paper on Brazilian roads by Morten and Oliveira. But the benefits of reducing isolation go well beyond just getting better prices for goods.

Why? In the developing world, most people have multiple jobs. They farm during the season, work in the market on occasion, do construction, work as migrants, and so on. Imagine that in the village, most jobs are just farmwork, and outside, there is always the chance of day work at a fixed wage. In autarky, I just work on the farm, perhaps my own. I need to keep a buffer of savings because farms sometimes get hit with bad shocks: a fire burns my crops, or an elephant stomps on them. Running out of savings risks death, and there is no crop insurance, so I save precautionarily. Saving means I don’t have as much to spend on fertilizer or pesticide, so my yields are lower.

If I can access the outside world, then when my farm gets a bad shock and my savings run low, I leave the village and take day work to build them back up. Since I know I will have that option, I don’t need to save as much, and hence I can buy more fertilizer. Now, the wage for farmers in the village (including the implicit wage that would keep me on my own farm) needs to be higher since some of these ex-farmers will go work in town, shifting village labor supply left. This higher wage pushes down the amount of fertilizer I will buy, since high wages reduce the marginal productivity of farm improvements. Whether fertilizer use goes up or down is therefore an empirical question, but at least we can say that those who use more fertilizer, those who react more to bad shocks by working outside the village, and those whose savings drop the most should be the same farmers. Either way, the village winds up richer both for the direct reason of having an outside option, and for the indirect reason of being able to reduce precautionary savings. That is, the harm comes both from the first moment, the average shock to agricultural productivity, and from the second moment, its variance.
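A back-of-the-envelope sketch of the precautionary channel, using my own invented numbers and a crude rule of thumb for the buffer rather than the optimizing model in the paper:

```python
# A farmer with fixed cash on hand splits it between subsistence, a precautionary
# buffer, and fertilizer. If day work outside the village is available after a bad
# harvest, part of the buffer is no longer needed and goes into the productive input.
import numpy as np

cash_on_hand = 3.0
subsistence = 1.0
day_wage = {"no bridge": 0.0, "bridge": 0.8}

for case, wage in day_wage.items():
    # rule of thumb: hold enough to cover next year's subsistence after a failed
    # harvest, net of whatever day work outside the village could bring in
    buffer = max(subsistence - wage, 0.0)
    fertilizer = cash_on_hand - subsistence - buffer
    expected_yield = 1.0 + 1.5 * np.sqrt(fertilizer)   # concave return to the input
    print(f"{case:9s}: buffer {buffer:.1f}, fertilizer {fertilizer:.1f}, "
          f"expected yield {expected_yield:.2f}")
```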

How much does this matter in practice? Brooks and Donovan worked with an NGO that physically builds bridges in remote areas. In Nicaragua, floods during the harvest season are common, isolating villages for days at a time when the riverbed along the path to market turns into a raging torrent. In this area, bridges are unnecessary when the riverbed is dry: the land is fairly flat, and the bridge barely reduces travel time when the riverbed isn’t flooded. These floods generally occur exactly during the growing season, after fertilizer is bought, but before crops are harvested, so the goods market in both inputs and outputs is essentially unaffected. And there is nice quasirandom variation: of 15 villages which the NGO selected as needing a bridge, 9 were ruled out after a visit by a technical advisor found the soil and topography unsuitable for the NGO’s relatively inexpensive bridge.

The authors survey villages the year before and the two years after the bridges are built, as well as surveying a subset of villagers with cell phones every two weeks in a particular year. Although N=15 seems worrying for power, the within-village differences in labor market behavior are large enough that properly bootstrapped estimates can still detect interesting effects. And what do you find? In villages with bridges, many men shift from working in the village to working outside it in a given week, the percentage of women working outside nearly doubles (with most of these women newly entering the labor force), wages inside the village rise while wages outside the village do not, the use of fertilizer rises, village farm profits rise 76%, and all of these effects are most pronounced for poorer households physically close to the bridge.

All this is exactly in line with the dynamic general equilibrium model sketched out above. If you assumed that bridges were just about market access for goods, you would have missed all of this. If you assumed the only benefit was additional wages outside the village, you would miss a full 1/3 of the benefit: the general equilibrium effect of shifting out workers who are particularly capable of working outside the village causes wages to rise for the farm workers who remain at home. These particular bridges show an internal rate of return of nearly 20% even though they do nothing to improve market access for either inputs or outputs! And there are, of course, further utility benefits from reducing risk, even when that risk reduction does not show up in income through the channel of increased investment.

November 2017 working paper, currently R&R at Econometrica (RePEc IDEAS version). Both authors have a number of other really interesting drafts, of which I’ll mention two. Brooks, in a working paper with Joseph Kaboski and Yao Li, identifies a really interesting harm of industrial clusters, but one that Adam Smith would have surely identified: they make collusion easier. Put all the firms in an industry in the same place, and establish regular opportunities for their managers to meet, and you wind up with much less variance in markups among the firms which are induced to locate in these clusters! Donovan, in a recent RED with my friend Chris Herrington, calibrates a model to explain why both college attendance and the relative cognitive ability of college grads rose during the 20th century. It’s not as simple as you might think: a decrease in costs, through student loans or otherwise, only affects marginal students, who are cognitively weaker than the average existing college student. It turns out you also need a rising college premium and more precise signals of high schoolers’ academic abilities to get both patterns. Models doing work to extract insight from data – as always, this is the fundamental reason why economics is the queen of the social sciences.

“The Development Effects of the Extractive Colonial Economy,” M. Dell & B. Olken (2017)

A good rule of thumb is that you will want to read any working paper Melissa Dell puts out. Her main interest is the long-run, path-dependent effect of historical institutions, with rigorous quantitative investigation of the subtle conditionality of the past. For instance, in her earlier work on Peru (Econometrica, 2010), mine slavery in the colonial era led to fewer hacienda-style plantations at the end of the era, which led to less political power without those large landholders in the early democratic era, which led to fewer public goods throughout the 20th century, which led to less education and income today in areas that used to have mine slavery. One way to read this is that local inequality in the past may, through political institutions, be a good thing today! History is not as simple as “inequality in the past causes bad outcomes today” or “extractive institutions in the past cause bad outcomes today” or “colonial economic distortions cause bad outcomes today”. But, contra the branch of historians who don’t like to assign causality to any single factor in any given situation, we don’t need to entirely punt on the effects of specific policies in specific places if we apply careful statistical and theoretical analysis.

Dell and Olken’s new paper looks at the cultuurstelsel, a policy the Dutch imposed on Java in the mid-19th century. Essentially, the Netherlands was broke and Java was suitable for sugar, so the Dutch required villages in certain regions to devote huge portions of their arable land, and labor effort, to producing sugar for export. They built roads and some rail, as well as sugar factories (now generally long gone), as part of this effort, and the land used for sugar production generally became public village land controlled at the behest of local leaders. This was back in the mid-1800s, so surely it shouldn’t affect anything of substance today?

But it did! Take a look at villages near the old sugar plantations, or that were forced to plant sugar, and you’ll find higher incomes, higher education levels, higher school attendance rates even back in the late colonial era, higher population densities, and more workers today in retail and manufacturing. Dell and Olken did some wild data matching using a great database of geographic names collected by the US government to match the historic villages where these sugar plants, and these labor requirements, were located with modern village and town locations. They then constructed “placebo” factories – locations along coastal rivers in sugar-growing regions with appropriate topography where a plant could have been located but wasn’t. In particular, as in the famous Salop circle, you won’t locate a factory too close to an existing one, but there are many counterfactual equilibria where we just shift all the factories one way or the other. By comparing the predicted effect of distance from the real factory on outcomes today with the predicted effect of distance from the huge number of hypothetical factories, you can isolate the historic local influence of the real factory from other local features which can’t be controlled for.
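Here is a stylized sketch of that placebo logic on simulated data; the geography, the outcome process, and the set of “feasible sites” below are all invented, and the actual paper restricts counterfactual sites to river locations with suitable topography.

```python
# Compare the estimated gradient of today's outcomes in distance to the real
# factory site against the distribution of gradients in distance to many
# counterfactual (placebo) sites. A real-site gradient far in the tail of the
# placebo distribution suggests a genuine local legacy of the factory.
import numpy as np

rng = np.random.default_rng(0)
n_villages = 2_000
xy = rng.uniform(0, 100, (n_villages, 2))          # village coordinates (invented)
real_site = np.array([50.0, 50.0])

# simulated outcome: a true benefit decaying with distance to the real site, plus noise
dist_real = np.linalg.norm(xy - real_site, axis=1)
outcome = 1.0 - 0.01 * dist_real + rng.normal(0, 0.3, n_villages)

def distance_gradient(site):
    d = np.linalg.norm(xy - site, axis=1)
    X = np.column_stack([np.ones(n_villages), d])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1]   # slope on distance

placebo_sites = rng.uniform(20, 80, (200, 2))       # counterfactual feasible locations
placebo_slopes = np.array([distance_gradient(s) for s in placebo_sites])

print(f"gradient at the real site: {distance_gradient(real_site):+.4f}")
print(f"placebo gradients: mean {placebo_slopes.mean():+.4f}, sd {placebo_slopes.std():.4f}")
```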

Consumption right next to old, long-destroyed factories is 14% higher than even five kilometers away, education is 1.25 years longer on average, electrification, road, and rail density are all substantially higher, and industrial production upstream and downstream from sugar (e.g., farm machinery upstream, and processed foods downstream) are also much more likely to be located in villages with historic factories even if there is no sugar production anymore in that region!

It’s not just the factory and Dutch investments that matter, however. Consider the villages, up to 10 kilometers away, which were forced to grow the raw cane. Their elites took private land for this purpose, and land inequality remains higher in villages that were forced to grow cane compared to villages right next door that were outside the Dutch-imposed boundary. But this public land permitted surplus extraction in an agricultural society, surplus which could be used for public goods, like schooling, which would later become important! These villages were much more likely to have schools, especially before the 1970s when public schooling in Indonesia was limited, and today are higher-density, richer, more educated, and less agricultural than villages nearby which weren’t forced to grow cane. This all has shades of the long debate on “forward linkages” in agricultural societies, where it is hypothesized that agricultural surplus aids industrialization by providing the resources needed to fund education and purchase capital; see this nice paper by Sam Marden showing linkages of this sort in post-Mao China.

Are you surprised by these results? They fascinate me, honestly. Think through the logic: forced labor (in the surrounding villages) and extractive capital (rail and factories built solely to export a crop in little use domestically) both have positive long-run local effects! They do so by affecting institutions – whether villages have the ability to produce public goods like education – and by affecting incentives – the production of capital used up- and downstream. One can easily imagine cases where forced labor and extractive capital have negative long-run effects, and we have great papers by Daron Acemoglu, Nathan Nunn, Sara Lowes and others on precisely this point. But it is also very easy for societies to get trapped in bad path-dependent equilibria, for which outside intervention, even an ethically shameful one, can (perhaps inadvertently) cause useful shifts in incentives and institutions! I recall a visit to Babeldaob, the main island in Palau. During the Japanese colonial period, the island was heavily industrialized as part of Japan’s war machine. These factories were destroyed by the Allies in World War 2. Yet a local told me that, despite this extractive history, many on the island believe the industrial development of the region was permanently harmed when those factories were damaged. It seems a bit crazy to mourn the loss of polluting, extractive plants whose whole purpose was to serve a colonial master, but the Palauan may have had some wisdom after all!

2017 Working Paper is here (no RePEc IDEAS version). For more on sugar and institutions, I highly recommend Christian Dippel, Avner Greif and Dan Trefler’s recent paper on Caribbean sugar. The price of sugar fell enormously in the late 19th century, yet wages on islands which lost the ability to productively export sugar rose. Why? Planters in places like Barbados had so much money from their sugar exports that they could manipulate local governance and the police, while planters in places like the Virgin Islands became too poor to do the same. This decreased labor coercion, permitting workers on sugar plantations to work small plots or move to other industries, raising wages in the end. I continue to await Suresh Naidu’s book on labor coercion – it is astounding the extent to which labor markets were distorted historically (see, e.g., Eric Foner on Reconstruction), and in some cases still today, by legal and extralegal restrictions on how workers could move on up.

“Does Regression Produce Representative Estimates of Causal Effects?,” P. Aronow & C. Samii (2016)

A “causal empiricist” turn has swept through economics over the past couple decades. As a result, many economists are primarily interested in internally valid treatment effects according to the causal models of Rubin, meaning they are interested in credible statements of how some outcome Y is affected if you manipulate some treatment T given some covariates X. That is, to the extent that the full functional form Y=f(X,T) is impossible to estimate because of unobserved confounding variables or similar, it turns out to still be possible to estimate some feature of that functional form, such as the average treatment effect E(f(X,1))-E(f(X,0)). At some point, people like Angrist and Imbens will win a Nobel prize not only for their applied work, but also for clarifying precisely what various techniques are estimating in a causal sense. For instance, an instrumental variable regression under a certain exclusion restriction (let’s call this an “auxiliary assumption”) estimates the average treatment effect along the local margin of people induced into treatment. If you try to estimate the same empirical feature using a different IV, and get a different treatment effect, we all know now that there wasn’t a “mistake” in either paper, but rather that the margins upon which the two different IVs operate may not be identical. Great stuff.
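A small simulation, with entirely invented numbers, makes the point about margins concrete: two perfectly valid instruments (here, a small and a large randomized subsidy for takeup) each recover a correct local average treatment effect, and the answers differ because they move different people into treatment.

```python
# Heterogeneous gains plus selection-on-gains: the Wald/IV estimate for each
# encouragement is the average gain among its own compliers, so different
# (valid) instruments yield different LATEs without any "mistake".
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
gain = rng.uniform(0, 2, n)        # individual benefit from treatment
cost = rng.uniform(0, 2, n)        # individual cost of taking it up
z = rng.integers(0, 2, n)          # randomized encouragement

def wald_estimate(subsidy):
    t = (gain + subsidy * z > cost).astype(float)   # take up if subsidized benefit exceeds cost
    y = gain * t + rng.normal(0, 1, n)
    return (y[z == 1].mean() - y[z == 0].mean()) / (t[z == 1].mean() - t[z == 0].mean())

print(f"LATE with a small subsidy: {wald_estimate(0.2):.2f}")   # compliers near indifference
print(f"LATE with a large subsidy: {wald_estimate(1.0):.2f}")   # a broader complier group
```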

This causal model emphasis has been controversial, however. Social scientists have quibbled because causal estimates generally require the use of small, not-necessarily-general samples, such as those from a particular subset of the population or a particular set of countries, rather than national data or the universe of countries. Many statisticians have gone even further, suggesting that multiple regression with its linear parametric form does not take advantage of enough data in the joint distribution of (Y,X), and hence better predictions can be made with so-called machine learning algorithms. And the structural economists argue that the parameters we actually care about are much broader than regression coefficients or average treatment effects, and hence a full structural model of the data generating process is necessary. We have, then, four different techniques to analyze a dataset: multiple regression with control variables, causal empiricist methods like IV and regression discontinuity, machine learning, and structural models. What exactly does each of these estimate, and how do they relate?

Peter Aronow and Cyrus Samii, two hotshot young political economists, take a look at old-fashioned multiple regression. Imagine you want to estimate y=a+bX+cT, where T is a possibly-binary treatment variable. Assume away any omitted variable bias, and more generally assume that all of the assumptions of the OLS model (linearity in covariates, etc.) hold. What does that coefficient c on the treatment indicator represent? This coefficient is a weighted combination of the individual estimated treatment effects, where more weight is given to units whose treatment status is not well explained by covariates. Intuitively, if you are regressing, say, the probability of civil war on participation in international institutions, then if a bunch of countries with very similar covariates all participate, the “treatment” of participation will be swept up by the covariates, whereas if a second group of countries with similar covariates differ in their participation status, the regression will put a lot of weight on those countries, since differences in their outcomes can be related to participation status.
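A minimal sketch of that weighting logic, on simulated data rather than any of the studies Aronow and Samii reanalyze: by the Frisch-Waugh logic, each observation’s weight in the OLS coefficient on T is proportional to its squared residual from a regression of T on the covariates.

```python
# Units whose treatment status is nearly determined by their covariates carry
# almost no weight; the "effective sample" concentrates where treatment varies.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
region = rng.integers(0, 10, n)                        # a discrete covariate
p_treat = np.where(region < 8, 0.02, 0.5)              # treatment rare in most regions
T = rng.binomial(1, p_treat)
X = np.column_stack([np.ones(n)] + [(region == r).astype(float) for r in range(9)])

beta, *_ = np.linalg.lstsq(X, T, rcond=None)           # residualize T on the covariates
e = T - X @ beta
w = e ** 2 / np.sum(e ** 2)                            # each unit's share of the OLS weight

print(f"share of weight on the 20% of units in high-variation regions: {w[region >= 8].sum():.2f}")
print(f"share of weight on the top 10% of units overall:               {np.sort(w)[-n // 10:].sum():.2f}")
```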

This turns out to be quite consequential: Aronow and Samii look at one paper on FDI and find that even though the paper used a broadly representative sample of countries around the world, about 10% of the countries accounted for more than 50% of the weight in the treatment effect estimate, with very little weight on a number of important regions, including all of the Asian tigers. In essence, the sample was general, but the effective sample, once you account for weighting, was just as limited as the “nonrepresentative samples” people complain about when researchers have to resort to natural or quasinatural experiments! It turns out that similar effective-versus-nominal representativeness results hold even with nonlinear models estimated via maximum likelihood, so this is not a result unique to OLS. Aronow and Samii’s result matters for interpreting bodies of knowledge as well. If you replicate a paper adding in an additional covariate, and get a different treatment effect, it may not reflect omitted variable bias! The difference may simply result from the additional covariate changing the effective weighting on the treatment effect.

So the “externally valid treatment effects” we have been estimating with multiple regression aren’t so representative at all. When, then, is old-fashioned multiple regression controlling for observable covariates a “good” way to learn about the world, compared to other techniques? I’ve tried to think through this in a uniform way; let’s see if it works. First consider machine learning, where we want to estimate y=f(X,T). Assume that there are no unobservables relevant to the estimation. The goal is to estimate the functional form f nonparametrically but to avoid overfitting, and statisticians have devised a number of very clever ways to do this. The proof that they work is in the pudding: cars drive themselves now. It is hard to see any reason why, if there are no unobservables, we wouldn’t want to use these machine learning/nonparametric techniques. However, at present the machine learning algorithms people use literally depend only on data in the joint distribution (X,Y), and not on any auxiliary assumptions. To interpret the marginal effect of a change in T as some sort of “treatment effect” that can be manipulated with policy, if estimated without auxiliary assumptions, requires some pretty heroic assumptions about the lack of omitted variable bias which essentially will never hold in most of the economic contexts we care about.

Now consider the causal model, where y=f(X,U,T) and you are interested in what would happen with covariates X and unobservables U if treatment T were changed to a counterfactual. All of these techniques require a particular set of auxiliary assumptions: randomization requires the SUTVA assumption that treatment of one unit does not affect the outcome of another unit, IV requires the exclusion restriction, diff-in-diff requires the parallel trends assumption, and so on. In general, auxiliary assumptions will only hold in certain specific contexts, and hence by construction the result will not be representative. Further, these assumptions are very limited in that they can’t recover every conditional aspect of y, but rather recover only summary statistics like the average treatment effect. Techniques like multiple regression with covariate controls, or machine learning nonparametric estimates, can draw on a more general dataset, but as Aronow and Samii pointed out, the marginal effect on treatment status they identify is not necessarily effectively drawing on a more general sample.

Structural folks are interested in estimating y=f(X,U,V(t),T), where U and V are unobserved, and the nature of the unobserved variables V is affected by t. For example, V may be inflation expectations, T may be the interest rate, y may be inflation today, and X and U are observable and unobservable country characteristics. Put another way, the functional form of f may depend on how exactly T is modified, through V(t). This Lucas Critique problem is assumed away by the auxiliary assumptions in causal models. In order to identify a treatment effect, then, additional auxiliary assumptions generally derived from economic theory are needed in order to understand how V will change in response to a particular treatment type. Even more common is to use a set of auxiliary assumptions to find a sufficient statistic for the particular parameter desired, which may not even be a treatment effect. In this sense, structural estimation is similar to causal models in one way and different in two. It is similar in that it relies on auxiliary assumptions to help extract particular parameters of interest when there are unobservables that matter. It is different in that it permits unobservables to be functions of policy, and that it uses auxiliary assumptions whose credibility leans more heavily on non-obvious economic theory. In practice, structural models often also require auxiliary assumptions which do not come directly from economic theory, such as assumptions about the distribution of error terms which are motivated on the basis of statistical arguments, but in principle this distinction is not a first-order difference.

We then have a nice typology. Even if you have a completely universal and representative dataset, multiple regression controlling for covariates does not generally give you a “generalizable” treatment effect. Machine learning can try to extract treatment effects when the data generating process is wildly nonlinear, but has the same nonrepresentativeness problem and the same “what about omitted variables” problem. Causal models can extract some parameters of interest from nonrepresentative datasets where it is reasonable to assume certain auxiliary assumptions hold. Structural models can extract more parameters of interest, sometimes from more broadly representative datasets, and even when there are unobservables that depend on the nature of the policy, but these models require auxiliary assumptions that can be harder to defend. The so-called sufficient statistics approach tries to retain the former advantages of structural models while reducing the heroics that auxiliary assumptions need to perform.

Aronow and Samii is forthcoming in the American Journal of Political Science; the final working paper is at the link. Related to this discussion, Ricardo Hausmann caused a bit of a stir online this week with his “constant adaptation rather than RCT” article. His essential idea was that, unlike with a new medical drug, social science interventions vary drastically depending on the exact place or context; that is, external validity matters so severely that slowly moving through “RCT: Try idea 1”, then “RCT: Try idea 2”, is less successful than smaller, less precise explorations of the “idea space”. He received a lot of pushback from the RCT crowd, but I think for the wrong reason: the constant iteration is less likely to discover underlying mechanisms than even an RCT, as it is still far too atheoretical. The link Hausmann makes to “lean manufacturing” is telling: GM famously (Henderson and Helper 2014) took photos of every square inch of their joint venture plant with NUMMI, and tried to replicate this plant in their other plants. But the underlying reason NUMMI and Toyota worked has to do with the credibility of various relational contracts, rather than the (constantly iterated) features of the shop floor. Iterating without attempting to glean the underlying mechanisms at play is not a rapid route to good policy.

Edit: A handful of embarrassing typos corrected, 2/26/2016

Angus Deaton, 2015 Nobel Winner: A Prize for Structural Analysis?

Angus Deaton, the Scottish-born, Cambridge-trained Princeton economist, best known for his careful work on measuring changes in the wellbeing of the world’s poor, has won the 2015 Nobel Prize in economics. His data collection work is fairly easy to understand, so I will leave the larger discussion of exactly what he has found to the general news media; Deaton’s book “The Great Escape” provides a very nice summary of what he has found as well, and I think a fair reading of his development preferences is that he much prefers the currently en vogue idea of just giving cash to the poor and letting them spend it as they wish.

Essentially, when one carefully measures consumption, health, or generic characteristics of wellbeing, there has been tremendous improvement indeed in the state of the world’s poor. National statistics do not measure these ideas well, because developing countries do not tend to track data at the level of the individual. Indeed, even in the United States, we have only recently begun work on localized measures of the price level and hence the poverty rate. Deaton claims, as in his 2010 AEA Presidential Address (previously discussed briefly on two occasions on AFT), that many of the measures of global inequality and poverty used by the press are fundamentally flawed, largely because of the weak theoretical justification for how they link prices across regions and countries. Careful non-aggregate measures of consumption, health, and wellbeing, like those generated by Deaton, Tony Atkinson, Alwyn Young, Thomas Piketty and Emmanuel Saez, are essential for understanding how human welfare has changed over time and space, and producing them is a deserving rationale for a Nobel.

The surprising thing about Deaton, however, is that despite his great data-collection work and his interest in development, he is famously hostile to the “randomista” trend which proposes that randomized control trials (RCT) or other suitable tools for internally valid causal inference are the best way of learning how to improve the lives of the world’s poor. This mode is most closely associated with the enormously influential J-PAL lab at MIT, and there is no field in economics where you are less likely to see traditional price theoretic ideas than modern studies of development. Deaton is very clear on his opinion: “Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”… [T]he analysis of projects needs to be refocused towards the investigation of potentially generalizable mechanisms that explain why and in what contexts projects can be expected to work.” I would argue that Deaton’s work is much closer to more traditional economic studies of development than to RCTs.

To understand this point of view, we need to go back to Deaton’s earliest work. Among Deaton’s most famous early papers was his development of the Almost Ideal Demand System (AIDS) in 1980 with Muellbauer, a paper chosen as one of the 20 best published in the first 100 years of the AER. It has long been known that individual demand equations which come from utility maximization must satisfy certain properties. For example, a rational consumer’s demand for food should not depend on whether the consumer’s equivalent real salary is paid in American or Canadian dollars. These restrictions turn out to be useful: if you want to know how demand for various products depends on changes in income, among many other questions, the restrictions of utility theory simplify estimation greatly by reducing the number of free parameters. The problem is in specifying a form for aggregate demand, such as how demand for cars depends on the incomes of all consumers and prices of other goods. It turns out that, in general, aggregate demand generated by utility-maximizing households does not satisfy the same restrictions as individual demand; you can’t simply assume that there is a “representative consumer” with some utility function and demand function equal to each individual agent. What form should we write for aggregate demand, and how congruent is that form with economic theory? Surely an important question if we want to estimate how a shift in taxes on some commodity, or a policy of giving some agricultural input to some farmers, is going to affect demand for output, its price, and hence welfare!

Let q(j)=D(p,c,e) say that the quantity of good j consumed, in aggregate, is a function of the prices of all goods p and total (or average) consumption c, plus perhaps some random error e. This can be tough to estimate: if D(p,c,e)=Ap+e, where demand is just a linear function of relative prices, then we have a k-by-k matrix to estimate, where k is the number of goods. Worse, that demand function imposes an enormous restriction on what individual demand functions, and hence utility functions, look like, in a way that theory does not necessarily support. The AIDS of Deaton and Muellbauer combines two facts: that Taylor expansions approximately linearize nonlinear functions, and that heterogeneous individual demands can be aggregated if the restrictions of Muellbauer’s PIGLOG papers are satisfied. The result is a functional form for aggregate demand D which is consistent with aggregated individual rational behavior and which can sometimes be estimated via OLS. They use British data to argue that aggregate demand violates testable assumptions of the model and hence that factors like credit constraints or price expectations are fundamental in explaining aggregate consumption.
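
For concreteness, the AIDS estimating equations are budget-share equations; in the standard notation (error terms suppressed),

$$
w_i \;=\; \alpha_i + \sum_j \gamma_{ij} \log p_j + \beta_i \log\!\left(\frac{x}{P}\right),
$$

where $w_i$ is the budget share of good i, x is total expenditure, and $\log P$ is a translog price index in the prices. Replacing $\log P$ with the Stone index $\sum_k w_k \log p_k$ is what makes OLS estimation feasible, and utility theory delivers the testable restrictions $\sum_i \alpha_i = 1$, $\sum_i \beta_i = \sum_i \gamma_{ij} = 0$ (adding up), $\sum_j \gamma_{ij} = 0$ (homogeneity) and $\gamma_{ij} = \gamma_{ji}$ (symmetry), which is what the test on British data exploits.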

This exercise brings up a number of first-order questions for a development economist. First, it shows clearly the problem with estimating aggregate demand as a purely linear function of prices and income, as if society were a single consumer. Second, it shows the importance of how we measure the overall price level in figuring out the effects of taxes and other policies. Third, it combines theory and data to convincingly suggest that models which estimate demand solely as a function of current prices and current income are necessarily going to give misleading results, even when demand is allowed to take on very general forms as in the AIDS model. A huge body of research since 1980 has investigated how we can better model demand in order to credibly evaluate demand-affecting policy. All of this is very different from how a certain strand of development economist today might investigate something like a subsidy. Rather than taking observational data, these economists might look for a random or quasirandom experiment where such a subsidy was introduced, and estimate the “effect” of that subsidy directly on some quantity of interest, without concern for how exactly that subsidy generated the effect.

To see the difference between randomization and more structural approaches like AIDS, consider the following example from Deaton. You are asked to evaluate whether China should invest more in building railway stations if it wishes to reduce poverty. Many economists trained in a manner influenced by the randomization movement would say, well, we can’t just regress the existence of a railway on a measure of city-by-city poverty. The existence of a railway station depends on both things we can control for (the population of a given city) and things we can’t control for (subjective belief that a town is “growing” when the railway is plopped there). Let’s find something that is correlated with rail station building but uncorrelated with the unobserved determinants of poverty that also drive station placement: for instance, a city may lie on a geographically-accepted path between two large cities. If certain assumptions hold, it turns out that a two-stage “instrumental variable” approach can use that “quasi-experiment” to generate the LATE, or local average treatment effect. This effect is the average benefit of a railway station on poverty reduction, at the local margin of cities which are just induced by the instrument to build a railway station. Similar techniques, like difference-in-differences and randomized control trials, under slightly different assumptions can generate credible LATEs. In development work today, it is very common to see a paper where large portions are devoted to showing that the assumptions (often untestable) of a given causal inference model are likely to hold in a given setting, then finally claiming that the treatment effect of X on Y is Z. That LATEs can be identified outside of purely randomized contexts is incredibly important and valuable, and the economists and statisticians who did the heavy statistical lifting on this so-called Rubin model will absolutely and justly win an Economics Nobel sometime soon.
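
To fix the mechanics of the two-stage logic in the railway example, here is a minimal simulated sketch. Everything below is hypothetical (invented variable names and numbers, not Deaton’s example or any actual Chinese data); it only shows how instrumenting strips out the bias from the unobserved “growing town” belief.

```python
# Hand-rolled two-stage least squares on simulated data. The true effect of a
# rail station on the poverty measure is -1; naive OLS is contaminated by the
# unobserved growth belief, while 2SLS using the "on a natural path" instrument
# recovers the effect (here a LATE for path-induced towns). All data simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
growth_belief = rng.normal(size=n)                   # unobserved optimism about a town
on_path = rng.integers(0, 2, size=n).astype(float)   # instrument: lies on a natural route

# Station placement depends on the instrument AND on the unobserved belief.
rail = (0.8 * on_path + growth_belief + rng.normal(size=n) > 0).astype(float)
# Poverty falls with rail (true effect -1) and also falls where beliefs are optimistic.
poverty = -1.0 * rail - 1.5 * growth_belief + rng.normal(size=n)

def reg(y, x):
    """OLS of y on a constant and x; returns coefficients and fitted values."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return b, X @ b

b_ols, _ = reg(poverty, rail)
print("naive OLS slope:", round(b_ols[1], 2))   # more negative than -1: overstates the benefit

_, rail_hat = reg(rail, on_path)                # first stage: predict placement from the instrument
b_iv, _ = reg(poverty, rail_hat)                # second stage: regress poverty on predicted placement
print("2SLS slope:", round(b_iv[1], 2))         # close to the true -1
```

The sketch also illustrates the limits discussed in the rest of this post: the number that comes out tells you what a path-induced station did to measured poverty, and nothing about why.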

However, this use of instrumental variables would surely seem strange to the old Cowles Commission folks: Deaton is correct that “econometric analysis has changed its focus over the years, away from the analysis of models derived from theory towards much looser specifications that are statistical representations of program evaluation. With this shift, instrumental variables have moved from being solutions to a well-defined problem of inference to being devices that induce quasi-randomization.” The traditional use of instrumental variables arose because, after writing down a theoretically justified model of behavior or aggregates, certain parameters – not treatment effects, but parameters of a model – are not identified. For instance, price and quantity transacted are determined by the intersection of aggregate supply and aggregate demand. Knowing, say, that price and quantity were (a,b) today and are (c,d) tomorrow does not let me figure out the shape of either the supply or demand curve. If price and quantity both rise, it may be that demand alone has increased, pushing the demand curve to the right, or that demand has increased while the supply curve has also shifted to the right a small amount, or many other outcomes. An instrument that shifts supply without changing demand, or vice versa, can be used to “identify” the supply and demand curves: an exogenous change in the price of oil will affect the price of gasoline without much of an effect on the demand curve, and hence we can examine price and quantity transacted before and after the oil supply shock to trace out the slope of the demand curve (and, with an analogous demand shifter, the supply curve).
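
Written out, the textbook system behind this paragraph looks like (my notation):

$$
q^d = \alpha_0 - \alpha_1 p + u^d, \qquad q^s = \beta_0 + \beta_1 p + \beta_2 z + u^s, \qquad q^d = q^s,
$$

where z is the oil-price shifter that appears only in the supply equation. Because movements in z shift supply while leaving the demand equation untouched, the induced equilibrium pairs (p, q) trace out the demand curve and identify its slope $\alpha_1$; a shifter appearing only in demand symmetrically identifies $\beta_1$. The instrument is here a device for recovering structural parameters, not for mimicking an experiment.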

Note the difference between the supply and demand equations and the treatment effects use of instrumental variables. In the former case, we have a well-specified system of supply and demand, based on economic theory. Once the supply and demand curves are estimated, we can then perform all sorts of counterfactual and welfare analysis. In the latter case, we generate a treatment effect (really, a LATE), but we do not really know why we got the treatment effect we got. Are rail stations useful because they reduce price variance across cities, because they allow for increasing returns to scale in industry to be utilized, or some other reason? Once we know the “why”, we can ask questions like, is there a cheaper way to generate the same benefit? Is heterogeneity in the benefit important? Ought I expect the results from my quasiexperiment in place A and time B to still operate in place C and time D (a famous example being the drug Opren, which was very successful in RCTs but turned out to be particularly deadly when used widely by the elderly)? Worse, the whole idea of LATE is backwards. We traditionally choose a parameter of interest, which may or may not be a treatment effect, and then choose an estimation technique that can credibly estimate that parameter. Quasirandom techniques instead start by specifying the estimation technique and then hunt for a quasirandom setting, or randomize appropriately by “dosing” some subjects and not others, in order to fit the assumptions necessary to generate a LATE. It is often the case that even policymakers do not care principally about the LATE, but rather about some measure of welfare impact which is rarely immediately interpretable even if the LATE is credibly known!

Given these problems, why are random and quasirandom techniques so heavily endorsed by the dominant branch of development? Again, let’s turn to Deaton: “There has also been frustration with the World Bank’s apparent failure to learn from its own projects, and its inability to provide a convincing argument that its past activities have enhanced economic growth and poverty reduction. Past development practice is seen as a succession of fads, with one supposed magic bullet replacing another—from planning to infrastructure to human capital to structural adjustment to health and social capital to the environment and back to infrastructure—a process that seems not to be guided by progressive learning.” This is to say, the conditions necessary to estimate theoretical models are so stringent that development economists have been writing noncredible models, estimating them, generating some fad of programs that is used in development for a few years until it turns out not to be a silver bullet, then abandoning the fad for some new technique. Better, the randomistas argue, to forget about external validity for now, and instead just evaluate the LATEs on a program-by-program basis, iterating what types of programs we evaluate until we have a suitable list of interventions that we feel confident work. That is, development should operate like medicine.

We have something of an impasse here. Everyone agrees that on many questions theory is ambiguous in the absence of particular types of data, hence more and better data collection is important. Everyone agrees that many parameters of interest for policymaking require certain assumptions, some more justifiable than others. Deaton’s position is that the parameters of interest to economists by and large are not LATEs, and cannot be generated in a straightforward way from LATEs. Thus, following Nancy Cartwright’s delightful phrasing, if we are to “use” causes rather than just “hunt” for what they are, we have no choice but to specify the minimal economic model which is able to generate the parameters we care about from the data. Glen Weyl’s attempt to rehabilitate price theory and Raj Chetty’s sufficient statistics approach are both attempts to combine the credibility of random and quasirandom inference with the benefits of external validity and counterfactual analysis that model-based structural designs permit.

One way to read Deaton’s prize, then, is as an award for the idea that effective development requires theory if we even hope to compare welfare across space and time or to understand why policies like infrastructure improvements matter for welfare and hence whether their beneficial effects will remain when moved to a new context. It is a prize which argues against the idea that all theory does is propose hypotheses. For Deaton, going all the way back to his work with AIDS, theory serves three roles: proposing hypotheses, suggesting which data is worthwhile to collect, and permitting inference on the basis of that data. A secondary implication, very clear in Deaton’s writing, is that even though the “great escape” from poverty and want is real and continuing, that escape is almost entirely driven by effects which are unrelated to aid and which are uninfluenced by the type of small-bore, partial-equilibrium policies for which randomization is generally suitable. And, indeed, the best development economists very much understand this point. The problem is that the media, and less technically capable young economists, still hold the mistaken belief that they can infer everything they want to infer about “what works” solely using the “scientific” methods of random- and quasirandomization. For Deaton, results that are easy to understand and communicate, like the “dollar-a-day” poverty standard or an average treatment effect, are less virtuous than results which carefully situate numbers in the role most amenable to answering an exact policy question.

Let me leave you with three side notes and some links to Deaton’s work. First, I can’t help but laugh at Deaton’s description of his early career in one of his famous “Notes from America”. Deaton, despite being a student of the 1984 Nobel laureate Richard Stone, graduated from Cambridge essentially unaware of how one ought publish in the big “American” journals like Econometrica and the AER. Cambridge had gone from being the absolute center of economic thought to something of a disconnected backwater, and Deaton, despite writing a paper that would win a prize as one of the best papers in Econometrica published in the late 1970s, had essentially no understanding of the norms of publishing in such a journal! When the history of modern economics is written, the rise of a handful of European programs and their role in reintegrating economics on both sides of the Atlantic will be fundamental. Second, Deaton’s prize should be seen as something of a callback to the ’84 prize to Stone and ’77 prize to Meade, two of the least known Nobel laureates. I don’t think it is an exaggeration to say that the majority of new PhDs from even the very best programs will have no idea who those two men are, or what they did. But as Deaton mentions, Stone in particular was one of the early “structural modelers” in that he was interested in estimating the so-called “deep” or behavioral parameters of economic models in a way that is absolutely universal today, as well as being a pioneer in the creation and collection of novel economic statistics whose value was proposed on the basis of economic theory. Quite a modern research program! Third, of the 19 papers in the AER “Top 20 of all time” whose authors were alive during the era of the economics Nobel, 14 have had at least one author win the prize. Should this be a cause for hope for the living outliers, Anne Krueger, Harold Demsetz, Stephen Ross, John Harris, Michael Todaro and Dale Jorgenson?

For those interested in Deaton’s work beyond what this short essay covers, his methodological essay, quoted often in this post, is here. The Nobel Prize technical summary, always a great and well-written read, can be found here.

“Forced Coexistence and Economic Development: Evidence from Native American Reservations,” C. Dippel (2014)

I promised one more paper from Christian Dippel, and it is another quite interesting one. There is lots of evidence, folk and otherwise, that combining different ethnic or linguistic groups artificially, as in much of the ex-colonial world, leads to bad economic and governance outcomes. But that’s weird, right? After all, ethnic boundaries are themselves artificial, and there are tons of examples – Italy and France being the most famous – of linguistic diversity quickly fading away once a state is developed. Economic theory (e.g., a couple recent papers by Joyee Deb) suggests an alternative explanation: groups that have traditionally not worked with each other need time to coordinate on all of the Pareto-improving norms you want in a society. That is, it’s not some kind of intractable ethnic hate, but merely a lack of trust that is the problem.

Dippel uses the history of American Indian reservations to examine the issue. It turns out that reservations occasionally included different subtribal bands even though they almost always were made up of members of a single tribe with a shared language and ethnic identity. For example, “the notion of tribe in Apachean cultures is very weakly developed. Essentially it was only a recognition that one owed a modicum of hospitality to those of the same speech, dress, and customs.” Ethnographers have conveniently constructed measures of how integrated governance was in each tribe prior to the era of reservations; some tribes had very centralized governance, whereas others were like the Apache. In a straight OLS regression with the natural covariates, incomes are substantially lower on reservations made up of multiple bands that had no pre-reservation history of centralized governance.

Why? First, let’s deal with identification (more on what that means in a second). You might naturally think that, hey, tribes with centralized governance in the 1800s were probably quite socioeconomically advanced already: think Cherokee. So are we just picking up that high SES in the 1800s leads to high incomes today? Well, in regions with lots of mining potential, bands tended to be grouped onto one reservation more frequently, which suggests that resource prevalence on ancestral homelands outside of the modern reservation boundaries can instrument for the propensity for bands to be placed together. Instrumented estimates of the effect of “forced coexistence” are just as strong as the OLS estimate. Further, including tribe fixed effects for cases where single tribes have a number of reservations, a surprisingly common outcome, also generates similar estimates of the effect of forced coexistence.

I am very impressed with how clear Dippel is about what exactly is being identified with each of these techniques. A lot of modern applied econometrics is about “identification”, and generally only identifies a local average treatment effect, or LATE. But we need to be clear about LATE – much more important than “what is your identification strategy” is an answer to “what are you identifying anyway?” Since a LATE identifies a causal effect only along the margin shifted by the instrument, conditional on covariates, and the proper interpretation of that margin tends to be really non-obvious to the reader, it should go without saying that authors using IVs and similar techniques ought be very precise about what exactly they are claiming to identify. Lots of quasi-random variation generates that variation along a local margin that is of little economic importance!

Even better than the estimates is an investigation of the mechanism. If you look by decade, you only really see the effect of forced coexistence begin in the 1990s. But why? After all, the “forced coexistence” is longstanding, right? Think of Nunn’s famous long-run effect of slavery paper, though: the negative effects of slavery are mediated during the colonial era, but are very important once local government has real power and historically-based factionalism has some way to bind on outcomes. It turns out that until the 1980s, Indian reservations had very little local power and were largely run as government offices. Legal changes mean that local power over the economy, including the courts in commercial disputes, is now quite strong, and anecdotal evidence suggests lots of factionalism which is often based on longstanding intertribal divisions. Dippel also shows that newspaper mentions of conflict and corruption at the reservation level are correlated with forced coexistence.

How should we interpret these results? Since moving to Canada, I’ve quickly learned that Canadians generally do not subscribe to the melting pot theory; largely because of the “forced coexistence” of francophone and anglophone populations – including two completely separate legal traditions! – more recent immigrants are given great latitude to maintain their pre-immigration culture. This heterogeneous culture means that there are a lot of actively implemented norms and policies to help reduce cultural division on issues that matter to the success of the country. You might think of the problems on reservations and in Nunn’s post-slavery states as a problem of too little effort to deal with factionalism rather than the existence of the factionalism itself.

Final working paper, forthcoming in Econometrica. No RePEc IDEAS version. Related to post-colonial divisions, I also very much enjoyed Mobilizing the Masses for Genocide by Thorsten Rogall, a job market candidate from IIES. When civilians slaughter other civilians, is it merely a “reflection of ancient ethnic hatred” or is it actively guided by authority? In Rwanda, Rogall finds that almost all of the killing is caused directly or indirectly by the 50,000-strong centralized armed groups who fanned out across villages. In villages that were easier to reach (because the roads were not terribly washed out that year), more armed militiamen were able to arrive, and the more of them that arrived, the more deaths resulted. This in-person provocation appears much more important than the radio propaganda which Yanagizawa-Drott discusses in his recent QJE; one implication is that post-WW2 restrictions on free speech in Europe related to Nazism may be completely misdiagnosing the problem. Three things I especially liked about Rogall’s paper: the choice of identification strategy is guided by a precise policy question which can be answered along the local margin identified (could a foreign force stopping these centralized actors a la Romeo Dallaire have prevented the genocide?), a theoretical model allows much more in-depth interpretation of certain coefficients (for instance, he can show that most villages do not appear to have been made up of active resisters), and he discusses external cases like the Lithuanian killings of Jews during World War II, where a similar mechanism appears to be at play. I’ll have many more posts on cool job market papers coming shortly!

“International Trade and Institutional Change: Medieval Venice’s Response to Globalization,” D. Puga & D. Trefler

(Before discussing the paper today, I should pass along a couple of great remembrances of Stanley Reiter, who passed away this summer, by Michael Chwe (whose interests at the intersection of theory and history are close to my heart) and Rakesh Vohra. After leaving Stanford – Chwe mentions this was partly due to a nasty letter written by Reiter’s advisor Milton Friedman! – Reiter established an incredible theory group at Purdue which included Afriat, Vernon Smith and PhD students like Sonnenschein and Ledyard. He then moved to Northwestern, where he helped build up the great group in MEDS whose membership is too long to list, but which includes one Nobel winner already in Myerson and, by my reckoning, two more who are favorites to win the prize next Monday.

I wonder if we may be at the end of an era for topic-diverse theory departments. Business schools are all a bit worried about “Peak MBA”, and theorists are surely the first ones out the door when enrollment falls. Economics departments, journals and funders seem to have shifted, in the large, toward more empirical work, for better or worse. Our knowledge both of how economic and social interactions operate in their most platonic form, and our ability to interpret empirical results when considering novel or counterfactual policies, have greatly benefited from the theoretical developments following Samuelson and Hicks’ mathematization of primitives in the 1930s and 40s, and the development of modern game theory and mechanism design in the 1970s and 80s. Would that a new Cowles and a 21st century Reiter appear to help create a critical mass of theorists again!)

On to today’s paper, a really interesting theory-driven piece of economic history. Venice was one of the most important centers of Europe’s “commercial revolution” between the 10th and 15th century; anyone who read Marco Polo as a schoolkid knows of Venice’s prowess in long-distance trade. Among historians, Venice is also well-known for the inclusive political institutions that developed in the 12th century, and the rise of oligarchy following the “Serrata” at the end of the 13th century. The Serrata was followed by a gradual decrease in Venice’s power in long-distance trade and a shift toward manufacturing, including the Murano glass it is still famous for today. This is a fairly worrying history from our vantage point today: as the middle class grew wealthier, democratic forms of government and free markets did not follow. Indeed, quite the opposite: the oligarchs seized political power, and within a few decades of the Serrata restricted access to the types of trade that previously drove wealth mobility. Explaining what happened here is both a challenge due to limited data, and of great importance given the public prominence of worries about the intersection of growing inequality and corruption of the levers of democracy.

Dan Trefler, an economic historian here at U. Toronto, and Diego Puga, an economist at CEMFI who has done some great work in economic geography, provide a great explanation of this history. Here’s the model. Venice begins with lots of low-wealth individuals, a small middle and upper class, and political power granted to anyone in the upper class. Parents in each dynasty can choose to follow a risky project – becoming a merchant in a long-distance trading mission a la Niccolo and Maffeo Polo – or work locally in a job with lower expected pay. Some of these low and middle class families will succeed on their trade mission and become middle and upper class in the next generation. Those with wealth can sponsor ships via the colleganza, a type of early joint-stock company with limited liability, and potentially join the upper class. Since long-distance trade is high variance, there is a lot of churn across classes. Those with political power also gather rents from their political office. As the number of wealthy rise in the 11th and 12th century, the returns to sponsoring ships falls due to competition across sponsors in the labor and export markets. At any point, the upper class can vote to restrict future entry into the political class by making political power hereditary. They need to include sufficiently many powerful people in this hereditary class or there will be a revolt. As the number of wealthy increase, eventually the wealthy find it worthwhile to restrict political power so they can keep political rents within their dynasty forever. Though political power is restricted, the economy is still free, and the number of wealthy without power continue to grow, lowering the return to wealth for those with political power due to competition in factor and product markets. At some point, the return is so low that it is worth risking revolt from the lower classes by restricting entry of non-nobles into lucrative industries. To prevent revolt, a portion of the middle classes are brought in to the hereditary political regime, such that the regime is powerful enough to halt a revolt. Under these new restrictions, lower classes stop engaging in long-distance trade and instead work in local industry. These outcomes can all be generated with a reasonable looking model of dynastic occupation choice.

What historical data would be consistent with this theoretical mechanism? We should expect lots of turnover in political power and wealth in the 10th through 13th centuries. We should find examples in the literature of families beginning as long-distance traders and rising to voyage sponsors and political agents. We should see a period of political autocracy develop, followed later by the expansion of hereditary political power and restrictions on lucrative industry entry to those with such power. Economic success based on being able to activate large amounts of capital from within the nobility class will make the importance of inter-family connections more important in the 14th and 15th centuries than before. Political power and participation in lucrative economic ventures will be limited to a smaller number of families after this political and economic closure than before. Those left out of the hereditary regime will shift to local agriculture and small-scale manufacturing.

Indeed, we see all of these outcomes in Venetian history. Trefler and Puga use some nice techniques to get around limited data availability. Since we don’t have data on family incomes, they use the correlation in eigenvector centrality within family marriage networks as a measure of the stability of the upper classes. They code colleganza records – a non-trivial task involving searching thousands of scanned documents for particular Latin phrases – to investigate how often new families appear in these records, and how concentration in the funding of long-distance trade changes over time. They show that all of the families with high eigenvector centrality in the noble marriage market after political closure – a measure of economic importance, remember – were families that were in the top quartile of seat-share in the pre-closure Venetian legislature, and that those families which had lots of political power pre-closure but little commercial success thereafter tended to be unsuccessful in marrying into lucrative alliances.

There is a lot more historical detail in the paper, but as a matter of theory useful to the present day, the Venetian experience ought throw cold water on the idea that political inclusiveness and economic development always form a virtuous circle. Institutions are endogenous, and changes in the nature of inequality within a society following economic development alter the potential for political and economic crackdowns to survive popular revolt.

Final published version in QJE 2014 (RePEc IDEAS). A big thumbs up to Diego for having the single best research website I have come across in five years of discussing papers in this blog. Every paper has an abstract, well-organized replication data, and a link to a locally-hosted version of the final published paper. You may know his paper with Nathan Nunn on how rugged terrain in Africa is associated with good economic outcomes today because slave traders like the infamous Tippu Tip couldn’t easily exploit mountainous areas, but it’s also worth checking out his really clever theoretical disambiguation of why firms in cities are more productive, as well as his crazy yet canonical satellite-based investigation of the causes of sprawl. There is a really cool graphic on the growth of U.S. sprawl at that last link!

“The Rise and Fall of General Laws of Capitalism,” D. Acemoglu & J. Robinson (2014)

If there is one general economic law, it is that every economist worth their salt is obligated to put out twenty pages responding to Piketty’s Capital. An essay by Acemoglu and Robinson on this topic, though, is certainly worth reading. They present three particularly compelling arguments. First, in a series of appendices, they follow Debraj Ray, Krusell and Smith and others in trying to clarify exactly what Piketty is trying to say, theoretically. Second, they show that it is basically impossible to find any effect of the famed r-g on top inequality in statistical data. Third, they claim that institutional features are much more relevant to the impact of economic changes on societal outcomes, using South Africa and Sweden as examples. Let’s tackle these in turn.

First, the theory. It has been noted before that Piketty is, despite beginning his career as a very capable economic theorist (hired at MIT at age 22!), very disdainful of the prominence of theory. He points out, quite correctly, that we don’t even have good descriptive data on a huge number of topics of economic interest, inequality being principal among these. But, shades of the Methodenstreit, he then goes on to ignore theory where it is most useful, in helping to understand, and extrapolate from, his wonderful data. It turns out that even in simple growth models, not only is it untrue that r>g necessarily holds, but the endogeneity of r and our standard estimates of the elasticity of substitution between labor and capital do not at all imply that capital-to-income ratios will continue to grow (see Matt Rognlie on this point). Further, Acemoglu and Robinson show that even relatively minor movement between classes is sufficient to keep the capital share from skyrocketing. Do not skip the appendices to A and R’s paper – these are what should have been included in the original Piketty book!
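
For readers who want the algebra being debated, here is a compressed version in standard notation (a sketch of the usual statement of Piketty’s two “fundamental laws”, not Acemoglu and Robinson’s exact appendix):

$$
\alpha = r\,\beta, \qquad \beta \;\longrightarrow\; \frac{s}{g},
$$

where $\alpha$ is the capital share of income, $\beta$ the capital-to-income ratio, s the net saving rate and g the growth rate. The first equation is an accounting identity and the second is the long-run “second law”; the catch, which is Rognlie’s point referenced above, is that r is endogenous, so a fall in g that raises $\beta$ also pushes r down, and $\alpha$ rises only if the elasticity of substitution between capital and labor (measured net of depreciation) exceeds one.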

Second, the data. Acemoglu and Robinson point out, and it really is odd, that despite the claims of “fundamental laws of capitalism”, there is no formal statistical investigation of these laws in Piketty’s book. A and R look at data on growth rates, top inequality and the rate of return (either on government bonds, or on a computed economy-wide marginal return on capital), and find that, if anything, as r-g grows, top inequality shrinks. All of the data is post WW2, so there is no Great Depression or World War confounding things. How could this be?

The answer lies in the feedback between inequality and the economy. As inequality grows, political pressures change, the endogenous development and diffusion of technology changes, the relative use of capital and labor changes, and so on. These effects, in the long run, dominate any “fundamental law” like r>g, even if such a law were theoretically supported. For instance, Sweden and South Africa have very similar patterns of top 1% inequality over the twentieth century: very high at the start, then falling in mid-century, and rising again recently. But the causes are totally different: in Sweden’s case, labor unrest led to a new political equilibrium with a high-growth welfare state. In South Africa’s case, the “poor white” supporters of Apartheid led to compressed wages at the top despite growing black-white inequality until 1994. So where are we left? The traditional explanations for inequality changes: technology and politics. And even without r>g, these issues are complex and interesting enough – what could be a more interesting economic problem for an American economist than diagnosing the stagnant incomes of Americans over the past 40 years?

August 2014 working paper (No IDEAS version yet). Incidentally, I have a little tracker on my web browser that lets me know when certain pages are updated. Having such a tracker follow Acemoglu’s working papers pages is, frankly, depressing – how does he write so many papers in such a short amount of time?

“Agricultural Productivity and Structural Change: Evidence from Brazil,” P. Bustos et al (2014)

It’s been a while – a month of exploration in the hinterlands of the former Soviet Union, a move up to Canada, and a visit down to the NBER Summer Institute really put a cramp on my posting schedule. That said, I have a ridiculously long backlog of posts to get up, so they will be coming rapidly over the next few weeks. I saw today’s paper presented a couple days ago at the Summer Institute. (An aside: it’s a bit strange that there isn’t really any media at SI – the paper selection process results in a much better set of presentations than at the AEA or the Econometric Society, which simply have too long of a lag from the application date to the conference, and too many half-baked papers.)

Bustos and her coauthors ask, when can improvements in agricultural productivity help industrialization? An old literature assumed that any such improvement would help: the newly rich agricultural workers would demand more manufactured goods, and since manufactured and agricultural products are complements, rising agricultural productivity would shift workers into the factories. Kiminori Matsuyama wrote a model (JET 1992) showing the problem here: roughly, if in a small open economy productivity goes up in a good you have a Ricardian comparative advantage in, then you want to produce even more of that good. A green revolution which doubles agricultural productivity in, say, Mali, while keeping manufacturing productivity the same, will allow Mali to earn twice as much selling its agricultural output overseas. Workers will then pour into the agricultural sector until the marginal product of labor is re-equated in both sectors.

Now, if you think that industrialization has a bunch of positive macrodevelopment spillovers (via endogenous growth, population control or whatever), then this is worrying. Indeed, it vaguely suggests that making villages more productive, an outright goal of a lot of RCT-style microdevelopment studies, may actually be counterproductive for the country as a whole! That said, there seems to be something strange going on empirically, because we do appear to see industrialization in countries after a Green Revolution. What could be going on? Let’s look back at the theory.

Implicitly, the increase in agricultural productivity in Matsuyama was “Hicks-neutral” – it increased the total productivity of the sector without affecting the relative marginal factor productivities. A lot of technological change, however, is factor-biased; to take two examples from Brazil, modern techniques that allow for double harvesting of corn each year increase the marginal productivity of land, whereas “Roundup Ready” GE soy that requires less tilling and weeding increases the marginal productivity of farmers. We saw above that Hicks-neutral technological change in agriculture increases labor in the farm sector: workers choosing where to work means that the world price of agriculture times the marginal product of labor in that sector must be equal to the world price of manufacturing times the marginal product of labor in manufacturing. A Hicks-neutral improvement in agricultural productivity raises MPL in that sector no matter how much land or labor is currently being used, hence wage equality across sectors requires workers to leave the factory for the farm.
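
In symbols, a stripped-down version of the allocation condition just described (my notation, not the authors’ exact setup):

$$
p_a \, A \, F_L(T, L_a) \;=\; p_m \, G_L(L - L_a),
$$

with world prices $p_a$ and $p_m$ fixed for a small open economy, A the Hicks-neutral agricultural productivity term, T land, $L_a$ farm labor and $L - L_a$ factory labor. An increase in A raises the left side at every allocation, and since $F_L$ falls as $L_a$ rises while $G_L$ rises as manufacturing sheds workers, equality is restored only by moving labor onto the farm.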

What of biased technological change? As before, the only thing we need to know is whether the technological change increases the marginal product of labor. Land-augmenting technical change, like double harvesting of corn, means a country can produce the same amount of output with the old amount of farm labor and less land. Consider a worker weighing a shift from the factory to the farm: after the change she would effectively be farming more (and hence less marginal) land, so her marginal product of labor on the farm is higher than before the change, and workers will leave the factory. Land-augmenting technological change always increases the amount of agricultural labor. What about farm-labor-augmenting technological change like GM soy? If land and labor are not very complementary (imagine, in the limit, that they are perfect substitutes in production), then trivially the marginal product of labor increases following the technological change, and hence the number of farm workers goes up. The situation is quite different if land and farm labor are strong complements. Where previously we had 1 effective worker per unit of land, following the labor-augmenting technology change it is as if we have, say, 2 effective workers per unit of land. Strong complementarity implies that, at that point, adding even more labor to the farms is pointless: the marginal productivity of labor is decreasing in the technological level of farm labor. Therefore, labor-augmenting technology with a strongly complementary agricultural production function shifts labor off the farm and into manufacturing.
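
The two polar cases make the logic transparent (a stylized sketch, not the authors’ exact production function). Write effective farm labor as $bL_a$, with labor-augmenting change raising b:

$$
F = T + bL_a \;\Rightarrow\; \frac{\partial F}{\partial L_a} = b \;\;(\text{rising in } b), \qquad
F = \min\{T,\, bL_a\} \;\Rightarrow\; \frac{\partial F}{\partial L_a} = 0 \;\;\text{once } bL_a \ge T.
$$

With perfect substitutes, a higher b raises the marginal product of farm labor and pulls workers onto the farm; with perfect complements, a higher b only relaxes the labor requirement and releases workers to manufacturing. CES technologies in between flip from the first case to the second as the elasticity of substitution between land and labor falls.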

That’s just a small bit of theory, but it really clears things up. And even better, the authors find empirical support for this idea: following the introduction to Brazil of labor-augmenting GM soy and land-augmenting double harvesting of maize, agricultural productivity rose everywhere, the agricultural employment share rose in areas that were particularly suitable for modern maize production, and the manufacturing employment share rose in areas that were particularly suitable for modern soy production.

August 2013 working paper. I think of this paper as a nice complement to the theory and empirics in Acemoglu’s Directed Technical Change and Walker Hanlon’s Civil War cotton paper. Those papers ask how changes in factor prices endogenously affect the development of different types of technology, whereas Bustos and coauthors ask how the exogenous development of different types of technology affect the use of various factors. I read the former as most applicable to structural change questions in countries at the technological frontier, and the latter as appropriate for similar questions in developing countries.

“On the Origin of States: Stationary Bandits and Taxation in Eastern Congo,” R. S. de la Sierra (2013)

The job market is yet again in full swing. I won’t be able to catch as many talks this year as I would like to, but I still want to point out a handful of papers that I consider particularly elucidating. This article, by Columbia’s de la Sierra, absolutely fits that category.

The essential question is, why do states form? Would that all young economists interested in development put their effort toward such grand questions! The old Rousseauian idea you learned your first year of college, where individuals come together voluntarily for mutual benefit, seems contrary to lots of historical evidence. Instead, war appears to be a prime mover for state formation; armed groups establish a so-called “monopoly on violence” in an area for a variety of reasons, and proto-state institutions evolve. This basic idea is widespread in the literature, but it is still not clear which conditions within an area lead armed groups to settle rather than to pillage. Further, examining these ideas empirically seems quite problematic for two reasons: first, because states themselves are the ones who collect data, hence we rarely observe anything before states have formed; and second, because most of the planet has long since been under the rule of a state (with apologies to James Scott!).

De la Sierra brings some economics to this problem. What is the difference between pillaging and sustained state-like forms? The pillager can only extract assets on its way through, while the proto-state can establish “taxes”. What taxes will it establish? If the goal is long-run revenue maximization, Ramsey long ago told us that it is optimal to tax elements that are inelastic. If labor can flee, but the output of the mine cannot, then you ought tax the output of the mine highly and set a low poll tax. If labor supply is inelastic but output can be hidden from the taxman, then use a high poll tax. Thus, when will bandits form a state instead of just pillaging? When there is a factor which can be dynamically taxed at such a rate that the discounted tax revenue exceeds what can be pillaged today. Note that the ability to, say, restrict movement along roads, or to expand output through state-owned capital, changes the relevant tax elasticities, so at a more fundamental level, rebel capacity along these margins is also important (and I imagine that extending de la Sierra’s paper will involve the evolutionary development of these types of capacities).
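
A stylized version of the settle-or-pillage condition implicit in this paragraph (my notation, not the paper’s model): the armed group taxes rather than pillages when

$$
\frac{\tau^{*}\, B(\tau^{*})}{1-\delta} \;\ge\; P,
$$

where $\tau^{*}$ is the revenue-maximizing (Ramsey) tax rate, $B(\tau^{*})$ the per-period tax base that remains at that rate, $\delta$ the group’s discount factor (lower when crackdowns shorten its expected tenure), and P the value of a one-time pillage. Stationary taxation wins when the base is inelastic, so that B barely shrinks as the tax rises, and when $\delta$ is high; both margins come back in the empirical section below.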

This is really an important idea. It is not that there is a tradeoff between producing and pillaging. Instead, there is a three-way tradeoff between producing in your home village, joining an armed group to pillage, and joining an armed group that taxes like a state! The armed group that taxes will, as a result of its desire to increase tax revenue, perhaps introduce institutions that increase production in the area under its control. And to the extent that institutions persist, short-run changes that cause potential bandits to form taxing relationships may actually lead to long-run increases in productivity in a region.

De la Sierra goes a step beyond theory, investigating these ideas empirically in the Congo. Eastern Congo during and after the Second Congo War was characterized by a number of rebel groups that occasionally just pillaged, but occasionally formed stable tax relationships with villages that could last for years. That is, the rebels occasionally implemented something looking like states. The theory above suggests that exogenous changes in the ability to extract tax revenue (over a discounted horizon) will shift the rebels from pillagers to proto-states. And, incredibly, there were a number of interesting exogenous changes that had exactly that effect.

The prices of coltan and gold both suffered price shocks during the war. Coltan is heavy, hard to hide, and must be shipped by plane in the absence of roads. Gold is light, easy to hide, and can simply be carried from the mine on jungle footpaths. When the price of coltan rises, the maximal tax revenue of a state increases since taxable coltan production is relatively inelastic. This is particularly true near airstrips, where the coltan can actually be sold. When the price of gold increases, the maximal tax revenue does not change much, since gold is easy to hide, and hence the optimal tax is on labor rather than on output. An exogenous rise in coltan prices should encourage proto-state formation in areas with coltan, then, while an exogenous rise in gold prices should have little impact on the pillage vs. state tradeoff. Likewise, a government initiative to root out rebels (be they stationary or pillaging) decreases the expected number of years a proto-state can extract rents, hence makes pillaging relatively more lucrative.

How to confirm these ideas, though, when there was no data collected on income, taxes, labor supply, or proto-state existence? Here is the crazy bit – 11 locals were hired in Eastern Congo to travel to a large number of villages, spend a week there querying families and village elders about their experiences during the war, the existence of mines, etc. The “state formation” in these parts of Congo is only a few years in the past, so it is at least conceivable that memories, suitably combined, might actually be reliable. And indeed, the data do seem to match aggregate trends known to monitors of the war. What of the model predictions? They all seem to hold, and quite strongly: the ability to extract more tax revenue is important for proto-state formation, and areas where proto-states existed do appear to have retained higher productive capacity years later, perhaps as a result of the proto-institutions those states developed. Fascinating. Even better, because there is a proposed mechanism rather than an identified treatment effect, we can have some confidence that these results are, to some extent, externally valid!

December 2013 working paper (No IDEAS page). You may wonder what a study like this costs (particularly if you are, like me, a theorist using little more than chalk and a chalkboard); I have no idea, but de la Sierra’s CV lists something like a half million dollars of grants, an incredible total for a graduate student. On a personal level, I spent a bit of time in Burundi a number of years ago, including visiting a jungle camp where rebels from the Second Congo War were still hiding. It was pretty amazing how organized even these small groups were in the areas they controlled; there was nothing anarchic about it.
