Category Archives: Development

Angus Deaton, 2015 Nobel Winner: A Prize for Structural Analysis?

Angus Deaton, the Scottish-born, Cambridge-trained Princeton economist best known for his careful work measuring changes in the wellbeing of the world’s poor, has won the 2015 Nobel Prize in economics. His data collection is fairly easy to understand, so I will leave the larger discussion of exactly what he has found to the general news media; Deaton’s book “The Great Escape” provides a very nice summary as well. I think a fair reading of his development preferences is that he much prefers the currently en vogue idea of just giving cash to the poor and letting them spend it as they wish.

Essentially, when one carefully measures consumption, health, or other characteristics of wellbeing, there has been tremendous improvement indeed in the state of the world’s poor. National statistics do not measure these ideas well, because developing countries do not tend to track data at the level of the individual. Indeed, even in the United States, we have only recently begun work on localized measures of the price level and hence the poverty rate. Deaton claims, as in his 2010 AEA Presidential Address (previously discussed briefly on two occasions on AFT), that many of the measures of global inequality and poverty used by the press are fundamentally flawed, largely because of the weak theoretical justification for how they link prices across regions and countries. Careful non-aggregate measures of consumption, health, and wellbeing, like those generated by Deaton, Tony Atkinson, Alwyn Young, Thomas Piketty and Emmanuel Saez, are essential for understanding how human welfare has changed over time and space, and generating them is a deserving rationale for a Nobel.

The surprising thing about Deaton, however, is that despite his great data-collection work and his interest in development, he is famously hostile to the “randomista” trend, which proposes that randomized controlled trials (RCTs) or other suitable tools for internally valid causal inference are the best way of learning how to improve the lives of the world’s poor. This mode of research is most closely associated with the enormously influential J-PAL lab at MIT, and there is no field in economics where you are less likely to see traditional price-theoretic ideas than modern studies of development. Deaton is very clear on his opinion: “Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as ‘hard’ while other methods are ‘soft’… [T]he analysis of projects needs to be refocused towards the investigation of potentially generalizable mechanisms that explain why and in what contexts projects can be expected to work.” I would argue that Deaton’s work is much closer to more traditional economic studies of development than to RCTs.

To understand this point of view, we need to go back to Deaton’s earliest work. Among Deaton’s most famous early papers was his development of the Almost Ideal Demand System (AIDS) in 1980 with Muellbauer, a paper chosen as one of the 20 best published in the first 100 years of the AER. It has long been known that individual demand equations which come from utility maximization must satisfy certain properties. For example, a rational consumer’s demand for food should not depend on whether the consumer’s equivalent real salary is paid in American or Canadian dollars. These restrictions turn out to be useful: if you want to know how demand for various products depends on changes in income, among many other questions, the restrictions of utility theory simplify estimation greatly by reducing the number of free parameters. The problem is in specifying a form for aggregate demand, such as how demand for cars depends on the incomes of all consumers and the prices of other goods. It turns out that, in general, aggregate demand generated by utility-maximizing households does not satisfy the same restrictions as individual demand; you can’t simply assume that there is a “representative consumer” whose utility and demand functions look like those of an individual agent. What form should we write for aggregate demand, and how congruent is that form with economic theory? Surely an important question if we want to estimate how a shift in taxes on some commodity, or a policy of giving some agricultural input to some farmers, is going to affect demand for output, its price, and hence welfare!
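For reference, the standard restrictions from consumer theory look like this (my notation, not the paper’s; h denotes compensated, i.e. Hicksian, demand):

```latex
% Homogeneity of degree zero: rescaling all prices and income leaves demand unchanged
x_i(\lambda p, y\lambda) = x_i(p, y) \quad \text{for all } \lambda > 0
% Adding up (Walras' law): expenditure exhausts the budget
\textstyle\sum_i p_i \, x_i(p, y) = y
% Slutsky symmetry of compensated price effects
\partial h_i / \partial p_j = \partial h_j / \partial p_i
```

Homogeneity is exactly the "American or Canadian dollars" point above: multiply every price and the salary by the exchange rate and demand must not move.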

Let q(j)=D(p,c,e) say that the quantity of good j consumed, in aggregate, is a function of the prices of all goods p and total consumption (or average consumption) c, plus perhaps some random error e. This can be tough to estimate: if D(p,c,e)=Ap+e, where demand is just a linear function of relative prices, then we have a k-by-k matrix to estimate, where k is the number of goods. Worse, that demand function imposes an enormous restriction on what individual demand functions, and hence utility functions, look like, in a way that theory does not necessarily support. The AIDS of Deaton and Muellbauer combines two facts – that Taylor expansions approximately linearize nonlinear functions, and that individual demands can be aggregated even when heterogeneous across individuals if the restrictions of Muellbauer’s PIGLOG papers are satisfied – to derive a functional form for aggregate demand D which is consistent with aggregated individual rational behavior and which can sometimes be estimated via OLS. They use British data to argue that aggregate demand violates testable assumptions of the model, and hence that factors like credit constraints or price expectations are fundamental in explaining aggregate consumption.
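For the curious, the AIDS system itself is worth seeing. In budget-share form, with w_i the share of spending on good i, x total expenditure, and P a price index, it reads:

```latex
w_i = \alpha_i + \sum_j \gamma_{ij} \log p_j + \beta_i \log(x/P)
\log P = \alpha_0 + \sum_k \alpha_k \log p_k + \tfrac{1}{2} \sum_j \sum_k \gamma_{jk} \log p_j \log p_k
% Utility theory imposes adding up, homogeneity, and symmetry:
\sum_i \alpha_i = 1, \quad \sum_i \beta_i = 0, \quad \sum_i \gamma_{ij} = 0, \quad \sum_j \gamma_{ij} = 0, \quad \gamma_{ij} = \gamma_{ji}
```

Replacing log P with the Stone index Σ_k w_k log p_k makes each share equation linear in parameters – this is where the OLS estimability mentioned above comes from – and homogeneity and symmetry become testable linear restrictions.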

This exercise brings up a number of first-order questions for a development economist. First, it shows clearly the problem with estimating aggregate demand as a purely linear function of prices and income, as if society were a single consumer. Second, it shows the importance of how we measure the overall price level in figuring out the effects of taxes and other policies. Third, it combines theory and data to convincingly suggest that models which estimate demand solely as a function of current prices and current income are necessarily going to give misleading results, even when demand is allowed to take on very general forms as in the AIDS model. A huge body of research since 1980 has investigated how we can better model demand in order to credibly evaluate demand-affecting policy. All of this is very different from how a certain strand of development economist today might investigate something like a subsidy. Rather than taking observational data, these economists might look for a random or quasirandom experiment where such a subsidy was introduced, and estimate the “effect” of that subsidy directly on some quantity of interest, without concern for how exactly that subsidy generated the effect.

To see the difference between randomization and more structural approaches like AIDS, consider the following example from Deaton. You are asked to evaluate whether China should invest more in building railway stations if it wishes to reduce poverty. Many economists trained in a manner influenced by the randomization movement would say, well, we can’t just regress the existence of a railway on a measure of city-by-city poverty. The existence of a railway station depends on both things we can control for (the population of a given city) and things we can’t control for (subjective belief that a town is “growing” when the railway is plopped there). Let’s find something that is correlated with rail station building but uncorrelated with the unobserved factors that also affect poverty: for instance, a city may lie on a geographically-accepted path between two large cities. If certain assumptions hold, it turns out that a two-stage “instrumental variable” approach can use that “quasi-experiment” to generate the LATE, or local average treatment effect. This effect is the average benefit of a railway station on poverty reduction, at the local margin of cities which are just induced by the instrument to build a railway station. Similar techniques, like difference-in-differences and randomized controlled trials, under slightly different assumptions can generate credible LATEs. In development work today, it is very common to see a paper where large portions are devoted to showing that the assumptions (often untestable) of a given causal inference model are likely to hold in a given setting, then finally claiming that the treatment effect of X on Y is Z. That LATEs can be identified outside of purely randomized contexts is incredibly important and valuable, and the economists and statisticians who did the heavy statistical lifting on this so-called Rubin model will absolutely and justly win an Economics Nobel sometime soon.
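To fix ideas, here is a minimal two-stage least squares sketch of the railway example. Everything here is invented for illustration – the variables on_route, station, poverty, and the unobserved boomtown belief are all made up – and a real application would, as noted above, spend most of its pages defending the exclusion restriction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: lying on a natural path between large cities (instrument),
# and an unobserved "boomtown" belief that drives both station placement and poverty
on_route = rng.binomial(1, 0.5, n)
boomtown = rng.normal(size=n)
station = (0.8 * on_route + boomtown + rng.normal(size=n) > 0.5).astype(float)
poverty = -0.3 * station - 0.5 * boomtown + rng.normal(size=n)  # true effect: -0.3

def ols(X, y):
    """Least-squares coefficients of y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), station])
Z = np.column_stack([np.ones(n), on_route])

# Naive OLS is biased: the omitted boomtown belief raises station and lowers poverty
print("OLS estimate:", ols(X, poverty)[1])

# Stage 1: project station on the instrument; Stage 2: regress poverty on the fit.
# With heterogeneous effects this recovers the LATE for instrument-induced stations.
station_hat = Z @ ols(Z, station)
X2 = np.column_stack([np.ones(n), station_hat])
print("2SLS estimate:", ols(X2, poverty)[1])  # close to -0.3
```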

However, this use of instrumental variables would surely seem strange to the old Cowles Commission folks: Deaton is correct that “econometric analysis has changed its focus over the years, away from the analysis of models derived from theory towards much looser specifications that are statistical representations of program evaluation. With this shift, instrumental variables have moved from being solutions to a well-defined problem of inference to being devices that induce quasi-randomization.” The traditional use of instrumental variables was that after writing down a theoretically justified model of behavior or aggregates, certain parameters – not treatment effects, but parameters of a model – are not identified. For instance, price and quantity transacted are determined by the intersection of aggregate supply and aggregate demand. Knowing, say, that price and quantity were (a,b) today and are (c,d) tomorrow does not let me figure out the shape of either the supply or the demand curve. If price and quantity both rise, it may be that demand alone has increased, pushing the demand curve to the right, or that demand has increased while the supply curve has also shifted to the right a small amount, or many other outcomes. An instrument that increases supply without changing demand, or vice versa, can be used to “identify” the supply and demand curves: an exogenous change in the price of oil will affect the price of gasoline without much of an effect on the demand curve, and hence we can examine price and quantity transacted before and after the oil supply shock to find the slopes of supply and demand.
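In equations, my stylized version of the classic Cowles-style setup looks something like this:

```latex
% Demand and supply, with the oil price z shifting supply only
q^d = \alpha - \beta p + u, \qquad q^s = \gamma + \delta p + \theta z + v
% In equilibrium q^d = q^s. Since z is excluded from demand and uncorrelated
% with the demand shock u, the z-induced supply shifts trace out the demand
% curve, identifying its slope:
-\beta = \frac{\mathrm{Cov}(q, z)}{\mathrm{Cov}(p, z)}
```

The point is that the instrument here identifies a named parameter of a theoretical system, which can then be used for counterfactuals, rather than a treatment effect defined by the experiment itself.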

Note the difference between the supply and demand equations and the treatment-effects use of instrumental variables. In the former case, we have a well-specified system of supply and demand, based on economic theory. Once the supply and demand curves are estimated, we can then perform all sorts of counterfactual and welfare analysis. In the latter case, we generate a treatment effect (really, a LATE), but we do not really know why we got the treatment effect we got. Are rail stations useful because they reduce price variance across cities, because they allow increasing returns to scale in industry to be exploited, or for some other reason? Once we know the “why”, we can ask questions like: is there a cheaper way to generate the same benefit? Is heterogeneity in the benefit important? Ought I expect the results from my quasiexperiment in place A and time B to still operate in place C and time D (a famous example being the drug Opren, which was very successful in RCTs but turned out to be particularly deadly when used widely by the elderly)? Worse, the whole idea of LATE is backwards. We traditionally choose a parameter of interest, which may or may not be a treatment effect, and then choose an estimation technique that can credibly estimate that parameter. Quasirandom techniques instead start by specifying the estimation technique and then hunt for a quasirandom setting, or randomize appropriately by “dosing” some subjects and not others, in order to fit the assumptions necessary to generate a LATE. It is often the case that even policymakers do not care principally about the LATE; rather, they care about some measure of welfare impact, which is rarely immediately interpretable even if the LATE is credibly known!

Given these problems, why are random and quasirandom techniques so heavily endorsed by the dominant branch of development? Again, let’s turn to Deaton: “There has also been frustration with the World Bank’s apparent failure to learn from its own projects, and its inability to provide a convincing argument that its past activities have enhanced economic growth and poverty reduction. Past development practice is seen as a succession of fads, with one supposed magic bullet replacing another—from planning to infrastructure to human capital to structural adjustment to health and social capital to the environment and back to infrastructure—a process that seems not to be guided by progressive learning.” That is to say, the conditions necessary to estimate theoretical models are so stringent that development economists have been writing noncredible models, estimating them, generating some fad of programs that is used in development for a few years until it turns out not to be a silver bullet, then abandoning the fad for some new technique. Better, the randomistas argue, to forget about external validity for now, and instead just evaluate LATEs on a program-by-program basis, iterating on which types of programs we evaluate until we have a suitable list of interventions that we feel confident work. That is, development should operate like medicine.

We have something of an impasse here. Everyone agrees that on many questions theory is ambiguous in the absence of particular types of data, hence more and better data collection is important. Everyone agrees that many parameters of interest for policymaking require certain assumptions, some more justifiable than others. Deaton’s position is that the parameters of interest to economists by and large are not LATEs, and cannot be generated in a straightforward way from LATEs. Thus, following Nancy Cartwright’s delightful phrasing, if we are to “use” causes rather than just “hunt” for what they are, we have no choice but to specify the minimal economic model which is able to generate the parameters we care about from the data. Glen Weyl’s attempt to rehabilitate price theory and Raj Chetty’s sufficient statistics approach are both attempts to combine the credibility of random and quasirandom inference with the benefits of external validity and counterfactual analysis that model-based structural designs permit.

One way to read Deaton’s prize, then, is as an award for the idea that effective development requires theory if we even hope to compare welfare across space and time, or to understand why policies like infrastructure improvements matter for welfare and hence whether their beneficial effects will remain when moved to a new context. It is a prize which argues against the idea that all theory does is propose hypotheses. For Deaton, going all the way back to his work with AIDS, theory serves three roles: proposing hypotheses, suggesting which data is worthwhile to collect, and permitting inference on the basis of that data. A secondary implication, very clear in Deaton’s writing, is that even though the “great escape” from poverty and want is real and continuing, that escape is almost entirely driven by effects which are unrelated to aid and which are uninfluenced by the type of small-bore, partial-equilibrium policies for which randomization is generally suitable. And, indeed, the best development economists very much understand this point. The problem is that the media, and less technically capable young economists, still hold the mistaken belief that they can infer everything they want to infer about “what works” solely using the “scientific” methods of random- and quasirandomization. For Deaton, results that are easy to understand and communicate, like the “dollar-a-day” poverty standard or an average treatment effect, are less virtuous than results which carefully situate numbers in the role most amenable to answering an exact policy question.

Let me leave you with three side notes and some links to Deaton’s work. First, I can’t help but laugh at Deaton’s description of his early career in one of his famous “Notes from America”. Deaton, despite being a student of the 1984 Nobel laureate Richard Stone, graduated from Cambridge essentially unaware of how one ought publish in the big “American” journals like Econometrica and the AER. Cambridge had gone from being the absolute center of economic thought to something of a disconnected backwater, and Deaton, despite writing a paper that would win a prize as one of the best papers published in Econometrica in the late 1970s, had essentially no understanding of the norms of publishing in such a journal! When the history of modern economics is written, the rise of a handful of European programs and their role in reintegrating economics on both sides of the Atlantic will be fundamental. Second, Deaton’s prize should be seen as something of a callback to the ’84 prize to Stone and the ’77 prize to Meade, two of the least known Nobel laureates. I don’t think it is an exaggeration to say that the majority of new PhDs from even the very best programs will have no idea who those two men are, or what they did. But as Deaton mentions, Stone in particular was one of the early “structural modelers,” in that he was interested in estimating the so-called “deep” or behavioral parameters of economic models in a way that is absolutely universal today, as well as being a pioneer in the creation and collection of novel economic statistics whose value was proposed on the basis of economic theory. Quite a modern research program! Third, of the 19 papers in the AER “Top 20 of all time” whose authors were alive during the era of the economics Nobel, 14 have had at least one author win the prize. Should this be a cause for hope for the living outliers, Anne Krueger, Harold Demsetz, Stephen Ross, John Harris, Michael Todaro and Dale Jorgenson?

For those interested in Deaton’s work beyond this short essay, his methodological essay, quoted often in this post, is here. The Nobel Prize technical summary, always a great and well-written read, can be found here.

“Forced Coexistence and Economic Development: Evidence from Native American Reservations,” C. Dippel (2014)

I promised one more paper from Christian Dippel, and it is another quite interesting one. There is lots of evidence, folk and otherwise, that combining different ethnic or linguistic groups artificially, as in much of the ex-colonial world, leads to bad economic and governance outcomes. But that’s weird, right? After all, ethnic boundaries are themselves artificial, and there are tons of examples – Italy and France being the most famous – of linguistic diversity quickly fading away once a state is developed. Economic theory (e.g., a couple recent papers by Joyee Deb) suggests an alternative explanation: groups that have traditionally not worked with each other need time to coordinate on all of the Pareto-improving norms you want in a society. That is, it’s not some kind of intractable ethnic hate, but merely a lack of trust that is the problem.

Dippel uses the history of American Indian reservations to examine the issue. It turns out that reservations occasionally included different subtribal bands even though they almost always were made up of members of a single tribe with a shared language and ethnic identity. For example, “the notion of tribe in Apachean cultures is very weakly developed. Essentially it was only a recognition that one owed a modicum of hospitality to those of the same speech, dress, and customs.” Ethnographers have conveniently constructed measures of how integrated governance was in each tribe prior to the era of reservations; some tribes had very centralized governance, whereas others were like the Apache. In a straight OLS regression with the natural covariates, incomes are substantially lower on reservations made up of multiple bands that had no pre-reservation history of centralized governance.

Why? First, let’s deal with identification (more on what that means in a second). You might naturally think that, hey, tribes with centralized governance in the 1800s were probably quite socioeconomically advanced already: think Cherokee. So are we just picking up that high SES in the 1800s leads to high incomes today? Well, in regions with lots of mining potential, bands tended to be grouped onto one reservation more frequently, which suggests that resource prevalence on ancestral homelands outside of the modern reservation boundaries can instrument for the propensity for bands to be placed together. Instrumented estimates of the effect of “forced coexistence” are just as strong as the OLS estimates. Further, including tribe fixed effects for cases where single tribes have a number of reservations, a surprisingly common outcome, also generates similar estimates of the effect of forced coexistence.

I am very impressed with how clear Dippel is about what exactly is being identified with each of these techniques. A lot of modern applied econometrics is about “identification”, and generally only identifies a local average treatment effect, or LATE. But we need to be clear about LATE – much more important than “what is your identification strategy” is an answer to “what are you identifying anyway?” Since LATE identifies causal effects that are local, conditional on covariates, and since the proper interpretation of that term tends to be really non-obvious to the reader, it should go without saying that authors using IVs and similar techniques ought be very precise about what exactly they are claiming to identify. Lots of quasi-random variation only generates variation along a local margin that is of little economic importance!

Even better than the estimates is an investigation of the mechanism. If you look by decade, you only really see the effect of forced coexistence begin in the 1990s. But why? After all, the “forced coexistence” is longstanding, right? Think of Nunn’s famous long-run effect of slavery paper, though: the negative effects of slavery are muted during the colonial era, but become very important once local government has real power and historically-based factionalism has some way to bind on outcomes. It turns out that until the 1980s, Indian reservations had very little local power and were largely run as government offices. Legal changes mean that local power over the economy, including the courts in commercial disputes, is now quite strong, and anecdotal evidence suggests lots of factionalism, often based on longstanding divisions between bands. Dippel also shows that newspaper mentions of conflict and corruption at the reservation level are correlated with forced coexistence.

How should we interpret these results? Since moving to Canada, I’ve quickly learned that Canadians generally do not subscribe to the melting-pot theory: largely because of the “forced coexistence” of francophone and anglophone populations – including two completely separate legal traditions! – more recent immigrants are given great latitude to maintain their pre-immigration culture. This heterogeneity means that there are a lot of actively implemented norms and policies to help reduce cultural division on issues that matter to the success of the country. You might think of the problems on reservations and in Nunn’s post-slavery states as a problem of too little effort to deal with factionalism, rather than of the existence of the factionalism itself.

Final working paper, forthcoming in Econometrica. No RePEc IDEAS version. Related to post-colonial divisions, I also very much enjoyed Mobilizing the Masses for Genocide by Thorsten Rogall, a job market candidate from IIES. When civilians slaughter other civilians, is it merely a “reflection of ancient ethnic hatred” or is it actively guided by authority? In Rwanda, Rogall finds that almost all of the killing was caused directly or indirectly by the 50,000-strong centralized armed groups who fanned out across villages. In villages that were easier to reach (because the roads were not terribly washed out that year), more armed militiamen were able to arrive, and the more of them that arrived, the more deaths resulted. This in-person provoking appears much more important than the radio propaganda which Yanagizawa-Drott discusses in his recent QJE; one implication is that post-WW2 restrictions on free speech in Europe related to Nazism may be completely misdiagnosing the problem. Three things I especially liked about Rogall’s paper: the choice of identification strategy is guided by a precise policy question which can be answered along the local margin identified (could a foreign force stopping these centralized actors a la Romeo Dallaire have prevented the genocide?); a theoretical model allows much more in-depth interpretation of certain coefficients (for instance, he can show that most villages do not appear to have been made up of active resisters); and he discusses external cases like the Lithuanian killings of Jews during World War II, where a similar mechanism appears to be at play. I’ll have many more posts on cool job market papers coming shortly!

“International Trade and Institutional Change: Medieval Venice’s Response to Globalization,” D. Puga & D. Trefler

(Before discussing the paper today, I should pass along a couple of great remembrances of Stanley Reiter, who passed away this summer, by Michael Chwe (whose interests at the intersection of theory and history are close to my heart) and Rakesh Vohra. After leaving Stanford – Chwe mentions this was partly due to a nasty letter written by Reiter’s advisor Milton Friedman! – Reiter established an incredible theory group at Purdue which included Afriat, Vernon Smith and PhD students like Sonnenschein and Ledyard. He then moved to Northwestern, where he helped build up the great group in MEDS whose roster is too long to list, but which includes one Nobel winner already in Myerson and, by my reckoning, two more who are favorites to win the prize next Monday.

I wonder if we may be at the end of an era for topic-diverse theory departments. Business schools are all a bit worried about “Peak MBA”, and theorists are surely the first ones out the door when enrollment falls. Economics departments, journals and funders seem to have shifted, in the large, toward more empirical work, for better or worse. Our knowledge of how economic and social interactions operate in their most platonic form, and our ability to interpret empirical results when considering novel or counterfactual policies, have both greatly benefited from the theoretical developments following Samuelson and Hicks’ mathematization of primitives in the 1930s and 40s, and the development of modern game theory and mechanism design in the 1970s and 80s. Would that a new Cowles and a 21st-century Reiter appear to help create a critical mass of theorists again!)

On to today’s paper, a really interesting theory-driven piece of economic history. Venice was one of the most important centers of Europe’s “commercial revolution” between the 10th and 15th centuries; anyone who read Marco Polo as a schoolkid knows of Venice’s prowess in long-distance trade. Among historians, Venice is also well known for the inclusive political institutions that developed in the 12th century, and for the rise of oligarchy following the “Serrata” at the end of the 13th century. The Serrata was followed by a gradual decrease in Venice’s power in long-distance trade and a shift toward manufacturing, including the Murano glass it is still famous for today. This is a fairly worrying history from our vantage point today: as the middle class grew wealthier, democratic forms of government and free markets did not follow. Indeed, quite the opposite: the oligarchs seized political power, and within a few decades of the Serrata restricted access to the types of trade that had previously driven wealth mobility. Explaining what happened here is both a challenge, due to limited data, and of great importance, given the public prominence of worries about the intersection of growing inequality and corruption of the levers of democracy.

Dan Trefler, an economic historian here at U. Toronto, and Diego Puga, an economist at CEMFI who has done some great work in economic geography, provide a great explanation of this history. Here’s the model. Venice begins with lots of low-wealth individuals, a small middle and upper class, and political power granted to anyone in the upper class. Parents in each dynasty can choose to follow a risky project – becoming a merchant in a long-distance trading mission a la Niccolo and Maffeo Polo – or work locally in a job with lower expected pay. Some of these low- and middle-class families will succeed on their trade mission and become middle and upper class in the next generation. Those with wealth can sponsor ships via the colleganza, a type of early joint-stock company with limited liability, and potentially join the upper class. Since long-distance trade is high variance, there is a lot of churn across classes. Those with political power also gather rents from their political office. As the number of wealthy rises in the 11th and 12th centuries, the returns to sponsoring ships fall due to competition across sponsors in the labor and export markets. At any point, the upper class can vote to restrict future entry into the political class by making political power hereditary. They need to include sufficiently many powerful people in this hereditary class or there will be a revolt. As the number of wealthy increases, eventually the wealthy find it worthwhile to restrict political power so they can keep political rents within their dynasty forever. Though political power is restricted, the economy is still free, and the number of wealthy without power continues to grow, lowering the return to wealth for those with political power due to competition in factor and product markets. At some point, the return is so low that it is worth risking revolt from the lower classes by restricting entry of non-nobles into lucrative industries. To prevent revolt, a portion of the middle classes are brought into the hereditary political regime, such that the regime is powerful enough to halt a revolt. Under these new restrictions, lower classes stop engaging in long-distance trade and instead work in local industry. These outcomes can all be generated with a reasonable-looking model of dynastic occupation choice.

What historical data would be consistent with this theoretical mechanism? We should expect lots of turnover in political power and wealth in the 10th through 13th centuries. We should find examples in the literature of families beginning as long-distance traders and rising to voyage sponsors and political agents. We should see a period of political autocracy develop, followed later by the expansion of hereditary political power and the restriction of entry into lucrative industries to those with such power. Economic success based on being able to activate large amounts of capital from within the nobility will make inter-family connections more important in the 14th and 15th centuries than before. Political power and participation in lucrative economic ventures will be limited to a smaller number of families after this political and economic closure than before. Those left out of the hereditary regime will shift to local agriculture and small-scale manufacturing.

Indeed, we see all of these outcomes in Venetian history. Trefler and Puga use some nice techniques to get around limited data availability. Since we don’t have data on family incomes, they use the correlation in eigenvector centrality within family marriage networks as a measure of the stability of the upper classes. They code colleganza records – a non-trivial task involving searching thousands of scanned documents for particular Latin phrases – to investigate how often new families appear in these records, and how concentration in the funding of long-distance trade changes over time. They show that all of the families with high eigenvector centrality in the noble marriage market after political closure – a measure of economic importance, remember – were families that were in the top quartile of seat-share in the pre-closure Venetian legislature, and that those families which had lots of political power pre-closure but little commercial success thereafter tended to be unsuccessful in marrying into lucrative alliances.
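For readers unfamiliar with the network measure: a family is eigenvector-central if it marries into families that are themselves central. Here is a toy computation with networkx; the marriage ties below are entirely invented for illustration, though the family names are real Venetian clans:

```python
import networkx as nx

# Hypothetical marriage network among noble families (an edge = at least one marriage)
G = nx.Graph([
    ("Contarini", "Morosini"), ("Contarini", "Dandolo"),
    ("Morosini", "Dandolo"), ("Dandolo", "Gradenigo"),
    ("Gradenigo", "Tiepolo"),
])

# Eigenvector centrality: your score is proportional to the sum of your neighbors' scores
centrality = nx.eigenvector_centrality(G)
for family, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{family}: {score:.3f}")
```

The paper's actual exercise correlates these centrality scores across generations and with pre-closure political power; the sketch just shows what the underlying statistic is.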

There is a lot more historical detail in the paper, but as a matter of theory useful to the present day, the Venetian experience ought throw cold water on the idea that political inclusiveness and economic development always form a virtuous circle. Institutions are endogenous, and changes in the nature of inequality within a society following economic development alter the potential for political and economic crackdowns to survive popular revolt.

Final published version in QJE 2014 (RePEc IDEAS). A big thumbs up to Diego for having the single best research website I have come across in five years of discussing papers in this blog. Every paper has an abstract, well-organized replication data, and a link to a locally-hosted version of the final published paper. You may know his paper with Nathan Nunn on how rugged terrain in Africa is associated with good economic outcomes today because slave traders like the infamous Tippu Tip couldn’t easily exploit mountainous areas, but it’s also worth checking out his really clever theoretical disambiguation of why firms in cities are more productive, as well as his crazy yet canonical satellite-based investigation of the causes of sprawl. There is a really cool graphic on the growth of U.S. sprawl at that last link!

“The Rise and Fall of General Laws of Capitalism,” D. Acemoglu & J. Robinson (2014)

If there is one general economic law, it is that every economist worth their salt is obligated to put out twenty pages responding to Piketty’s Capital. An essay by Acemoglu and Robinson on this topic, though, is certainly worth reading. They present three particularly compelling arguments. First, in a series of appendices, they follow Debraj Ray, Krusell and Smith and others in trying to clarify exactly what Piketty is trying to say, theoretically. Second, they show that it is basically impossible to find any effect of the famed r-g on top inequality in statistical data. Third, they claim that institutional features are much more relevant to the impact of economic changes on societal outcomes, using South Africa and Sweden as examples. Let’s tackle these in turn.

First, the theory. It has been noted before that Piketty, despite beginning his career as a very capable economic theorist (hired at MIT at age 22!), is very disdainful of the prominence of theory. He points out that we don’t even have descriptive data on a huge number of topics of economic interest, inequality being principal among these. And indeed he is correct! But, shades of the Methodenstreit, he then goes on to ignore theory where it is most useful: in helping to understand, and extrapolate from, his wonderful data. It turns out that even in simple growth models, not only is it untrue that r>g necessarily holds, but the endogeneity of r and our standard estimates of the elasticity of substitution between labor and capital do not at all imply that capital-to-income ratios will continue to grow (see Matt Rognlie on this point). Further, Acemoglu and Robinson show that even relatively minor movement between classes is sufficient to keep the capital share from skyrocketing. Do not skip the appendices to A and R’s paper – these are what should have been included in the original Piketty book!
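To see where theory bites, recall the accounting relationships Piketty leans on; this is my compressed rendering of the Rognlie and Acemoglu-Robinson point, not a quotation from either:

```latex
% Piketty's two "fundamental laws": the capital share of income, and the
% steady-state capital-income ratio
\alpha = r \beta, \qquad \beta \equiv \frac{K}{Y} \longrightarrow \frac{s}{g}
% With diminishing returns, r = f'(k) is endogenous and falls as capital deepens,
% so alpha = r*beta need not explode as g falls: it rises only if the elasticity
% of substitution between capital and labor (net of depreciation) exceeds one.
```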

Second, the data. Acemoglu and Robinson point out, and it really is odd, that despite the claims of “fundamental laws of capitalism”, there is no formal statistical investigation of these laws in Piketty’s book. A and R look at data on growth rates, top inequality and the rate of return (either on government bonds, or on a computed economy-wide marginal return on capital), and find that, if anything, as r-g grows, top inequality shrinks. All of the data is post WW2, so there is no Great Depression or World War confounding things. How could this be?

The answer lies in the feedback between inequality and the economy. As inequality grows, political pressures change, the endogenous development and diffusion of technology changes, the relative use of capital and labor changes, and so on. These effects, in the long run, dominate any “fundamental law” like r>g, even if such a law were theoretically supported. For instance, Sweden and South Africa have very similar patterns of top 1% inequality over the twentieth century: very high at the start, falling in mid-century, and rising again recently. But the causes are totally different: in Sweden’s case, labor unrest led to a new political equilibrium with a high-growth welfare state. In South Africa’s case, the “poor white” supporters of Apartheid led to compressed wages at the top despite growing black-white inequality until 1994. So where are we left? With the traditional explanations for changes in inequality: technology and politics. And even without r>g, these issues are complex and interesting enough – what could be a more interesting economic problem for an American economist than diagnosing the stagnant incomes of Americans over the past 40 years?

August 2014 working paper (No IDEAS version yet). Incidentally, I have a little tracker on my web browser that lets me know when certain pages are updated. Having such a tracker follow Acemoglu’s working papers pages is, frankly, depressing – how does he write so many papers in such a short amount of time?

“Agricultural Productivity and Structural Change: Evidence from Brazil,” P. Bustos et al (2014)

It’s been a while – a month of exploration in the hinterlands of the former Soviet Union, a move up to Canada, and a visit down to the NBER Summer Institute really put a cramp in my posting schedule. That said, I have a ridiculously long backlog of posts to get up, so they will be coming rapidly over the next few weeks. I saw today’s paper presented a couple days ago at the Summer Institute. (An aside: it’s a bit strange that there isn’t really any media presence at SI – the paper selection process results in a much better set of presentations than at the AEA or the Econometric Society meetings, which simply have too long a lag from the application date to the conference, and too many half-baked papers.)

Bustos and her coauthors ask: when can improvements in agricultural productivity help industrialization? An old literature assumed that any such improvement would help: the newly rich agricultural workers would demand more manufactured goods, and since manufactured and agricultural products are complements, rising agricultural productivity would shift workers into the factories. Kiminori Matsuyama wrote a model (JET 1992) showing the problem here: roughly, if productivity rises in a good in which a small open economy has a Ricardian comparative advantage, then the economy will want to produce even more of that good. A green revolution which doubles agricultural productivity in, say, Mali, while keeping manufacturing productivity the same, will allow Mali to earn twice as much selling its agricultural output overseas. Workers will then pour into the agricultural sector until the marginal product of labor is re-equated in both sectors.

Now, if you think that industrialization has a bunch of positive macrodevelopment spillovers (via endogenous growth, population control or whatever), then this is worrying. Indeed, it vaguely suggests that making villages more productive, an outright goal of a lot of RCT-style microdevelopment studies, may actually be counterproductive for the country as a whole! That said, there seems to be something strange going on empirically, because we do appear to see industrialization in countries after a Green Revolution. What could be going on? Let’s look back at the theory.

Implicitly, the increase in agricultural productivity in Matsuyama was “Hicks-neutral” – it increased the total productivity of the sector without affecting the relative marginal factor productivities. A lot of technological change, however, is factor-biased; to take two examples from Brazil, modern techniques that allow for double harvesting of corn each year increase the marginal productivity of land, whereas “Roundup Ready” GE soy that requires less tilling and weeding increases the marginal productivity of farmers. We saw above that Hicks-neutral technological change in agriculture increases labor in the farm sector: workers choosing where to work means that the world price of agriculture times the marginal product of labor in that sector must equal the world price of manufacturing times the marginal product of labor in manufacturing. A Hicks-neutral improvement in agricultural productivity raises the MPL in that sector no matter how much land or labor is currently being used, hence wage equality across sectors requires workers to leave the factory for the farm.
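In symbols, the allocation condition just described is something like the following (my notation: T is land, L total labor, L_a farm labor, A agricultural TFP):

```latex
% Workers move across sectors until wages equalize at world prices:
p_a \cdot MPL_a(T, L_a; A) = p_m \cdot MPL_m(L - L_a)
% A Hicks-neutral rise in A scales up MPL_a at every L_a, so L_a must rise
% until diminishing returns restore equality: workers leave the factory for the farm.
```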

What of biased technological change? As before, the only thing we need to know is whether the technological change increases the marginal product of labor. Land-augmenting technical change, like double harvesting of corn, means a country can produce the same amount of output with the old amount of farm labor and less land. If one more worker shifts from the factory to the farm, she will be farming less marginal land than before the technological change, hence her marginal productivity of labor is higher than before the change, hence she will leave the factory. Land-augmenting technological change always increases the amount of agricultural labor. What about farm-labor-augmenting technological change like GM soy? If land and labor are not very complementary (imagine, in the limit, that they are perfect substitutes in production), then trivially the marginal product of labor increases following the technological change, and hence the number of farm workers goes up. The situation is quite different if land and farm labor are strong complements. Where previously we had 1 effective worker per unit of land, following the labor-augmenting technology change it is as if we have, say, 2 effective workers per unit of land. Strong complementarity implies that, at that point, adding even more labor to the farms is pointless: the marginal productivity of labor is decreasing in the technological level of farm labor. Therefore, labor-augmenting technology with a strongly complementary agriculture production function shifts labor off the farm and into manufacturing.
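A quick numerical check of that logic, using a CES production function over land and effective labor; all parameter values here are invented purely for illustration:

```python
import numpy as np

def mpl(a, L=1.0, T=1.0, alpha=0.5, rho=-2.0):
    """Marginal product of labor for CES output
    Y = [alpha*T^rho + (1-alpha)*(a*L)^rho]^(1/rho),
    where a is labor-augmenting technology and sigma = 1/(1-rho)."""
    inner = alpha * T**rho + (1 - alpha) * (a * L)**rho
    return (1 - alpha) * a**rho * L**(rho - 1) * inner**(1/rho - 1)

# Strong complements (rho=-2, so sigma=1/3): labor-augmenting progress LOWERS
# the marginal product of labor, pushing workers off the farm
print(mpl(1.0, rho=-2.0), mpl(2.0, rho=-2.0))   # ~0.500 -> ~0.253

# Substitutes (rho=0.5, so sigma=2): the same progress RAISES it, drawing workers in
print(mpl(1.0, rho=0.5), mpl(2.0, rho=0.5))     # ~0.500 -> ~0.854
```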

That’s just a small bit of theory, but it really clears things up. And even better, the authors find empirical support for this idea: following the introduction to Brazil of labor-augmenting GM soy and land-augmenting double harvesting of maize, agricultural productivity rose everywhere, the agricultural employment share rose in areas that were particularly suitable for modern maize production, and the manufacturing employment share rose in areas that were particularly suitable for modern soy production.

August 2013 working paper. I think of this paper as a nice complement to the theory and empirics in Acemoglu’s Directed Technical Change and Walker Hanlon’s Civil War cotton paper. Those papers ask how changes in factor prices endogenously affect the development of different types of technology, whereas Bustos and coauthors ask how the exogenous development of different types of technology affect the use of various factors. I read the former as most applicable to structural change questions in countries at the technological frontier, and the latter as appropriate for similar questions in developing countries.

“On the Origin of States: Stationary Bandits and Taxation in Eastern Congo,” R. S. de la Sierra (2013)

The job market is yet again in full swing. I won’t be able to catch as many talks this year as I would like to, but I still want to point out a handful of papers that I consider particularly elucidating. This article, by Columbia’s de la Sierra, absolutely fits that category.

The essential question is: why do states form? Would that all young economists interested in development put their effort toward such grand questions! The old Rousseauian idea you learned your first year of college, where individuals come together voluntarily for mutual benefit, seems contrary to lots of historical evidence. Instead, war appears to be a prime mover for state formation; armed groups establish a so-called “monopoly on violence” in an area for a variety of reasons, and proto-state institutions evolve. This basic idea is widespread in the literature, but it is still not clear which conditions within an area lead armed groups to settle rather than to pillage. Further, examining these ideas empirically is quite problematic, for two reasons: first, because states themselves are the ones who collect data, so we rarely observe anything before states have formed; and second, because most of the planet has long since been under the rule of a state (with apologies to James Scott!).

De la Sierra brings some economics to this problem. What is the difference between pillaging and sustained state-like forms? The pillager can only extract assets on its way through, while the proto-state can establish “taxes.” What taxes will it establish? If the goal is long-run revenue maximization, Ramsey long ago told us that it is optimal to tax elements that are inelastic. If labor can flee but the output of the mine cannot, then you ought tax the output of the mine highly and set a low poll tax. If labor supply is inelastic but output can be hidden from the taxman, then use a high poll tax. Thus, when will bandits form a state instead of just pillaging? When there is a factor which can be dynamically taxed at such a rate that the discounted tax revenue exceeds what can be pillaged today. Note that the ability to, say, restrict movement along roads, or to expand output through state-owned capital, changes the relevant tax elasticities, so at a more fundamental level, rebel capacity along these margins is also important (and I imagine that extending de la Sierra’s paper will involve the evolutionary development of these types of capacities).
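Two formulas summarize the logic; this is my stylized notation, not de la Sierra’s:

```latex
% Ramsey's inverse-elasticity rule: tax rates are high where the base is inelastic
\frac{t_i}{1 + t_i} \propto \frac{1}{\varepsilon_i}
% Settle-versus-pillage: become a stationary bandit when discounted tax revenue
% beats a one-time grab (delta bundles discounting and the chance of surviving
% another period, which is why government raids tilt groups back toward pillage)
\sum_{t=0}^{\infty} \delta^t R^{tax} = \frac{R^{tax}}{1-\delta} \;\geq\; R^{pillage}
```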

This is really an important idea. It is not that there is a tradeoff between producing and pillaging. Instead, there is a three-way tradeoff between producing in your home village, joining an armed group to pillage, and joining an armed group that taxes like a state! The armed group that taxes will, as a result of its desire to increase tax revenue, perhaps introduce institutions that increase production in the area under its control. And to the extent that institutions persist, short-run changes that cause potential bandits to form taxing relationships may actually lead to long-run increases in productivity in a region.

De la Sierra goes a step beyond theory, investigating these ideas empirically in the Congo. Eastern Congo during and after the Second Congo War was characterized by a number of rebel groups that occasionally just pillaged, but occasionally formed stable tax relationships with villages that could last for years. That is, the rebels occasionally implemented something looking like states. The theory above suggests that exogenous changes in the ability to extract tax revenue (over a discounted horizon) will shift the rebels from pillagers to proto-states. And, incredibly, there were a number of interesting exogenous changes that had exactly that effect.

The prices of coltan and gold both suffered price shocks during the war. Coltan is heavy, hard to hide, and must be shipped by plane in the absence of roads. Gold is light, easy to hide, and can simply be carried from the mine on jungle footpaths. When the price of coltan rises, the maximal tax revenue of a state increases, since taxable coltan production is relatively inelastic. This is particularly true near airstrips, where the coltan can actually be sold. When the price of gold increases, the maximal tax revenue does not change much, since gold is easy to hide, and hence the optimal tax is on labor rather than on output. An exogenous rise in coltan prices should encourage proto-state formation in areas with coltan, then, while an exogenous rise in gold prices should have little impact on the pillage-versus-state tradeoff. Likewise, a government initiative to root out rebels (be they stationary or pillaging) decreases the expected number of years a proto-state can extract rents, hence makes pillaging relatively more lucrative.

How to confirm these ideas, though, when there was no data collected on income, taxes, labor supply, or proto-state existence? Here is the crazy bit – 11 locals were hired in Eastern Congo to travel to a large number of villages, spending a week in each one querying families and village elders about their experiences during the war, the existence of mines, and so on. The “state formation” in these parts of Congo is only a few years in the past, so it is at least conceivable that memories, suitably combined, might actually be reliable. And indeed, the data do seem to match aggregate trends known to monitors of the war. What of the model predictions? They all seem to hold, and quite strongly: the ability to extract more tax revenue is important for proto-state formation, and areas where proto-states existed do appear to have retained higher productive capacity years later, perhaps as a result of the proto-institutions those states developed. Fascinating. Even better, because there is a proposed mechanism rather than just an identified treatment effect, we can have some confidence that this result is, to some extent, externally valid!

December 2013 working paper (No IDEAS page). You may wonder what a study like this costs (particularly if you are, like me, a theorist using little more than chalk and a chalkboard); I have no idea, but de la Sierra’s CV lists something like a half million dollars of grants, an incredible total for a graduate student. On a personal level, I spent a bit of time in Burundi a number of years ago, including visiting a jungle camp where rebels from the Second Congo War were still hiding. It was pretty amazing how organized even these small groups were in the areas they controlled; there was nothing anarchic about it.

“Does Ethnicity Pay?,” Y. Huang, L. Jin & Y. Qian (2010)

Ethnic networks in trade and foreign investment are widespread. Avner Greif, in his medieval trade papers, has pointed out the role of ethnic trade groups in facilitating group punishment of deviations from implicitly contracted behavior in cases where contracts cannot be legally enforced. Ethnic investors may also have an advantage when investing in their home country, due to better knowledge of local profit opportunities.

Huang, Jin and Qian investigate the ethnic advantage using an amazing database of the universe of Chinese industrial firms. The database tags firms formed using FDI (perhaps as a joint venture) from Hong Kong, Macao and Taiwan; in the latter two cases, nearly 100 percent of Chinese FDI is from ethnic Chinese. Amazingly, firms funded with FDI from these regions perform worse, as measured by ROI, ROA or margins, than Chinese firms funded with FDI from other countries. In the first years after the firms are founded, there is only a small difference between Chinese-funded firms and others, but over time the disadvantage grows; it is not just that ethnic Chinese investors invest in companies with low profitability at the beginning, but that those companies actually get worse over time. Restricting the sample just to Taiwanese electronics firms’ FDI compared to Korean electronics firms’ FDI, the Koreans make more profitable investments, both at the beginning and as measured by relative performance over time.

What’s going on here? It’s not just that ethnic Chinese are making low-profit investments in their ancestral hometowns; omitting Fujian and Guangdong, the ancestral source of most Hong Kong, Macao and Taiwan Chinese, does not change the results in any qualitative way. Instead, it appears that ethnic Chinese-funded firms do substantially less work building up intangible assets and human capital in the firms they invest in. Stratifying the firms, had Chinese-funded firms grown their human capital (as proxied by employee wages) or intangible assets (as measured in accounting data) at the same rate as non-Chinese-funded firms, there would have been no difference in ROI over time.

This leads to a bigger question, of course. Why would ethnic investors fail to build up intangible capital? Certainly there are anecdotal stories along these lines, particularly when it comes to wealthy minority investors; think Lebanese in West Africa, Fujianese in Indonesia, or Jewish firms in 19th century Europe. I don’t have a model that can explain such behavior, however. Any thoughts?

2010 NBER working paper (IDEAS version)

“The African Growth Miracle,” A. Young (2013)

Alwyn Young, well known for his empirical work on growth, has finally published his African growth paper in the new issue of the JPE. Africa is quite interesting right now. Though it is still seen by much of the public as a bit of a basket case, the continent seems to be by and large booming. At least by the “eye test”, it has been doing so for some time now, to some extent in the 1990s but much more so in the 2000s. I remember visiting Kigali, Rwanda for the first time in 2008; this is a spotless, law-abiding city with glass skyscrapers downtown housing multinational companies. Not what you may have expected!

What is interesting, however, is that economic statistics have until very recently still shown African states growing much slower than other developing countries. A lot of economic data from the developing world is of poor quality, but Young notes that for many countries, it is literally non-existent: those annual income per capita tables you see in UN data and elsewhere involve pretty heroic imputation. Can we do better? Young looks at an irregular set of surveys from 1990 to 2006, covering dozens of poor countries, called the Demographic and Health Survey. This survey covers age, family size, education level and some consumption (“do you have a bicycle?”, “do you have a non-dirt floor?”). What you see immediately is that, across many items, the growth rate in consumption in African states surveyed more or less matches the growth rate in non-African developing countries, despite official statistics suggesting the non-African states have seen private consumption growing at a much faster clip.

Can growth in real consumption be backed out of such statistics? The DHS is nice in that it, in some countries and years, includes wages. The basic idea is the following: consumption of normal goods rises with income, and income rises with education, so consumption of normal goods should rise with education. I can estimate very noisy Engel curves linking consumption to education, and using the parts of the sample where wage data exists, a Mincerian regression with a whole bunch of controls gives us some estimate of the link between a year of education and income: on average, it is on the order of 11 percent. We now have a method to go from consumption changes to implied mean education levels to real consumption changes. Of course, this estimate is very noisy. Young uses a properly specified maximum likelihood function with random effects to show how outliers or noisy series should be weighted when averaging estimates of real income changes using each individual product; indeed, a simple average of the estimated real consumption growth from each individual product gives a wildly optimistic growth rate, so such econometric techniques are quite necessary.
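A stripped-down sketch of that back-out, ignoring Young’s random-effects machinery entirely and using fake survey data (the 11 percent Mincerian return is the number quoted above; everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Fake DHS-style data: years of education and ownership of a durable good
educ = rng.normal(6.0, 2.0, n)
owns_bike = (0.05 * educ + rng.normal(size=n) > 0.5).astype(float)

# Engel slope: how ownership rises with education (noisy linear probability model)
X = np.column_stack([np.ones(n), educ])
engel_slope = np.linalg.lstsq(X, owns_bike, rcond=None)[0][1]

# Suppose ownership rose by 2 percentage points over the survey window.
# Implied growth in mean education, then implied real income growth at an
# 11% Mincerian return per year of schooling:
d_ownership = 0.02
implied_educ_growth = d_ownership / engel_slope
implied_income_growth = 0.11 * implied_educ_growth
print(f"implied real consumption growth: {implied_income_growth:.1%}")
```

Each durable good gives one such noisy estimate; the econometric heavy lifting in the paper is in how to average dozens of these without letting outlier series dominate.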

What, then, does this heavy lifting give us? Real consumption in countries in the African sample grew 3.4% per household per annum in 1990-2006, versus 3.8% in developing countries outside Africa. This is contra 1% in African and 2% in non-African countries, using the same sample of countries, in other prominent international data sources. Now, many of these countries are not terribly far from subsistence, so it is impossible for most African states to have been growing at this level throughout the 70s and 80s as well, but at least for the 90s, consumption microdata suggests a far rosier past two decades on the continent than many people imagine. Clever.

Final working paper (IDEAS version). I am somehow drawing a blank on the name of the recent book covering the poor quality of developing world macro data – perhaps a commenter can add this for me.

“Railroads of the Raj: Estimating the Impact of Transportation Infrastructure,” D. Donaldson (2013)

Somehow I've never written about Dave Donaldson's incredible Indian railroad paper before; as it has a fair claim to being the best job market paper of the past few years, it's time to rectify that. I believe Donaldson spent eight years at LSE working on his PhD, largely made up of this paper. And that time led to a well-received result: in addition to conferences, a note on the title page mentions that the paper has been presented at Berkeley, BU, Brown, Chicago, Harvard, the IMF, LSE, MIT, the Minneapolis Fed, Northwestern, Nottingham, NYU, Oxford, Penn, Penn State, the Philly Fed, Princeton, Stanford, Toronto, Toulouse, UCL, UCLA, Warwick, the World Bank and Yale! So we can safely say this is careful and well-vetted work.

Donaldson's study considers the importance of infrastructure to development; it is, in many ways, the opposite of the "small changes", RCT-based development literature that was particularly en vogue in the 2000s. Intuitively, we all think infrastructure is important, both for improving total factor productivity and for improving market access. The World Bank, for instance, spends 20 percent of its funds on infrastructure, more than "education, health, and social services combined." But how important is infrastructure spending, really? That question is hard even to pose precisely, let alone answer.

So let's go back to one of the great infrastructure projects in human history: the Indian railroad during the British Raj. The British built over 67,000 km of rail in a country with few navigable rivers. They also, luckily for the economist, were typically British in the enormous number of price, weather, and rail shipment statistics they collected. Problematically for the economist, these statistics tended to be hand-written in weathered documents hidden away in the back rooms of India's bureaucratic state. Donaldson nonetheless collected almost 1.5 million individual pieces of data from these weathered tomes. Now, you might think: let's just regress average incomes on new rail access, use some IV to deal with the endogeneity of where rail lines were placed, and be done with it. Not so fast! First, there is no district-level income per capita data for India in the 1800s! And second, we can use some theory to really tease out why infrastructure matters.

Donaldson proceeds in four steps. First, estimate how much rail access lowered trade costs per kilometer: if a good is made in only one region, theory suggests the trade cost between regions is simply the price difference of that commodity across regions. Even shipping receipts, if we had them, would not be sufficient; bandits, spoilage, and all the rest of Samuelson's famous "iceberg" raise trade costs as well. Second, check whether lowered trade costs actually increased trade volumes, and at what elasticity, using rainfall as a proxy for local productivity shocks. Third, note that even though we don't observe income, theory tells us that for agricultural workers, the percentage change in total production per unit of land, deflated by a local price index, is equivalent to the percentage change in real income per unit of land. We can therefore check in a reduced-form way whether new rail access raised real incomes, though not yet why. Fourth, in Donaldson's theoretical model (an extension, more or less, of Eaton and Kortum's Ricardian model), trade costs, region sizes and productivity shocks in every region interact to determine local incomes, but they act through a single sufficient statistic: the share of expenditure falling on locally produced goods. That is, if we regress real income changes on rail access while controlling for changes in the district's own-trade share, we should see no remaining effect of rail access.
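In symbols, and simplifying heavily, the first and fourth steps rest on relationships like the following (my notation, not Donaldson's):

```latex
% Step 1: if salt of type k is produced only at origin o, iceberg trade
% costs T_{od} >= 1 mean its destination price reveals the trade cost:
p^k_d = p^k_o \, T_{od}
  \quad\Longrightarrow\quad
  \ln T_{od} = \ln p^k_d - \ln p^k_o .

% Step 4: the sufficient-statistic check has the flavor of
\ln y_{dt} = \alpha_d + \gamma_t + \beta \,\mathrm{RAIL}_{dt}
             + \delta \ln \pi_{dd,t} + \varepsilon_{dt} ,
```

where y_dt is district d's real income in year t, RAIL_dt indicates rail access, and pi_dd,t is the share of d's expenditure devoted to its own goods. The theory predicts beta near zero once pi_dd,t is controlled for, which is exactly what the final step tests.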

Now, these steps are tough. Donaldson constructs a network of rail, road and river routes from 19th century sources digitized into GIS, and traces out the least-cost path from each district to every other. He then non-linearly estimates the relative cost per kilometer of rail, sea, river and road transport using the prices of eight types of salt, each of which was sold across British India but produced in only a single location. Lowered trade costs do indeed appear to raise trade volumes, with quite a high elasticity. The reduced-form regression suggests that access to the railway raised local incomes by an average of 16 percent (Indian real income per capita rose only 22 percent over the entire period 1870 to 1930, so 16 percent locally is substantial). And using the "trade share" sufficient statistic described above, Donaldson shows that almost all of that increase was due to lowered trade costs rather than internal migration or other effects. Wonderful.
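The least-cost path computation itself is conceptually simple; here is a toy sketch using networkx, where the districts, distances, and relative per-kilometer mode costs are all invented stand-ins for Donaldson's estimated ones.

```python
# Toy least-cost route computation in the spirit of Donaldson's network:
# edge weight = distance (km) * per-km cost of the transport mode.
# Districts, distances, and mode costs below are hypothetical.
import networkx as nx

mode_cost = {"rail": 1.0, "river": 2.3, "road": 4.5}  # invented relative costs

G = nx.Graph()
edges = [  # (district A, district B, km, mode)
    ("Bombay", "Poona", 150, "rail"),
    ("Poona", "Sholapur", 250, "road"),
    ("Bombay", "Sholapur", 450, "rail"),
    ("Sholapur", "Hyderabad", 300, "river"),
    ("Poona", "Hyderabad", 500, "rail"),
]
for a, b, km, mode in edges:
    G.add_edge(a, b, weight=km * mode_cost[mode])

# Dijkstra gives the least-cost route and its implied trade cost.
route = nx.shortest_path(G, "Bombay", "Hyderabad", weight="weight")
cost = nx.shortest_path_length(G, "Bombay", "Hyderabad", weight="weight")
print(route, cost)
```

Estimating the mode costs themselves is where the salt comes in: the per-kilometer parameters are chosen so that predicted least-cost route costs best fit the observed inter-district salt price gaps.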

This paper is a great exercise in the value of theory for empiricists. Theory is meant to be used, not tested. Here, fairly high-level trade theory – literally the cutting edge – was deployed to coax an answer to a hugely important question out of data that, atheoretically, could have told us nothing (remember, there isn't even income per capita data to use!). The same theory also allowed Donaldson to explain the effect, rather than just state it, a feat far more interesting to those who care about external validity. Two more exercises would be nice, though. First, as Donaldson notes in his conclusion, trade can also improve welfare by lowering the volatility of income, particularly in agricultural areas; is this so in the Indian data? Second, rail, like most infrastructure, is a network, so the value of any one line depends on the rest of the system – what did the time path of the income effects look like as the network filled in?

September 2012 Working Paper (IDEAS version). No surprise, Donaldson's website mentions the paper is forthcoming in the AER. (There is a bit of a mystery here – Donaldson was on the market with this paper over four years ago. If it takes four years to get even a paper of this quality through review, something has surely gone wrong with the refereeing process in our field.)

“Pollution for Promotion,” R. Jia (2012)

Ruixue Jia is on the job market from IIES in Stockholm, and she has the good fortune of a job market topic that is very much au courant. In China, government promotions often depend both on a politician's inherent quality and on his connections to current leaders; indeed, a separate paper by Jia finds that promotion probability in China depends only on the interaction of economic growth and personal connections, not on either factor by itself. Consider, then, the following model. A mayor chooses how much costly effort to exert, and how much dirty and clean technology – complements in production – to use, with the total amount of technology available an increasing function of his effort. The mayor may personally dislike dirty technology. For any given bundle of technology, observed economic output is higher the higher the mayor's inherent quality (which he himself does not know). The central government, when deciding on promotions, observes only economic output.

Since mayors with good connections have a higher probability of being promoted at any level of output in their city, the marginal return to effort and the marginal return to dirty technology are both increasing in the mayor's connectedness. Holding his distaste for pollution fixed, a more connected mayor will mechanically want to substitute dirty for clean technology, since higher output is now more valuable for his career while the marginal cost of his distaste for pollution has not changed. Further, by a Le Chatelier argument, the higher marginal return to output raises his optimal effort, which buys a bigger technology budget – dirty tech included. To the extent that the central government cares about limiting the (unobserved) use of dirty tech, this is "almost" the standard multitasking concern: the folly of rewarding A while hoping for B. Although in this case there is, empirically, no evidence that the central government cares at all about promoting local politicians who are good for the environment!
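A stylized version of the mayor's problem, written in my notation rather than Jia's, makes the comparative static transparent:

```latex
% Mayor chooses effort e and clean/dirty technology k_c, k_d, given
% connections c, unknown quality \theta, and a promotion prize V:
\max_{e,\,k_c,\,k_d}\;
  P\!\left(\text{promoted} \mid y,\, c\right) V
  \;-\; \psi(e) \;-\; \phi(k_d)
\quad \text{s.t.} \quad
  y = \theta\, f(k_c, k_d), \qquad k_c + k_d \le B(e).
```

Here psi is the cost of effort, phi the mayor's distaste for dirty tech, B(e) the technology budget effort buys, and P is increasing in both output y and connections c. A rise in c raises the marginal value of y while leaving phi untouched, tilting the optimal mix toward k_d; and by the Le Chatelier logic above, it also raises effort and hence the budget B(e), pushing k_d up further.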

How much do local leaders increase pollution (and simultaneously speed up economic growth!) in exchange for a shot at a better job? The theory above gives us some help. We see that the same politician will substitute in dirty technology if, in some year, his old friends get on the committee that assigns promotions (the Politburo Standing Committee, or PSC, in China's case). This allows us to see the effect of the Chinese incentive system on pollution even if we know nothing about the quality of each individual politician, or whether highly connected politicians get plum jobs in low-pollution regions, since every effect we find is at the within-politician level. Using a diff-in-diff, Jia finds that in the year after a politician's old friend makes the PSC, sulfur dioxide goes up 25%, a measure of river pollution goes up by a similar amount, industrial GDP rises by 15%, and non-industrial GDP does not change. So it appears China's governance institutions really do incentivize local leaders, though whether those incentives are good or bad for welfare depends on how you trade off pollution and growth in your utility function.
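As a rough sketch of what "within-politician" means here, the regression has the shape of a two-way fixed effects diff-in-diff. The snippet below, with an entirely hypothetical dataset and invented column names, shows the form of such an estimate; it is not Jia's actual specification.

```python
# Within-politician diff-in-diff sketch: politician and year fixed effects,
# with treatment = an old friend sits on the PSC that year.
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("city_panel.csv")  # hypothetical city-year panel
# ln_so2: log SO2 emissions; friend_on_psc: 1 if the leader's patron
# has joined the Politburo Standing Committee by year t
model = smf.ols(
    "ln_so2 ~ friend_on_psc + C(politician_id) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["politician_id"]})
print(model.params["friend_on_psc"])  # approx. % change in SO2 after a friend makes the PSC
```

Because identification comes from changes within the same politician's tenure, any time-invariant politician quality, and any sorting of connected politicians into particular regions, drops out of the estimate.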

Good stuff. A quick aside: what I like about Jia's work is that she tries to do more than simply find a clever strategy for achieving internal validity. Many other recent job market stars – Dave Donaldson and Melissa Dell, for instance – have been equally good about caring for more than nice identification. But such care is rare indeed! It has been three decades since we, supposedly, "took the 'con' out of econometrics", and yet an unbearable number of papers are still floating around which quite nicely identify a relationship of interest in a particular dataset, then offer only the vaguest and most unsatisfying remarks concerning external validity. That is a much worse con than bad identification! Identification, by definition, holds only ceteris paribus: even perfect identification of some marginal effect tells me absolutely nothing about the magnitude of that effect at a different time, in a different country, or in a more general scenario. The only way – the only way! – to generalize an internally valid result, and the only way to explain why that result is what it is, is theory. A good paper places its theoretical explanation and its specific empirical case in context with other empirical work on the same general topic, rather than stopping once the identification is cleanly done. And a good empirical paper needs to explain and to generalize, because we care about unemployment (not unemployment in border counties of New Jersey in the 1990s), and we care about the effect of military training on labor supply (not the effect of the Vietnam War on labor supply in the few years following it). If we really want the credibility revolution in empirical economics to continue, let's spend less seminar and referee time worrying only about internal validity, and more time shutting down the BS that often passes for "explanation".

November 2012 working paper. Jia also has an interesting paper about the legacy of China’s treaty ports, as well as a nice paper (a la Nunn and Qian) on the importance of the potato in world history (really! I may be a biased Dorchester-born Mick, but still, the potato has been fabulously important).

