Category Archives: Growth

“Agricultural Productivity and Structural Change: Evidence from Brazil,” P. Bustos et al (2014)

It’s been a while – a month of exploration in the hinterlands of the former Soviet Union, a move up to Canada, and a visit down to the NBER Summer Institute really put a crimp in my posting schedule. That said, I have a ridiculously long backlog of posts to get up, so they will be coming rapidly over the next few weeks. I saw today’s paper presented a couple of days ago at the Summer Institute. (An aside: it’s a bit strange that there isn’t really any media at SI – the paper selection process results in a much better set of presentations than at the AEA or the Econometric Society, which simply have too long a lag from the application date to the conference, and too many half-baked papers.)

Bustos and her coauthors ask, when can improvements in agricultural productivity help industrialization? An old literature assumed that any such improvement would help: the newly rich agricultural workers would demand more manufactured goods, and since manufactured and agricultural products are complements, rising agricultural productivity would shift workers into the factories. Kiminori Matsuyama wrote a model (JET 1992) showing the problem here: roughly, if in a small open economy productivity goes up in a good you have a Ricardian comparative advantage in, then you want to produce even more of that good. A green revolution which doubles agricultural productivity in, say, Mali, while keeping manufacturing productivity the same, will allow Mali to earn twice as much selling its agricultural output overseas. Workers will then pour into the agricultural sector until the marginal product of labor is re-equated in both sectors.

Now, if you think that industrialization has a bunch of positive macrodevelopment spillovers (via endogenous growth, population control or whatever), then this is worrying. Indeed, it vaguely suggests that making villages more productive, an outright goal of a lot of RCT-style microdevelopment studies, may actually be counterproductive for the country as a whole! That said, there seems to be something strange going on empirically, because we do appear to see industrialization in countries after a Green Revolution. What could be going on? Let’s look back at the theory.

Implicitly, the increase in agricultural productivity in Matsuyama was “Hicks-neutral” – it increased the total productivity of the sector without affecting the relative marginal factor productivities. A lot of technological change, however, is factor-biased; to take two examples from Brazil, modern techniques that allow for double harvesting of corn each year increase the marginal productivity of land, whereas “Roundup Ready” GE soy that requires less tilling and weeding increases the marginal productivity of farmers. We saw above that Hicks-neutral technological change in agriculture increases labor in the farm sector: workers choosing where to work means that the world price of agriculture times the marginal product of labor in that sector must equal the world price of manufacturing times the marginal product of labor in manufacturing. A Hicks-neutral improvement in agricultural productivity raises the MPL in that sector no matter how much land or labor is currently being used, hence wage equality across sectors requires workers to leave the factory for the farm.

What of biased technological change? As before, the only thing we need to know is whether the technological change increases the marginal product of labor. Land-augmenting technical change, like double harvesting of corn, means a country can produce the same amount of output with the old amount of farm labor and less land. If one more worker shifts from the factory to the farm, she will be working land that is effectively more abundant than before the technological change, so her marginal product of labor is higher than before the change, and she will indeed make the move. Land-augmenting technological change always increases the amount of agricultural labor. What about farm-labor-augmenting technological change like GM soy? If land and labor are not very complementary (imagine, in the limit, that they are perfect substitutes in production), then trivially the marginal product of labor increases following the technological change, and hence the number of farm workers goes up. The situation is quite different if land and farm labor are strong complements. Where previously we had 1 effective worker per unit of land, following the labor-augmenting technology change it is as if we have, say, 2 effective workers per unit of land. Strong complementarity implies that, at that point, adding even more labor to the farms is pointless: the marginal product of labor is decreasing in the technological level of farm labor. Therefore, labor-augmenting technology with a strongly complementary agricultural production function shifts labor off the farm and into manufacturing.
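To see the mechanics, here is a minimal numerical sketch – my own illustration using an assumed CES production function and made-up parameter values, not the model in the paper. It simply computes the marginal product of farm labor before and after each kind of technological change, once with land and labor as fairly good substitutes and once as strong complements:

```python
def farm_output(T, L, A_T=1.0, A_L=1.0, share=0.5, rho=0.5):
    """CES agricultural output from land T and labor L; the elasticity of
    substitution between the two factors is 1/(1-rho)."""
    return (share * (A_T * T) ** rho + (1 - share) * (A_L * L) ** rho) ** (1 / rho)

def mpl(T, L, eps=1e-6, **kwargs):
    """Marginal product of farm labor, by finite differences."""
    return (farm_output(T, L + eps, **kwargs) - farm_output(T, L, **kwargs)) / eps

T, L = 1.0, 1.0
cases = [(0.5, "substitutes (elasticity 2)"), (-4.0, "strong complements (elasticity 0.2)")]
for rho, label in cases:
    base = mpl(T, L, rho=rho)
    hicks = mpl(T, L, A_T=2.0, A_L=2.0, rho=rho)   # Hicks-neutral: both factors augmented (doubles TFP under constant returns)
    land = mpl(T, L, A_T=2.0, rho=rho)             # land-augmenting (double cropping)
    labor = mpl(T, L, A_L=2.0, rho=rho)            # labor-augmenting (GM soy)
    print(f"{label}: MPL base {base:.2f}, Hicks {hicks:.2f}, "
          f"land-augmenting {land:.2f}, labor-augmenting {labor:.2f}")
```

Only the last case – labor-augmenting change under strong complementarity – pushes the marginal product of farm labor down, which is exactly the configuration that sends workers to the factories.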

That’s just a small bit of theory, but it really clears things up. And even better, the authors find empirical support for this idea: following the introduction to Brazil of labor-augmenting GM soy and land-augmenting double harvesting of maize, agricultural productivity rose everywhere, the agricultural employment share rose in areas that were particularly suitable for modern maize production, and the manufacturing employment share rose in areas that were particularly suitable for modern soy production.

August 2013 working paper. I think of this paper as a nice complement to the theory and empirics in Acemoglu’s Directed Technical Change and Walker Hanlon’s Civil War cotton paper. Those papers ask how changes in factor prices endogenously affect the development of different types of technology, whereas Bustos and coauthors ask how the exogenous development of different types of technology affects the use of various factors. I read the former as most applicable to structural change questions in countries at the technological frontier, and the latter as appropriate for similar questions in developing countries.

Debraj Ray on Piketty’s Capital

As mentioned by Sandeep Baliga over at Cheap Talk, Debraj Ray has a particularly interesting new essay on Piketty’s Capital in the 21st Century. If you are theoretically inclined, you will find Ray’s essay to be one of the few reviews of Piketty that proves insightful.

I have little to add to Ray, but here are four comments about Piketty’s book:

1) The data collection effort on inequality by Piketty and coauthors is incredible and supremely interesting; not for nothing does Saez-Piketty 2003 have almost 2000 citations. Much of this data can be found in previous articles, of course, but it is useful to have it all in one place. Why it took so long for this data to become public, compared to things like GDP measures, is an interesting question which the sociologist Dan Hirschman is currently working on. Incidentally, the data quality complaints by the Financial Times seem to me of rather limited importance to the overall story.

2) The idea that Piketty is some sort of outsider, as many in the media want to make him out to be, is very strange. His first job was at literally the best mainstream economics department in the entire world, he won the prize given to the best young economist in Europe, he has published a paper in a Top 5 economics journal every other year since 1995, his most frequent coauthor is at another top mainstream department, and that coauthor himself won the prize for the best young economist in the US. It is also simply not true that economists only started caring about inequality after the 2008 financial crisis; rather, Autor and others were writing on inequality well before that date, in response to increasingly clear evidence that the “Great Compression” of the income distribution in the developed world during the middle of the 20th century had begun to reverse itself sometime in the 1970s. Even I coauthored a review of income inequality data in late 2006/early 2007!

3) As Ray points out quite clearly, the famous “r>g” of Piketty’s book is not an explanation for rising inequality. There are lots of standard growth models – indeed, all standard growth models that satisfy dynamic efficiency – where r>g holds with no impact on the income distribution. Ray gives the Harrod model: let output be produced solely by capital, and let the capital-output ratio be constant. Then Y=r*K, where r is the return to capital net of depreciation, so the capital-output ratio is K/Y=1/r. Now savings in excess of that necessary to replace depreciated assets is K(t+1)-K(t), or

Y(t+1)[K(t+1)/Y(t+1)] – Y(t)[K(t)/Y(t)]

Holding the capital-output ratio constant, this equals [Y(t+1)-Y(t)]*(K/Y) = g*Y*(K/Y), so the savings rate is s=g*(K/Y), where g is the growth rate of the economy. Finally, since K/Y=1/r in the Harrod model, we have that s=g/r, and hence r>g will hold in a Harrod model whenever the savings rate is less than 100% of current income. This model, however, has nothing to do with the distribution of income. Ray notes that the Phelps-Koopmans theorem implies that a similar r>g result will hold along any dynamically efficient growth path in much more general models.
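As a quick sanity check on that algebra, here is a minimal numeric illustration (my own, with arbitrarily chosen values of r and g, not anything from Ray’s essay): with the savings rate set to g/r, the capital-output ratio stays put and the economy grows at g even though r>g throughout.

```python
# Toy Harrod accounting: Y = r*K, so K/Y = 1/r. With a savings rate of s = g/r,
# the capital stock, and hence output, grows at exactly rate g while r > g.
r, g = 0.05, 0.02          # illustrative return to capital and growth rate
s = g / r                  # savings rate consistent with a constant capital-output ratio
K = 100.0
for t in range(3):
    Y = r * K
    K_next = K + s * Y     # net saving adds to the capital stock
    print(f"t={t}: K/Y = {K / Y:.1f}, growth of K = {K_next / K - 1:.4f}")
    K = K_next
```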

You may wonder, then, how we can have r>g and yet not have exploding income held by the capital-owning class. Two reasons: first, as Piketty has pointed out, r in these economic models (the return to capital, full stop) and r in the sense important to growing inequality are not the same concept, since wars and taxes lower the r received by savers. Second, individuals presumably also dissave according to some maximization concept. Imagine an individual has $1 billion, the risk-free market return after taxes is 4%, and the economy-wide growth rate is 2%, with both numbers exogenously holding forever. It is of course true that this individual could increase their share of the economy’s wealth without bound by reinvesting all of their returns. Even with the caveat that as the capital-owning class owns more and more, surely the portion of r due to time preference, and hence r itself, will decline, we still oughtn’t conclude that income inequality will become worse or that capital income will increase. If this representative rich individual simply consumes 1.92% of their wealth each year – a savings rate of over 98 percent! – the fortune compounds at 1.04 times 0.9808, or almost exactly 2% per year, so the ratio of income among the idle rich to national income will remain constant. What’s worse, if some of the savings is directed to human capital rather than physical capital, as is clearly true for the children of the rich in the US, the ratio of capital income to overall income will be even less likely to grow.
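To make the dissaving point concrete, here is a tiny simulation (my own numbers, with the 1.92% read as a share of wealth inclusive of the year’s return): the dynasty’s capital income stays an essentially fixed sliver of national income even though r>g forever.

```python
# A dynasty earning r = 4% in an economy growing at g = 2%. Consuming 1.92% of
# wealth (after returns) each year means keeping 98.08% of it, and
# 1.04 * 0.9808 ~= 1.02, so the fortune grows in line with national income.
r, g, consume_share = 0.04, 0.02, 0.0192
wealth, national_income = 1e9, 1e11
print(f"capital income / national income at the start: {wealth * r / national_income:.6f}")
for year in range(50):
    wealth = wealth * (1 + r) * (1 - consume_share)
    national_income *= 1 + g
print(f"capital income / national income after 50 years: {wealth * r / national_income:.6f}")
```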

These last couple paragraphs are simply an extended argument that r>g is not a “Law” that says something about inequality, but rather a starting point for theoretical investigation. I am not sure why Piketty does not want to do this type of investigation himself, but the book would have been better had he done so.

4) What, then, does all this mean about the nature of inequality in the future? Ray suggests an additional law: that there is a long-run tendency for capital to replace labor. This is certainly true, particularly if human capital is counted as a form of “capital”. I disagree with Ray about the implication of this fact, however. He suggests that “to avoid the ever widening capital-labor inequality as we lurch towards an automated world, all its inhabitants must ultimately own shares of physical capital.” Consider the 19th century as a counterexample. There was enormous technical progress in agriculture. If you wanted a dynasty that would be rich in 2014, ought you have invested in agricultural land? Surely not. There has been enormous technical progress in RAM chips and hard drives in the last couple decades. Is the capital related to those industries where you ought to have invested? No. With rapid technical progress in a given sector, the share of total income generated by that sector tends to fall (see Baumol). Even when the share of total income is high, the social surplus of technical progress is shared among various groups according to the old Ricardian rule: rents accrue to the (relatively) fixed factor! Human capital which is complementary to automation, or goods which can maintain a partial monopoly in an industry complementary to those affected by automation, are much likelier sources of riches than owning a bunch of robots, since robots and the like are replicable and hence the rents accrued to their owners, regardless of the social import, will be small.

There is still a lot of work to be done concerning the drivers of long-run inequality, by economists and by those more concerned with political economy and sociology. Piketty’s data, no question, is wonderful. Ray is correct that the so-called Laws in Piketty’s book, and the predictions about the next few decades that they generate, are of less interest.

Ray’s essay, “A Comment on Thomas Piketty,” inclusive of appendix, is available in pdf form; a modified version in html can be read here.

“On the Origin of States: Stationary Bandits and Taxation in Eastern Congo,” R. S. de la Sierra (2013)

The job market is yet again in full swing. I won’t be able to catch as many talks this year as I would like to, but I still want to point out a handful of papers that I consider particularly illuminating. This article, by Columbia’s de la Sierra, absolutely fits that category.

The essential question is, why do states form? Would that all young economists interested in development put their effort toward such grand questions! The old Rousseauian idea you learned your first year of college, where individuals come together voluntarily for mutual benefit, seems contrary to lots of historical evidence. Instead, war appears to be a prime mover for state formation; armed groups establish a so-called “monopoly on violence” in an area for a variety of reasons, and proto-state institutions evolve. This basic idea is widespread in the literature, but it is still not clear which conditions within an area lead armed groups to settle rather than to pillage. Further, examining these ideas empirically seems quite problematic for two reasons: first, because states themselves are the ones who collect data, we rarely observe anything before states have formed; and second, most of the planet has long since been under the rule of a state (with apologies to James Scott!).

De la Sierra brings some economics to this problem. What is the difference between pillaging and sustained state-like forms? The pillager can only extract assets on its way through, while the proto-state can establish “taxes”. What taxes will it establish? If the goal is long-run revenue maximization, Ramsey long ago told us that it is optimal to tax factors that are supplied inelastically. If labor can flee, but the output of the mine cannot, then you ought to tax the output of the mine highly and set a low poll tax. If labor supply is inelastic but output can be hidden from the taxman, then use a high poll tax. Thus, when will bandits form a state instead of just pillaging? When there is a factor which can be dynamically taxed at such a rate that the discounted tax revenue exceeds what can be pillaged today. Note that the ability to, say, restrict movement along roads, or to expand output through state-owned capital, changes the relevant tax elasticities, so at a more fundamental level, rebel capacities along these margins are also important (and I imagine that extending de la Sierra’s paper will involve the evolutionary development of these types of capacities).
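The tradeoff can be put into a toy calculation – my own stylized illustration with invented numbers, not the model in the paper – in which an armed group compares a one-shot grab of everything it can carry off against a discounted stream of taxes on whatever it can actually observe and tax:

```python
def best_option(taxable_output, hidable_output, labor_tax=2.0,
                plunder_share=3.0, survival_prob=0.8, discount=0.9):
    """Return 'state' if discounted taxation beats a one-shot pillage.
    Only output that cannot be hidden (think coltan near an airstrip) plus a poll
    tax on labor can be taxed; hidable output (think gold) can only be pillaged."""
    pillage_value = plunder_share * (taxable_output + hidable_output)
    effective_discount = survival_prob * discount       # the group may be driven out
    taxation_value = (taxable_output + labor_tax) / (1 - effective_discount)
    return "state" if taxation_value > pillage_value else "pillage"

# A coltan price boom raises taxable output and tips the group toward statehood;
# a gold price boom raises only hidable output and leaves pillaging more attractive.
print(best_option(taxable_output=40.0, hidable_output=5.0))    # -> state
print(best_option(taxable_output=5.0, hidable_output=40.0))    # -> pillage
```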

This is really an important idea. It is not that there is a tradeoff between producing and pillaging. Instead, there is a three-way tradeoff between producing in your home village, joining an armed group to pillage, and joining an armed group that taxes like a state! The armed group that taxes will, as a result of its desire to increase tax revenue, perhaps introduce institutions that increase production in the area under its control. And to the extent that institutions persist, short-run changes that cause potential bandits to form taxing relationships may actually lead to long-run increases in productivity in a region.

De la Sierra goes a step beyond theory, investigating these ideas empirically in the Congo. Eastern Congo during and after the Second Congo War was characterized by a number of rebel groups that occasionally just pillaged, but occasionally formed stable tax relationships with villages that could last for years. That is, the rebels occasionally implemented something looking like states. The theory above suggests that exogenous changes in the ability to extract tax revenue (over a discounted horizon) will shift the rebels from pillagers to proto-states. And, incredibly, there were a number of interesting exogenous changes that had exactly that effect.

The prices of coltan and gold both suffered price shocks during the war. Coltan is heavy, hard to hide, and must be shipped by plane in the absence of roads. Gold is light, easy to hide, and can simply be carried from the mine on jungle footpaths. When the price of coltan rises, the maximal tax revenue of a state increases since taxable coltan production is relatively inelastic. This is particularly true near airstrips, where the coltan can actually be sold. When the price of gold increases, the maximal tax revenue does not change much, since gold is easy to hide, and hence the optimal tax is on labor rather than on output. An exogenous rise in coltan prices should encourage proto-state formation in areas with coltan, then, while an exogenous rise in gold prices should have little impact on the pillage vs. state tradeoff. Likewise, a government initiative to root out rebels (be they stationary or pillaging) decreases the expected number of years a proto-state can extract rents, hence makes pillaging relatively more lucrative.

How to confirm these ideas, though, when there was no data collected on income, taxes, labor supply, or proto-state existence? Here is the crazy bit – 11 locals were hired in Eastern Congo to travel to a large number of villages, spend a week there querying families and village elders about their experiences during the war, the existence of mines, etc. The “state formation” in these parts of Congo is only a few years in the past, so it is at least conceivable that memories, suitably combined, might actually be reliable. And indeed, the data do seem to match aggregate trends known to monitors of the war. What of the model predictions? They all seem to hold, and quite strongly: the ability to extract more tax revenue is important for proto-state formation, and areas where proto-states existed do appear to have retained higher productive capacity years later, perhaps as a result of the proto-institutions those states developed. Fascinating. Even better, because there is a proposed mechanism rather than an identified treatment effect, we can have some confidence that these results are, to some extent, externally valid!

December 2013 working paper (No IDEAS page). You may wonder what a study like this costs (particularly if you are, like me, a theorist using little more than chalk and a chalkboard); I have no idea, but de la Sierra’s CV lists something like a half million dollars of grants, an incredible total for a graduate student. On a personal level, I spent a bit of time in Burundi a number of years ago, including visiting a jungle camp where rebels from the Second Congo War were still hiding. It was pretty amazing how organized even these small groups were in the areas they controlled; there was nothing anarchic about it.

“Back to Basics: Basic Research Spillovers, Innovation Policy and Growth,” U. Akcigit, D. Hanley & N. Serrano-Velarde (2013)

Basic and applied research, you might imagine, differ in a particular manner: basic research has unexpected uses in a variety of future applied products (though it sometimes has immediate applications), while applied research is immediately exploitable but has fewer spillovers. An interesting empirical fact is that a substantial portion of firms report that they do basic research, though subject to a caveat I will mention at the end of this post. Further, you might imagine that basic and applied research are complements: success in basic research in a given area expands the size of the applied ideas pond which can be fished by firms looking for new applied inventions.

Akcigit, Hanley and Serrano-Velarde take these basic facts and, using some nice data from French firms, estimate a structural endogenous growth model with both basic and applied research. Firms hire scientists, then put them to work on basic or applied research, where the basic research “increases the size of the pond” and occasionally is immediately useful in a product line. The government does “Ivory Tower” basic research which increases the size of the pond but which is never immediately applied. The authors give differential equations for this model along a balanced growth path, have the government perform research equal to 0.5% of GDP as in existing French data, and estimate the remaining structural parameters like innovation spillover rates, the mean “jump” in productivity from an innovation, etc.

The pretty obvious benefit of structural models as compared to estimating simple treatment effects is counterfactual analysis, particularly welfare calculations. (And if I may make an aside, the argument that structural models are too assumption-heavy and hence non-credible is nonsense. If the mapping from existing data to the actual questions of interest is straightforward, then surely we can write a straightforward model generating that external validity. If the mapping from existing data to the actual question of interest is difficult, then it is even more important to formally state what mapping you have in mind before giving policy advice. Just estimating a treatment effect off some particular dataset and essentially ignoring the question of external validity because you don’t want to take a stand on how it might operate makes me wonder why I, the policymaker, should take your treatment effect seriously in the first place. It seems to me that many in the profession already take this stance – Deaton, Heckman, Whinston and Nevo, and many others have published papers on exactly this methodological point – and therefore a decade from now, you will find it just as tough to publish a paper that doesn’t take external validity seriously as it is to publish a paper with weak internal identification today.)

Back to the estimates: the parameters here suggest that the main distortion is not that firms perform too little R&D, but that they misallocate between basic and applied R&D; the basic R&D spills over to other firms by increasing the “size of the pond” for everybody, hence it is underperformed. This spillover, estimated from data, is of substantial quantitative importance. The problem, then, is that uniform subsidies like R&D tax credits will just increase total R&D without alleviating this misallocation. I think this is a really important result (and not only because I have a theory paper myself, coming at the question of innovation direction from the patent race literature rather than the endogenous growth literature, which generates essentially the same conclusion). What you really want to do to increase welfare is increase the amount of basic research performed. How to do this? Well, you could give heterogeneous subsidies to basic and applied research, but this would require firms to report correctly which type of research they are doing, a very difficult moral hazard problem. Alternatively, you could just do more research in academia, but if this is never immediately exploited, it is less useful than the basic research performed in industry which at least sometimes is used in products immediately (by assumption); shades of Aghion, Dewatripont and Stein (2008 RAND) here. Neither policy performs particularly well.
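The misallocation point can be seen in a toy spillover game – entirely my own construction, with an invented payoff function and spillover parameter, not the paper’s structural model: each firm ignores the fact that its basic research enlarges the pond for everyone, so the equilibrium share of basic research falls short of the planner’s, and scaling everyone’s R&D uniformly would not change the split.

```python
import numpy as np

# Two firms each split one unit of research effort between basic (b) and applied (1-b).
# Applied effort pays off in proportion to the size of the "pond", which grows with
# TOTAL basic research done by both firms.
gamma = 2.0                                    # assumed strength of the basic-research spillover
grid = np.linspace(0.0, 1.0, 10001)

def profit(b_own, b_other):
    return (1 - b_own) * (1 + gamma * (b_own + b_other))

# Symmetric Nash equilibrium: iterate each firm's best response on a grid.
b_eq = 0.5
for _ in range(200):
    b_eq = grid[np.argmax(profit(grid, b_eq))]

# The planner picks a common b, internalizing the spillover on the other firm.
b_planner = grid[np.argmax(2 * profit(grid, grid))]

print(f"equilibrium basic share: {b_eq:.3f}   planner's basic share: {b_planner:.3f}")
```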

I have two small quibbles. First, basic research in the sense reported by national statistics following the Frascati manual is very different from basic research in the sense of “research that has spillovers”; there is a large literature on this problem, and it is particularly severe when it comes to service sector work and process innovation. Second, the authors suggest at one point that Bayh-Dole style university licensing of research is a beneficial policy: when academic basic research can now sometimes be immediately applied, we can easily target the optimal amount of basic research by increasing academic funding and allowing academics to license. But this prescription ignores the main complaint about Bayh-Dole, which is that academics begin, whether for personal or institutional reasons, to shift their work from high-spillover basic projects to low-spillover applied projects. That is, it is not obvious the moral hazard problem concerning targeting of subsidies is any easier at the academic level than at the private firm level. In any case, this paper is very interesting, and well worth a look.

September 2013 Working Paper (RePEc IDEAS version).

“The African Growth Miracle,” A. Young (2013)

Alwyn Young, well known for his empirical work on growth, has finally published his African Growth paper in the new issue of the JPE. Africa is quite interesting right now. Though it is still seen by much of the public as a bit of a basket case, the continent seems to be by-and-large booming. At least to the “eye test”, it has been doing so for some time now, to some extent in the 1990s but much more so in the 2000s. I remember visiting Kigali, Rwanda for the first time in 2008; this is a spotless, law-abiding city with glass skyscrapers downtown housing multinational companies. Not what you may have expected!

What is interesting, however, is that economic statistics have until very recently still shown African states growing much slower than other developing countries. A lot of economic data from the developing world is of poor quality, but Young notes that for many countries, it is literally non-existent: those annual income per capita tables you see in UN data and elsewhere involve pretty heroic imputation. Can we do better? Young looks at the Demographic and Health Survey, an irregular set of surveys covering dozens of poor countries between 1990 and 2006. This survey covers age, family size, education level and some consumption (“do you have a bicycle?”, “do you have a non-dirt floor?”). What you see immediately is that, across many items, the growth rate in consumption in the African states surveyed more or less matches the growth rate in non-African developing countries, despite official statistics suggesting the non-African states have seen private consumption growing at a much faster clip.

Can growth in real consumption be backed out of such statistics? The DHS is nice in that it, in some countries and years, includes wages. The basic idea is the following: consumption of normal goods rises with income, and income rises with education, so consumption of normal goods should rise with education. I can estimate very noisy Engel curves linking consumption to education, and using the parts of the sample where wage data exists, a Mincerian regression with a whole bunch of controls gives us some estimate of the link between a year of education and income: on average, it is on the order of 11 percent. We now have a method to go from changes in consumption of individual items to implied changes in education-equivalents, and from there to implied changes in real income. Of course, this estimate is very noisy. Young uses a properly specified maximum likelihood function with random effects to show how outliers or noisy series should be weighted when averaging the estimates of real income changes implied by each individual product; indeed, a simple average of the estimated real consumption growth from each individual product gives a wildly optimistic growth rate, so such econometric techniques are quite necessary.
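Here is a deliberately stylized sketch of that chain of inference, with simulated data, a single durable good, and plain OLS – nothing like Young’s random-effects maximum likelihood machinery, but it shows how an ownership change plus an Engel slope plus a Mincer return turns into a real growth rate:

```python
import numpy as np

rng = np.random.default_rng(0)
mincer_return = 0.11              # proportional income gain per year of education (from the wage sample)
true_income_growth = 0.035        # annual growth rate the exercise should recover
years_between_surveys = 16

# Assumed "true" Engel curve: the probability of owning a bicycle rises with education.
def ownership_rate(mean_education):
    return 0.2 + 0.04 * mean_education

# Step 1: cross-sectional Engel slope, estimated from household variation in education.
educ = rng.uniform(0, 12, 5000)
owns = (rng.random(5000) < ownership_rate(educ)).astype(float)
engel_slope = np.polyfit(educ, owns, 1)[0]

# Step 2: the observed change in ownership between the two survey rounds.
educ_equiv_change = true_income_growth * years_between_surveys / mincer_return
delta_ownership = ownership_rate(4 + educ_equiv_change) - ownership_rate(4)

# Step 3: invert -- ownership change -> education-equivalents -> annual real income growth.
implied_growth = (delta_ownership / engel_slope) * mincer_return / years_between_surveys
print(f"implied annual real consumption growth: {implied_growth:.3f}")
```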

What, then, does this heavy lifting give us? Real consumption in countries in the African sample grew 3.4% per household per annum in 1990-2006, versus 3.8% in developing countries outside Africa. This is contra 1% in African and 2% in non-African countries, using the same sample of countries, in other prominent international data sources. Now, many of these countries are not terribly far from subsistence, so it is impossible for most African states to have been growing at this level throughout the 70s and 80s as well, but at least for the 90s, consumption microdata suggests a far rosier past two decades on the continent than many people imagine. Clever.

Final working paper (IDEAS version). I am somehow drawing a blank on the name of the recent book covering the poor quality of developing world macro data – perhaps a commenter can add this for me.

“Railroads of the Raj: Estimating the Impact of Transportation Infrastructure,” D. Donaldson (2013)

Somehow I’ve never written about Dave Donaldson’s incredible Indian railroad paper before; as it has a fair claim to being the best job market paper of the past few years, it’s time to rectify that. I believe Donaldson spent eight years at LSE working on his PhD, largely made up of this paper. And that time led to a well-received result: in addition to conferences, a note on the title page mentions that the paper has been presented at Berkeley, BU, Brown, Chicago, Harvard, the IMF, LSE, MIT, the Minneapolis Fed, Northwestern, Nottingham, NYU, Oxford, Penn, Penn State, the Philly Fed, Princeton, Stanford, Toronto, Toulouse, UCL, UCLA, Warwick, the World Bank and Yale! So we can safely say, this is careful and well-vetted work.

Donaldson’s study considers the importance of infrastructure to development; it is, in many ways, the opposite of the “small changes”, RCT-based development literature that was particularly en vogue in the 2000s. Intuitively, we all think infrastructure is important, both for improving total factor productivity and for improving market access. The World Bank, for instance, spends 20 percent of its funds on infrastructure, more than “education, health, and social services combined.” But how important is infrastructure spending anyway? That’s a pretty hard question to define, let alone answer.

So let’s go back to one of the great infrastructure projects in human history: the Indian railroad during the British Raj. The British built over 67,000 km of rail in a country with few navigable rivers. They also, luckily for the economist, were typically British in the enormous number of price, weather, and rail shipment statistics they collected. Problematically for the economist, these statistics tended to be hand-written in weathered documents hidden away in the back rooms of India’s bureaucratic state. Donaldson nonetheless collected almost 1.5 million individual pieces of data from these weathered tomes. Now, you might think, let’s just regress average incomes on new rail access, use some IV to make sure that rail lines weren’t endogenous, and be done with it. Not so fast! First, there’s no district-level income per capita data for India in the 1800s! And second, we can use some theory to really tease out why infrastructure matters.

Let’s use four steps. First, try to estimate how much rail access lowered trade costs per kilometer; if a good is made in only one region, then theory suggests that the trade cost between regions is just the price difference of that commodity across regions. Even if we had shipping receipts, they wouldn’t be sufficient; bandits, spoilage, and all the rest of Samuelson’s famous “iceberg” costs matter as well. Second, check whether lowered trade costs actually increased trade volume, and at what elasticity, using rainfall as a proxy for local productivity shocks. Third, note that even though we don’t have income data, theory tells us that for agricultural workers, percentage changes in total production per unit of land, deflated by a local price index, are equivalent to percentage changes in real income per unit of land. Therefore, we can check in a reduced form way whether new rail access increases real incomes, though we can’t say why. Fourth, in Donaldson’s theoretical model (an extension, more or less, of Eaton and Kortum’s Ricardian model), trade costs, differences in region sizes, and productivity shocks in all regions interact to affect local incomes, but they all act through a sufficient statistic: the share of consumption that consists of local products. That is, if we do our regression testing for the impact of rail access on real income changes, but control for changes in the share of consumption from within the district, we should see no effect from rail access.

Now, these stages are tough. Donaldson constructs a network of rail, road and river routes using 19th century sources linked on GIS, and traces out the least-cost paths from any one district to another. He then non-linearly estimates the relative cost per kilometer of rail, sea, river and road transport using the prices of eight types of salt, each of which was sold across British India but produced in only a single location. He then finds that lowered trade costs do appear to raise trade volumes with quite high elasticity. The reduced form regression suggests that access to the Indian railway increased local incomes by an average of 16 percent (Indian real incomes per capita increased only 22 percent during the entire period 1870 to 1930, so 16 percent locally is substantial). Using the “trade share” sufficient statistic described above, Donaldson shows that almost all of that increase was due to lowered trade costs rather than internal migration or other effects. Wonderful.
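A toy version of that first stage, with invented prices, fixed routes, and ordinary least squares (Donaldson instead solves for least-cost paths and estimates the mode costs non-linearly), shows how the log price gap for a good produced in a single place identifies per-kilometer costs by mode:

```python
import numpy as np

# Salt is produced at one origin, so the log price gap at each destination equals the
# cost of the route to it. Columns: kilometers of rail and of road on each route.
route_km = np.array([[300.0,  50.0],
                     [120.0, 200.0],
                     [500.0,  20.0],
                     [ 80.0, 400.0]])
true_cost_per_km = np.array([0.0005, 0.002])     # rail assumed far cheaper than road
rng = np.random.default_rng(1)
log_gap = route_km @ true_cost_per_km + rng.normal(0, 0.01, len(route_km))

# Regress the log price gap on route mileage by mode to recover the per-km costs.
estimated, *_ = np.linalg.lstsq(route_km, log_gap, rcond=None)
print(f"estimated cost per km (rail, road): {np.round(estimated, 4)}")
```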

This paper is a great exercise in the value of theory for empiricists. Theory is meant to be used, not tested. Here, fairly high-level trade theory – literally the cutting edge – was deployed to coax an answer to a super important question even though an atheoretical approach could have told us nothing (remember, there isn’t even any data on income per capita to use!). The same theory also allowed Donaldson to explain the effect, rather than just state it, a feat far more interesting to those who care about external validity. Two more exercises would be nice, though; first, and Donaldson notes this in the conclusion, trade can also improve welfare by lowering the volatility of income, particularly in agricultural areas. Is this so in the Indian data? Second, rail, like lots of infrastructure, is a network – what did the time trend in income effects look like?

September 2012 Working Paper (IDEAS version). No surprise, Donaldson’s website mentions this is forthcoming in the AER. (There is a bit of a mystery – Donaldson was on the market with this paper over four years ago. If we need four years to get even a paper of this quality through the review process, something has surely gone wrong with the review process in our field.)

“What Determines Productivity?,” C. Syverson (2011)

Chad Syverson, along with Nick Bloom, John van Reenen, Pete Klenow and many others, has been at the forefront of a really interesting new strand of the economics literature: persistent differences in productivity. Syverson looked at productivity differences within 4-digit SIC industries in the US (quite narrow industries like “Greeting Cards” or “Industrial Sealants”) a number of years back, and found that in the average industry, the 90-10 ratio of total factor productivity across plants was almost 2. That is, the top decile plant in the average industry produced twice as much output as the bottom decile plant, using exactly the same inputs! Hsieh and Klenow did a similar exercise in China and India and found even starker productivity differences, largely due to a big left tail of very low productivity firms. This basic result is robust to different measures of productivity, and to different techniques for identifying differences; you can make assumptions which let you recover a Solow residual directly, or run a regression (adjusting for differences in labor and capital quality, or not), or look at deviations like firms having higher marginal productivity of labor than the wage rate, etc. In the paper discussed in this post, Syverson summarizes the theoretical and empirical literature on persistent productivity differences.
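The measurement itself is easy to sketch – a simulated industry and a simple Cobb-Douglas Solow residual, chosen to make the point rather than to mimic Syverson’s preferred estimators:

```python
import numpy as np

# Simulated plants in one narrow industry: log TFP = log Y - alpha*log K - (1-alpha)*log L.
rng = np.random.default_rng(42)
n_plants, alpha = 500, 0.3
log_k = rng.normal(5.0, 1.0, n_plants)
log_l = rng.normal(4.0, 0.8, n_plants)
true_log_tfp = rng.normal(0.0, 0.25, n_plants)     # assumed dispersion in productivity
log_y = true_log_tfp + alpha * log_k + (1 - alpha) * log_l

log_tfp = log_y - alpha * log_k - (1 - alpha) * log_l
p90, p10 = np.percentile(log_tfp, [90, 10])
print(f"90-10 TFP ratio within the industry: {np.exp(p90 - p10):.2f}")   # roughly 2
```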

Why aren’t low productivity firms swept from the market? We know from theory that if entry is allowed, potentially infinite and instantaneous, then no firm can remain which is less productive than the entrants. This suggests that persistence of inefficient firms must result from either limits on entry, limits on expansion by efficient firms, or non-immediate efficiency because of learning-by-doing or similar (a famous study by Benkard of a Lockheed airplane showed that a plant could produce a plane with half the labor hours after producing 30, and half again after producing 100). Why don’t inefficient firms already in the market adopt best practices? This is related to the long literature on diffusion, which Syverson doesn’t cover in much detail, but essentially it is not obvious to a firm whether a “good” management practice at another firm is actually good or not. Everett Rogers, in his famous “Diffusion of Innovations” book, refers to a great example of this from Peru in the 1950s. A public health consultant was sent for two years to a small village, and tried to convince the locals to boil their water before drinking it. The water was terribly polluted and the health consequences of not boiling were incredible. After two years, only five percent of the town adopted the “innovation” of boiling. Some didn’t adopt because it was too hard, many didn’t adopt because of a local belief system that suggested only the already-sick ought to drink boiled water, some didn’t adopt because they didn’t trust the experience of the advisor, et cetera. Diffusion is difficult.

Ok, so given that we have inefficient firms, what is the source of the inefficiency? It is difficult to decompose all of the effects. Learning-by-doing is absolutely relevant in many industries – we have plenty of evidence on this count. Nick Bloom and coauthors seem to suggest that management practices play a huge role. They have shown clear correlation between “best practice” management and high TFP across firms, and a recent randomized field experiment in India (discussed before on this site) showed massive impacts on productivity from management improvements. Regulation and labor/capital distortions also appear to play quite a big role. On this topic, James Schmitz wrote a very interesting paper, published in 2005 in the JPE, on iron ore producers. TFP in Great Lakes ore had been more or less constant for many decades, with very little entry or foreign competition until the 1980s. Once Brazil began exporting ore to the US, labor productivity doubled within a handful of years, and capital and total factor productivity also soared. A main driver of the change was more flexible workplace rules.

Final version in the 2011 JEL (IDEAS version). Syverson was at Kellogg recently presenting a new paper of his, with an all-star cast of coauthors, on the medical market. It’s well worth reading. Medical productivity is similarly heterogeneous, and since the medical sector is coming up on 20% of GDP, the sources of inefficiency in medicine are particularly important!

“Chinese Economic Performance in the Long Run,” A. Maddison (2007)

Many economists know the rough contours of Western economic history well. Real income of unskilled laborer and farmer households was at no time and in no place more than, at best, three times subsistence income (see Scheidel for a nice summary of this evidence). Peaks in per capita GDP were reached in the heyday of ancient Rome and the early Arab caliphate. Regional regression was nothing strange – Europe in 1000 was using less advanced technology in many cases than the Romans had, credit markets were essentially nonexistent, long-distance or even regional trade had dried up, and no city in Europe existed with a population of even 10,000 people at the turn of the millennium. Living standards begin to rise slowly after the Black Death, first in Renaissance Italy, and then in the Netherlands and England. The Industrial Revolution finally severs the Malthusian noose by the mid-1800s, when living standards for most members of society begin to rise from their historical norm.

But what of China? Before he died, one of Angus Maddison’s final projects was compiling data on historic China. In Chinese culture, the classic periods in history are the Tang and Song dynasties, roughly from the 7th to the 12th centuries, with brief interludes, and perhaps the late Yuan and early Ming, from the late 1200s to the late 1400s. Did China escape the Malthusian curse? It did not. It seems likely that incomes were roughly at subsistence until the Tang dynasty in the 9th century, when income per capita rose perhaps 30 percent. That peak would not be seen again until around 1970!

Now, in a Malthusian world, you can still grow, or be more advanced economically, but that growth is eaten up by population growth. The main pattern in China seems to be a massive shift of population toward the south, meaning south of the Yangtse, after the beginning of the Song dynasty. Woodblock printing, allowing for the dissemination of guides to more productive agriculture, appeared in this era. Chinese agriculture appears to have been much more advanced than that of Europe or India; indeed, more of China’s farmland was irrigated in 1400 than America’s today, and not until the 20th century did Europe reach grain yields seen in China in 1400. If you know your Joseph Needham, you know much of this is driven by Chinese agricultural inventions like the curved mouldboard and the use of crop rotation (not seen in Europe until the eighteenth century!). Population rose ten-fold from 1400 to 1950 despite little change in per capita income. A nontrivial increase in caloric yield per acre of farmland came from the introduction of new world crops like maize and the sweet potato, which appear in China during the Ming dynasty. Nonagricultural rural work also appears to have been much more developed than in medieval Europe, with William Skinner’s “hexagonal trade” networks existing during nearly all of the post-Tang dynasties. Such trade allowed cities to develop – around 1000, China had almost 100 cities with population above 10,000, as compared to none in Europe!

More recently, industrialization gets a late start. The 1800s are a giant disaster for China, with wars against Europeans, Russians and Japanese (China lost essentially all of these), the Taiping rebellion that kills tens of millions in the nation’s heartland, Muslim rebellions in the Northwest, and a near complete lack of institutional modernization of the type seen in Japan. By 1890, only 10 miles of rail are found in the whole country, and modern industry makes up only one-half percent of the economy. Despite some fits and starts during the Republican era (especially in Shanghai and Japanese-controlled Manchuria), by the end of World War 2 and the Chinese Civil War, per capita income is no higher than it was during the Tang dynasty. Perhaps the non-vilification of Mao in today’s China has to do with the fact that, even with near-complete autarky, the Great Leap Forward and the Cultural Revolution, per capita income still nearly doubled during the Maoist era, and the industrial share of GDP rose up to match the agricultural share. That is, despite all of the human rights disasters, the Maoist economic performance was simply unheard of in Chinese history. Nearly all of this growth came from capital deepening and (especially) increases in labor supply and the human capital embodied in that labor supply; literacy rose from 20 percent to about 80 percent. And, of course, the economic history since 1976 is well-known – in only three years of the past 37 has GDP per capita grown slower than six percent, an unprecedented streak in the history of the globe.

http://browse.oecdbookshop.org/oecd/pdfs/product/4107091e.pdf (Full PDF version of the published book – big thumbs up to the OECD for making these public. If you are a Chinese speaker, prepare to be annoyed by Maddison’s habit of using Wade-Giles transliteration, i.e., Cheng Ho instead of Zheng He, Yung-lo Emperor instead of the Yongle Emperor, Kwangtung for Guangdong, Tseng Kuo-fan for Zeng Guofan. Speaking of Maddison, his historic income tables (.XLS) are a great way to while away a rainy afternoon. Who knew Australia was once the world’s richest place, or that Sri Lanka was historically a particularly wealthy part of Asia, or that Venezuela was wealthier per capita than all of Western Europe in the middle of the 20th century?)

“The Human Capital Stock: A Generalized Approach,” B. Jones (2012)

(A quick note: the great qualitative economist Albert O. Hirschman died earlier today. “Exit, Voice and Loyalty” is, of course, his most famous work, and probably deserves more consideration in the modern IO literature. If a product changes or deteriorates, our usual models have consumers “exiting”, or refusing to buy the product anymore. However, in some kinds of long-term relationships, I can instead voice my displeasure at bad outcomes. For instance, if the house has a bad night at a restaurant I’ve never been to, I simply never return. If the house has a bad night at one of my regular spots, I chalk it up to bad luck, tell the waiter the food was subpar, and return to give them another shot. Hirschman is known more for his influence on sociology and political science than on core economics, but if you are like me, the ideas in EVL look suspiciously game theoretic: I can imperfectly monitor a firm (since I only buy one of the millions of their products), they can make costly investments in loyalty (responding to a bad set of products by, say, refunding all customers), etc. That’s all perfectly standard work for a theorist. So, clever readers, has anyone seen a modern theoretic take on EVL? Let me know in the comments.)

Back to the main article in today’s post, Ben Jones’ Human Capital Stock paper. Measuring human capital is difficult. We think of human capital as an input in a production function. A general production function is Y=f(K,H,A) where A is a technology scalar, K is a physical capital aggregator, and H (a function of H(1),H(2), etc., marking different types of human capital) is a human capital aggregator. Every factor is paid its marginal product if firms are cost minimizers. Let H(i)=h(i)L(i) be the quantity of some class of labor (like college educated workers) weighted by the flow of services h(i) provided by that class. We can measure L, but not h. The marginal product of L(i), the wage received by laborers of type i, is df/dH*dH/dH(i)*h(i). That is, wage depends both on the amount of human capital in workers of type i, as well as contribution of H(i) to the human capital aggregator.

Consider the ratio of wages w(i)/w(j)=[dH/dH(i)*h(i)]/[dH/dH(j)*h(j)]. Again, we need to know how each type of human capital affects the aggregator to be able to go from wage differences to human capital differences. If the production function is constant returns to scale, then the human capital aggregator can be rewritten as h(1)*H(L(1),[w(2)*dH/dH(1)]/[w(1)*dH/dH(2)]…). If wages w and labor allocations L were observed, we could infer the amount of human capital if we knew h(1) and we knew the ratios of marginal contributions of each type of human capital to the aggregator. Traditional human capital accounting assumes that h(1), the human capital of unskilled workers, is identical across countries, and that the aggregator equals the sum of h(i)L(i). Implicitly, this says each skill-adjusted unit of labor is perfectly substitutable in the production function: a worker with a wage twice the unskilled wage, by the above assumptions, has twice the human capital of the unskilled worker. If you replaced her with two unskilled workers, the total productive capacity of the economy would be unchanged.

You may not like those assumptions. Jones notes that, since rich countries have many fewer unskilled workers, and since marginal product is a partial equilibrium concept, the marginal productivity of unskilled workers is likely higher in rich countries than in poor ones. Also, unskilled worker productivity has complementarities with the amount of skilled labor; a janitor keeping a high-tech hospital clean has a higher marginal product than an unskilled laborer in the third world (if you know Kremer’s O-Ring paper, this will be no surprise). These two effects mean that traditional assumptions in human capital accounting will bias downward the relative amount of human capital in the wealthy world. It turns out that, under a quite general functional form for the production function, we only need to add the elasticity of unskilled-skilled labor substitution to our existing wage and labor allocation data to estimate the amount of human capital with the generalized human capital function; critically, we don’t need to know anything about how different types of skilled labor combine.
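Here is the logic of the two accounting schemes in a few lines – entirely made-up numbers, a single skilled type, and an assumed CES aggregator, so an illustration of the method rather than Jones’ calibration. The traditional approach adds up wage-weighted bodies; the generalized approach backs the skilled service flow out of the observed wage premium given an elasticity of substitution sigma:

```python
def human_capital(L_unskilled, L_skilled, wage_premium, sigma=None):
    """Aggregate human capital with unskilled services normalized to one per worker."""
    if sigma is None:                     # traditional accounting: perfect substitutes
        return L_unskilled + wage_premium * L_skilled
    theta = 1 - 1 / sigma                 # CES curvature
    # back out the skilled service flow per worker from the observed wage premium
    h_s = (wage_premium * (L_skilled / L_unskilled) ** (1 - theta)) ** (1 / theta)
    return (L_unskilled ** theta + (h_s * L_skilled) ** theta) ** (1 / theta)

rich = dict(L_unskilled=20.0, L_skilled=80.0, wage_premium=1.8)   # invented numbers
poor = dict(L_unskilled=90.0, L_skilled=10.0, wage_premium=1.5)

for sigma in (None, 1.5):
    ratio = human_capital(**rich, sigma=sigma) / human_capital(**poor, sigma=sigma)
    label = "traditional" if sigma is None else f"generalized CES, sigma={sigma}"
    print(f"{label}: rich-to-poor human capital ratio = {ratio:.1f}")
```

With complementarity between skill types, the scarcity of skilled workers in the poor country props up their wage premium even when their service flow is low, so the implied human capital gap between the two countries comes out far larger than the traditional calculation suggests.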

How does this matter empirically? There seems to be a puzzle in growth accounting. High levels of education almost always correlate with high incomes across countries. Yet traditional growth accounting finds that only 30% or so of across-country income differences can be explained by differences in human capital. However, empirical estimates of the elasticity of substitution between unskilled and skilled labor are generally something like 1.4 – there are complementarities. Jones calculates for a number of country pairs what elasticity would be necessary to explain 100% of the difference in incomes with human capital alone. The difference between Israel (the 85th percentile of the income distribution) and Kenya (the 15th percentile) is totally explained if the elasticity of substitution between skilled and unskilled labor is 1.54. Similar numbers prevail for other countries.

So if human capital is in fact quite important, what explains the differences in labor allocation? Why are there so many more skilled workers in the US than in Congo? Two things are important to note. First, in general equilibrium, workers choose how much education to receive. That is, if anyone in the US is not going to college, the difference in wages between skilled and unskilled labor cannot be too large. For the differences in wages to not grow too large, there must be a supply response: the number of unskilled laborers shrinks, causing each unskilled worker’s marginal product to rise. Israel has a ratio of skilled to unskilled labor 2300% higher than Kenya, but the skilled worker wage premium is only 20% higher in Israel than in Kenya. If the elasticity of substitution is 1.6, service flows from skilled workers in Israel are almost 100 times higher than in Kenya, despite an almost identical skilled-unskilled wage premium. That is, we will see high societal returns to human capital in the share of skilled workers rather than in the wage premium.

Second, why don’t poor countries have such a high share of skilled workers? Adam Smith long ago wrote that the division of labor is limited by the size of the market. At high levels of human capital, specialization has huge returns. Jones gives the example of a thoracic surgeon: willingness to pay for such a surgeon to perform heart surgery is far higher than willingness to pay a dermatologist or an economics professor, despite similar levels of education. Specialization, therefore, increases the societal return to human capital, and such specialization may be limited by small markets, coordination costs, low levels of existing advanced knowledge, or limited local access to such knowledge. A back of the envelope calculation suggests that a 4.3-fold difference in the amount of specialization can explain the differences in labor allocation between Israel and Kenya, and that this difference is even lower if rich countries have better ability to transmit education than poor countries.

This is all to say that, in some ways, the focus on TFP growth may be misleading. Growth in technology, for developing countries, is very similar to growth in human capital, at least intuitively. If the Solow residual is, in fact, relatively unimportant once human capital is measured correctly, then the problem of growth in poor countries is much simpler: do we deepen our physical capital, or improve our human capital? This paper suggests that human capital improvements are most important, and that useful improvements in human capital may be partially driven by coordinating increased specialization of workers. Interesting.

2011 working paper, which appears to be the newest version; IDEAS page.

“Why was it Europeans Who Conquered the World?,” P. Hoffman (2012)

Talk about an ambitious title! Take it as given that, by the eighteenth century, Europeans had a huge advantage in gunpowder-based technology and tactics, and that this was the primary reason they were able to colonize large swaths of the globe. Why was it that Europeans had such an advantage? The substance gunpowder did not originate in Europe, as is well-known. But Europeans did not even originate certain important tactics, like volley fire with layers of infantry. Nonetheless, from 1600-1800, weapons manufacturing productivity, firing rate, and naval firepower had all increased at an annual rate in Europe which far exceeded the rate of total economic growth or total productivity growth anywhere in the world up to that point. Why?

A common story is that competition in Europe was important. There were many small states who fought often, and hence better and better technology was selected. And Europeans were belligerent indeed! From 1500-1800, the Austrians were at war with another power 24 percent of the time, the English 53 percent, and the Spanish 81 percent of the time. The problem with the competition thesis, Hoffman points out, is that we have other similar entities: the Chinese were constantly fighting nomads in the north and west, the Japanese were in frequent warfare until the Tokugawa unification in 1600, and the small states of India were no peaceful assembly before the conquests of the British East India Company. So why, then, Europe?

Hoffman’s explanation is the following. Technology improves from learning by doing. It improves faster the more and the longer you practice, and disseminates easier when costs of dissemination are low. In war, then, gunpowder improves rapidly when countries fight, and when their fighting involves heavy expenditure. Countries go to war when the expected gain from fighting exceeds the expected cost (and they fight rather than settling immediately based on their expectations of the outcome because arbitrary transfers are not easy when the “prize” for winning is something like glory). Countries differ in their variable costs of war because of, for instance, differential abilities to extract tax revenue, and they differ in their benefits from winning war; Indian states may have, for example, had lower benefits from winning war because interdynastic conflict was frequent compared to Europe, and hence the winner of a war may have been sacked by his brother before even having a chance to bask in the glory of victory. Note that “death and destruction” was not a cost of war for most states in this period; indeed, from 1500-1790, not a single European monarch was deposed due to loss in battle in anything but a civil war! Shall we call this the original agency problem?

This model looks a lot like a micro theory tournament plus diffusion of inventions gained from learning by doing. Solve for the equilibrium, as Hoffman does, and you will see that rapid progress in arms technology requires a lot of war, using a lot of resources, among combatants geographically close enough for technology to transfer easily. The conditions for that to happen are that countries for which gunpowder is effective in war are evenly matched in their ability to raise an army, and that the prize for winning (measured in glory or whatever) is high compared to the costs of battle (measured in the cost of raising revenue for an army, etc.). The Ottomans in this period had too little ability to raise revenue for war. The Chinese were unified internally and fought externally mostly with cavalry, since guns were not terribly effective against steppe nomads. Japan was unified by 1600, hence had no incentive to fight internally and improve its weapons technology, and the fixed cost of invading China or Korea was seen to be too high after some late 16th century adventures. In India, interdynastic battles were so frequent that the benefit of total warfare, as opposed to light skirmishes, was too limited, and hence even though war was frequent, it was at such a low level that there was limited learning-by-doing.
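For the flavor of those comparative statics, here is a stand-in contest model – a standard Tullock contest with linear mobilization costs, my substitute for Hoffman’s actual tournament: total war spending, the input into learning by doing, is largest when the prize is large relative to costs and the two sides are evenly matched.

```python
# Two states contest a prize P. State i exerts effort e_i, wins with probability
# e_i / (e_1 + e_2), and pays c_i per unit of effort. Total spending c_1*e_1 + c_2*e_2
# proxies for the wartime expenditure that drives learning by doing.
def war_spending(P, c1, c2):
    e1 = P * c2 / (c1 + c2) ** 2      # closed-form Nash equilibrium efforts
    e2 = P * c1 / (c1 + c2) ** 2
    return c1 * e1 + c2 * e2

print(war_spending(P=100, c1=1.0, c2=1.0))   # evenly matched, big prize: heavy spending
print(war_spending(P=100, c1=1.0, c2=9.0))   # lopsided mobilization costs: much less
print(war_spending(P=10,  c1=1.0, c2=1.0))   # prize small relative to costs: little spending
```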

An interesting hypothesis. As invention is my own field of research, I am a bit skeptical of the learning-by-doing mechanism, however. Despite what schoolkids are taught, necessity is absolutely not the mother of invention. We need many things, but we only invent very few of them. Rather, technological feasibility tends to be the important constraint on technological improvement. My hunch is that a detailed investigation of specific microinventions in European military technology would show that they rely heavily on complementary developments in private industry, in scientific research, or in “common” engineering. Indeed, I would suspect that many of the important inventions come from places not known for their belligerence; Hoffman even mentions an important Swiss cannon foundry whose technology was critical to French artillery in the 1700s. Such importation from non-military external sources is not uncommon: later on, we have the American engineer Hiram Maxim inventing an early machine gun, and the Dutchman Fokker playing the most important role in airplane technology in World War I. The ability of the UK and Germany to procure these inventions has less to do with the frequency of war in those countries than with the fact that Western Europe and America had, by this time, developed large amounts of non-military engineering talent.

March 2012 working paper (no IDEAS version). This paper was published in the September 2012 issue of the Journal of Economic History. If you find it interesting, Hoffman recently published a book Why the West Rules – For Now which has come highly recommended to me by a well-known historian of this era. [CORRECTION: As noted by Mark Schaffer below, Why the West Rules is by Ian Morris, not Philip Hoffman. Nonetheless, it is still a great book!]
