Category Archives: Economic History

“The Rise and Fall of General Laws of Capitalism,” D. Acemoglu & J. Robinson (2014)

If there is one general economic law, it is that every economist worth their salt is obligated to put out twenty pages responding to Piketty’s Capital. An essay by Acemoglu and Robinson on this topic, though, is certainly worth reading. They present three particularly compelling arguments. First, in a series of appendices, they follow Debraj Ray, Krusell and Smith and others in trying to clarify exactly what Piketty is trying to say, theoretically. Second, they show that it is basically impossible to find any effect of the famed r-g on top inequality in statistical data. Third, they claim that institutional features are much more relevant to the impact of economic changes on societal outcomes, using South Africa and Sweden as examples. Let’s tackle these in turn.

First, the theory. It has been noted before that Piketty is, despite beginning his career as a very capable economic theorist (hired at MIT at age 22!), very disdainful of the prominence of theory. He, quite correctly, points out that we don’t even have any descriptive data on a huge number of topics of economic interest, inequality being principal among these. But, shades of the Methodenstreit, he then goes on to ignore theory where it is most useful: in helping to understand, and extrapolate from, his wonderful data. It turns out that even in simple growth models, not only is it untrue that r>g necessarily holds, but the endogeneity of r and our standard estimates of the elasticity of substitution between labor and capital do not at all imply that capital-to-income ratios will continue to grow (see Matt Rognlie on this point). Further, Acemoglu and Robinson show that even relatively minor movement between classes is sufficient to keep the capital share from skyrocketing. Do not skip the appendices to A and R’s paper – these are what should have been included in the original Piketty book!
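
To see the endogeneity point concretely, here is a minimal Solow-style sketch with CES production, using illustrative parameters of my own choosing (this is not Acemoglu and Robinson's model, and not Piketty's): as capital deepens, the net return on capital falls toward the growth rate, so a gap r>g need not persist along the transition.

```python
# Minimal Solow-style sketch with CES production (illustrative parameters only):
# as capital deepens, the net return r falls toward g, so r > g need not persist.
alpha, sigma = 0.33, 0.8                  # distribution parameter; elasticity of substitution (assumed)
rho = (sigma - 1) / sigma
s, delta, g, n = 0.25, 0.05, 0.02, 0.01   # saving rate, depreciation, productivity and population growth

def f(k):                                 # output per effective worker
    return (alpha * k**rho + (1 - alpha)) ** (1 / rho)

k = 1.0
for t in range(201):
    y = f(k)
    r = alpha * (y / k) ** (1 - rho) - delta          # net marginal product of capital
    if t % 50 == 0:
        print(f"t={t:3d}  K/Y={k / y:5.2f}  r={r:6.3f}  r-g={r - g:+.3f}")
    k += s * y - (delta + g + n) * k                  # capital accumulation per effective worker
```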

Second, the data. Acemoglu and Robinson point out, and it really is odd, that despite the claims of “fundamental laws of capitalism”, there is no formal statistical investigation of these laws in Piketty’s book. A and R look at data on growth rates, top inequality and the rate of return (either on government bonds, or on a computed economy-wide marginal return on capital), and find that, if anything, as r-g grows, top inequality shrinks. All of the data is post WW2, so there is no Great Depression or World War confounding things. How could this be?
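
As a side note, here is a schematic of what such a check looks like in practice, with hypothetical panel data and column names standing in for the real sources (this is not A and R's code): a two-way fixed effects regression of the top 1% income share on r minus g.

```python
# Schematic check with hypothetical panel data and column names (not A and R's code):
# regress the top 1% income share on r - g with country and year fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("top_shares_panel.csv")      # hypothetical columns: country, year, top1_share, r, g
panel["r_minus_g"] = panel["r"] - panel["g"]

fe = smf.ols("top1_share ~ r_minus_g + C(country) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(fe.params["r_minus_g"])                    # A and R find this is zero or negative
```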

The answer lies in the feedback between inequality and the economy. As inequality grows, political pressures change, the endogenous development and diffusion of technology changes, the relative use of capital and labor changes, and so on. These effects, in the long run, dominate any “fundamental law” like r>g, even if such a law were theoretically supported. For instance, Sweden and South Africa have very similar patterns of top 1% inequality over the twentieth century: very high at the start, then falling in mid-century, and rising again recently. But the causes are totally different: in Sweden’s case, labor unrest led to a new political equilibrium with a high-growth welfare state. In South Africa’s case, the “poor white” supporters of Apartheid led to compressed wages at the top despite growing black-white inequality until 1994. So where are we left? With the traditional explanations for inequality changes: technology and politics. And even without r>g, these issues are complex and interesting enough – what could be a more interesting economic problem for an American economist than diagnosing the stagnant incomes of Americans over the past 40 years?

August 2014 working paper (No IDEAS version yet). Incidentally, I have a little tracker on my web browser that lets me know when certain pages are updated. Having such a tracker follow Acemoglu’s working papers pages is, frankly, depressing – how does he write so many papers in such a short amount of time?

“Immigration and the Diffusion of Technology: The Huguenot Diaspora in Prussia,” E. Hornung (2014)

Is immigration good for natives of the recipient country? This is a tough question to answer, particularly once we think about the short versus long run. Large-scale immigration might have bad short-run effects simply because more L plus fixed K means lower average incomes in essentially any economic specification, but even given that fact, immigrants bring with them tacit knowledge of techniques, ideas, and plans which might be relatively uncommon in the recipient country. Indeed, world history is filled with wise leaders who imported foreigners, occasionally by force, in order to access their knowledge. As that knowledge spreads among the domestic population, productivity increases and immigrants are in the long run a net positive for native incomes.

How substantial can those long-run benefits be? History provides a nice experiment, described by Erik Hornung in a just-published paper. The Huguenots, French Protestants, were largely expelled from France after the Edict of Nantes was revoked by the Sun King, Louis XIV. The Huguenots were generally in the skilled trades, and their expulsion to the UK, the Netherlands and modern Germany (primarily) led to a great deal of tacit technology transfer. And, no surprise, in the late 17th century there were few means of transferring such knowledge aside from face-to-face contact.

In particular, Frederick William, the Great Elector of Brandenburg, offered his estates as refuge for the fleeing Huguenots. Much of his land had been depopulated in the plagues that followed the Thirty Years’ War. The centralized textile production facilities sponsored by nobles and run by Huguenots soon after their arrival tended to fail quickly – there simply wasn’t enough demand in a place as poor as Prussia. Nonetheless, a contemporary mentions 46 professions brought to Prussia by the Huguenots, as well as new techniques in silk production, dyeing fabrics and cotton printing. When the initial factories failed, the knowledge of the apprentices they had hired, and the capital they had purchased, remained. Technology transfer to natives became more common as later generations integrated more tightly with natives, moving out of Huguenot settlements and intermarrying.

What’s particularly interesting with this history is that the quantitative importance of such technology transfer can be measured. In 1802, incredibly, the Prussians had a census of manufactories, or factories producing stock for a wide region, including capital and worker input data. Also, all immigrants were required to register yearly, and include their profession, in 18th century censuses. Further, Huguenots did not simply move to places with existing textile industries where their skills were most needed; indeed, they tended to be placed by the Prussians in areas which had suffered large population losses following the Thirty Years’ War. These population losses were highly localized (and don’t worry, before using population loss as an IV, Hornung makes sure that population loss from plague is not simply tracing out existing transportation highways). Using input data to estimate a Cobb-Douglas textile production function, an additional percentage point of the population with Huguenot origins in 1700 is associated with a 1.5 percentage point increase in textile productivity in 1800. This result is robust to the IV specification using wartime population loss as an instrument for the percentage of Huguenot immigrants, as well as to many other robustness checks. 1.5% is huge given the slow rate of growth in this era.
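
Schematically, the estimation looks like the following, with a hypothetical file and column names standing in for Hornung's data; the two stages are written out by hand for transparency (in real work one would use a packaged IV estimator or correct the second-stage standard errors).

```python
# Schematic Hornung-style estimation with a hypothetical file and column names; the
# two stages are written out by hand for transparency (second-stage standard errors
# would need correcting in real work).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("prussian_textiles_1802.csv")   # hypothetical: one row per manufactory/town

# Cobb-Douglas in logs: output on inputs plus the 1700 Huguenot population share
controls = sm.add_constant(df[["log_capital", "log_workers", "log_looms"]])

# First stage: Huguenot share instrumented by localized Thirty Years' War population loss
first = sm.OLS(df["huguenot_share_1700"], controls.join(df["pop_loss_1618_48"])).fit()
df["huguenot_share_hat"] = first.fittedvalues

# Second stage: productivity effect of (predicted) Huguenot settlement
second = sm.OLS(df["log_output_1802"], controls.join(df["huguenot_share_hat"])).fit()
print(second.params["huguenot_share_hat"])       # coefficient of interest
```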

An interesting historical case. It is not obvious to me how relevant this estimate is to modern immigration debates; clearly it must depend on the extent to which knowledge can be written down or communicated at distance. I would posit that the strong complementarity of factors of production (including VC funding, etc.) is much more important than tacit knowledge spread in modern agglomeration economies of scale, but that is surely a very difficult claim to investigate empirically using modern data.

2011 Working Paper (IDEAS version). Final paper published in the January 2014 AER.

“Information Frictions and the Law of One Price,” C. Steinwender (2014)

Well, I suppose there is no surprise that I really enjoyed this paper by Claudia Steinwender, a PhD candidate from LSE. The paper’s characteristics are basically my catnip: one of the great inventions in history, a policy question relevant to the present day, and a nice model to explain what is going on. The question she asks is how informational differences affect the welfare gains from trade. In the present day, the topic comes up over and over again, from the importance of cell phones to village farmers to the welfare impact of public versus closed financial exchanges.

Steinwender examines the completion of the transatlantic telegraph in July 1866. A number of attempts over a decade had been made in constructing this link; the fact that the 1866 line was stable was something of a surprise. Its completion lowered the time necessary to transmit information about local cotton prices in New York (from which much of the supply was sent) and Liverpool (where much of the cotton was bought; see Chapter 15 of Das Kapital for a nice empirical description of the cotton industry at this time). Before the telegraph, steam ships took 7 to 21 days, depending on weather conditions, to traverse the Pond. In a reduced form estimate, the mean price difference between the two ports, and the volatility of that difference, fell; price shocks in Liverpool saw immediate responses in shipments from America and in prices there; exports increased and became more volatile; and similar effects were seen from shocks to ship speed before the telegraph, or temporary technical problems with the line after July 1866. These facts come from amazingly well documented data in New York and UK newspapers.

Those facts are all well and good, but how to explain them, and how to interpret them? It is not at all obvious that information in trade with a durable good should matter. If you ship too much one day, then just store it and ship less in the next period, right? But note the reduced form evidence: it is not just that prices harmonize, but that total shipments increase. What is going on? Without the telegraph, the expected price tomorrow in Liverpool from the perspective of New York sellers is less variable (the conditional expectation conditions on less information about the underlying demand shock, since only the two-week-old autocorrelated demand shock data brought by steamship is available). When high demand in Liverpool is underestimated, then, exports are lower in the era before the telegraph. On the other hand, a low demand shock and a very low demand shock in Liverpool both lead to zero exports, since exporting is unprofitable. Hence, ignoring storage, better information increases the variance of perceived demand, with asymmetric effects from high and low demand shocks, leading to higher overall exports. Storage should moderate the volatility of exports, but not entirely, since a period of many consecutive high demand shocks will eventually exhaust the storage in Liverpool. That is, the lower bound on stored cotton at zero means that even optimal cotton storage does not fully harmonize prices in the presence of information frictions.
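
That asymmetry is easy to see in a toy Monte Carlo (a stylized sketch under my own assumptions, ignoring storage, and not Steinwender's model): let Liverpool demand follow an AR(1), let New York exporters respond to their best available forecast of it, and impose that exports cannot be negative. Better information raises both the mean and the volatility of exports.

```python
# Stylized Monte Carlo of the information asymmetry (my assumptions, not Steinwender's
# model): exports respond to the best available forecast of Liverpool demand and cannot
# be negative, so better information raises both mean exports and their volatility.
import numpy as np

rng = np.random.default_rng(0)
T, phi, lag = 200_000, 0.9, 14              # AR(1) persistence; two-week-old news pre-telegraph
d = np.zeros(T)
for t in range(1, T):
    d[t] = phi * d[t - 1] + rng.normal()    # demand shock in Liverpool

stale = np.zeros(T)
stale[lag:] = phi**lag * d[:-lag]           # pre-telegraph forecast from two-week-old prices
exports_pre  = np.maximum(0.0, stale)       # censored at zero: can't ship a negative amount
exports_post = np.maximum(0.0, d)           # post-telegraph: current demand observed

print("mean exports  pre/post:", exports_pre.mean(), exports_post.mean())
print("s.d. exports  pre/post:", exports_pre.std(),  exports_post.std())
```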

Steinwender confirms that intuition by solving for the equilibrium with storage numerically; this is actually a pretty gutsy move, since the numerical estimates are quantitatively quite different than what was observed in the data. Nonetheless, I think she is correct that we are fine interpreting these as qualitative comparative statics from an abstract model rather than trying to interpret their magnitude in any way. (Although I should note, it is not clear to me that we cannot sign the relevant comparative statics just because the model with storage cannot be solved analytically in its entirety…)

The welfare impact of information frictions with storage can be bounded below in a very simple way. If demand is overestimated in New York, then too much is exported, and though some of this cotton is stored, the price in Liverpool is still too low. If demand is underestimated in New York, then too little is exported, and though some stored cotton might be sold, the lower bound of zero on storage means that the price in Liverpool is still too high. A lower bound on the deadweight loss from those effects can be computed simply by knowing the price difference between the UK and the US and the slopes of the demand and supply curves; in the case of the telegraph, this deadweight loss is on the order of 8% of the value of US cotton exports to the UK, or equivalent to the DWL from a 6% tax on cotton. That is large. I am curious about the impact of this telegraph on US vis-a-vis Indian or Egyptian cotton imports, the main Civil War substitutes; information differences must distort the direction of trade in addition to its magnitude.
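
The bound itself is a Harberger-style triangle. Here is the arithmetic with made-up numbers; the wedge and slopes below are placeholders of mine, not Steinwender's estimates.

```python
# Back-of-the-envelope version of the bound with made-up numbers (the wedge and slopes
# are placeholders, not Steinwender's estimates): an unarbitraged price gap implies a
# Harberger-style triangle of lost surplus.
wedge = 0.8              # price gap between Liverpool and New York beyond shipping costs
slope_demand = 0.02      # price change per extra bale demanded in Liverpool
slope_supply = 0.03      # price change per extra bale supplied from New York

missing_trade = wedge / (slope_demand + slope_supply)   # trade choked off by the wedge
deadweight_loss = 0.5 * wedge * missing_trade           # triangle: half base times height

print(f"missing trade: {missing_trade:.0f} bales, lost surplus: {deadweight_loss:.0f}")
```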

January 2014 working paper (No IDEAS version).

Tunzelmann and the Nature of Social Savings from Steam

Research Policy, the premier journal for innovation economists, recently produced a symposium on the work of Nick von Tunzelmann. Tunzelmann is best known for exploring the social value of the invention of steam power. Many historians had previously granted great importance to the steam engine as a driver of the Industrial Revolution. However, as with Fogel’s argument that the railroad was less important to the American economy than previously believed (though see Donaldson and Hornbeck’s amendment claiming that market access changes due to rail were very important), the role of steam in the Industrial Revolution may have been overstated.

This is surprising. To my mind, the four most important facts for economics to explain are why the world economy (in per capita terms) stagnated until the early 1800s, why cumulative per-capita growth began then in a corner of Northwest Europe, why growth at the frontier has continued to the present, and why growth at the frontier has been so consistent over this period. The consistency is really surprising, given that individual non-frontier country growth rates, and world GDP growth, have vacillated pretty wildly on a decade-by-decade basis.

Malthus’ model still explains the first puzzle best. But there remain many competing explanations for how exactly the Malthusian trap was broken. The idea that a thrifty culture or expropriation of colonies was critical sees little support from economic historians; as McCloskey writes, “Thrifty self-discipline and violent expropriation have been too common in human history to explain a revolution utterly unprecedented in scale and unique to Europe around 1800.” The problem, more generally, of explaining a large economic X on the basis of some invention/program/institution Y is that basically everything in the economic world is a complement. Human capital absent good institutions has little value, modern management techniques absent large markets are ineffective, etc. The problem is tougher when it comes to inventions. Most “inventions” that you know of have very little immediate commercial importance, and a fair ex-post reckoning of the critical parts of the eventual commercial product often leaves little role for the famous inventor.

What Tunzelmann and later writers in his tradition point out is that even though Watt’s improvement to the steam engine was patented in 1769, steam produced less horsepower than water in the UK as late as 1830, and in the US as late as the Civil War. Indeed, even today, hydropower based on the age-old idea of the turbine is still an enormous factor in the siting of electricity-hungry industries. It wasn’t until the invention of high-pressure steam engines like the Lancashire boiler in the 1840s that textile mills really saw steam power as an economically viable source of energy. Most of the important inventions in the textile industry were designed originally for non-steam power sources.

The economic historian Nicholas Crafts supports Tunzelmann’s original argument against the importance of steam using a modern growth accounting framework. Although the cost of steam power fell rapidly following Watt, and especially after the Corliss engine in the mid 19th century, steam was still a relatively small part of the economy until the mid-to-late 19th century. Therefore, even though productivity growth within steam was quick, only a tiny portion of overall TFP growth in the early Industrial Revolution can be explained by steam. Growth accounting exercises have a nice benefit over partial equilibrium social savings calculations because the problem that “everything is a complement” is taken care of so long as you believe the Cobb-Douglas formulation.
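
The accounting logic is simple enough to write down with illustrative numbers of my own (these are not Crafts' figures): a sector's contribution to aggregate TFP growth is roughly its share of the economy times productivity growth within the sector, so fast progress in a tiny steam sector moves the aggregate very little.

```python
# Illustrative growth accounting in the spirit of Crafts (numbers are mine, not his):
# a sector's contribution to aggregate TFP growth is roughly its economy share times
# productivity growth within the sector.
steam_share, steam_tfp_growth = 0.015, 0.03     # small sector, fast internal progress (assumed)
rest_share,  rest_tfp_growth  = 0.985, 0.005    # everything else, slow progress (assumed)

aggregate_tfp_growth = steam_share * steam_tfp_growth + rest_share * rest_tfp_growth
steam_contribution = steam_share * steam_tfp_growth / aggregate_tfp_growth

print(f"aggregate TFP growth: {aggregate_tfp_growth:.4f} per year")
print(f"share attributable to steam: {steam_contribution:.1%}")
```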

The December 2013 issue of Research Policy (all gated) is the symposium on Tunzelmann. For some reason, Tunzelmann’s “Steam Power and British Industrialization Until 1860” is quite expensive used, but any decent library should have a copy.

“The ‘Industrial Revolution’ in the Home: Household Technology and Social Change in the 20th Century,” R. S. Cowan (1976)

The really fascinating thing about the “Second Industrial Revolution” (roughly 1870 until World War I) is how much of its effect is seen first for consumers and only later for production. Electricity is the famous example here; most energy-heavy industries were purposefully located near low-cost energy sources like fast-flowing water. Energy produced via transmitted electricity simply wasn’t that competitive until well into the 20th century in these industries.

Ruth Cowan, a historian, investigated how household production was affected by the introduction of electricity, which in the non-rural US roughly means between 1918 and the Great Depression; electrification rose from 25 percent to 80 percent during this period. Huge amounts of drudgery, once left to housewives and domestic workers, were reduced. Consider the task of ironing. Before electricity (barring gas irons, which were not widespread), ironing involved heating a heavy flatiron on a stove, carrying it to the ironing board and quickly knocking out wrinkles before the heat dissipated, bringing it back to the stove, and so on. The replacement of the coal stove by central heating similarly limited tedious work, including constant cleaning of coal dust. Cowan traces diffusion of these technologies in part by examining advertisements in magazines like the Ladies’ Home Journal.

The interesting aspect of this consumer revolution, however, was that it did not in fact reduce the amount of work done by housewives. By the end of the 1920s, urban women, most affected by these technological changes, were still doing more housework per week than rural women. It appears the standard story of how Industrial Revolution technologies affected industry – more specialization, more importance of managerial talent, disappearing emotional content of work – was not true of household production. Instead, upper middle class women no longer employed specialized domestic help (and with it went the managerial role of the housewife), and advertisements for new consumer goods frequently emphasized the emotional content of, e.g., the improved cleanliness of modern appliances with respect to children’s health. Indeed, technological progress tended to significantly increase the number of tasks women were expected to perform within the house. There’s not much reason in economic theory for TFP improvements to lead to reductions or increases in worker skill or autonomy, so perhaps it’s no surprise that the household sector saw a different pattern from certain industrial sectors.

Final version in Technology & Culture Jan 1976. If you’re not familiar with the term “Second Industrial Revolution”, Joel Mokyr has a nice summary of this period of frequent important macro/GPT inventions. Essentially, the big inventions of the late 19th century were much more reliant on scientific knowledge, and much more connected to network effects and increasing returns to scale, than those of the late 18th and early 19th century.

“Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842,” J. Bessen (2003)

This is a wonderful piece of theory-driven economic history. Everyone knows that machinery in the Industrial Revolution was “de-skilling”, replacing craft workers with rote machine work. Bessen suggests, using data from mid-19th century mills in New England, that this may not be the case; capital is expensive and sloppy work can cause it to be out of service, so you may want to train your workers even more heavily as you deepen capital. It turns out that it is true that literate Yankee girls were largely replaced by illiterate, generally Irish workers (my ancestors included!) at Lowell and Waltham, while simultaneously the amount of time spent training (off of piece-wages) increased, as did the number of looms run by each worker. How can we account for this?

Two traditional stories – that history is driven by the great inventor, or that the mill-owners were driven by philanthropy – are quickly demolished. The shift to more looms per worker was not the result of some new technology. Indeed, adoption of the more rigorous process spread slowly to Britain and southern New England. As for philanthropy, an economic model of human capital acquisition shows that the firms appear to have shifted toward unskilled workers for profit-based reasons.

Here’s the basic idea. If I hire literate workers like the Yankee farm girls, I can better select high-quality workers, but these workers will generally return home to marry after a short tenure. If I hire illiterate workers, their initial productivity is lower but, having their family in the mill town, they are less likely to leave the town. Mill owners had a number of local methods to collude and earn rents, hence they had some margin to pay for training. Which type should I prefer? If there exist many trained illiterate workers in town already, I just hire them. If not, the higher the ratio of wage to cloth price, the more I am willing to invest in training; training takes time during which no cloth is made, but increases future productivity at any given wage.
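
A stylized version of that comparison, under my own parameterization rather than Bessen's model: the selection strategy's edge (better-selected workers get a bit more cloth out of each loom) scales with the cloth price, while the training strategy's edge (fewer wage-earners per loom once workers can tend three or four looms, net of the wage-paid training time) scales with the wage, so a fall in cloth prices relative to wages tips the choice toward training.

```python
# Stylized comparison under my own parameterization (not Bessen's model): profit per
# loom under the "selection" strategy (better workers, two looms each, short tenure)
# versus the "training" strategy (wage-paid training time, then four looms each).
def profit_per_loom(price, wage, output_per_loom, looms_per_worker):
    return price * output_per_loom - wage / looms_per_worker

def effective_looms(looms_when_trained=4, tenure=40, training_periods=6):
    # training periods produce nothing, diluting looms run per worker over a career
    return looms_when_trained * (tenure - training_periods) / tenure

for price in (1.2, 0.8):                        # cloth price falls in the late 1830s
    selection = profit_per_loom(price, wage=1.0, output_per_loom=1.2, looms_per_worker=2)
    training  = profit_per_loom(price, wage=1.0, output_per_loom=1.0,
                                looms_per_worker=effective_looms())
    better = "selection" if selection > training else "training"
    print(f"price={price}: selection {selection:.2f}, training {training:.2f} -> {better}")
```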

Looking at the Massachusetts mill data, a structural regression suggests that almost all of the increase in labor productivity between 1834 and 1855 was the result of increasing effective worker experience, a measure of industry-specific human capital (and note that a result of this kind is impossible without some sort of structural model). Why didn’t firms move to illiterate workers with more training earlier? Initially, there was no workforce that was both skilled and stable. With cloth prices relatively high compared to wages, it was initially (as can be seen in Bessen’s pro forma calculation) much more profitable to use a labor system that tries to select high quality workers even though they leave quickly. Depressed demand in the late 1830s led cloth prices to fall, which narrowed the profitability advantage of the skilled but unstable farm girls over the well-trained but stable illiterate workers. A few firms began hiring illiterate workers and training them (presumably selecting high quality illiterate workers on characteristics unobservable in the modern data). This slowly increased the supply of trained illiterate workers, making it more profitable to switch a given factory floor over to three or four looms per worker, rather than two. By the 1850s, there was a sufficiently large base of trained illiterate workers to make them more profitable than the farm girls. Some light counterfactual calculations suggest that pure profit incentive is enough to drive the entire shift.

What is interesting is that the shift to what was ex-post a far more productive system appears to hinge critically on social factors – changes in the nature of the local labor supply, changes in demand for downstream products, etc. – rather than on technological change embodied in new inventions or managerial techniques. An important lesson to keep in mind, as nothing in the above story had any Whiggish bias toward increasing productivity!

Final working paper (IDEAS version). Final paper published in the Journal of Economic History, 2003. I’m a big fan of Bessen’s work, so I’m sure I’ve mentioned before on this site the most fascinating part of his CV: he has no graduate degree of any kind, yet has a faculty position at a great law school and an incredible publication record in economics, notably his 2009 paper on socially inefficient patents with Eric Maskin. Pretty amazing!

“Inventors, Patents and Inventing Activities in the English Brewing Industry, 1634-1850,” A. Nuvolari & J. Sumner (2013)

Policymakers often assume that patents are necessary for inventions to be produced or, if the politician is sophisticated, for a market in knowledge to develop. Economists are skeptical of such claims, for theoretical and empirical reasons. For example, Petra Moser has shown how few important inventions are ever patented, and Bessen and Maskin have a paper showing how the existence of patents can slow down innovation in certain technical industries. The literature more generally often mentions how heterogeneous appropriation strategies are across industries: some rely entirely on trade secrets, others on open source sharing, and yet others on patent protection.

Nuvolari and Sumner look at the English brewing industry from the 17th to the 19th century. This industry was actually quite innovative, most famously through the (perhaps collective) invention of that delightful winter friend named English Porter. The two look in great detail through lists of patents prior to 1850, and note that, despite the importance of brewing and its technical complexity, beer-related patents make up less than one percent of all patents granted during that period. Further, they note that there are enormous differences in patenting behavior within the brewing industry. Nonetheless, even in the absence of patents, there still existed a market for ideas.

Delving deeper, the authors show that many patentees were seen more as charlatans than as serious inventors. The most important inventors tended to either keep their inventions secret within their firm or guild, keep the inventions partially secret, publicize completely in order to enhance the status of their brewery as “scientific”, or publicize completely in order to garner consulting or engineering contracts. The partial secrecy and status-enhancing publicity reasons are particularly interesting. Humphrey Jackson, an aspiring chemist, sold a book with many technical details left as blank spots; readers who paid to attend his lecture could fill in the details of his processes, though the lecture was only viable if sufficiently many people bought the book! James Bavestock, a brewer in Hampshire, brought his hydrometer to the attention of the prominent London brewer Henry Thrale; in exchange, Thrale could offer entry into the London market, or a job in Thrale’s brewery should the small Hampshire concern go under.

2012 Working Paper (IDEAS version). This article appeared in the new issue of Business History Review, which was particularly good; it also featured, among others, a review on markets for knowledge in 19th century America which will probably be the final publication of the late Kenneth Sokoloff, and a paper by the always interesting Zorina Khan on international technology markets in the 19th century. Many current issues, such as open source, patent trolls, etc. are completely rehashing similar questions during that period, so the articles are well worth a look even for the non-historian.

“Without Consent or Contract,” R. W. Fogel (1989)

Word comes that Bob Fogel, an absolute giant in economic history and a Nobel Prize winner, passed away today. I first encountered Fogel in a class a decade or so ago taught by Robert Margo, another legendary scholar of the economics of American history.

Fogel’s most famous contribution is summarized in the foreword to the very readable Without Consent or Contract. “Although the slave system was horribly retrogressive in its social, political, and ideological aspects, it was quite advanced by the standards of the time in its technology and economic organization. The paradox is only apparent…because the paradox rests on the widely held assumption that technological efficiency is inherently good. It is this beguiling assumption that is false and, when applied to slavery, insidious.”

Roughly, it was political change alone, not economic change, which could have led to the end of slavery in America. The plantation system was, in fact, a fairly efficient system in the economic sense, and was not in danger of petering out of its own accord. Evidence on this point was laid out in technical detail in Fogel and Engerman’s “Time on the Cross”. In that text, evidence from an enormous number of sources is brought to bear on the value of a slave over time; McCloskey has called Fogel “a carpenter of history…measure, measure again, measure again.” The idea that the economic effects of history can be (and are) wildly different from the moral or political effects remains misunderstood; Melissa Dell’s wonderful paper on the Peruvian mita is a great example of a terrible social policy which nonetheless had positive long-run economic effects. As historians disdain “Whig history”, the idea that things improve as time marches on, economists ought to disdain “Whig economics”, the idea that growth-inducing policies are somehow linked to moral ones.

There is much beyond the slavery research, of course. In one of the most famous papers in economic history, Fogel studied the contribution of the railroad to American economic growth (Google has this at only 86 citations; how is such a low number possible?). He notes that, as economists, we should care about the marginal benefit, not the absolute benefit, of the railroad. In the absence of rail, steamboats and canals were still possible (and would likely have been built in the Midwest). He famously claims that the US would have reached its January 1890 level of income by the end of March 1890 had there been no rail at all, a statement very much contrary to traditional historical thinking.

Fogel’s later life was largely devoted to his project on the importance of improved nutrition and its interaction with economic growth, particularly since the 1700s. If you’ve not seen these statistics, it is amazing just how short and skinny the average human was before the modern era. There has been an enormous debate over the relative role of nutrition, vis-a-vis technologies, knowledge like germ theory, or embodied or diffused knowledge, in the increased stature of man: Angus Deaton summarizes the literature nicely. In particular, my read is that the thesis whereby better nutrition causes a great rise in human incomes is on fairly shaky ground, though the debate is by no means settled.

Amazon has Without Consent or Contract for sale for under 15 bucks, well worth it. Some quick notes: Fogel was by no means a lone voice in cliometrics; for example, Conrad and Meyer in a 1958 JPE make very much the same point as Fogel concerning the economic success of slavery, using tools from capital theory in the argument. Concerning the railroad, modern work suggests Fogel may have understated its importance. Donaldson and Hornbeck, two of the best young economic historians in the world, use some developments in modern trade theory to argue that increased market access due to rail, measured by how market access was capitalized into farmland values, was far more important to GDP growth than Fogel suggested.

“The Institutional Causes of China’s Great Famine, 1959-1961,” X. Meng, N. Qian & P. Yared (2011)

Nancy Qian, along with a big group of coauthors, has done a great amount of interesting empirical work in recent years on the economics of modern China; among other things, she has shown that local elections actually do cause policy changes in line with local preferences and that the state remains surprisingly powerful in the Chinese economy. In this paper with Xin Meng and Pierre Yared, she considers what is likely the worst famine in the history of mankind, China’s famous famine following the Great Leap Forward. After an agricultural production shock in 1959, a series of misguided policy experiments in the mid-1950s (like “backyard steel” production, which produced worthless metal), and an anti-Rightist purge which ended a brief period of less rigid bureaucracy, 30 million or so people would die from hunger over the next two years, with most deaths among the young and the very old. To put this in relative context, in the worst-hit counties, the cohorts that would have been newborns or very young children in 1960 and 1961 are today missing more than 80% of their projected members.

What is interesting, and what we have known since Sen, is that famines generally result from problems of food distribution rather than food production. And, indeed, the authors show that total grain production in caloric terms across rural parts of China is a multiple of what is necessary to hold off starvation during the height of the productivity shock. What is interesting and novel, though, is that provinces with higher historic per-capita grain production had the highest mortality, and likewise counties with the highest per-capita production as measured by a proxy based on climate also have the largest number of “missing” members in their birth year cohort in the 1990 census. This is strange – you might think that places that are living on the edge in normal times are most susceptible to famine.

This is where politics comes into play. The Chinese government “sent down” many competent bureaucrats during the anti-Rightist purges in the late 1950s, limiting the ability of the government to use flexible mechanisms for food procurement. The food system at the time involved the central government collecting a set amount of grain from each region, then returning stocks to communal kitchens. Now, local leaders had a strong incentive to understate how much was produced in a given year so that they could use the remainder for local power purposes. Because of limited communication technology and ineffective bureaucracy, the optimal mechanism (not specified formally, but apparently done so in an earlier version) for the central government involved pre-setting fixed production goals for every region. Here is the problem: imagine you wish the city, rural area 1 and rural area 2 to have the same expected consumption, with the city producing no food, and rural area 1 producing 1 ton per capita per year and rural area 2 producing 1.4 tons per capita. This gives total consumption of .8 tons per capita if the government sets in advance a fixed “tax” of .2 tons per capita from 1 and .6 tons per capita from region 2. Now a productivity shock lowers production everywhere by 10 percent. The city still gets its .8 tons per capita (since the “tax” is fixed), but area 1 now gets .9*1-.2=.7 tons per capita, and area 2 gets 1.4*.9-.6=.66 tons per capita. That is, the lack of flexibility in the system is more likely to push the productive regions into famine than other regions.
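
The arithmetic in that example, written out as a check:

```python
# The fixed-quota arithmetic from the example above: procurement targets set before the
# harvest protect the city but push the most productive region furthest below subsistence.
production = {"rural area 1": 1.0, "rural area 2": 1.4}   # tons per capita in a normal year
quota      = {"rural area 1": 0.2, "rural area 2": 0.6}   # fixed per-capita procurement, set in advance

for shock in (0.0, 0.10):
    city = sum(quota.values())                            # the city consumes the procured grain
    rural = {r: production[r] * (1 - shock) - quota[r] for r in production}
    line = ", ".join(f"{r}: {c:.2f}" for r, c in rural.items())
    print(f"{shock:.0%} shock -> city: {city:.2f}, {line}")
```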

Now, this is not the whole story. Alternative explanations, already suggested in the literature, also are quantitatively important. Places with more anti-Rightist purges before the famine saw higher mortality (see this 2011 APSR by Kung and Chen), as did places with earlier adoption of communal dining halls or larger increases in backyard steel production, both proxies for “zealous” adherence to the Great Leap Forward. I would really like to see some attempt at a decomposition here: if you buy that local political leadership, the central government quota system, and political punishment of counterrevolutionary areas were all important, and that weather shocks alone were not, how many of the deaths should we ascribe to each of those factors? This seems an important question for preventing future famines. It also seems that further fleshing out how these results relate to old theory-of-the-firm debates about the flexibility of local managers under imperfect and partially unverifiable reporting could help us understand the CCP’s policy choices; I’m thinking, for instance, of explicitly showing whether a loss of members of the bureaucracy (i.e., an increase in the cost of monitoring) necessarily incentivizes more rigid allocation rules. Theory here could help to quantify how important this mechanism might be.

2011 working paper (IDEAS version). This paper is R&R at ReStud currently. Qian has a couple other working papers that caught my eye. First, a paper with Duflo and Banerjee on Chinese transportation infrastructure finds very little impact on relative incomes of (quasi-random) access to a good transportation network, and suggests in a short model (which is less convincing…) that relative immobility of capital might be causing this. The techniques in the paper are similar to those used by Ben Faber in his very nice paper showing Krugman’s home market effect: if you are small and poor, being connected with a big productive place may not be good for you due to increasing returns to scale. Qian also has a 2013 paper with Nathan Nunn on food aid which suggests, pretty convincingly, that food aid in civil war zones prolongs conflicts; the mechanism, roughly, is that local armies can easily steal the aid and hence have less reason to sue for peace. The identification strategy here is quite nice: the US government buys wheat for price stabilization reasons, then gives much of this away to impoverished countries. The higher the price of wheat, the less the government surplus is, hence the less is given away.

“Patent Alchemy: The Market for Technology in U.S. History,” N. Lamoreaux, K. Sokoloff & D. Sutthiphisall (2012)

It may appear that the world of innovation looks very different today than it used to. Large in-house R&D outfits – the Bell Labs of the past – are being replaced by small firms who sell the results of their research on to producers. Venture capital funding of research appears more and more important, both for providing capital to inventors and for linking them up with potential buyers. Patent trolls hound the innocent, suing them for patent violations they weren’t even aware of. The pace at which patents are examined has slowed to a crawl, and the number of patents being granted continues to grow. Many patents are merely defensive, acquired solely to keep someone else from acquiring them.

Lamoreaux et al, building on earlier work by Lamoreaux and Sokoloff as well as Tom Nicholas’ interesting recent research, point out that none of the above is strange. The rise of in-house R&D is a phenomenon that doesn’t show up in great numbers in America until well into the twentieth century, only becoming dominant after the Second World War. Around the turn of the century, most innovation was done by small, independent inventors, or by small research firms like Edison’s outfit. A series of intermediaries, principally but not always patent lawyers, served both to file the proper paperwork and to link inventors with potential buyers; the authors provide a bunch of juicy historical stories, derived from lawyer diaries during this period, on exactly how such transactions took place. Railroads were frequently being hounded by patent trolls who tried to catch them unaware, and traveling patent buyers crossed the Midwest and South suing farmers for using unlicensed barbed wire or milk buckets. Patents took an average of three years to be processed by the early 1900s, and the patenting rate was near an all-time high. Firms regularly bought patents just so their competitors wouldn’t have them.

This is all to say that, to the extent we are worried about certain aspects of the patent system today, looking to history may be a useful place to begin. “Submarine patents”, acquired by trolls and kept unused until a particularly juicy potential violator has started to earn large profits, don’t appear to have been too prominent at the turn of the century – given how lucrative this business appears, perhaps an investigation of why they only appear in the present would be worthwhile. The role of a patent as a saleable piece of knowledge, allowing non-producers to do useful research and then sell that research to a firm who finds it useful, surely has some role, as Arrow pointed out in his famous 1962 essay. When patents instead simply add transaction costs or result in thickets, discouraging activity by true innovators, something has gone awry. And when something goes wrong in the world, it is rarely the case that history can offer us no useful guidance.

2012 working paper (No IDEAS version). Prof. Sokoloff passed away from cancer at a young age in 2007, so this may become his final published paper – it incorporates a great number of ideas he worked on throughout his career, so that would be a fitting tribute.
