Category Archives: Income Inequality

“Valuing Diversity,” G. Loury & R. Fryer (2013)

Glenn Loury, the former chair of my alma mater's economics department, is somehow wrapped up in a kerfuffle related to the student protests that have broken out across the United States. Loury, who is now at Brown, wrote an op-ed in the student paper which, to an economist, simply says that the major racial problem in the United States is statistical discrimination rather than taste-based discrimination, and hence that the protests and the recourse the student protesters demand are wrongheaded. After being challenged about "what type of a black scholar" he is, Loury wrote a furious response pointing out that he is, almost certainly, the world's most prominent scholar on racial discrimination and its potential remedies, and has been thinking about how policy can redress racial injustice since before the students' parents were even born.

An important aspect of his work is that, under statistical discrimination, there is huge scope for perverse and unintended effects of policies. This idea has been known since Ken Arrow’s famous 1973 paper, but Glenn Loury and Stephen Coate in 1993 worked it out in greater detail. Imagine there are black and white workers, and high-paid good jobs, which require skill, and low-paid bad jobs which do not. Workers make an unobservable investment in skill, where the firm only sees a proxy: sometimes unskilled workers “look like” skilled workers, sometimes skilled workers “look like” unskilled workers, and sometimes we aren’t sure. As in Arrow’s paper, there can be multiple equilibria: when firms aren’t sure of a worker’s skill, if they assume all of those workers are unskilled, then in equilibrium investment in skill will be such that the indeterminate workers can’t profitably be placed in skilled jobs, but if the firms assume all indeterminate workers are skilled, then there is enough skill investment to make it worthwhile for firms to place those workers in high-skill, high-wage jobs. Since there are multiple equilibria, if race or some other proxy is observable, we can be in the low-skill-job, low-investment equilibrium for one group, and the high-skill-job, high-investment equilibrium for a different group. That is, even with no ex-ante difference across groups and no taste-based bias, we still wind up with a discriminatory outcome.
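To see how self-confirming beliefs can generate discrimination from identical fundamentals, here is a deliberately stylized numerical sketch of my own (a toy parameterization in the spirit of, but not identical to, the Coate-Loury setup): workers' investment costs are uniform on [0,1], a noisy test is more likely to look good when the worker has invested, and firms promote workers with good test results only if their prior about the worker's group is favorable enough.

```python
# Toy illustration of self-confirming beliefs in a statistical discrimination
# setting. All parameter values are made up; this is only the fixed-point
# logic the argument above relies on, not the actual Coate-Loury model.

PASS_IF_SKILLED = 0.9    # P(test looks good | worker invested in skill)
PASS_IF_UNSKILLED = 0.4  # P(test looks good | worker did not invest)
WAGE_PREMIUM = 1.0       # worker's gain from being assigned to the skilled job
FIRM_CUTOFF = 0.65       # firm assigns a passer to the skilled job only if
                         # P(invested | test looks good) clears this bar
# Investment costs are uniform on [0, 1], so the fraction of a group that
# invests equals the expected wage gain from investing (capped at 1).

def posterior_skilled_given_pass(pi):
    """Bayes update on a good test result when the firm's prior on the group is pi."""
    num = pi * PASS_IF_SKILLED
    return num / (num + (1 - pi) * PASS_IF_UNSKILLED)

def best_response_investment(pi):
    """Fraction of the group that invests, given that firms hold prior pi."""
    if posterior_skilled_given_pass(pi) >= FIRM_CUTOFF:
        gain = WAGE_PREMIUM * (PASS_IF_SKILLED - PASS_IF_UNSKILLED)
    else:
        gain = 0.0  # passers are not promoted anyway, so investing buys nothing
    return min(gain, 1.0)

def find_equilibrium(initial_belief, rounds=50):
    pi = initial_belief
    for _ in range(rounds):
        pi = best_response_investment(pi)
    return pi

print(find_equilibrium(0.6))  # converges to 0.5: the group firms "trust"
print(find_equilibrium(0.1))  # converges to 0.0: the group firms write off
```

Identical fundamentals and identical payoffs, yet the group that firms begin by distrusting ends up in the no-investment, no-promotion equilibrium; that is the sense in which beliefs alone can sustain discriminatory outcomes.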

The question Coate and Loury ask is whether affirmative action can fix this negative outcome. Let an affirmative action rule state that the proportion of each group assigned to the skilled job must be equal. Ideally, affirmative action would generate equilibrium beliefs by firms about workers that are the same no matter what group those workers come from, and hence equal skill investment across groups. Will this happen? Not necessarily. Assume we are in the equilibrium where one group is assumed to be low-skill when its signal is indeterminate, and the other group is assumed to be high-skill.

In order to meet the affirmative action rule, either more of the discriminated group needs to be assigned to the high-skill job, or more of the favored group needs to be assigned to the low-skill job. Note that in the equilibrium without affirmative action, the discriminated group invests less in skills, and hence the proportion of the discriminated group that tests as unskilled is higher than the proportion of the favored group that does so. The firms can meet the affirmative action rule, then, by keeping the assignment rule for the favored group as before, and by assigning to the skilled task all proven-skilled and indeterminate workers from the discriminated group, as well as some random proportion of its proven-unskilled workers. This rule decreases the incentive to invest in skills for the discriminated group, and hence it is no surprise not only that it can be an equilibrium, but that Coate and Loury can show the dynamics of this policy lead to fewer and fewer discriminated-group workers investing in skills over time: despite identical potential at birth, affirmative action policies can lead to "patronizing equilibria" that exacerbate, rather than fix, differences across groups. The growing skill difference between previously-discriminated-against "Bumiputra" Malays and Malaysian Chinese following affirmative action policies in the 1970s fits this narrative nicely.

The broader point here, and one that comes up in much of Loury's theoretical work, is that because policies affect the beliefs of even non-bigoted agents, statistical discrimination is a much harder problem to solve than taste-based or "classical" bias. Consider the job market for economists. If women or minorities have trouble finding jobs because of an "old boys' club" that simply doesn't want to hire those groups, then the remedy is simple: require hiring quotas and the like. If, however, the problem is that women or minorities don't enter economics PhD programs because of a belief that it will be hard to be hired, and that difference in entry leads to fewer high-quality women and minority candidates on the market come graduation, then remedies like simple quotas may lead to perverse incentives.

Moving beyond perverse incentives, there is also the question of how affirmative action programs should be designed if we want to equate outcomes across groups that face differential opportunities. This question is taken up in "Valuing Diversity", a recent paper Loury wrote with John Bates Clark medal winner Roland Fryer. Consider Dalits in India or African-Americans: for a variety of reasons, from historic social network persistence to neighborhood effects, the cost of increasing skill may be higher for these groups. We have an opportunity which is valuable, such as slots at a prestigious college. Simply providing equal opportunity may not be feasible, because the social reasons why certain groups face higher costs of increasing skill are very difficult to solve. Brown University, or even the United States government as a whole, may be unable to fix the persistent social differences in upbringing between blacks and whites. So what to do?

There are two natural fixes. We can provide a lower bar for acceptance for the discriminated group at the prestigious college, or subsidize skill acquisition for the discriminated group by providing special summer programs, tutoring, and so on. If policy can be conditioned on group identity, then the optimal policy is straightforward. First, note that in a laissez faire world, individuals invest in skill until the cost of investment for the marginal accepted student exactly equals the benefit that student gets from attending the fancy college. That is, the equilibrium is efficient: students with the lowest cost of acquiring skill are precisely the ones who invest and are accepted. But precisely that weighing of marginal benefits and costs holds within each group if the acceptance cutoff differs by group identity, so if policy can condition on group identity, we can get whatever mix of students from different groups we want while still ensuring that the students within each group with the lowest cost of upgrading their skill are precisely the ones who invest and are accepted. The policy change itself, by increasing the quota of slots for the discriminated group, will induce marginal students from that group to upgrade their skills in order to cross the acceptance threshold; that is, quotas at the assignment stage implicitly incentivize higher investment by the discriminated group.

The trickier problem is when policy cannot condition on group identity, as is the case in the United States under current law. I would like somehow to accept more students from the discriminated-against group, and to ensure that those students invest in their skill, but the policy I set needs to treat the favored and discriminated-against groups identically. Since discriminated-against students make up a bigger proportion of those with a high cost of skill acquisition than of those with a low cost, any "blind" policy that does not condition on group identity will induce identical investment activity and acceptance probability among agents with identical costs of skill upgrading. Hence any blind policy that induces more discriminated-against students to attend college must somehow be accepting students with higher costs of skill acquisition than the marginal accepted student under laissez faire, and must be rejecting some students whose costs of skill acquisition were at the laissez faire margin. Fryer and Loury show, by solving the relevant linear program, that we can best achieve this by allowing the most productive students to buy their slots, and then randomly assigning slots to everyone else.

Under that policy, students with a very low cost of effort still invest, so that their skill is high enough to make buying a guaranteed slot worthwhile. I then use either a tax or a subsidy on skill investment to adjust how many people find it worth investing in skill and buying the guaranteed slot; in conjunction with the randomized assignment of the remaining slots, this delivers the desired mixture of accepted students across groups.

This result resembles certain results in dynamic pricing. How do I get people to pay a high price for airplane tickets while still hoping to sell would-be-empty seats later at a low price? The answer is that I make high-value people worried that if they don't buy early, the plane may sell out. The high-value people then trade off paying a high price and getting a seat with probability 1 against waiting for a low price but maybe not getting on the plane at all. Likewise, how do I induce people to invest in skills even when some lower-skill people will be admitted? Ensure that lower-skill people are only admitted with some randomness. The folks who can get perfect grades and test scores fairly easily will still exert effort to do so, ensuring they get into their top-choice college for sure rather than hoping to be admitted subject to some random luck. This type of intuition is non-obvious, which is precisely Loury's point: racial and other forms of injustice are often due to factors much more subtle than outright bigotry, and the optimal responses to these more subtle causes do not fit easily on a placard or a bullhorn slogan.

Final working paper (RePEc IDEAS version), published in the JPE, 2013. Hanming Fang and Andrea Moro have a nice handbook chapter on theoretical explorations of discrimination. Loury and John McWhorter also have an interesting and provocative dialogue on the recent student protests at Bloggingheads.

Angus Deaton, 2015 Nobel Winner: A Prize for Structural Analysis?

Angus Deaton, the Scottish-born, Cambridge-trained Princeton economist, best known for his careful work on measuring the changes in wellbeing of the world's poor, has won the 2015 Nobel Prize in economics. His data collection is fairly easy to understand, so I will leave larger discussion of exactly what he has found to the general news media; Deaton's book "The Great Escape" provides a very nice summary of what he has found as well, and I think a fair reading of his development preferences is that he much prefers the currently en vogue idea of just giving cash to the poor and letting them spend it as they wish.

Essentially, when one carefully measures consumption, health, or generic characteristics of wellbeing, there has been tremendous improvement indeed in the state of the world's poor. National statistics do not measure these ideas well, because developing countries do not tend to track data at the level of the individual. Indeed, even in the United States, we have only recently begun work on localized measures of the price level and hence the poverty rate. Deaton claims, as in his 2010 AEA Presidential Address (previously discussed briefly on two occasions on AFT), that many of the measures of global inequality and poverty used by the press are fundamentally flawed, largely because of the weak theoretical justification for how they link prices across regions and countries. Careful non-aggregate measures of consumption, health, and wellbeing, like those generated by Deaton, Tony Atkinson, Alwyn Young, Thomas Piketty and Emmanuel Saez, are essential for understanding how human welfare has changed over time and space, and are a deserving rationale for a Nobel.

The surprising thing about Deaton, however, is that despite his great data-collection work and his interest in development, he is famously hostile to the “randomista” trend which proposes that randomized control trials (RCT) or other suitable tools for internally valid causal inference are the best way of learning how to improve the lives of the world’s poor. This mode is most closely associated with the enormously influential J-PAL lab at MIT, and there is no field in economics where you are less likely to see traditional price theoretic ideas than modern studies of development. Deaton is very clear on his opinion: “Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”… [T]he analysis of projects needs to be refocused towards the investigation of potentially generalizable mechanisms that explain why and in what contexts projects can be expected to work.” I would argue that Deaton’s work is much closer to more traditional economic studies of development than to RCTs.

To understand this point of view, we need to go back to Deaton's earliest work. Among Deaton's most famous early papers was his development, with Muellbauer in 1980, of the Almost Ideal Demand System (AIDS), a paper chosen as one of the 20 best published in the first 100 years of the AER. It has long been known that individual demand equations which come from utility maximization must satisfy certain properties. For example, a rational consumer's demand for food should not depend on whether the consumer's equivalent real salary is paid in American or Canadian dollars. These restrictions turn out to be useful: if you want to know how demand for various products depends on changes in income, among many other questions, the restrictions of utility theory simplify estimation greatly by reducing the number of free parameters. The problem is in specifying a form for aggregate demand, such as how demand for cars depends on the incomes of all consumers and the prices of other goods. It turns out that, in general, aggregate demand generated by utility-maximizing households does not satisfy the same restrictions as individual demand; you can't simply assume that aggregate demand behaves as if it were generated by a single "representative consumer" maximizing some utility function. What form should we write for aggregate demand, and how congruent is that form with economic theory? Surely an important question if we want to estimate how a shift in taxes on some commodity, or a policy of giving some agricultural input to some farmers, is going to affect demand for output, its price, and hence welfare!

Let q(j)=D(p,c,e) say that the quantity of good j consumed, in aggregate, is a function of the prices of all goods p and total (or average) consumption c, plus perhaps some random error e. This can be tough to estimate: if D(p,c,e)=Ap+e, where demand is just a linear function of relative prices, then we have a k-by-k matrix to estimate, where k is the number of goods. Worse, that demand function also imposes an enormous restriction on what individual demand functions, and hence utility functions, look like, in a way that theory does not necessarily support. The AIDS of Deaton and Muellbauer combines two facts: Taylor expansions approximately linearize nonlinear functions, and individual demand can be aggregated, even when heterogeneous across individuals, if the restrictions of Muellbauer's PIGLOG papers are satisfied. The result is a functional form for aggregate demand D which is consistent with aggregated individual rational behavior and which can sometimes be estimated via OLS. They use British data to argue that aggregate demand violates testable assumptions of the model, and hence that factors like credit constraints or price expectations are fundamental to explaining aggregate consumption.
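For concreteness, here is the AIDS budget-share system as I read it from Deaton and Muellbauer (1980), with w_i the budget share of good i, p the vector of prices, and x total expenditure:

```latex
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{x}{P}\right),
\qquad
\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k + \tfrac{1}{2}\sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j .
```

Utility theory imposes adding-up (the alpha_i sum to one; the beta_i and each column of gamma sum to zero), homogeneity (each row of gamma sums to zero), and Slutsky symmetry (gamma_ij = gamma_ji), so those restrictions become directly testable. Replacing ln P with the Stone index ln P* = sum_k w_k ln p_k gives the "linear approximate" version of the system, which is linear in parameters and hence the case that can sometimes be estimated by OLS.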

This exercise brings up a number of first-order questions for a development economist. First, it shows clearly the problem with estimating aggregate demand as a purely linear function of prices and income, as if society were a single consumer. Second, it shows how important the measurement of the overall price level is for figuring out the effects of taxes and other policies. Third, it combines theory and data to convincingly suggest that models which estimate demand solely as a function of current prices and current income are necessarily going to give misleading results, even when demand is allowed to take on very general forms as in the AIDS model. A huge body of research since 1980 has investigated how we can better model demand in order to credibly evaluate demand-affecting policy. All of this is very different from how a certain strand of development economics today might investigate something like a subsidy. Rather than taking observational data, these economists might look for a random or quasirandom experiment where such a subsidy was introduced, and estimate the "effect" of that subsidy directly on some quantity of interest, without concern for how exactly that subsidy generated the effect.

To see the difference between randomization and more structural approaches like AIDS, consider the following example from Deaton. You are asked to evaluate whether China should invest more in building railway stations if it wishes to reduce poverty. Many economists trained in a manner influenced by the randomization movement would say, well, we can't just regress the existence of a railway on a measure of city-by-city poverty. The existence of a railway station depends on both things we can control for (the population of a given city) and things we can't control for (a subjective belief that a town is "growing" when the railway is plopped there). Let's find something that is correlated with rail station building but uncorrelated with the random component of how rail station building affects poverty: for instance, a city may lie on a natural geographic route between two large cities. If certain assumptions hold, it turns out that a two-stage "instrumental variable" approach can use that "quasi-experiment" to generate the LATE, or local average treatment effect. This effect is the average benefit of a railway station on poverty reduction, at the local margin of cities which are just induced by the instrument to build a railway station. Similar techniques, like difference-in-differences and randomized control trials, can generate credible LATEs under slightly different assumptions. In development work today, it is very common to see a paper where large portions are devoted to showing that the (often untestable) assumptions of a given causal inference model are likely to hold in a given setting, before finally claiming that the treatment effect of X on Y is Z. That LATEs can be identified outside of purely randomized contexts is incredibly important and valuable, and the economists and statisticians who did the heavy statistical lifting on this so-called Rubin model will absolutely and justly win an Economics Nobel sometime soon.
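For readers who have not run one of these regressions, here is a minimal two-stage least squares sketch of the railway example in Python; the data-generating process, variable names, and numbers are all hypothetical, chosen only so that naive OLS is visibly confounded while the instrument recovers the true effect.

```python
# Hypothetical illustration of IV/2SLS for the railway-and-poverty example.
# Nothing here comes from real data; the point is only the mechanics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
on_route = rng.binomial(1, 0.5, n)       # instrument: city sits on a natural route
growth_belief = rng.normal(size=n)       # unobserved confounder driving both
rail = (0.8 * on_route + growth_belief + rng.normal(size=n) > 0.5).astype(float)
poverty = 2.0 - 0.5 * rail - 0.7 * growth_belief + rng.normal(size=n)

# Naive OLS conflates the railway effect with the unobserved "growing town" belief.
naive = sm.OLS(poverty, sm.add_constant(rail)).fit()

# First stage: predict railway construction from the instrument alone.
first = sm.OLS(rail, sm.add_constant(on_route)).fit()
# Second stage: regress poverty on the predicted (exogenous) part of railway status.
second = sm.OLS(poverty, sm.add_constant(first.fittedvalues)).fit()

print("true effect: -0.5")
print("naive OLS:  ", round(naive.params[1], 3))
print("2SLS (LATE):", round(second.params[1], 3))
# Caveat: hand-rolled second-stage standard errors are wrong; in practice use a
# packaged IV estimator (e.g. linearmodels' IV2SLS), which corrects them.
```

The point of the surrounding discussion is not that this machinery fails, but that the number it produces is a local average effect for route-marginal cities, which may or may not be the welfare-relevant parameter.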

However, this use of instrumental variables would surely seem strange to the old Cowles Commission folks: Deaton is correct that "econometric analysis has changed its focus over the years, away from the analysis of models derived from theory towards much looser specifications that are statistical representations of program evaluation. With this shift, instrumental variables have moved from being solutions to a well-defined problem of inference to being devices that induce quasi-randomization." The traditional use of instrumental variables was that after writing down a theoretically justified model of behavior or aggregates, certain parameters – not treatment effects, but parameters of a model – are not identified. For instance, price and quantity transacted are determined by the intersection of aggregate supply and aggregate demand. Knowing, say, that price and quantity were (a,b) today and are (c,d) tomorrow does not let me figure out the shape of either the supply or the demand curve. If price and quantity both rise, it may be that demand alone has increased, pushing the demand curve to the right, or that demand has increased while the supply curve has also shifted to the right a small amount, or many other outcomes. An instrument that shifts supply without changing demand, or vice versa, can be used to "identify" the two curves: an exogenous change in the price of oil shifts the supply of gasoline without moving the demand curve, and hence we can examine price and quantity transacted before and after the oil shock to trace out the slope of the demand curve (and a demand-side shifter does the same for supply).
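In textbook notation (a sketch of the standard argument, not Deaton's own equations), the system is:

```latex
\text{demand: } q_t = \alpha_d - \beta_d\, p_t + u_t,
\qquad
\text{supply: } q_t = \alpha_s + \beta_s\, p_t + \gamma\, z_t + v_t ,
```

where z_t is the oil-price shifter, which enters supply but is excluded from demand (so it is uncorrelated with u_t). Observed (p_t, q_t) pairs alone cannot separate the two slopes, but because movements in z_t shift supply along a fixed demand curve, the demand slope is identified by the instrumental-variables ratio beta_d = -Cov(q_t, z_t)/Cov(p_t, z_t); a shifter that enters only the demand equation identifies the supply slope in the same way.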

Note the difference between the supply-and-demand equations and the treatment-effects use of instrumental variables. In the former case, we have a well-specified system of supply and demand, based on economic theory. Once the supply and demand curves are estimated, we can then perform all sorts of counterfactual and welfare analysis. In the latter case, we generate a treatment effect (really, a LATE), but we do not really know why we got the treatment effect we got. Are rail stations useful because they reduce price variance across cities, because they allow increasing returns to scale in industry to be exploited, or for some other reason? Once we know the "why", we can ask questions like: is there a cheaper way to generate the same benefit? Is heterogeneity in the benefit important? Ought I expect the results from my quasi-experiment in place A and time B to still operate in place C and time D (a famous example being the drug Opren, which was very successful in RCTs but turned out to be particularly deadly when used widely by the elderly)? Worse, the whole idea of the LATE is backwards. We traditionally choose a parameter of interest, which may or may not be a treatment effect, and then choose an estimation technique that can credibly estimate that parameter. Quasirandom techniques instead start by specifying the estimation technique and then hunt for a quasirandom setting, or randomize appropriately by "dosing" some subjects and not others, in order to fit the assumptions necessary to generate a LATE. It is often the case that even policymakers do not care principally about the LATE; rather, they care about some measure of welfare impact which is rarely immediately interpretable even if the LATE is credibly known!

Given these problems, why are random and quasirandom techniques so heavily endorsed by the dominant branch of development? Again, let's turn to Deaton: "There has also been frustration with the World Bank's apparent failure to learn from its own projects, and its inability to provide a convincing argument that its past activities have enhanced economic growth and poverty reduction. Past development practice is seen as a succession of fads, with one supposed magic bullet replacing another—from planning to infrastructure to human capital to structural adjustment to health and social capital to the environment and back to infrastructure—a process that seems not to be guided by progressive learning." This is to say, the conditions necessary to estimate theoretical models are so stringent that development economists have been writing noncredible models, estimating them, generating some fad of programs that is used in development for a few years until it turns out not to be a silver bullet, then abandoning the fad for some new technique. Better, the randomistas argue, to forget about external validity for now, and instead just evaluate the LATEs on a program-by-program basis, iterating what types of programs we evaluate until we have a suitable list of interventions that we feel confident work. That is, development should operate like medicine.

We have something of an impasse here. Everyone agrees that on many questions theory is ambiguous in the absence of particular types of data, hence more and better data collection is important. Everyone agrees that many parameters of interest for policymaking require certain assumptions, some more justifiable than others. Deaton’s position is that the parameters of interest to economists by and large are not LATEs, and cannot be generated in a straightforward way from LATEs. Thus, following Nancy Cartwright’s delightful phrasing, if we are to “use” causes rather than just “hunt” for what they are, we have no choice but to specify the minimal economic model which is able to generate the parameters we care about from the data. Glen Weyl’s attempt to rehabilitate price theory and Raj Chetty’s sufficient statistics approach are both attempts to combine the credibility of random and quasirandom inference with the benefits of external validity and counterfactual analysis that model-based structural designs permit.

One way to read Deaton’s prize, then, is as an award for the idea that effective development requires theory if we even hope to compare welfare across space and time or to understand why policies like infrastructure improvements matter for welfare and hence whether their beneficial effects will remain when moved to a new context. It is a prize which argues against the idea that all theory does is propose hypotheses. For Deaton, going all the way back to his work with AIDS, theory serves three roles: proposing hypotheses, suggesting which data is worthwhile to collect, and permitting inference on the basis of that data. A secondary implication, very clear in Deaton’s writing, is that even though the “great escape” from poverty and want is real and continuing, that escape is almost entirely driven by effects which are unrelated to aid and which are uninfluenced by the type of small bore, partial equilibrium policies for which randomization is generally suitable. And, indeed, the best development economists very much understand this point. The problem is that the media, and less technically capable young economists, still hold the mistaken belief that they can infer everything they want to infer about “what works” solely using the “scientific” methods of random- and quasirandomization. For Deaton, results that are easy to understand and communicate, like the “dollar-a-day” poverty standard or an average treatment effect, are less virtuous than results which carefully situate numbers in the role most amenable to answering an exact policy question.

Let me leave you with three side notes and some links to Deaton's work. First, I can't help but laugh at Deaton's description of his early career in one of his famous "Notes from America". Deaton, despite being a student of the 1984 Nobel laureate Richard Stone, graduated from Cambridge essentially unaware of how one ought to publish in the big "American" journals like Econometrica and the AER. Cambridge had gone from being the absolute center of economic thought to something of a disconnected backwater, and Deaton, despite writing a paper that would win a prize as one of the best papers in Econometrica published in the late 1970s, had essentially no understanding of the norms of publishing in such a journal! When the history of modern economics is written, the rise of a handful of European programs and their role in reintegrating economics on both sides of the Atlantic will be fundamental. Second, Deaton's prize should be seen as something of a callback to the '84 prize to Stone and '77 prize to Meade, two of the least known Nobel laureates. I don't think it is an exaggeration to say that the majority of new PhDs from even the very best programs will have no idea who those two men are, or what they did. But as Deaton mentions, Stone in particular was one of the early "structural modelers" in that he was interested in estimating the so-called "deep" or behavioral parameters of economic models in a way that is absolutely universal today, as well as being a pioneer in the creation and collection of novel economic statistics whose value was proposed on the basis of economic theory. Quite a modern research program! Third, of the 19 papers in the AER "Top 20 of all time" whose authors were alive during the era of the economics Nobel, 14 have had at least one author win the prize. Should this be a cause for hope for the living outliers, Anne Krueger, Harold Demsetz, Stephen Ross, John Harris, Michael Todaro and Dale Jorgenson?

For those interested in Deaton's work beyond what this short essay covers, his methodological essay, quoted often in this post, is here. The Nobel Prize technical summary, always a great and well-written read, can be found here.

“Bonus Culture: Competitive Pay, Screening and Multitasking,” R. Benabou & J. Tirole (2014)

Empirically, bonus pay as a component of overall remuneration has become more common over time, especially in highly competitive industries which involve high levels of human capital; think of something like the management of Fortune 500 firms, where managers now have their salary determined globally rather than locally. This doesn't strike most economists as a bad thing at first glance: as long as we are measuring productivity correctly, workers who are compensated based on their actual output will both exert the right amount of effort and have the incentive to improve their human capital.

In an intriguing new theoretical paper, however, Benabou and Tirole point out that many jobs involve multitasking, where workers can take hard-to-measure actions for intrinsic reasons (e.g., I put effort into teaching because I intrinsically care, not because academic promotion really hinges on being a good teacher) or take easy-to-measure actions for which there might be some kind of bonus pay. Many jobs also involve screening: I don’t know who is high quality and who is low quality, and although I would optimally pay people a bonus exactly equal to their cost of effort, I am unable to do so since I don’t know what that cost is. Multitasking and worker screening interact among competitive firms in a really interesting way, since how other firms incentivize their workers affects how workers will respond to my contract offers. Benabou and Tirole show that this interaction means that more competition in a sector, especially when there is a big gap between the quality of different workers, can actually harm social welfare even in the absence of any other sort of externality.

Here is the intuition. For multitasking reasons, when the different tasks workers can perform are substitutes, I don't want to give big bonus payments for the observable output, since if I do the worker will put too little effort into the intrinsically valuable task: if you pay a trader big bonuses for financial returns, she will not put as much effort into ensuring all the laws and regulations are followed. If there are other finance firms, though, they will make it known that, hey, we pay huge bonuses for high returns. As a result, workers will sort, with all of the high quality traders moving to the high bonus firm, leaving only the low quality traders at the firm with low bonuses. Bonuses are used not only to motivate workers, but also to differentially attract high quality workers when quality is otherwise tough to observe. There is a tradeoff, then: you can either retain only low productivity workers but get the balance between hard-to-measure and easy-to-measure tasks right, or you can retain some high quality workers with large bonuses that make those workers exert too little effort on the hard-to-measure tasks. When the latter is more profitable, all firms inefficiently begin offering large, effort-distorting bonuses, something they wouldn't do if they didn't have to compete for workers.
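To make the crowd-out channel concrete, here is a toy calculation of my own (not Benabou and Tirole's model): a worker splits effort between a bonus-paid, measurable task and an intrinsically valued, unmeasured task, with a quadratic cost that makes the two substitutes, so raising the bonus mechanically pulls effort away from the unmeasured task.

```python
# Toy multitasking example; parameters are invented and chosen so the worker's
# optimum is interior. The worker maximizes
#   bonus*a + THETA*b - (a**2 + b**2 + 2*KAPPA*a*b)/2
# over measured effort a and unmeasured, intrinsically valued effort b.
KAPPA = 0.5   # substitutability of the two tasks in the effort cost (0 < KAPPA < 1)
THETA = 1.0   # intrinsic marginal value the worker attaches to the unmeasured task

def effort_choice(bonus):
    """Interior first-order conditions: bonus = a + KAPPA*b and THETA = b + KAPPA*a."""
    denom = 1 - KAPPA ** 2
    a = (bonus - KAPPA * THETA) / denom
    b = (THETA - KAPPA * bonus) / denom
    return a, b

for bonus in (0.6, 1.0, 1.4):
    a, b = effort_choice(bonus)
    print(f"bonus={bonus:.1f}: measured effort={a:.2f}, unmeasured effort={b:.2f}")
```

The screening problem is what turns this private distortion into an industry-wide one: if a rival firm posts the bigger bonus, it also skims off the high-quality workers, so in equilibrium everyone ends up on the high-bonus, low-unmeasured-effort side of this tradeoff.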

How can we fix things? One easy method is with a bonus cap: if the bonus is capped at the monopsony optimal bonus, then no one can try to screen high quality workers away from other firms with a higher bonus. This isn’t as good as it sounds, however, because there are other ways to screen high quality workers (such as offering lower clawbacks if things go wrong) which introduce even worse distortions, hence bonus caps may simply cause less efficient methods to perform the same screening and same overincentivization of the easy-to-measure output.

When the individual rationality or incentive compatibility constraints in a mechanism design problem are determined in equilibrium, based on the mechanisms chosen by other firms, we sometimes call this a "competing mechanism" problem. It seems to me that there are quite a number of open questions concerning how to make these sorts of problems tractable; a talented young theorist looking for a fun summer project might find it profitable to investigate this as-yet small literature.

Beyond the theoretical result on screening plus multitasking, Tirole and Benabou also show that their results hold for market competition more general than just perfect competition versus monopsony. They do this through a generalized version of the Hotelling line which appears to have some nice analytic properties, at least compared to the usual search-theoretic models which you might want to use when discussing imperfect labor market competition.

Final copy (RePEc IDEAS version), forthcoming in the JPE.

“The Rents from Sugar and Coercive Institutions: Removing the Sugar Coating,” C. Dippel, A. Greif & D. Trefler (2014)

Today, I’ve got two posts about some new work by Christian Dippel, an economic historian at UCLA Anderson who is doing some very interesting theoretically-informed history; no surprise to see Greif and Trefler as coauthors on this paper, as they are both prominent proponents of this analytical style.

The authors consider the following puzzle: sugar prices absolutely collapse during the mid- and late 1800s, largely because of the rise of beet sugar. And yet, wages in the sugar-dominant British colonies do not appear to have fallen. This is odd, since all of our main theories of trade suggest that when an export price falls, the prices of the factors used to produce that export also fall (this is less obvious than just marginal product falling, but still true).

The economics seem straightforward enough, so what explains the empirical result? Well, the period in question is right after the end of slavery in the British Empire. There were lots of ways in which the politically powerful could use legal or extralegal means to keep wages from rising to marginal product. Suresh Naidu, a favorite of this blog, has a number of papers on labor coercion everywhere from the UK in the era of Master and Servant Law, to the US South post-Reconstruction, to the Middle East today; actually, I understand he is writing a book on the subject which, if there is any justice, has a good shot at being the next Pikettyesque mainstream hit. Dippel et al quote a British writer in the 1850s on the Caribbean colonies: "we have had a mass of colonial legislation, all dictated by the most short-sighted but intense and disgraceful selfishness, endeavouring to restrict free labour by interfering with wages, by unjust taxation, by unjust restrictions, by oppressive and unequal laws respecting contracts, by the denial of security of [land] tenure, and by impeding the sale of land." In particular, wages rose rapidly right after slavery ended in 1838, but those gains were clawed back by the end of the 1840s due to "tenancy-at-will laws" (which let employers seize some types of property if workers left), trespass and land use laws that restricted freeholding on abandoned estates and Crown land, and emigration restrictions.

What does labor coercion have to do with wages staying high as sugar prices collapse? The authors write a nice general equilibrium model. Englishmen choose whether to move to the colonies (in which case they get some decent land) or to stay in England at the outside wage. Workers in the Caribbean can either take a wage working sugar, which depends on bargaining power, or go work marginal freehold land. Labor coercion rules limit the ability of those workers to work some of that land, so the outside option of leaving the sugar plantation is worse the more coercive institutions are. Governments maximize a weighted combination of Englishmen's and local wages, choosing the coerciveness of institutions; the weight on Englishmen's wages is higher the more important sugar exports and their enormous rents are to the local economy. In partial equilibrium, then, if the price of sugar falls exogenously, the wages of workers on sugar plantations fall (as their marginal product goes down), the number of locals willing to work sugar falls, and hence the number of Englishmen willing to stay falls (as their profit goes down). With fewer plantations, sugar rents become less important, labor coercion falls, and more marginal land opens up to freeholders, which causes even more workers to leave sugar plantations and improves wages. However, if sugar is very important, the government places a lot of weight on planter income in the social welfare function, and responds to a fall in sugar prices by increasing labor coercion, lowering workers' outside option and keeping them on the sugar plantations, where they earn lower wages than before for the usual economic reasons. That is, if sugar is really important, coercive institutions are retained, the economic structure is largely unchanged in response to a fall in world sugar prices, and wages fall. If sugar is only of marginal importance, a fall in sugar prices leads the politically powerful to leave, lowering the political strength of the planter class, so coercive labor institutions decline and workers reallocate until wages approach marginal product. Since the marginal product of options other than sugar may exceed the wage previously paid to sugar workers, this reallocation triggered by the decline of sugar prices can cause wages in the colony to increase.

The British, being British, kept very detailed records of things like incarceration rates, wages, crop exports, and the like, and the authors find a good deal of empirical evidence for the mechanism just described. To assuage worries about the endogeneity of planter power, they even get a subject expert to construct a measure of geographic suitability for sugar in each of 14 British Caribbean colonies, and proxy for planter power with the suitability of marginal land for sugar production. Interesting work all around.

What should we take from this? That legal and extralegal means can be used to keep factor returns from approaching their perfect-competition level: well, that is something essentially every classical economist from Smith to Marx described. The interesting work here is the endogeneity of labor coercion. There is still some debate about how much we actually know about whether these endogenous institutions (or, even more so, the persistence of institutions) have first-order economic effects; see a recent series of posts by Dietz Vollrath for a skeptical view. I find this paper by Dippel et al, as well as recent work by Naidu and Hornbeck, to be among the cleanest examples of how exogenous shocks affect institutions, and of how those institutions then affect economic outcomes of great importance.

December 2014 working paper (no RePEc IDEAS version)

“International Trade and Institutional Change: Medieval Venice’s Response to Globalization,” D. Puga & D. Trefler

(Before discussing today's paper, I should forward a couple of great remembrances of Stanley Reiter, who passed away this summer, by Michael Chwe (whose interests at the intersection of theory and history are close to my heart) and Rakesh Vohra. After leaving Stanford – Chwe mentions this was partly due to a nasty letter written by Reiter's advisor Milton Friedman! – Reiter established an incredible theory group at Purdue which included Afriat, Vernon Smith and PhD students like Sonnenschein and Ledyard. He then moved to Northwestern, where he helped build up the great group in MEDS, whose membership is too long to list but which includes one Nobel winner already in Myerson and, by my reckoning, two more who are favorites to win the prize next Monday.

I wonder if we may be at the end of an era for topic-diverse theory departments. Business schools are all a bit worried about "Peak MBA", and theorists are surely the first ones out the door when enrollment falls. Economics departments, journals and funders seem to have shifted, in the large, toward more empirical work, for better or worse. Our knowledge of how economic and social interactions operate in their most platonic form, and our ability to interpret empirical results when considering novel or counterfactual policies, have both benefited greatly from the theoretical developments following Samuelson and Hicks' mathematization of primitives in the 1930s and 40s, and from the development of modern game theory and mechanism design in the 1970s and 80s. Would that a new Cowles and a 21st century Reiter appear to help create a critical mass of theorists again!)

On to today's paper, a really interesting theory-driven piece of economic history. Venice was one of the most important centers of Europe's "commercial revolution" between the 10th and 15th centuries; anyone who read Marco Polo as a schoolkid knows of Venice's prowess in long-distance trade. Among historians, Venice is also well-known for the inclusive political institutions that developed in the 12th century, and for the rise of oligarchy following the "Serrata" at the end of the 13th century. The Serrata was followed by a gradual decrease in Venice's power in long-distance trade and a shift toward manufacturing, including the Murano glass it is still famous for today. This is a fairly worrying history from our vantage point today: as the middle class grew wealthier, democratic forms of government and free markets did not follow. Indeed, quite the opposite: the oligarchs seized political power, and within a few decades of the Serrata restricted access to the types of trade that had previously driven wealth mobility. Explaining what happened here is both a challenge, due to limited data, and of great importance, given the public prominence of worries about the intersection of growing inequality and the corruption of the levers of democracy.

Dan Trefler, an economic historian here at U. Toronto, and Diego Puga, an economist at CEMFI who has done some great work in economic geography, provide a great explanation of this history. Here’s the model. Venice begins with lots of low-wealth individuals, a small middle and upper class, and political power granted to anyone in the upper class. Parents in each dynasty can choose to follow a risky project – becoming a merchant in a long-distance trading mission a la Niccolo and Maffeo Polo – or work locally in a job with lower expected pay. Some of these low and middle class families will succeed on their trade mission and become middle and upper class in the next generation. Those with wealth can sponsor ships via the colleganza, a type of early joint-stock company with limited liability, and potentially join the upper class. Since long-distance trade is high variance, there is a lot of churn across classes. Those with political power also gather rents from their political office. As the number of wealthy rise in the 11th and 12th century, the returns to sponsoring ships falls due to competition across sponsors in the labor and export markets. At any point, the upper class can vote to restrict future entry into the political class by making political power hereditary. They need to include sufficiently many powerful people in this hereditary class or there will be a revolt. As the number of wealthy increase, eventually the wealthy find it worthwhile to restrict political power so they can keep political rents within their dynasty forever. Though political power is restricted, the economy is still free, and the number of wealthy without power continue to grow, lowering the return to wealth for those with political power due to competition in factor and product markets. At some point, the return is so low that it is worth risking revolt from the lower classes by restricting entry of non-nobles into lucrative industries. To prevent revolt, a portion of the middle classes are brought in to the hereditary political regime, such that the regime is powerful enough to halt a revolt. Under these new restrictions, lower classes stop engaging in long-distance trade and instead work in local industry. These outcomes can all be generated with a reasonable looking model of dynastic occupation choice.

What historical data would be consistent with this theoretical mechanism? We should expect lots of turnover in political power and wealth in the 10th through 13th centuries. We should find examples in the literature of families beginning as long-distance traders and rising to become voyage sponsors and political agents. We should see a period of political autocracy develop, followed later by the expansion of hereditary political power and by restrictions limiting entry into lucrative industries to those with such power. Economic success based on being able to activate large amounts of capital from within the nobility will make inter-family connections more important in the 14th and 15th centuries than before. Political power and participation in lucrative economic ventures will be limited to a smaller number of families after this political and economic closure than before. Those left out of the hereditary regime will shift to local agriculture and small-scale manufacturing.

Indeed, we see all of these outcomes in Venetian history. Trefler and Puga use some nice techniques to get around limited data availability. Since we don’t have data on family incomes, they use the correlation in eigenvector centrality within family marriage networks as a measure of the stability of the upper classes. They code colleganza records – a non-trivial task involving searching thousands of scanned documents for particular Latin phrases – to investigate how often new families appear in these records, and how concentration in the funding of long-distance trade changes over time. They show that all of the families with high eigenvector centrality in the noble marriage market after political closure – a measure of economic importance, remember – were families that were in the top quartile of seat-share in the pre-closure Venetian legislature, and that those families which had lots of political power pre-closure but little commercial success thereafter tended to be unsuccessful in marrying into lucrative alliances.
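For readers unfamiliar with the network measure: eigenvector centrality scores a family highly when it marries into families that are themselves central, and a toy computation (hypothetical family names, nothing to do with their data) looks like this.

```python
# Eigenvector centrality on a toy marriage network; an edge means at least one
# marriage between two (hypothetical) noble houses.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("HouseA", "HouseB"), ("HouseA", "HouseC"), ("HouseB", "HouseC"),
    ("HouseC", "HouseD"), ("HouseD", "HouseE"), ("HouseE", "HouseF"),
])

centrality = nx.eigenvector_centrality(G)  # leading eigenvector of the adjacency matrix
for house, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{house}: {score:.3f}")
```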

There is a lot more historical detail in the paper, but as a matter of theory useful to the present day, the Venetian experience ought to throw cold water on the idea that political inclusiveness and economic development always form a virtuous circle. Institutions are endogenous, and changes in the nature of inequality within a society following economic development alter the potential for political and economic crackdowns to survive popular revolt.

Final published version in QJE 2014 (RePEc IDEAS). A big thumbs up to Diego for having the single best research website I have come across in five years of discussing papers in this blog. Every paper has an abstract, well-organized replication data, and a link to a locally-hosted version of the final published paper. You may know his paper with Nathan Nunn on how rugged terrain in Africa is associated with good economic outcomes today because slave traders like the infamous Tippu Tip couldn’t easily exploit mountainous areas, but it’s also worth checking out his really clever theoretical disambiguation of why firms in cities are more productive, as well as his crazy yet canonical satellite-based investigation of the causes of sprawl. There is a really cool graphic on the growth of U.S. sprawl at that last link!

“The Rise and Fall of General Laws of Capitalism,” D. Acemoglu & J. Robinson (2014)

If there is one general economic law, it is that every economist worth their salt is obligated to put out twenty pages responding to Piketty’s Capital. An essay by Acemoglu and Robinson on this topic, though, is certainly worth reading. They present three particularly compelling arguments. First, in a series of appendices, they follow Debraj Ray, Krusell and Smith and others in trying to clarify exactly what Piketty is trying to say, theoretically. Second, they show that it is basically impossible to find any effect of the famed r-g on top inequality in statistical data. Third, they claim that institutional features are much more relevant to the impact of economic changes on societal outcomes, using South Africa and Sweden as examples. Let’s tackle these in turn.

First, the theory. It has been noted before that Piketty, despite beginning his career as a very capable economic theorist (hired at MIT at age 22!), is very disdainful of the prominence of theory. He points out, quite correctly, that we don't even have descriptive data on a huge number of topics of economic interest, inequality being principal among these. But, shades of the Methodenstreit, he then goes on to ignore theory where it is most useful: in helping to understand, and extrapolate from, his wonderful data. It turns out that even in simple growth models, not only is it untrue that r>g necessarily holds, but the endogeneity of r and our standard estimates of the elasticity of substitution between labor and capital do not at all imply that capital-to-income ratios will continue to grow (see Matt Rognlie on this point). Further, Acemoglu and Robinson show that even relatively minor movement between classes is sufficient to keep the capital share from skyrocketing. Do not skip the appendices to A and R's paper – these are what should have been included in the original Piketty book!

Second, the data. Acemoglu and Robinson point out, and it really is odd, that despite the claims of "fundamental laws of capitalism", there is no formal statistical investigation of these laws in Piketty's book. A and R look at data on growth rates, top inequality and the rate of return (either the rate on government bonds, or a computed economy-wide marginal return on capital), and find that, if anything, as r-g grows, top inequality shrinks. All of the data is post-World War II, so there is no Great Depression or World War confounding things. How could this be?

The answer lies in the feedback between inequality and the economy. As inequality grows, political pressures change, the endogenous development and diffusion of technology changes, the relative use of capital and labor change, and so on. These effects, in the long run, dominate any “fundamental law” like r>g, even if such a law were theoretically supported. For instance, Sweden and South Africa have very similar patterns of top 1% inequality over the twentieth century: very high at the start, then falling in mid-century, and rising again recently. But the causes are totally different: in Sweden’s case, labor unrest led to a new political equilibrium with a high-growth welfare state. In South Africa’s case, the “poor white” supporters of Apartheid led to compressed wages at the top despite growing black-white inequality until 1994. So where are we left? The traditional explanations for inequality changes: technology and politics. And even without r>g, these issues are complex and interesting enough – what could be a more interesting economic problem for an American economist than diagnosing the stagnant incomes of Americans over the past 40 years?

August 2014 working paper (No IDEAS version yet). Incidentally, I have a little tracker on my web browser that lets me know when certain pages are updated. Having such a tracker follow Acemoglu’s working papers pages is, frankly, depressing – how does he write so many papers in such a short amount of time?

Debraj Ray on Piketty’s Capital

As mentioned by Sandeep Baliga over at Cheap Talk, Debraj Ray has a particularly interesting new essay on Piketty’s Capital in the 21st Century. If you are theoretically inclined, you will find Ray’s comments to be one of the few reviews of Piketty that proves insightful.

I have little to add to Ray, but here are four comments about Piketty’s book:

1) The data collection effort on inequality by Piketty and coauthors is incredible and supremely interesting; not for nothing does Saez-Piketty 2003 have almost 2000 citations. Much of this data can be found in previous articles, of course, but it is useful to have it all in one place. Why it took so long for this data to become public, compared to things like GDP measures, is an interesting question, one which the sociologist Dan Hirschman is currently working on. Incidentally, the data quality complaints by the Financial Times seem to me of rather limited importance to the overall story.

2) The idea that Piketty is some sort of outsider, as many in the media want to make him out to be, is very strange. His first job was at literally the best mainstream economics department in the entire world, he won the prize given to the best young economist in Europe, he has published a paper in a Top 5 economics journal every other year since 1995, his most frequent coauthor is at another top mainstream department, and that coauthor himself won the prize for the best young economist in the US. It is also simply not true that economists only started caring about inequality after the 2008 financial crisis; rather, Autor and others were writing on inequality well before that date, in response to clearer evidence that the "Great Compression" of the income distribution in the developed world during the middle of the 20th century had begun to reverse itself sometime in the 1970s. Even I coauthored a review of income inequality data in late 2006/early 2007!

3) As Ray points out quite clearly, the famous "r>g" of Piketty's book is not an explanation for rising inequality. There are lots of standard growth models – indeed, all standard growth models that satisfy dynamic efficiency – where r>g holds with no impact on the income distribution. Ray gives the Harrod model: let output be produced solely by capital, and let the capital-output ratio be constant. Then Y=r*K, where r is the return to capital net of depreciation, so the capital-output ratio is K/Y=1/r. Now savings in excess of that necessary to replace depreciated assets is K(t+1)-K(t), or

Y(t+1)[K(t+1)/Y(t+1)] - Y(t)[K(t)/Y(t)]

Holding the capital-output ratio constant, the savings rate is s = ([Y(t+1)-Y(t)]/Y(t))(K/Y) = g[K/Y], where g is the growth rate of the economy. Finally, since K/Y=1/r in the Harrod model, we have s=g/r, and hence r>g will hold in a Harrod model whenever the savings rate is less than 100% of current income. This model, however, has nothing to do with the distribution of income. Ray notes that the Phelps-Koopmans theorem implies that a similar r>g result will hold along any dynamically efficient growth path in much more general models.
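For readers who like the algebra in one place, the same accounting in the notation above is:

```latex
S_t = K_{t+1} - K_t = \frac{K}{Y}\,\bigl(Y_{t+1} - Y_t\bigr)
\quad\Longrightarrow\quad
s \equiv \frac{S_t}{Y_t} = \frac{K}{Y}\cdot\frac{Y_{t+1}-Y_t}{Y_t} = \frac{g}{r},
```

so a savings rate below 100% of current income is exactly the statement that r > g.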

You may wonder, then, how we can have r>g and yet not have exploding income held by the capital-owning class. Two reasons: first, as Piketty has pointed out, r in these economic models (the return to capital, full stop) and r in the sense important to growing inequality are not the same concept, since wars and taxes lower the r received by savers. Second, individuals presumably also dissave according to some maximization concept. Imagine an individual has $1 billion, the risk-free market return after taxes is 4%, and the economy-wide growth rate is 2%, with both numbers exogenously holding forever. It is of course true that this individual could increase their share of the economy's wealth without bound. Even with the caveat that as the capital-owning class owns more and more, surely the portion of r due to time preference, and hence r itself, will decline, we still oughtn't conclude that income inequality will become worse or that capital income will increase. If this representative rich individual simply consumes 1.92% of their income each year – a savings rate of over 98 percent! – the ratio of income among the idle rich to national income will remain constant. What's worse, if some of the savings is directed to human capital rather than physical capital, as is clearly true for the children of the rich in the US, the ratio of capital income to overall income will be even less likely to grow.
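For what it is worth, here is the arithmetic I believe lies behind the 1.92% figure (the calculation is not spelled out above): if the dynasty consumes a constant fraction c of its wealth each year after the return r has accrued, its wealth grows at the gross rate (1+r)(1-c), and keeping pace with the economy requires

```latex
(1+r)(1-c) = 1+g
\quad\Longrightarrow\quad
c = \frac{r-g}{1+r} = \frac{0.04 - 0.02}{1.04} \approx 1.92\%,
```

in which case the dynasty's wealth, and hence its capital income, grows at exactly the economy's 2% rate and its share of national income stays flat.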

These last couple paragraphs are simply an extended argument that r>g is not a “Law” that says something about inequality, but rather a starting point for theoretical investigation. I am not sure why Piketty does not want to do this type of investigation himself, but the book would have been better had he done so.

4) What, then, does all this mean about the nature of inequality in the future? Ray suggests an additional law: that there is a long-run tendency for capital to replace labor. This is certainly true, particularly if human capital is counted as a form of “capital”. I disagree with Ray about the implication of this fact, however. He suggests that “to avoid the ever widening capital-labor inequality as we lurch towards an automated world, all its inhabitants must ultimately own shares of physical capital.” Consider the 19th century as a counterexample. There was enormous technical progress in agriculture. If you wanted a dynasty that would be rich in 2014, ought you have invested in agricultural land? Surely not. There has been enormous technical progress in RAM chips and hard drives in the last couple decades. Is the capital related to those industries where you ought to have invested? No. With rapid technical progress in a given sector, the share of total income generated by that sector tends to fall (see Baumol). Even when the share of total income is high, the social surplus of technical progress is shared among various groups according to the old Ricardian rule: rents accrue to the (relatively) fixed factor! Human capital which is complementary to automation, or goods which can maintain a partial monopoly in an industry complementary to those affected by automation, are much likelier sources of riches than owning a bunch of robots, since robots and the like are replicable and hence the rents accrued to their owners, regardless of the social import, will be small.

There is still a lot of work to be done concerning the drivers of long-run inequality, by economists and by those more concerned with political economy and sociology. Piketty’s data, no question, is wonderful. Ray is correct that the so-called Laws in Piketty’s book, and the predictions about the next few decades that they generate, are of less interest.

Ray's "A Comment on Thomas Piketty", inclusive of the appendix, is available in pdf form, and a modified version in html can be read here.