
Nobel Prize 2016 Part II: Oliver Hart

The Nobel Prize in Economics was given yesterday to two wonderful theorists, Bengt Holmstrom and Oliver Hart. I wrote a day ago about Holmstrom’s contributions, many of which are simply foundational to modern mechanism design and its applications. Oliver Hart’s contribution is more subtle and hence more of a challenge to describe to a nonspecialist; I am sure of this because no concept gives my undergraduate students more headaches than Hart’s “residual control right” theory of the firm. Even stranger, much of Hart’s recent work repudiates the importance of his most famous articles, a point that appears to have been entirely lost on every newspaper discussion of Hart that I’ve seen (including otherwise very nice discussions like Appelbaum’s in the New York Times). A major reason he has changed his beliefs, and his research agenda, so radically is not simply the whims of age or the pressures of politics, but rather the impact of a devastatingly clever, and devastatingly esoteric, argument made by the Nobel winners Eric Maskin and Jean Tirole. To see exactly what’s going on in Hart’s work, and why many very important questions in this area remain unsolved, let’s quickly survey what economists mean by “theory of the firm”.

The fundamental strangeness of firms goes back to Coase. Markets are amazing. We have wonderful theorems going back to Hurwicz about how competitive market prices coordinate activity efficiently even when individuals only have very limited information about how various things can be produced by an economy. A pencil somehow involves graphite being mined, forests being explored and exploited, rubber being harvested and produced, the raw materials brought to a factory where a machine puts the pencil together, ships and trains bringing the pencil to retail stores, and yet this decentralized activity produces a pencil costing ten cents. This is the case even though not a single individual anywhere in the world knows how all of those processes up the supply chain operate! Yet, as Coase pointed out, a huge amount of economic activity (including the majority of international trade) is not coordinated via the market, but rather through top-down Communist-style bureaucracies called firms. Why on Earth do these persistent organizations exist at all? When should firms merge and when should they divest themselves of their parts? These questions make up the theory of the firm.

Coase’s early answer is that something called transaction costs exist, and that they are particularly high outside the firm. That is, market transactions are not free. Firm size is determined at the point where the problems of bureaucracy within the firm overwhelm the benefits of reducing transaction costs from regular transactions. There are two major problems here. First, who knows what a “transaction cost” or a “bureaucratic cost” is, and why they differ across organizational forms: the explanation borders on tautology. Second, as the wonderful paper by Alchian and Demsetz in 1972 points out, there is no reason we should assume firms have some special ability to direct or punish their workers. If your supplier does something you don’t like, you can keep them on, or fire them, or renegotiate. If your in-house department does something you don’t like, you can keep them on, or fire them, or renegotiate. The problem of providing suitable incentives – the contracting problem – does not simply disappear because some activity is brought within the boundary of the firm.

Oliver Williamson, a recent Nobel winner jointly with Elinor Ostrom, has a more formal transaction cost theory: some relationships generate joint rents higher than could be generated if we split ways, unforeseen things occur that make us want to renegotiate our contract, and the cost of that renegotiation may be lower if workers or suppliers are internal to a firm. “Unforeseen things” may include anything which cannot be measured ex-post by a court or other mediator, since that is ultimately who would enforce any contract. It is not that everyday activities have different transaction costs, but that the negotiations which produce contracts are themselves easier to handle in a more persistent relationship. As in Coase, the question of why firms do not simply grow to an enormous size is largely dealt with by off-hand references to “bureaucratic costs” whose nature remains largely informal. Though informal, the idea that something like transaction costs might matter seemed intuitive and had some empirical support – firms are larger in the developing world because weaker legal systems mean more “unforeseen things” will occur outside the scope of a contract, hence the differential costs of holdup or renegotiation inside and outside the firm are first order when deciding on firm size. That said, the Alchian-Demsetz critique, and the question of what a “bureaucratic cost” is, are worrying. And as Eric van den Steen points out in a 2010 AER, can anyone who has tried to order paper through their procurement office versus just popping in to Staples really believe that the reason firms exist is to lessen the cost of intrafirm activities?

Grossman and Hart (1986) argue that the distinction that really makes a firm a firm is that it owns assets. They retain the idea that contracts may be incomplete – at some point, I will disagree with my suppliers, or my workers, or my branch manager, about what should be done, either because a state of the world has arrived not covered by our contract, or because it is in our first-best mutual interest to renegotiate that contract. They retain the idea that there are relationship-specific rents, so I care about maintaining this particular relationship. But rather than rely on transaction costs, they simply point out that the owner of the asset is in a much better bargaining position when this disagreement occurs. Therefore, the owner of the asset will get a bigger percentage of rents after renegotiation. Hence the person who owns an asset should be the one whose incentive to improve the value of the asset is most sensitive to that future split of rents.

Baker and Hubbard (2004) provide a nice empirical example: when on-board computers to monitor how long-haul trucks were driven began to diffuse, ownership of those trucks shifted from owner-operators to trucking firms. Before the computer, if the trucking firm owns the truck, it is hard to contract on how hard the truck will be driven or how poorly it will be treated by the driver. If the driver owns the truck, it is hard to contract on how much effort the trucking firm dispatcher will exert ensuring the truck isn’t sitting empty for days, or following a particularly efficient route. The computer solves the first problem, meaning that only the trucking firm is taking actions relevant to the joint relationship which are highly likely to be affected by whether they own the truck or not. In Grossman and Hart’s “residual control rights” theory, then, the introduction of the computer means the truck ought, post-computer, to be owned by the trucking firm. If these residual control rights are unimportant – there is no relationship-specific rent and no incompleteness in contracting – then the ability to shop around for the best relationship is more valuable than the control rights asset ownership provides. Hart and Moore (1990) extend this basic model to the case where there are many assets and many firms, suggesting critically that sole ownership of assets which are highly complementary in production is optimal. Asset ownership affects outside options when the contract is incomplete by changing bargaining power, and splitting ownership of complementary assets gives multiple agents weak bargaining power and hence little incentive to invest in maintaining the quality of, or improving, the assets. Hart, Shleifer and Vishny (1997) provide a great example of residual control rights applied to the question of why governments should run prisons but not garbage collection. (A brief aside: note the role that bargaining power plays in all of Hart’s theories. We do not have a “perfect” – in a sense that can be made formal – model of bargaining, and Hart tends to use bargaining solutions from cooperative game theory like the Shapley value. After Shapley’s prize alongside Roth a few years ago, this makes multiple prizes heavily influenced by cooperative games applied to unexpected problems. Perhaps the theory of cooperative games ought still be taught with vigor in PhD programs!)

There are, of course, many other theories of the firm. The idea that firms in some industries are big because there are large fixed costs to enter at the minimum efficient scale goes back to Marshall. The agency theory of the firm going back at least to Jensen and Meckling focuses on the problem of providing incentives for workers within a firm to actually profit maximize; as I noted yesterday, Holmstrom and Milgrom’s multitasking is a great example of this, with tasks being split across firms so as to allow some types of workers to be given high powered incentives and others flat salaries. More recent work by Bob Gibbons, Rebecca Henderson, Jon Levin and others on relational contracting discusses how the nexus of self-enforcing beliefs about how hard work today translates into rewards tomorrow can substitute for formal contracts, and how the credibility of these “relational contracts” can vary across firms and depend on their history.

Here’s the kicker, though. A striking blow was dealt to all theories which rely on the incompleteness or nonverifiability of contracts by a brilliant paper of Maskin and Tirole (1999) in the Review of Economic Studies. Theories relying on incomplete contracts generally just hand-waved that there are always events which are unforeseeable ex-ante or impossible to verify in court ex-post, and hence there will always be scope for disagreement about what to do when those events occur. But, as Maskin and Tirole correctly point out, agents don’t care about anything in these unforeseeable/unverifiable states except for what the states imply about our mutual valuations from carrying on with a relationship. Therefore, every “incomplete contract” should just involve the parties deciding in advance that if a state of the world arrives where you value keeping our relationship in that state at 12 and I value it at 10, then we should split that joint value of 22 at whatever level induces optimal actions today. Do this same ex-ante contracting for all future profit levels, and we are done. Of course, there is still the problem of ensuring incentive compatibility – why would the agents tell the truth about their valuations when that unforeseen event occurs? I will omit the details here, but you should read the original paper where Maskin and Tirole show a (somewhat convoluted but still working) mechanism that induces truthful revelation of private value by each agent. Taking the model’s insight seriously but the exact mechanism less seriously, the paper basically suggests that incomplete contracts don’t matter if we can truthfully figure out ex-post who values our relationship at what amount, and there are many real-world institutions like mediators who do precisely that. If, as Maskin and Tirole prove (and Maskin described more simply in a short note), incomplete contracts aren’t a real problem, we are back to square one – why have persistent organizations called firms?

What should we do? Some theorists have tried to fight off Maskin and Tirole by suggesting that their precise mechanism is not terribly robust to, for instance, assumptions about higher-order beliefs (e.g., Aghion et al (2012) in the QJE). But these quibbles do not contradict the far more basic insight of Maskin and Tirole: situations we think of empirically as “hard to describe” or “unlikely to occur or be foreseen” are not sufficient to justify the relevance of incomplete contracts unless we also have some reason to think that all mechanisms which split rent on the basis of future profit, like a mediator, are unavailable. Note that real world contracts regularly include provisions that describe ex-ante how contractual disagreement ex-post should be handled.

Hart’s response, and this is clear both from his CV and from his recent papers and presentations, is to ditch incompleteness as the fundamental reason firms exist. Hart and Moore’s 2007 AER P&P and 2006 QJE are very clear:

Although the incomplete contracts literature has generated some useful insights about firm boundaries, it has some shortcomings. Three that seem particularly important to us are the following. First, the emphasis on noncontractible ex ante investments seems overplayed: although such investments are surely important, it is hard to believe that they are the sole drivers of organizational form. Second, and related, the approach is ill suited to studying the internal organization of firms, a topic of great interest and importance. The reason is that the Coasian renegotiation perspective suggests that the relevant parties will sit down together ex post and bargain to an efficient outcome using side payments: given this, it is hard to see why authority, hierarchy, delegation, or indeed anything apart from asset ownership matters. Finally, the approach has some foundational weaknesses [pointed out by Maskin and Tirole (1999)].

To my knowledge, Oliver Hart has written zero papers since Maskin-Tirole was published which attempt to explain any policy or empirical fact on the basis of residual control rights and their necessary incomplete contracts. Instead, he has been primarily working on theories which depend on reference points, a behavioral idea that when disagreements occur between parties, the ex-ante contracts are useful because they suggest “fair” divisions of rent, and induce shading and other destructive actions when those divisions are not delivered. These behavioral agents may very well disagree about what the ex-ante contract means for “fairness” ex-post. The primary result is that flexible contracts (e.g., contracts which deliberately leave lots of incompleteness) can adjust easily to changes in the world but will induce spiteful shading by at least one agent, while rigid contracts do not permit this shading but do cause parties to pursue suboptimal actions in some states of the world. This perspective has been applied by Hart to many questions over the past decade, such as why it can be credible to delegate decision making authority to agents: if you try to seize it back, the agent will feel aggrieved and will shade effort. These responses are hard, or perhaps impossible, to justify when agents are perfectly rational, and of course the Maskin-Tirole critique would apply if agents were purely rational.

So where does all this leave us concerning the initial problem of why firms exist in a sea of decentralized markets? In my view, we have many clever ideas, but still do not have the perfect theory. A perfect theory of the firm would need to be able to explain why firms are the size they are, why they own what they do, why they are organized as they are, why they persist over time, and why interfirm incentives look the way they do. It almost certainly would need its mechanisms to work if we assumed all agents were highly, or perfectly, rational. Since patterns of asset ownership are fundamental, it needs to go well beyond the type of hand-waving that makes up many “resource” type theories. (Firms exist because they create a corporate culture! Firms exist because some firms just are better at doing X and can’t be replicated! These are outcomes, not explanations.) I believe that there are reasons why the costs of maintaining relationships – transaction costs – endogenously differ within and outside firms, and that Hart is correct in focusing our attention on how asset ownership and decision-making authority affect incentives to invest, but these theories even in their most endogenous form cannot do everything we wanted a theory of the firm to accomplish. I think that somehow reputation – and hence relational contracts – must play a fundamental role, and that the nexus of conflicting incentives among agents within an organization, as described by Holmstrom, must as well. But we still lack the precise insight to clear up this muddle, and give us a straightforward explanation for why we seem to need “little Communist bureaucracies” to assist our otherwise decentralized and almost magical market system.


“The Gift of Moving: Intergenerational Consequences of a Mobility Shock,” E. Nakamura, J. Sigurdsson & J. Steinsson (2016)

The past decade has seen interesting work in many fields of economics on the importance of misallocation for economic outcomes. Hsieh and Klenow’s famous 2009 paper suggested that misallocation of labor and capital in the developing world costs countries like China and India the equivalent of many years of growth. The same two authors have a new paper with Erik Hurst and Chad Jones suggesting that a substantial portion of the growth in the US since 1960 has been via better allocation of workers. In 1960, they note, 94 percent of doctors and lawyers were white men, versus 62 percent today, and we have no reason to believe the innate talent distribution in those fields had changed. Therefore, there were large numbers of women and minorities who would have been talented enough to work in these high-value fields in 1960, but due to misallocation (including in terms of who is educated) did not. Lucia Foster, John Haltiwanger and Chad Syverson have a famous paper in the AER on how to think about reallocation within industries, and the extent to which competition reallocates production from less efficient to more efficient producers; this is important because it is by now well-established that there is an enormous range of productivity within each industry, and hence potentially enormous efficiency gains from proper reallocation away from low-productivity producers.

The really intriguing misallocation question, though, is misallocation of workers across space. Some places are very productive, and others are not. Why don’t workers move? Part of the explanation, particularly in the past few decades, is that due to increasing land use regulation, local changes in total factor productivity increase housing costs, meaning that only high skilled workers gain much by mobility in response to shocks (see, e.g., Ganong and Shoag on the direct question of who benefits from moving, and Hornbeck and Moretti on the effects of productivity shocks on rents and incomes).

A second explanation is that people, quite naturally, value their community. They value their community both because they have friends and often family in the area, and also because they make investments in skills that are well-matched to where they live. For this reason, even if Town A is 10% more productive for the average blue-collar worker, a particular worker in Town B may be reluctant to move if it means giving up community connections or trying to relearn a different skill. This effect appears to be important particularly for people whose original community is low productivity: Deryugina, Kawano and Levitt showed that those induced out of poor areas of New Orleans by Hurricane Katrina wound up with higher wages than those whose neighborhoods were not flooded, and (the well-surnamed) Bryan, Chowdhury and Mobarak find large gains in income when they induce poor rural Bangladeshis to temporarily move to cities.

Today’s paper, by Nakamura et al, is interesting because it shows these beneficial effects of being forced out of one’s traditional community can hold even if the community is rich. The authors look at the impact of the 1973 volcanic eruption which destroyed a large portion of the main town, a large fishing village, on Iceland’s Westman Islands. Though the town had only 5,200 residents, that actually makes it large by Icelandic standards: even today, there is only one town in all of Iceland which is both larger than that and located more than 45 minutes’ drive from the capital. Further, though the town is a fishing village, it was then and is now quite prosperous due to its harbor, a rarity in Southern Iceland. Residents whose houses were destroyed were compensated by the government, and could have either rebuilt on the island or moved away: those with destroyed houses wound up 15 percentage points more likely to move away than islanders whose houses remained intact.

So what happened? If you were a kid when your family moved away, the instrumental variables estimation suggests you got an average of 3.6 more years of schooling and mid-career earnings roughly 30,000 dollars higher than if you’d remained! Adults who left saw, if anything, a slight decrease in their lifetime earnings. Remember that the Westman Islands were and are wealthier than the rest of Iceland, so moving would really only benefit those whose dynasties had comparative advantage in fields other than fishing. In particular, parents with college educations were more likely to move, conditional on their house being destroyed, than those without. So why did those parents need to be induced by the volcano to pack up? The authors suggest some inability to bargain as a household (the kids benefited, but not the adults), as well as uncertainty (naturally, whether moving would increase kids’ wages forty years later may have been unclear). From the perspective of a choice model, however, the outcome doesn’t seem unusual: parents, due to their community connections and occupational choice, would have considered moving very costly, even if they knew it was in their kids’ best long-term interests.
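For readers who want the mechanics, here is a minimal sketch of this kind of IV design on simulated data. The data-generating process and all coefficients are invented for illustration and are not the paper’s:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
ability = rng.normal(size=n)              # unobserved confounder
destroyed = rng.binomial(1, 0.3, size=n)  # lava destroys a house at random

# Moving is endogenous: higher-"ability" families move more anyway, and
# destruction raises the probability of moving by roughly 15 points,
# mimicking the paper's first stage.
latent = 0.2 + 0.15 * destroyed + 0.1 * ability + rng.normal(0, 0.3, size=n)
moved = (latent > 0.5).astype(float)

# True causal effect of moving on earnings is 2.0 in this simulation.
earnings = 2.0 * moved + 1.5 * ability + rng.normal(size=n)

# Two-stage least squares by hand: regress moved on the instrument, then
# earnings on the fitted values. (Standard errors from this manual second
# stage are wrong; real work should use a proper IV routine.)
first = sm.OLS(moved, sm.add_constant(destroyed)).fit()
moved_hat = first.predict(sm.add_constant(destroyed))
second = sm.OLS(earnings, sm.add_constant(moved_hat)).fit()
print("naive OLS slope:", sm.OLS(earnings, sm.add_constant(moved)).fit().params[1])
print("2SLS slope:     ", second.params[1])  # close to the true 2.0
```

The naive regression overstates the effect because unobserved ability drives both moving and earnings; the destruction instrument isolates only the variation in moving caused by the volcano.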

There is a lesson in the Iceland experience, as well as in the Katrina papers and other similar results: economic policy should focus on people, and not communities. Encouraging closer community ties, for instance, can make reallocation more difficult, and can therefore increase long-run poverty, by increasing the subjective cost of moving. When we ask how to handle long-run poverty in Appalachia, perhaps the answer is to provide assistance for groups who want to move, therefore gaining the benefit of reallocation across space while lessening the perceived cost of moving (my favorite example of clustered moves is that roughly 5% of the world’s Marshall Islanders now live in Springdale, Arkansas!). Likewise, limits on the movement of parolees across states can entrench poverty at precisely the time the parolee likely has the lowest moving costs.

June 2016 Working Paper (No RePEc IDEAS version yet).

“Ranking Firms Using Revealed Preference,” I. Sorkin (2015)

Roughly 20 percent of earnings inequality is not driven by your personal characteristics or the type of job you work at, but by the precise firm you work for. This is odd. In a traditional neoclassical labor market, every firm should offer the same wage to workers with the same marginal productivity. If a firm doesn’t do so, surely their workers will quit and go to firms that pay better. One explanation is that since search frictions make it hard to immediately replace workers, firms with market power will wind up sharing rents with their employees. It is costly to search for jobs, but as your career advances, you try to move “up the job ladder” from positions that pay just your marginal product to positions that pay a premium: eventually you wind up as the city bus driver with the six figure contract and once there you don’t leave. But is this all that is going on?

Isaac Sorkin, a job market candidate from Michigan, correctly notes that workers care about the utility their job offers, not the wage. Some jobs stink even though they pay well: 80 hour weeks, high pressure bosses, frequent business travel to the middle of nowhere, low levels of autonomy, etc. We can’t observe the utility a job offers, of course, but this is a problem that always comes up in demand analysis. If a Chipotle burrito and a kale salad cost the same, but you buy the burrito, then you have revealed that you get more utility from the former; this is the old theory of revealed preference. Even though we rarely observe a single person choosing from a set of job offers, we do observe worker flows between firms. If we can isolate workers who leave their existing job for individual reasons, as distinct from those who leave because their entire firm suffers a negative shock, then their new job is “revealed” better. Intuitively, we see a lot of lawyers quit to run a bed and breakfast in Vermont, but basically zero lawyers quitting to take a mining job that pays the same as running a B&B, hence the B&B must be a “better job” than mining, and further if we don’t see any B&B owners quitting to become lawyers, the B&B must be a “better job” than corporate law even if the pay is lower.

A sensible idea, then: the same worker may be paid different amounts in relation to marginal productivity either because they have moved up the job ladder and luckily landed at a firm with market power and hence pay above marginal product (a “good job”), or because different jobs offer different compensating differentials (in which case high paying jobs may actually be “bad jobs” with long hours and terrible work environments). To separate the two rationales, we need to identify the relative attractiveness of jobs, for which revealed preference should work. The problem in practice is both figuring out which workers are leaving for individual reasons, and getting around the problem that it is unusual to observe in the data a nonzero number of people going from firm A to firm B and vice versa.

Sorkin solves these difficulties in a very clever way. Would you believe the secret is to draw on the good old Perron-Frobenius theorem, a trusted tool of microeconomists interested in network structure? How could that be? Workers meet firms in a search process, with firms posting offers in terms of a utility bundle of wages plus amenities. Each worker also has idiosyncratic tastes about things like where to live, how they like the boss, and so on. The number of folks that move voluntarily from job A to job B depends on how big firm A is (bigger firms have more workers that might leave), how frequently A has no negative productivity shocks (in which case moves are voluntary), and the probability a worker from A is offered a job at B when matched and accepts it, which depends on the relative utilities of the two jobs including the individual idiosyncratic portion. An assumption about the distribution of idiosyncratic utility across jobs allows Sorkin to translate probabilities of accepting a job into relative utilities.

What is particularly nice is that the model gives a linear restriction on any two job pairs: the relative probability of moving from A to B instead of B to A depends on the relative utility (abstracting from idiosyncratic portions) adjusted for firm size and offer probability. That is, if M(A,B) is the number of moves from A to B, and V(A) is a (defined in the paper) function of the non-idiosyncratic utility of job A, then

M(A,B)/M(B,A) = V(B)/V(A)

and hence

M(A,B)V(A) = M(B,A)V(B)

Taking this to the data is still problematic because we need to restrict to job changes that are not just “my factory went out of business”, and because M(A,B) or M(B,A) is zero for many firm pairs. The first problem is solved by estimating the probability a given job switch is voluntary, using the fact that layoff probability is related to the size and growth rate of a firm. The second problem can be solved by noting that if we sum the previous equation over all firms B not equal to A, we have

sum(B!=A)M(A,B)*V(A) = sum(B!=A)M(B,A)*V(B)

or

V(A) = sum(B!=A)M(B,A)*V(B)/sum(B!=A)M(A,B)

The numerator is the number of hires A makes weighted for the non-idiosyncratic utility of firms the hires come from, and the denominator is the number of people that leave firm A. There is one such linear restriction per firm, but the utility of firm A depends on the utility of all firms. How to avoid this circularity? Write the linear restrictions in matrix form, and use the Perron-Frebonius theorem to see that the relative values of V are determined by a particular eigenvector as long as the matrix of moves is strongly connected! Strongly connected just means that there is at least one chain of moves between employers that can get me from firm A to B and vice versa, for all firm pairs!. All that’s left to do now is to take this to the data (not a trivial computation task, since there are so many firms in the US data that calculating eigenvectors will require some numerical techniques).
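To make the fixed point concrete, here is a minimal sketch of the eigenvector computation in Python. The firms and move counts are invented purely for illustration, and I abstract entirely from the layoff-probability adjustment:

```python
import numpy as np

# Toy matrix of inferred voluntary moves: M[i, j] = number of workers
# observed moving voluntarily from firm i to firm j. All counts invented.
firms = ["LawFirm", "Mine", "BnB", "School"]
M = np.array([
    [0, 1, 9, 4],   # moves out of LawFirm
    [2, 0, 8, 5],   # moves out of Mine
    [1, 1, 0, 3],   # moves out of BnB
    [3, 1, 6, 0],   # moves out of School
], dtype=float)

outflow = M.sum(axis=1)  # total voluntary departures from each firm

# Find the fixed point of V(A) = sum_B M(B,A)V(B) / sum_B M(A,B) by power
# iteration. Perron-Frobenius guarantees a unique positive solution (up to
# scale) provided the move graph is strongly connected.
V = np.ones(len(firms))
for _ in range(10_000):
    V_new = (M.T @ V) / outflow
    V_new /= V_new.sum()        # normalize: only relative values matter
    done = np.allclose(V_new, V, atol=1e-12)
    V = V_new
    if done:
        break

for name, value in sorted(zip(firms, V), key=lambda pair: -pair[1]):
    print(f"{name}: {value:.3f}")
```

Power iteration only needs matrix-vector products, so the same logic scales to the millions of firms in the US data, presumably via sparse matrices.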

So what do we learn? Industries like education offer high utility compared to pay, and industries like mining offer the opposite, as you’d expect. Many low paying jobs offer relatively high nonpay utility, and many female-dominated sectors do as well, implying that measured earnings inequality and gender gaps may overstate the true extent of utility inequality. That is, a teacher making half what a miner makes partly reflects the fact that mining is a job that requires compensating differentials to make up for long hours in the dark and dangerous mine shaft. Further, roughly two thirds of the earnings inequality related to firms seems to reflect compensating differentials, and since just over 20% of earnings inequality in the US is firm related, this means that about 15% of earnings inequality just reflects the differential perceived quality of jobs. This is a surprising result, and it appears to be driven by differences in job amenities that are not easy to measure. Goldman Sachs is a “good job” despite relatively low pay compared to other finance firms because they offer good training and connections. This type of amenity is hard to observe, but Sorkin’s theoretical approach based on revealed preference allows the econometrician to “see” these types of differences across jobs, and hence to more properly understand which jobs are desirable. This is another great example of a question – how does the quality of jobs differ and what does that say about the nature of earnings inequality – that is fundamentally unanswerable by methodological techniques that are unwilling to inject some theoretical assumptions into the analysis.

November 2015 Working Paper. Sorkin has done some intriguing work using historical data on the minimum wage as well. Essentially, minimum wage changes that are not indexed to inflation are only temporary in real terms, so if it is costly to switch from labor to machines, you might not do so in response to a “temporary” minimum wage shock. But a permanent increase does appear to cause long run shifts away from labor, something Sorkin sees in industries from apparel in the early 20th century to fast food restaurants. Simon Jäger, a job candidate from Harvard, also has an interesting purely empirical paper about friction in the labor market, taking advantage of early deaths of German workers. When these deaths happen, workers in similar roles at the firm see higher wages and lower separation probability for many years, whereas other coworkers see lower wages, with particularly large effects when the dead worker has unusual skills. All quite intuitive from a search model theory of labor, where workers are partial substitutes for folks with the same skills, but complements for folks with firm-specific capital but dissimilar skills. Add these papers to the evidence that efficiency in the search-and-matching process of labor to firms is a first order policy problem.

“Valuing Diversity,” G. Loury & R. Fryer (2013)

Glenn Loury, the old chair of my alma mater’s economics department, is somehow wrapped up in a kerfuffle related to the student protests that have broken out across the United States. Loury, who is now at Brown, wrote an op-ed in the student paper which to an economist just says that the major racial problem in the United States is statistical discrimination rather than taste-based discrimination, and hence the types of protests and desired recourse of the student protesters are wrongheaded. After being challenged about “what type of a black scholar” he is, Loury wrote a furious response pointing out that he is, almost certainly, the world’s most prominent scholar on the topic of racial discrimination and potential remedies, and has been thinking about how policy can remedy racial injustice since before the students’ parents were even born.

An important aspect of his work is that, under statistical discrimination, there is huge scope for perverse and unintended effects of policies. This idea has been known since Ken Arrow’s famous 1973 paper, but Glenn Loury and Stephen Coate in 1993 worked it out in greater detail. Imagine there are black and white workers, and high-paid good jobs, which require skill, and low-paid bad jobs which do not. Workers make an unobservable investment in skill, where the firm only sees a proxy: sometimes unskilled workers “look like” skilled workers, sometimes skilled workers “look like” unskilled workers, and sometimes we aren’t sure. As in Arrow’s paper, there can be multiple equilibria: when firms aren’t sure of a worker’s skill, if they assume all of those workers are unskilled, then in equilibrium investment in skill will be such that the indeterminate workers can’t profitably be placed in skilled jobs, but if the firms assume all indeterminate workers are skilled, then there is enough skill investment to make it worthwhile for firms to place those workers in high-skill, high-wage jobs. Since there are multiple equilibria, if race or some other proxy is observable, we can be in the low-skill-job, low-investment equilibrium for one group, and the high-skill-job, high-investment equilibrium for a different group. That is, even with no ex-ante difference across groups and no taste-based bias, we still wind up with a discriminatory outcome.
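To see the multiplicity concretely, here is a minimal numerical sketch of a stripped-down version of this feedback loop; the binary pass/fail test and all parameter values are invented for illustration, not taken from Coate and Loury:

```python
# Stripped-down statistical discrimination loop. A worker who invests
# (cost drawn Uniform[0,1]) passes a noisy test with probability 0.9;
# a non-investor passes with probability 0.5. The firm assigns a passer
# to the skilled job only if its posterior belief that the worker
# invested is at least 0.5. All parameters are invented.
P_PASS_SKILLED, P_PASS_UNSKILLED = 0.9, 0.5
WAGE_PREMIUM = 1.0   # payoff to the worker from the skilled job
THRESHOLD = 0.5      # posterior the firm requires before assigning it

def next_investment_rate(pi):
    """Group investment rate next period, given the believed rate pi."""
    if pi == 0:
        posterior_after_pass = 0.0
    else:
        posterior_after_pass = pi * P_PASS_SKILLED / (
            pi * P_PASS_SKILLED + (1 - pi) * P_PASS_UNSKILLED)
    assigned_if_pass = posterior_after_pass >= THRESHOLD
    # Investing raises the chance of passing, and hence of assignment,
    # by (0.9 - 0.5) -- but only if the firm trusts passers at all.
    benefit = (WAGE_PREMIUM * (P_PASS_SKILLED - P_PASS_UNSKILLED)
               if assigned_if_pass else 0.0)
    # With Uniform[0,1] costs, everyone with cost below benefit invests.
    return min(benefit, 1.0)

for pi0 in (0.10, 0.50):
    pi = pi0
    for _ in range(50):
        pi = next_investment_rate(pi)
    print(f"beliefs start at {pi0:.2f} -> settle at investment rate {pi:.2f}")
```

Starting from pessimistic beliefs the group converges to zero investment; starting from optimistic beliefs it settles at 0.4. Identical fundamentals, two self-confirming outcomes – exactly the room that group-contingent beliefs need to generate discriminatory outcomes.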

The question Coate and Loury ask is whether affirmative action can fix this negative outcome. Let an affirmative action rule state that the proportion of all groups assigned to the skilled job must be equal. Ideally, affirmative action would generate equilibrium beliefs by firms about workers that are the same no matter what group those workers come from, and hence skill investment across groups that is equal. Will this happen? Not necessarily. Assume we are in the equilibrium where one group is assumed low-skill when their skill is indeterminate, and the other group is assumed high-skill.

In order to meet the affirmative action rule, either more of the discriminated group needs to be assigned to the high-skill job, or more of the favored group needs to be assigned to the low-skill job. Note that in the equilibrium without affirmative action, the discriminated group invests less in skills, and hence the proportion of the discriminated group that tests as unskilled is higher than the proportion of the favored group that does so. The firms can meet the affirmative action rule, then, by keeping the assignment rule for favored groups as before, and by assigning all proven-skilled and indeterminate discriminated workers, as well as some random proportion of proven-unskilled discriminated workers, to the skilled task. This rule decreases the incentive to invest in skills for the discriminated group, and hence it is no surprise not only that it can be an equilibrium, but that Coate and Loury can show the dynamics of this policy lead to fewer and fewer discriminated workers investing in skills over time: despite identical potential at birth, affirmative action policies can lead to “patronizing equilibria” that exacerbate, rather than fix, differences across groups. The growing skill difference between previously-discriminated-against “Bumiputra” Malays and Chinese Malays following affirmative action policies in the 1970s fits this narrative nicely.

The broader point here, and one that comes up in much of Loury’s theoretical work, is that because policies affect beliefs even of non-bigoted agents, statistical discrimination is a much harder problem to solve than taste-based or “classical” bias. Consider the job market for economists. If women or minorities have trouble finding jobs because of an “old boys’ club” that simply doesn’t want to hire those groups, then the remedy is simple: require hiring quotas and the like. If, however, the problem is that women or minorities don’t enter economics PhD programs because of a belief that it will be hard to be hired, and that difference in entry leads to fewer high-quality women or minorities come graduation, then remedies like simple quotas may lead to perverse incentives.

Moving beyond perverse incentives, there is also the question of how affirmative action programs should be designed if we want to equate outcomes across groups that face differential opportunities. This question is taken up in “Valuing Diversity”, a recent paper Loury wrote with John Bates Clark medal winner Roland Fryer. Consider dalits in India or African-Americans: for a variety of reasons, from historic social network persistence to neighborhood effects, the cost of increasing skill may be higher for these groups. We have an opportunity which is valuable, such as slots at a prestigious college. Simply providing equal opportunity may not be feasible because the social reasons why certain groups face higher costs of increasing skill are very difficult to solve. Brown University, or even the United States government as a whole, may be unable to fix the persistent social differences in upbringing among blacks and whites. So what to do?

There are two natural fixes. We can provide a lower bar for acceptance for the discriminated group at the prestigious college, or subsidize skill acquisition for the discriminated group by providing special summer programs, tutoring, etc. If policy can be conditioned on group identity, then the optimal policy is straightforward. First, note that in a laissez faire world, individuals invest in skill until the cost of investment for the marginal accepted student exactly equals the benefit the student gets from attending the fancy college. That is, the equilibrium is efficient: students with the lowest cost of acquiring skill are precisely the ones who invest and are accepted. But precisely that weighing of marginal benefits and costs holds within each group if the acceptance cutoff differs by group identity, so if policy can condition on group identity, we can get whatever mix of students from different groups we want while still ensuring that the students within each group with the lowest cost of upgrading their skill are precisely the ones who invest and are accepted. The policy change itself, by increasing the quota of slots for the discriminated group, will induce marginal students from that group to upgrade their skills in order to cross the acceptance threshold; that is, quotas at the assignment stage implicitly incentivize higher investment by the discriminated group.

The trickier problem is when policy cannot condition on group identity, as is the case in the United States under current law. I would like somehow to accept more students from the discriminated-against group, and to ensure that those students invest in their skill, but the policy I set needs to treat the favored and discriminated-against groups equally. Since discriminated-against students make up a bigger proportion of those with a high cost of skill acquisition compared to students with a low cost of skill acquisition, any “blind” policy that does not condition on group identity will induce identical investment activity and acceptance probability among agents with identical costs of skill upgrading. Hence any blind policy that induces more discriminated-against students to attend college must somehow be accepting students with higher costs of skill acquisition than the marginal accepted student under laissez faire, and must not be accepting some students whose costs of skill acquisition were at the laissez faire margin. Fryer and Loury show, by solving the relevant linear program, that we can best achieve this by allowing the most productive students to buy their slots, and then randomly assigning slots to everyone else.

Under that policy, very low cost of effort students still invest so that their skill is high enough that buying a guaranteed slot is worth it. I then use either a tax or subsidy on skill investment to affect how many people find it worthwhile to invest in skill and then buy the guaranteed slot; in conjunction with the randomized slot assignment, this ensures that the desired mixture of accepted students across groups is achieved.

This result resembles certain results in dynamic pricing. How do I get people to pay a high price for airplane tickets while still hoping to sell would-be-empty seats later at a low price? The answer is that I make high-value people worried that if they don’t buy early, the plane may sell out. The high-value people then trade off paying a high price and getting a seat with probability 1 versus waiting for a low price but maybe not getting on the plane at all. Likewise, how do I induce people to invest in skills even when some lower-skill people will be admitted? Ensure that lower-skill people are only admitted with some randomness. The folks who can get perfect grades and test scores fairly easily will still exert effort to do so, ensuring they get into their top choice college for certain rather than hoping to be admitted subject to some random luck. This type of intuition is non-obvious, which is precisely Loury’s point: racial and other forms of injustice are often due to factors much more subtle than outright bigotry, and the optimal responses to these more subtle causes do not fit easily on a placard or a bullhorn slogan.

Final working paper (RePEc IDEAS version), published in the JPE, 2013. Hanming Fang and Andrea Moro have a nice handbook chapter on theoretical explorations of discrimination. On the recent protests, Loury and John McWhorter have an interesting and provocative dialog on the recent student protests at Bloggingheads.

“Bonus Culture: Competitive Pay, Screening and Multitasking,” R. Benabou & J. Tirole (2014)

Empirically, bonus pay as a component of overall remuneration has become more common over time, especially in highly competitive industries which involve high levels of human capital; think of something like management of Fortune 500 firms, where the managers now have their salary determined globally rather than locally. This doesn’t strike most economists as a bad thing at first glance: as long as we are measuring productivity correctly, workers who are compensated based on their actual output will both exert the right amount of effort and have the incentive to improve their human capital.

In an intriguing new theoretical paper, however, Benabou and Tirole point out that many jobs involve multitasking, where workers can take hard-to-measure actions for intrinsic reasons (e.g., I put effort into teaching because I intrinsically care, not because academic promotion really hinges on being a good teacher) or take easy-to-measure actions for which there might be some kind of bonus pay. Many jobs also involve screening: I don’t know who is high quality and who is low quality, and although I would optimally pay people a bonus exactly equal to their cost of effort, I am unable to do so since I don’t know what that cost is. Multitasking and worker screening interact among competitive firms in a really interesting way, since how other firms incentivize their workers affects how workers will respond to my contract offers. Benabou and Tirole show that this interaction means that more competition in a sector, especially when there is a big gap between the quality of different workers, can actually harm social welfare even in the absence of any other sort of externality.

Here is the intuition. For multitasking reasons, when the different things workers can do are substitutes, I don’t want to give big bonus payments for the observable output, since if I do the worker will put in too little effort on the intrinsically valuable task: if you pay a trader big bonuses for financial returns, she will not put as much effort into ensuring all the laws and regulations are followed. If there are other finance firms, though, they will make it known that, hey, we pay huge bonuses for high returns. As a result, workers will sort, with all of the high quality traders moving to the high bonus firm and leaving only the low quality traders at the firm with low bonuses. Bonuses are used not only to motivate workers, but also to differentially attract high quality workers when quality is otherwise tough to observe. There is a tradeoff, then: you can either have only low productivity workers but get the balance between hard-to-measure tasks and easy-to-measure tasks right, or you can retain some high quality workers with large bonuses that make those workers exert too little effort on hard-to-measure tasks. When the latter is more profitable, all firms inefficiently begin offering large, effort-distorting bonuses, something they wouldn’t do if they didn’t have to compete for workers.

How can we fix things? One easy method is with a bonus cap: if the bonus is capped at the monopsony optimal bonus, then no one can try to screen high quality workers away from other firms with a higher bonus. This isn’t as good as it sounds, however, because there are other ways to screen high quality workers (such as offering lower clawbacks if things go wrong) which introduce even worse distortions, hence bonus caps may simply cause less efficient methods to perform the same screening and same overincentivization of the easy-to-measure output.

When the individual rationality or incentive compatibility constraints in a mechanism design problem are determined in equilibrium, based on the mechanisms chosen by other firms, we sometimes call this a “competing mechanism”. It seems to me that there are quite a number of open questions concerning how to make these sorts of problems tractable; a talented young theorist looking for a fun summer project might find it profitable to investigate this as-yet small literature.

Beyond the theoretical result on screening plus multitasking, Tirole and Benabou also show that their results hold for market competition more general than just perfect competition versus monopsony. They do this through a generalized version of the Hotelling line which appears to have some nice analytic properties, at least compared to the usual search-theoretic models which you might want to use when discussing imperfect labor market competition.

Final copy (RePEc IDEAS version), forthcoming in the JPE.

“The Rents from Sugar and Coercive Institutions: Removing the Sugar Coating,” C. Dippel, A. Greif & D. Trefler (2014)

Today, I’ve got two posts about some new work by Christian Dippel, an economic historian at UCLA Anderson who is doing some very interesting theoretically-informed history; no surprise to see Greif and Trefler as coauthors on this paper, as they are both prominent proponents of this analytical style.

The authors consider the following puzzle: sugar prices absolutely collapse during the mid and late 1800s, largely because of the rise of beet sugar. And yet, wages in the sugar-dominant British colonies do not appear to have fallen. This is odd, since all of our main theories of trade suggest that when an export price falls, the prices of the factors used to produce that export also fall (this is less obvious than just marginal product falling, but still true).

The economics seem straightforward enough, so what explains the empirical result? Well, the period in question is right after the end of slavery in the British Empire. There were lots of ways in which the politically powerful could use legal or extralegal means to keep wages from rising to marginal product. Suresh Naidu, a favorite of this blog, has a number of papers on labor coercion everywhere from the UK in the era of Master and Servant Law, to the US South post-Reconstruction, to the Middle East today; actually, I understand he is writing a book on the subject which, if there is any justice, has a good shot at being the next Pikettyesque mainstream hit. Dippel et al quote a British writer in the 1850s on the Caribbean colonies: “we have had a mass of colonial legislation, all dictated by the most short-sighted but intense and disgraceful selfishness, endeavouring to restrict free labour by interfering with wages, by unjust taxation, by unjust restrictions, by oppressive and unequal laws respecting contracts, by the denial of security of [land] tenure, and by impeding the sale of land.” In particular, wages rose rapidly right after slavery ended in 1838, but those gains were clawed back by the end of the 1840s due to “tenancy-at-will laws” (which let employers seize some types of property if workers left), trespass and land use laws restricting freeholding on abandoned estates and Crown land, and emigration restrictions.

What does labor coercion have to do with wages staying high as sugar prices collapse? The authors write a nice general equilibrium model. Englishmen choose whether to move to the colonies (in which case they get some decent land) or to stay in England at the outside wage. Workers in the Caribbean can either take a wage working sugar which depends on bargaining power, or they can go work marginal freehold land. Labor coercion rules limit the ability of those workers to work some land, so the outside option of leaving the sugar plantation is worse the more coercive institutions are. Governments maximize a weighted combination of Englishmen’s and local wages, choosing the coerciveness of institutions. The weight on Englishmen’s wages is higher the more important sugar exports and their enormous rents are to the local economy. In partial equilibrium, then, if the price of sugar falls exogenously, the wage of workers on sugar plantations falls (as their marginal product goes down), the number of locals willing to work sugar falls, and hence the number of Englishmen willing to stay falls (as their profit goes down). With fewer plantations, sugar rents become less important, labor coercion falls, opening up more marginal land for freeholders, which causes even more workers to leave sugar plantations and improves wages for those workers. However, if sugar is very important, the government places a lot of weight on planter income in the social welfare function, and hence responds to a fall in sugar prices by increasing labor coercion, lowering the outside option of workers, and keeping them on the sugar plantations, where they earn lower wages than before for the usual economic reasons. That is, if sugar is really important, coercive institutions will be retained, the economic structure will be largely unchanged in response to a fall in world sugar prices, and hence wages will fall. If sugar is only of marginal importance, however, a fall in sugar prices leads the politically powerful to leave, lowering the political strength of the planter class, causing coercive labor institutions to decline, and allowing workers to reallocate such that wages approach marginal product; since the marginal product of options other than sugar may be higher than the wage paid to sugar workers, this reallocation caused by the decline of sugar prices can cause wages in the colony to increase.

The British, being British, kept very detailed records of things like incarceration rates, wages, crop exports, and the like, and the authors find a good deal of empirical evidence for the mechanism just described. To assuage worries about the endogeneity of planter power, they even get a subject expert to construct a measure of geographic suitability for sugar in each of 14 British Caribbean colonies, and proxy planter power with the suitability of marginal land for sugar production. Interesting work all around.

What should we take from this? That legal and extralegal means can be used to keep factor rents from approaching their perfect competition outcome: well, that is something essentially every classical economist from Smith to Marx has described. The interesting work here is the endogeneity of factor coercion. There is still some debate about how much we actually know about whether these endogenous institutions (or, even more so, the persistence of institutions) have first-order economic effects; see a recent series of posts by Dietz Vollrath for a skeptical view. I find this paper by Dippel et al, as well as recent work by Naidu and Hornbeck, to be among the cleanest examples of how exogenous shocks affect institutions, and how those institutions then affect economic outcomes of great importance.

December 2014 working paper (no RePEc IDEAS version)

Labor Unions and the Rust Belt

I’ve got two nice papers for you today, both exploring a really vexing question: why is it that union-heavy regions of the US have fared so disastrously over the past few decades? In principle, it shouldn’t matter: absent any frictions, a rational union and a profit-maximizing employer ought both desire to take whatever actions generate the most total surplus for the firm, with union power simply affecting how those rents are shared between management, labor and owners. Nonetheless, we notice empirically a couple of particularly odd facts. First, especially in the US, union-dominated firms tend to limit adoption of new, productivity-enhancing technology; the late adoption of the radial tire among U.S. firms is a nice example. Second, unions often negotiate not only about wages but about “work rules”, insisting upon conditions like inflexible employee roles. A great example here is a California longshoremen contract which insisted upon a crew whose sole job was to stand and watch while another crew did the job. Note that preference for leisure can’t explain this, since surely taking that leisure at home rather than standing around the worksite would be preferable for the employees!

What, then, might drive unions to push so hard for seemingly “irrational” contract terms, and how might union bargaining power under various informational frictions or limited commitment affect the dynamic productivity of firms? “Competition, Work Rules and Productivity” by the BEA’s Benjamin Bridgman discusses the first issue, and a new NBER working paper, “Competitive Pressure and the Decline of the Rust Belt: A Macroeconomic Analysis” by Alder, Lagakos and Ohanian covers the second; let’s examine these in turn.

First, work rules. Let a union care first about keeping all members employed, and second about keeping the wage as high as possible given full employment. Assume that the union cannot negotiate the price at which products are sold. Abstractly, work rules are most like a fixed cost that is a complete waste: no matter how much we produce, we have to incur some bureaucratic cost of guys standing around and the like. Firms will set marginal revenue equal to marginal cost when deciding how much to produce, and at what price that production should be sold. Why would the union like these wasteful costs?

Let firm output given n workers just be n-F, where n is the number of employees, and F is how many of them are essentially doing nothing because of work rules. The firm chooses price p and the number of employees n given demand D(p) and wage w to maximize p*D(p)-w*n, subject to total production being feasible, D(p)=n-F. Note that, as long as total firm profits under optimal pricing exceed the cost of the idle workers, the firm stays in business, and its pricing decision, letting marginal revenue equal marginal cost, is unaffected by F. That is, the optimal production quantity does not depend on F. However, the total amount of employment does depend on F, since to produce quantity D(p) you need to employ n=D(p)+F workers. Hence there is a tradeoff if the union only negotiates wages: to employ more people, you need a lower wage, but using wasteful work rules, employment can be kept high even when wages are raised. Note also that F is limited by the total rents earned by the firm, since if work rules are particularly onerous, firms that are barely breaking even without work rules will simply shut down. Hence in more competitive industries (formally, when demand is more elastic and rents are therefore smaller), work rules are less likely to be imposed by unions. Bridgman also notes that if firms can choose technology (output is An-F, where A is the level of technology), then unions will resist new technology unless they can impose more onerous work rules, since more productive technology lowers the number of employees needed to produce a given amount of output.
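A quick symbolic check of these comparative statics, under a linear demand curve of my own choosing (the functional form and parameters are illustrative assumptions, not Bridgman’s):

```python
import sympy as sp

# Firm problem from the text: choose price p to maximize p*D(p) - w*n
# subject to D(p) = n - F, i.e. employment n = D(p) + F. We plug in an
# invented linear demand curve D(p) = a - b*p to solve in closed form.
p, w, F, a, b = sp.symbols('p w F a b', positive=True)
D = a - b * p
profit = p * D - w * (D + F)   # wage bill covers productive workers plus F idle ones

p_star = sp.solve(sp.diff(profit, p), p)[0]
print("optimal price:", sp.simplify(p_star))              # (a + b*w)/(2b): F does not appear
print("output:", sp.simplify(D.subs(p, p_star)))          # (a - b*w)/2: also independent of F
print("employment:", sp.simplify(D.subs(p, p_star) + F))  # output + F: rises one-for-one with F
```

Price and output are untouched by F while employment rises one-for-one with it, which is exactly the wedge that lets the union trade wasteful work rules for headcount.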

This is a nice result. Note that the work rule requirements have nothing to do with employees not wanting to work hard: work rules in the above model are a pure waste and generate no additional leisure time for workers. Of course, the result really hinges on limiting what unions can bargain over: if they can select the level of output, or impose the level of employment directly, or permit lump-sum transfers from management to labor, then unionized firms will produce at the same productivity as non-unionized firms. Information frictions, among other worries, might explain why we don’t see these types of contracts at some unionized firms. With this caveat in mind, let’s turn to the experience of the Rust Belt.

The U.S. Rust Belt, roughly made up of states surrounding the Great Lakes, saw a precipitous decline from the 1950s to today. Alder et al present the following stylized facts: the share of manufacturing employment in the U.S. located in the Rust Belt fell from the 1950s to the mid-1980s, there was a large wage gap between Rust Belt and other U.S. manufacturing workers during this period, Rust Belt firms were less likely to adopt new innovations, and labor productivity growth in Rust Belt states was lower than the U.S. average. After the mid-1980s, Rust Belt manufacturing firms begin to look a lot more like manufacturing firms in the rest of the U.S.: the wage gap is essentially gone, the employment share stabilizes, strikes become much less common, and productivity growth is similar. What happened?

In a nice little model, the authors point out that output market competition (do I have lots of market power?) and labor market bargaining power (are my workers powerful enough to extract a lot of my rents?) interact in an interesting way when firms invest in productivity-increasing technology and unions cannot commit not to hold up the firm by striking for a better deal after the technology investment cost is sunk. Without commitment, stronger unions will optimally bargain away some of the additional rents created by adopting an innovation; hence unions function as a type of tax on innovation. Sustained market power gives firms an ambiguous incentive to adopt new technology: on the one hand, a firm that already has a lot of market power will not win many more sales with better technology, but on the other hand, having market power in the future makes investments today more valuable. Calibrating the model with reasonable parameters for market power, union strength, and various elasticities, the authors find that roughly 2/3 of the decline in the Rust Belt’s manufacturing share can be explained by strong unions and weak output market competition depressing the incentive to invest in upgrading technology. After the 1980s, declining union power and growing foreign competition blunted both disincentives, and the Rust Belt saw little further decline.
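
A toy calculation may help show why the lack of commitment acts like a tax on innovation. The numbers and the simple permanent-flow-profit formulation below are my own illustration, not the paper’s calibration:

```python
# Illustrative hold-up calculation (hypothetical numbers, not Alder et
# al's calibration). Adopting a technology costs c up front and raises
# the firm's per-period operating profit from pi_old to pi_new forever
# (discount rate r). Without commitment, the union strikes after the
# cost is sunk and extracts a share theta of the *incremental* rents,
# so the firm adopts only if (1 - theta)*(pi_new - pi_old)/r >= c, even
# though total surplus rises whenever (pi_new - pi_old)/r >= c.

pi_old, pi_new, r, c = 10.0, 13.0, 0.05, 45.0

gain = (pi_new - pi_old) / r                 # PV of the innovation: 60
print(f"efficient to adopt: {gain >= c}")    # True: 60 >= 45

for theta in (0.0, 0.2, 0.3, 0.5):
    firm_share = (1 - theta) * gain
    print(f"theta={theta:.1f}: firm keeps {firm_share:.1f}, "
          f"adopts={firm_share >= c}")
# At theta of 0.3 or 0.5 the firm keeps 42 or 30, less than the cost of
# 45, and an efficient innovation is blocked: union power taxes adoption.
```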

Note again that unions and firms rationally took actions that lowered the total surplus generated in their industry, and that if the union could have committed not to hold up the firm after an innovation was adopted, optimal technology adoption would have been restored. Alder et al cite some interesting quotes from union heads suggesting that the confrontational nature of U.S. management-union relations fostered a belief that management’s job is to generate profits, and the union’s job is to secure part of those profits for its members. Both papers discussed here show that this type of division, by limiting the nature of the bargains which can be struck, can have calamitous effects for both workers and firms.

Bridgman’s latest working paper version is here (RePEc IDEAS page); the latest version of Alder, Lagakos and Ohanian is here (RePEc IDEAS). David Lagakos in particular has a very nice set of recent papers about why services and agriculture tend to have such low productivity, particularly in the developing world; despite his macro background, I think he might be a closet microeconomist!

On Gary Becker

Gary Becker, as you must surely know by now, has passed away. This is an incredible string of bad luck for the University of Chicago. With Coase and Fogel having passed recently, and Director, Stigler and Friedman dying a number of years ago, perhaps Lucas and Heckman are the only remaining giants from Chicago’s Golden Age.

Becker is of course known for using economic methods – by which I mean constrained rational choice – to expand economics beyond questions of pure wealth and prices to questions of interest to social science at large. But that description is too broad, and he was certainly not the only one pushing such an expansion; the Chicago Law School clearly was doing the same. For an economist, Becker’s principal contribution can be summarized very simply: individuals and households are producers as well as consumers, and rational decisions in production are as interesting to analyze as rational decisions in consumption. As firms must purchase capital to realize their productive potential, humans must purchase human capital to improve their own possible utilities. As firms take actions today which alter constraints tomorrow, so do humans. These may seem trite statements, but they are absolutely not: human capital, and dynamic optimization of fixed preferences, offer a radical framework for understanding everything from topics close to Becker’s heart, like educational differences across cultures or the nature of addiction, to the great questions of economics, like how the world broke free from the dreadful Malthusian constraint.

Today, the fact that labor can augment itself with education is taken for granted, which is a huge shift in how economists think about production. Becker, in his Nobel Prize speech: “Human capital is so uncontroversial nowadays that it may be difficult to appreciate the hostility in the 1950s and 1960s toward the approach that went with the term. The very concept of human capital was alleged to be demeaning because it treated people as machines. To approach schooling as an investment rather than a cultural experience was considered unfeeling and extremely narrow. As a result, I hesitated a long time before deciding to call my book Human Capital, and hedged the risk by using a long subtitle. Only gradually did economists, let alone others, accept the concept of human capital as a valuable tool in the analysis of various economic and social issues.”

What do we gain by considering the problem of human capital investment within the household? A huge amount! By using human capital along with economic concepts like “equilibrium” and “private information about types”, we can answer questions like the following. Does racial discrimination wholly reflect differences in tastes? (No – because of statistical discrimination, underinvestment in human capital by groups that suffer discrimination can be self-fulfilling, and, as in Becker’s original discrimination work, different types of industrial organization magnify or ameliorate tastes for discrimination in different ways.) Is the difference between men and women in traditional labor roles a biological matter? (Not necessarily – with gains to specialization, even very small biological differences can generate very large behavioral differences.) What explains many of the strange features of labor markets, such as jobs with long tenure, firm boundaries, etc.? (Firm-specific human capital requires investment, and following that investment there can be scope for hold-up in a world without complete contracts.) The parenthetical explanations in this paragraph require completely different policy responses from previous, more naive explanations of the phenomena at play.

Personally, I find human capital most interesting for understanding the Malthusian world. Malthus conjectured the following: as productivity improves for some reason, excess food appears. With excess food, people have more children and population grows, necessitating even more food. To generate more food, people begin farming marginal land, until we wind up with precisely the living standards per capita that prevailed before the productivity improvement. We know, by looking out our windows, that the world in 2014 has broken free from Malthus’ dire calculus. But how? The critical factor must be that as productivity improves, population does not grow, or else grows more slowly than the continued endogenous increases in productivity. Why might that be? The quantity-quality tradeoff. A productivity improvement generates surplus, leading to demand for non-agricultural goods. Those goods are produced with human capital, and increased human capital raises productivity further. Parents have fewer kids but invest more heavily in their human capital so that the kids can work in the new sector. Such substitution is only partial, so in order to get wealthy, we need a big initial productivity improvement to generate demand for the goods in the new sector. And thus Malthus is defeated by knowledge.
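
If you’ll forgive a deliberately crude sketch: the simulation below uses stylized functional forms of my own invention (nothing from the literature) purely to illustrate how a quantity-quality margin breaks the Malthusian trap.

```python
# Toy Malthusian simulation (my own stylization, for illustration only).
# Output is A * L**alpha with alpha < 1 (diminishing returns to land);
# income per capita is y = output / L; population grows whenever y
# exceeds subsistence. A one-off productivity jump then dissipates into
# a larger population at the old living standard -- unless high income
# triggers fewer kids and human-capital investment that raises A.

alpha, y_sub = 0.5, 1.0

def simulate(A, L, quantity_quality=False, periods=300):
    for _ in range(periods):
        y = A * L ** alpha / L                  # income per capita
        if quantity_quality and y > 1.5 * y_sub:
            A *= 1.02                           # educated kids raise productivity
            L *= 1.001                          # ...while fertility stays low
        else:
            L *= 1 + 0.05 * (y - y_sub)         # Malthus: surplus -> more mouths
    return A * L ** alpha / L                   # final income per capita

print(simulate(A=2.0, L=1.0))                         # ~1.0: back to subsistence
print(simulate(A=2.0, L=1.0, quantity_quality=True))  # >> 1.0: sustained growth
```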

Finally, a brief word on the origin of human capital. The idea that people take deliberate and costly actions to improve their productivity, and that formal study of this object may be useful, is modern: Mincer and Schultz in the 1950s, and then Becker with his 1962 article and famous 1964 book. That said, economists (to the chagrin of some other social scientists!) have treated humans as a type of capital for much longer. A fascinating 1966 JPE article [gated] traces this early history. Petty, Smith, Senior, Mill, von Thunen: they all thought an accounting of national wealth required accounting for the productive value of the people within the nation, and 19th century economists frequently mention that parents invest in their children. These early economists made such claims knowing they were controversial; Walras clarifies that in pure theory “it is proper to abstract completely from considerations of justice and practical expediency” and to regard human beings “exclusively from the point of view of value in exchange.” That is, don’t think we are imagining humans as being nothing other than machines for production; rather, human capital is just a useful concept when discussing topics like national wealth. Becker, unlike the caricature where he is the arch-neoliberal, was absolutely not the first to “dehumanize” people by rationalizing decisions like marriage or education in a cost-benefit framework; rather, he is great because he was the first to show how powerful an analytical concept such dehumanization could be!

Dale Mortensen as Micro Theorist

Northwestern’s sole Nobel Laureate in economics, Dale Mortensen, passed away overnight; he remained active as a teacher and researcher these past few years, though I had been hearing word through the grapevine about his declining health over the past few months. Surely everyone knows Mortensen the macroeconomist for his work on search models in the labor market. There is something odd here, though: Northwestern has never really been known as a hotbed of labor research. To the extent that researchers rely on their coworkers to generate and work through ideas, how exactly did Mortensen become such a productive and influential researcher?

Here’s an interpretation: Mortensen’s critical contribution to economics was as the vector by which important ideas in micro theory entered real-world macro; his first well-known paper was literally published in a 1970 book called “Microeconomic Foundations of Employment and Inflation Theory.” Mortensen had the good fortune to be a labor economist working in the 1970s and 1980s at a school with a frankly incredible collection of microeconomic theorists; during those two decades, Myerson, Milgrom, Loury, Schwartz, Kamien, Judd, Matt Jackson, Kalai, Wolinsky, Satterthwaite, Reinganum and many others were associated with Northwestern. And this was a rare condition! Game theory is everywhere today, and its pioneers (von Neumann, Nash, Blackwell, etc.) were active in the middle of the century. Nonetheless, into the 1970s game theory in the social sciences was close to dead. Paul Samuelson, the great theorist, wrote essentially nothing using game theory between the early 1950s and the 1990s. Quickly scanning the American Economic Review from 1970-1974, I find, at best, one article per year that can be called game-theoretic.

What is the link between Mortensen’s work and developments in microeconomic theory? The essential labor market insight of search models (an insight which predates Mortensen) is that the number of hires and layoffs is substantial even in the depths of a recession. That is, the rise in the unemployment rate cannot simply be because the marginal revenue product of potential workers falls below the cost of employing them, since huge numbers of the unemployed are hired during recessions (even as others are fired). Therefore, a model of this churn, rather than of the aggregate rate alone, seems qualitatively important if we are to develop policies to address unemployment. This suggests that there might be some use in a model where workers and firms search for each other, perhaps with costs or other frictions. Early models along this line by Mortensen and others were generally one-sided and hence non-strategic: they had the flavor of optimal stopping problems.

Unfortunately, Diamond in a 1971 JET article pointed out that Nash equilibrium in two-sided search leads to the conclusion that all workers are paid their reservation wage: all employers pay the reservation wage, workers believe this to be true and hence do not engage in costly search to switch jobs, and hence the belief is accurate and nobody can profitably deviate. Getting around the “Diamond Paradox” involved enriching the model of who searches when and the extent to which old offers can be recovered; Mortensen’s work with Burdett is a nice example. One might also ask whether laissez-faire search is efficient: given the contemporaneous work of micro theorists like Glenn Loury on mathematically similar problems like the patent race, you might imagine that equilibrium search is unlikely to be efficient.
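
The unraveling logic can be made concrete in a few lines. This toy loop is my own illustration of the undercutting intuition, not Diamond’s actual fixed-point argument:

```python
# Toy illustration of the Diamond (1971) unraveling intuition. Suppose
# all firms post wage w and a worker must pay search cost s > 0 to see
# one more offer. A worker then accepts any offer >= w - s, so each
# firm's best response is to undercut to w - s. Iterating drives the
# posted wage down to the reservation wage -- the "Diamond paradox".

def diamond_unravel(w_start: float, reservation: float, s: float) -> float:
    w = w_start
    while w - s >= reservation:   # undercut still leaves the offer acceptable
        w -= s                    # each firm shaves the wage by the search cost
    return w

print(diamond_unravel(w_start=20.0, reservation=10.0, s=0.5))  # 10.0
```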

Beyond the efficiency of the matches themselves is the question of how to split the surplus. Consider a labor market. In the absence of search frictions, Shapley (first with Gale, later with Shubik) had shown in the 1960s and early 1970s the existence of stable two-sided matches even when “wages” are included, and these stable matches turn out to be tightly linked to the cooperative notion of the core. But what if matching is dynamic? Firms and workers meet with some probability over time. A match generates surplus. Who gets this surplus? Surely you might imagine that a firm should have to pay a higher wage (more of the surplus) to workers who expect to get good future offers if they do not accept the job today. Now we have something that sounds familiar from non-cooperative game theory: the wage is based on the endogenous outside options of the two parties. It turns out that noncooperative game theory had very little to say about bargaining until Rubinstein’s famous bargaining game in 1982 and the powerful extensions by Wolinsky and his coauthors. Mortensen’s dynamic search models were a natural fit for those theoretical developments.
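
For concreteness, here is the textbook Rubinstein split computed in a couple of lines; the formula is the standard subgame-perfect division of a unit surplus, and the parameter values are arbitrary:

```python
# The Rubinstein (1982) alternating-offers split that underlies surplus
# division in these search models: with discount factors d1 (proposer)
# and d2 (responder), the unique subgame-perfect equilibrium gives the
# proposer a share (1 - d2) / (1 - d1 * d2) of a unit surplus.

def rubinstein_share(d1: float, d2: float) -> float:
    """Proposer's share of a unit surplus in alternating-offers bargaining."""
    return (1 - d2) / (1 - d1 * d2)

print(rubinstein_share(0.9, 0.9))  # ~0.526: near-even split, slight proposer edge
print(rubinstein_share(0.9, 0.5))  # ~0.909: an impatient responder concedes more
```

Patience here plays the role that endogenous outside options play in the search models: whoever can better afford to wait captures more of the surplus.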

I imagine that when people hear “microfoundations”, they have in mind esoteric calibrated rational expectations models. But microfoundations in the style of Mortensen’s work are much more straightforward: we simply cannot understand even the qualitative nature of counterfactual policy without models that account for strategic behavior. And thus there is a role for even high micro theory, which investigates the nature and uniqueness of strategic outcomes (game theory) and the potential for a planner to improve welfare through alternative rules (mechanism design). Powerful tools indeed, and well used by Mortensen.

“The Economic Benefits of Pharmaceutical Innovations: The Case of Cox-2 Inhibitors,” C. Garthwaite (2012)

Cost-benefit analysis and comparative effectiveness are the big buzzwords in medical policy these days. If we are going to see 5% annual real per-capita increases in medical spending, we had better be getting something for all that effort. The usual way to study cost effectiveness is with QALYs, Quality-Adjusted Life Years. The idea is that a medicine which makes you live longer, with less pain, is worth more, and we can use outside evidence (such as willingness to accept jobs with higher injury risk) to put dollar values on each component of the QALY.
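
As a concrete, purely illustrative example of how such a calculation runs (the numbers, including the willingness-to-pay threshold, are made up for the sketch):

```python
# Minimal QALY cost-effectiveness sketch (illustrative numbers only).
# A treatment adds life-years at some quality weight in [0, 1]; the QALY
# gain is then priced with a willingness-to-pay threshold, often
# estimated from compensating wage differentials for job injury risk.

years_gained = 2.0        # extra life expectancy from treatment
quality_weight = 0.8      # 1.0 = perfect health, 0.0 = death
wtp_per_qaly = 100_000    # hypothetical $/QALY threshold

qalys = years_gained * quality_weight
print(f"QALYs gained: {qalys:.1f}, valued at ${qalys * wtp_per_qaly:,.0f}")
# 1.6 QALYs, $160,000 -- compare against the treatment's cost.
```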

But medicine has other economic effects, as Craig Garthwaite (from here at Kellogg) reminds us in a recent paper. One major impact runs through the labor market: the disabled and those with chronic pain choose to work less. Garthwaite considers the case of Vioxx, a very effective remedy for long-term pain which (it was thought) could be used without the gastrointestinal side effects of ibuprofen or naproxen. It rapidly became very widely prescribed. However, evidence began to accumulate that Vioxx also caused serious heart problems, and the pill was taken off the market in 2004. Alternative joint pain medications for long-term use weren’t really comparable (though, having taken naproxen briefly for a joint injury, I assure you it is basically a miracle drug).

We have a great panel on medical spending called MEPS, the Medical Expenditure Panel Survey, which includes age, medical history, prescriptions, income, and labor supply decisions. That is, we have everything we need for a quick diff-in-diff: compare those with joint pain and those without, before and after Vioxx leaves the market. We see parallel trends in labor supply before Vioxx is removed (though of course those with joint pain are on average older, more female, and less educated, and hence much less likely to work). The year Vioxx is removed, labor supply drops 10 percent among those with joint pain, and by even more if we look a few periods past the removal.
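
Schematically, the regression looks something like the following. The data here are synthetic and the variable names are my own inventions rather than MEPS codes, so this is a sketch of the design, not Garthwaite’s actual specification:

```python
# Schematic diff-in-diff in the spirit of the design above (synthetic
# data; variable names are not MEPS codes). The coefficient on
# joint_pain:post estimates the employment effect of losing Vioxx.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "joint_pain": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),   # 1 = after the 2004 withdrawal
    "age": rng.integers(25, 70, n),
})
# Build a fake outcome with a true -0.10 interaction effect
p_work = 0.8 - 0.1 * df.joint_pain * df.post - 0.002 * (df.age - 45)
df["employed"] = rng.binomial(1, p_work.clip(0, 1))

did = smf.ols("employed ~ joint_pain * post + age", data=df).fit()
print(did.params["joint_pain:post"])   # recovers roughly -0.10
```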

For more precision, let’s run a two-stage IV on the panel data, first estimating use of any joint pain drug as a function of the Vioxx removal and the presence of joint pain, then labor supply as a function of (instrumented) use of a joint pain drug. Use of any joint pain drug fell about 50% in the panel following the removal of Vioxx. Labor supply of those with joint pain is about 22 percentage points higher when Vioxx is available in the individual fixed effects IV, implying a 54% decline in the probability of working for those who were taking chronic joint pain drugs before Vioxx was removed. How big an economic effect is this? About 3% of the work force are elderly folks reporting some kind of joint pain, and 20% of them found the pain serious enough to take prescription joint pain medication. If 54% of that group leaves the labor force, overall labor supply changed by .35 percentage points because of Vioxx (accounting for spillovers to related drugs), or $19 billion of labor income lost when Vioxx was taken off the market. This is a lot, though of course these estimates are not terribly precise. The point is that medical cost effectiveness studies, in cases like the one studied here, can miss quite a lot if they fail to account for impacts beyond QALYs.
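
The back-of-envelope aggregate step can be reproduced directly from the numbers quoted in the text:

```python
# Reproducing the back-of-envelope above, using only the numbers quoted
# there (the published 0.35pp figure also folds in spillovers to related
# drugs, which this one multiplication does not capture).

share_workforce_elderly_joint_pain = 0.03  # elderly workers reporting joint pain
share_on_prescription_meds = 0.20          # of those, on prescription pain drugs
decline_in_prob_working = 0.54             # IV estimate of the employment drop

effect = (share_workforce_elderly_joint_pain
          * share_on_prescription_meds
          * decline_in_prob_working)
print(f"aggregate labor supply change: {effect:.4%}")  # ~0.32 percentage points
```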

Final working paper (IDEAS page). Paper published in AEJ: Applied 2012.
