Category Archives: Labor

On Gary Becker

Gary Becker, as you must surely know by now, has passed away. This is an incredible string of bad luck for the University of Chicago. With Coase and Fogel having passed recently, and Director, Stigler and Friedman dying a number of years ago, perhaps Lucas and Heckman are the only remaining giants from Chicago’s Golden Age.

Becker is of course known for using economic methods – by which I mean constrained rational choice – to expand economics beyond questions of pure wealth and prices to questions of interest to social science at large. But this contribution is too broad, and he was certainly not the only one pushing such an expansion; the Chicago Law School clearly was doing the same. For an economist, Becker’s principal contribution can be summarized very simply: individuals and households are producers as well as consumers, and rational decisions in production are as interesting to analyze as rational decisions in consumption. As firms must purchase capital to realize their productive potential, humans must purchase human capital to expand their own utility possibilities. As firms take actions today which alter constraints tomorrow, so do humans. These may seem to be trite statements, but they are absolutely not: human capital, and dynamic optimization of fixed preferences, offer a radical framework for understanding everything from topics close to Becker’s heart, like educational differences across cultures or the nature of addiction, to the great questions of economics, like how the world was able to break free from the dreadful Malthusian constraint.

Today, the fact that labor can augment itself with education is taken for granted, which is a huge shift in how economists think about production. Becker, in his Nobel Prize speech: “Human capital is so uncontroversial nowadays that it may be difficult to appreciate the hostility in the 1950s and 1960s toward the approach that went with the term. The very concept of human capital was alleged to be demeaning because it treated people as machines. To approach schooling as an investment rather than a cultural experience was considered unfeeling and extremely narrow. As a result, I hesitated a long time before deciding to call my book Human Capital, and hedged the risk by using a long subtitle. Only gradually did economists, let alone others, accept the concept of human capital as a valuable tool in the analysis of various economic and social issues.”

What do we gain by considering the problem of human capital investment within the household? A huge amount! By using human capital along with economic concepts like “equilibrium” and “private information about types”, we can answer questions like the following. Does racial discrimination wholly reflect differences in tastes? (No – because of statistical discrimination, underinvestment in human capital by groups that suffer discrimination can be self-fulfilling, and, as in Becker’s original discrimination work, different types of industrial organization magnify or ameliorate tastes for discrimination in different ways.) Is the difference between men and women in traditional labor roles a biological matter? (Not necessarily – with gains to specialization, even very small biological differences can generate very large behavioral differences.) What explains many of the strange features of labor markets, such as jobs with long tenure, firm boundaries, etc.? (Firm-specific human capital requires investment, and following that investment there can be scope for hold-up in a world without complete contracts.) The explanations in these parentheticals call for completely different policy responses than earlier, more naive accounts of the same phenomena.

Personally, I find human capital most interesting in understanding the Malthusian world. Malthus conjectured the following: as productivity improves for some reason, excess food appears. With excess food, people have more children and population grows, necessitating even more food. To generate more food, people begin farming marginal land, until we wind up with precisely the living standards per capita that prevailed before the productivity improvement. We know, by looking out our windows, that the world in 2014 has broken free from Malthus’ dire calculus. But how? The critical factor must be that as productivity improves, population does not grow, or else grows more slowly than the continued endogenous increases in productivity. Why might that be? The quantity-quality tradeoff. A productivity improvement generates surplus, leading to demand for non-agricultural goods. Increased human capital generates more productivity in producing those goods. Parents have fewer kids but invest more heavily in their human capital so that they can work in the new sector. Such substitution is only partial, so in order to get wealthy, we need a big initial productivity improvement to generate demand for the goods in the new sector. And thus Malthus is defeated by knowledge.
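
That logic can be made concrete with a toy simulation; all the functional forms and parameter values below are entirely made up for illustration, not taken from any model in the literature. A small productivity shock is eaten up by fertility and income returns to subsistence, while a large shock pushes income past the threshold at which parents substitute child quality for quantity, after which human capital growth outruns population growth.

```python
# Toy Malthusian model: output is A * L^alpha (diminishing returns to labor on
# fixed land). Population grows with per-capita surplus; above an income
# threshold, parents substitute "quality" (human capital, which raises A) for
# quantity. All parameters are illustrative, not calibrated.

def simulate(A0, shock, threshold, periods=200):
    A, L = A0, 1.0
    alpha = 0.7          # labor share; alpha < 1 gives the Malthusian drag
    subsistence = 1.0    # per-capita income at which population is stable
    A *= shock           # one-time productivity improvement
    for _ in range(periods):
        y = A * L ** alpha / L                    # income per capita
        if y > threshold:
            A *= 1.03                             # quality: invest in human capital
            L *= 1.005                            # quantity: fertility stays low
        else:
            L = max(0.01, L * (1 + 0.2 * (y - subsistence)))  # Malthusian fertility
    return A * L ** alpha / L

# Small shock: fertility eats the surplus and income drifts back to subsistence.
# Large shock: income clears the threshold and grows without bound.
small = simulate(A0=1.0, shock=1.1, threshold=2.0)
large = simulate(A0=1.0, shock=3.0, threshold=2.0)
```

The point of the sketch is only the bifurcation: the same dynamics deliver stagnation or sustained growth depending on whether the initial improvement is big enough to trigger the quantity-quality substitution.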

Finally, a brief word on the origin of human capital. The idea that people take deliberate and costly actions to improve their productivity, and that formal study of this object may be useful, is modern: Mincer and Schultz in the 1950s, and then Becker with his 1962 article and famous 1964 book. That said, economists (to the chagrin of some other social scientists!) have treated humans as a type of capital for much longer. A fascinating 1966 JPE article [gated] traces this early history. Petty, Smith, Senior, Mill, von Thunen: they all thought an accounting of national wealth required accounting for the productive value of the people within the nation, and 19th century economists frequently mention that parents invest in their children. These early economists made such claims knowing they were controversial; Walras clarifies that in pure theory “it is proper to abstract completely from considerations of justice and practical expediency” and to regard human beings “exclusively from the point of view of value in exchange.” That is, don’t think we are imagining humans as being nothing other than machines for production; rather, human capital is just a useful concept when discussing topics like national wealth. Becker, contrary to the caricature of the arch-neoliberal, was absolutely not the first to “dehumanize” people by rationalizing decisions like marriage or education in a cost-benefit framework; rather, he is great because he was the first to show how powerful an analytical concept such dehumanization could be!

Dale Mortensen as Micro Theorist

Northwestern’s sole Nobel Laureate in economics, Dale Mortensen, passed away overnight; he remained active as a teacher and researcher over the past few years, though I’d been hearing word through the grapevine about his declining health over the past few months. Surely everyone knows Mortensen the macroeconomist for his work on search models in the labor market. There is something odd here, though: Northwestern has really never been known as a hotbed of labor research. To the extent that researchers rely on their colleagues to generate and work through ideas, how exactly did Mortensen become such a productive and influential researcher?

Here’s an interpretation: Mortensen’s critical contribution to economics is as the vector by which important ideas in micro theory entered real-world macro; his first well-known paper is literally published in a 1970 book called “Microeconomic Foundations of Employment and Inflation Theory.” Mortensen had the good fortune to be a labor economist working in the 1970s and 1980s at a school with a frankly incredible collection of microeconomic theorists; during those two decades, Myerson, Milgrom, Loury, Schwartz, Kamien, Judd, Matt Jackson, Kalai, Wolinsky, Satterthwaite, Reinganum and many others were associated with Northwestern. And this was a rare condition! Game theory is everywhere today, and the pioneers of the field (von Neumann, Nash, Blackwell, etc.) were active in the middle of the century. Nonetheless, into the early 1970s, game theory in the social sciences was close to dead. Paul Samuelson, the great theorist, wrote essentially nothing using game theory between the early 1950s and the 1990s. Quickly scanning the American Economic Review from 1970-1974, I find, at best, one article per year that can be called game-theoretic.

What is the link between Mortensen’s work and developments in microeconomic theory? The essential labor market insight of search models (an insight which predates Mortensen) is that the number of hires and layoffs is substantial even in the depths of a recession. That is, the rise in the unemployment rate cannot simply be because the marginal revenue product of the potential workers is always less than the cost of employing them, since huge numbers of the unemployed are hired during recessions (even as others are fired). Therefore, a model which explains unemployment through changes in churn, rather than through the aggregate rate alone, seems qualitatively important if we are to develop policies to address unemployment. This suggests that there might be some use in a model where workers and firms search for each other, perhaps with costs or other frictions. Early models along this line by Mortensen and others were generally one-sided and hence non-strategic: they had the flavor of optimal stopping problems.
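
To see the optimal stopping flavor, here is the textbook one-sided (McCall-style) problem, not Mortensen’s specific formulation, with illustrative parameters: an unemployed worker receives flow value b, samples wage offers each period, and keeps an accepted wage forever. The reservation wage solves a simple fixed point.

```python
# One-sided search as optimal stopping: with discount factor beta, flow value b
# while unemployed, and offers w ~ Uniform[0,1], the reservation wage solves
#   w* = b + (beta/(1-beta)) * E[max(w - w*, 0)].
# Parameters are illustrative, not taken from any particular paper.

def reservation_wage(b=0.1, beta=0.9, tol=1e-10):
    k = beta / (1 - beta)
    # For Uniform[0,1] offers, E[max(w - w*, 0)] = (1 - w*)**2 / 2.
    def gap(w):
        return b + k * (1 - w) ** 2 / 2 - w
    lo, hi = 0.0, 1.0  # gap() is positive at 0 and negative at 1, so bisect
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

w_star = reservation_wage()  # roughly 0.65 with these parameters
```

Note the stopping rule is all there is: the worker accepts any offer above w_star and keeps searching otherwise, with no strategic interaction between the two sides of the market.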

Unfortunately, Diamond, in a 1971 JET article, pointed out that Nash equilibrium in two-sided search leads to the conclusion that all workers are paid their reservation wage: all employers pay the reservation wage, workers believe this to be true and hence do not engage in costly search to switch jobs, hence the belief is accurate and nobody can profitably deviate. Getting around the “Diamond Paradox” involved enriching the model of who searches when and the extent to which old offers can be recovered; Mortensen’s work with Burdett is a nice example. One also might ask whether laissez-faire search is efficient: given the contemporaneous work of micro theorists like Glenn Loury on mathematically similar problems like the patent race, you might imagine that efficient search is unlikely.
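
The undercutting logic behind the paradox can be illustrated with a toy best-response iteration (the numbers are made up): since a worker will not pay search cost c to chase an offer at most c better, each firm can shave its wage to just below the best alternative and still be accepted, and the only rest point has every firm paying exactly the reservation wage.

```python
# Toy illustration of the Diamond paradox's undercutting logic, with made-up
# numbers. If sampling another firm costs the worker c, a firm paying
# (best alternative - c) is still accepted, so each round every firm undercuts;
# the unique rest point is the reservation wage.

def undercut_to_equilibrium(initial_wages, reservation=0.3, c=0.05):
    wages = list(initial_wages)
    while True:
        best = max(wages)
        # Each firm pays the least the worker will accept rather than search on,
        # but never less than the worker's reservation wage.
        new = [max(reservation, best - c) for _ in wages]
        if new == wages:
            return wages
        wages = new

final = undercut_to_equilibrium([0.9, 0.8, 0.7])
# All firms end up paying exactly the reservation wage of 0.3.
```

The fixed point is independent of the starting wage distribution, which is exactly why the result was so troubling: costly search plus wage posting kills wage dispersion entirely.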

Beyond the efficiency of matches themselves is the question of how to split surplus. Consider a labor market. In the absence of search frictions, Shapley (first with Gale, later with Shubik) had shown in the 1960s and early 1970s the existence of stable two-sided matches even when “wages” are included. It turns out these stable matches are tightly linked to the cooperative idea of a core. But what if this matching is dynamic? Firms and workers meet with some probability over time. A match generates surplus. Who gets this surplus? Surely you might imagine that the firm should have to pay a higher wage (more of the surplus) to workers who expect to get good future offers if they do not accept the job today. Now we have something that sounds familiar from non-cooperative game theory: wage is based on the endogenous outside options of the two parties. It turns out that noncooperative game theory had very little to say about bargaining until Rubinstein’s famous bargaining game in 1982 and the powerful extensions by Wolinsky and his coauthors. Mortensen’s dynamic search models were a natural fit for those theoretic developments.
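
For reference, the Rubinstein split has a clean closed form; the sketch below states the standard textbook result with illustrative discount factors, and checks the two indifference conditions that pin the split down.

```python
# Rubinstein (1982) alternating-offers bargaining: with per-period discount
# factors d1 and d2, the unique subgame-perfect split gives the proposer
# x1 = (1 - d2)/(1 - d1*d2). Each responder is offered exactly the discounted
# value of being the proposer next period.

def rubinstein_shares(d1, d2):
    x1 = (1 - d2) / (1 - d1 * d2)   # share kept when player 1 proposes
    x2 = (1 - d1) / (1 - d1 * d2)   # share kept when player 2 proposes
    # Indifference: the responder's offer equals their discounted proposer value.
    assert abs((1 - x1) - d2 * x2) < 1e-12
    assert abs((1 - x2) - d1 * x1) < 1e-12
    return x1, x2

x1, x2 = rubinstein_shares(0.9, 0.9)
# With equal patience the proposer keeps 1/(1+d), about 0.526 here:
# patience is bargaining power, and outside options shift the split.
```

This is the machinery that lets dynamic search models say something about who gets the match surplus, rather than assuming a wage-setting rule.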

I imagine that when people hear “microfoundations”, they have in mind esoteric calibrated rational expectations models. But microfoundations in the style of Mortensen’s work are much more straightforward: we simply cannot understand even the qualitative nature of counterfactual policy in the absence of models that account for strategic behavior. And thus there is a role even for high micro theory, which investigates the existence and uniqueness of strategic outcomes (game theory) and the potential for a planner to improve welfare through alternative rules (mechanism design). Powerful tools indeed, and well used by Mortensen.

“The Economic Benefits of Pharmaceutical Innovations: The Case of Cox-2 Inhibitors,” C. Garthwaite (2012)

Cost-benefit analysis and comparative effectiveness are the big buzzwords in medical policy these days. If we are going to see 5% annual real per-capita increases in medical spending, we had better be getting something for all that effort. The usual way to study cost effectiveness is with QALYs, Quality-Adjusted Life Years. The idea is that a medicine which makes you live longer, with less pain, is worth more, and we can use alternative sources (such as willingness to accept jobs with higher injury risk) to put numerical values on each component of the QALY.

But medicine has other economic effects, as Craig Garthwaite (from here at Kellogg) reminds us in a recent paper. One major impact runs through the labor market: the disabled and those with chronic pain work less. Garthwaite considers the case of Vioxx, a very effective remedy for long-term pain which (it was thought) could be used without the gastrointestinal side effects of ibuprofen or naproxen. It rapidly became very widely prescribed. However, evidence began to accumulate that Vioxx also caused serious heart problems, and the pill was taken off the market in 2004. Alternative joint pain medications for long-term use weren’t really comparable (though, having taken naproxen briefly for a joint injury, I assure you it is basically a miracle drug).

We have a great panel dataset on medical spending called MEPS, which includes age, medical history, prescriptions, income, and labor supply decisions. That is, we have everything we need for a quick diff-in-diff: take those with joint pain and those without, before Vioxx leaves the market and after. We see parallel trends in labor supply before Vioxx is removed (though of course, those with joint pain are on average older, more female, and less educated, hence much less likely to work). The year Vioxx is removed, labor supply drops 10 percent among those with joint pain, and even more if we look a few periods past the removal.
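
The diff-in-diff itself is just a double difference of group means. With hypothetical employment rates (these are not Garthwaite’s actual numbers), the mechanics look like this:

```python
# Diff-in-diff mechanics with hypothetical employment rates: treated = those
# reporting joint pain, control = those without, before and after the 2004
# Vioxx withdrawal. Numbers are made up for illustration.

employment = {
    ("joint_pain", "pre"): 0.50,
    ("joint_pain", "post"): 0.45,
    ("no_pain",    "pre"): 0.70,
    ("no_pain",    "post"): 0.70,
}

def diff_in_diff(y):
    treated_change = y[("joint_pain", "post")] - y[("joint_pain", "pre")]
    control_change = y[("no_pain", "post")] - y[("no_pain", "pre")]
    # The control group's change nets out any common time trend.
    return treated_change - control_change

effect = diff_in_diff(employment)  # -0.05: a 5 pp drop attributed to removal
```

The parallel pre-trends mentioned above are what justifies using the no-pain group’s change as the counterfactual for the joint-pain group.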

For more precision, let’s do a two-stage IV on the panel data, first estimating use of any joint pain drug conditional on the Vioxx removal and the presence of joint pain, then labor supply conditional on use of a joint pain drug. Use of any joint pain drug fell about 50% in the panel following the removal of Vioxx. Labor supply of those with joint pain is about 22 percentage points higher when Vioxx is available in the individual fixed effects IV, meaning a 54% decline in the probability of working for those who were taking chronic joint pain drugs before Vioxx was removed. How big an economic effect is this? About 3% of the work force are elderly folks reporting some kind of joint pain, and 20% of them found the pain serious enough to have prescription joint pain medication. If 54% of that group leaves the labor force, overall labor supply fell by .35 percentage points because of the Vioxx withdrawal (accounting for spillovers to related drugs), or $19 billion of labor income lost when Vioxx was taken off the market. This is a lot, though of course these estimates are not too precise. The point is that medical cost effectiveness studies, in cases like the one studied here, can miss quite a lot if they fail to account for impacts beyond QALYs.
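
The back-of-envelope chaining of those shares can be checked directly; the shares come from the text above, though the chaining itself is mine, and the published .35 pp figure additionally folds in spillovers to users of related drugs.

```python
# Back-of-envelope from the numbers in the text: 3% of the workforce are
# elderly workers reporting joint pain, 20% of those were on prescription joint
# pain medication, and 54% of that group stopped working after the withdrawal.

share_joint_pain = 0.03
share_on_medication = 0.20
share_exiting = 0.54

direct_effect = share_joint_pain * share_on_medication * share_exiting
# ~0.32 percentage points from the direct channel alone; the quoted 0.35 pp
# also counts spillovers to related drugs.
print(round(direct_effect * 100, 2), "percentage points")
```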

Final working paper (IDEAS page). Paper published in AEJ: Applied 2012.

“Contractability and the Design of Research Agreements,” J. Lerner & U. Malmendier (2010)

Outside research has (as we discussed yesterday) begun to regain prominence as a firm strategy. This is particularly so in biotech: the large drug firms generally do not do the basic research that leads to new products. Rather, they contract it out to independent research firms, then handle the development, licensing and marketing in-house. But such contracts are tough to write. Not only do I have trouble writing an enforceable contract that conditions on the effort exerted by the research firm, but the fact that research firms have other projects, and also like to do pure science for prestige reasons, means that they are likely to take my money and use it to fund projects which are not the drug company’s most preferred.

We are in luck: economic theory has a broad array of models of contracting under multitasking worries. Consider the following model of Lerner and Malmendier. The drug firm pays some amount to set up a contract. The research firm then does some research. The drug firm observes the effort of the researcher, who either worked on exactly what the drug company prefers, or on a related project which throws off various side inventions. After the research is performed, the research firm is paid. With perfect ability to contract on effort, this is an easy problem: pay the research firm only if they exert effort on the projects the drug company prefers. When the research project is “tell me whether this compound has this effect”, it might be possible to write such a contract. When the research project is “investigate the properties of this class of compounds and how they might relate to diseases of the heart”, surely no such contract is possible. In that case, the optimal contract may be just to let the research firm work on the broader project it prefers, because at least then the fact that the research firm gets spillovers means that the drug firm can pay the researcher less money. This is clearly second-best.

Can we do better? What about “termination contracts”? After effort is observed, but before development is complete, the drug firm can terminate the contract or not. Payments in the contract can certainly condition on termination. How about the following contract: the drug firm terminates if the research firm works on the broader research project, and it takes the patent rights to the side inventions. Here, if the research firm deviates and works on its own side projects, the drug company gets to keep the patents for those side projects, hence the research firm won’t do such work. And the drug firm further prefers the research firm to work on the assigned project; since termination means that development is not completed, the drug firm won’t just falsely claim that effort was low in order to terminate and seize the side project patents (indeed, on equilibrium path, there are few side patents to seize since the research firm is actually working on the correct project!). The authors show that the contract described here is always optimal if a conditional termination contract is used at all.
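
The incentive logic of the termination contract can be sketched with hypothetical payoffs; this is an illustration of the mechanism described above, not Lerner and Malmendier’s formal model, and the payoff values are invented.

```python
# Toy incentive check for the termination contract, with hypothetical payoffs.
# The research firm chooses focused effort (the assigned project) or broad
# effort (which throws off side inventions worth `side_value`).

payment = 1.0      # paid on completion of the assigned project (hypothetical)
side_value = 0.6   # value of side-invention patents from broad effort (hypothetical)

def research_firm_payoff(effort, termination_rights):
    if effort == "focused":
        return payment
    # Broad effort: with termination rights, the drug firm terminates, pays
    # nothing, and seizes the side patents, so the research firm gets zero;
    # without them, the research firm keeps both the payment and the side value.
    return 0.0 if termination_rights else payment + side_value

best_with = max(("focused", "broad"),
                key=lambda e: research_firm_payoff(e, termination_rights=True))
best_without = max(("focused", "broad"),
                   key=lambda e: research_firm_payoff(e, termination_rights=False))
```

With termination rights, focused effort is the research firm’s best response; without them, broad effort dominates, which is exactly why the termination clause (and the patent seizure that comes with it) has bite.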

Empirically, what does this mean? If I write a research contract for more general research, I should expect more termination rights to be reserved. Further, the liquidity constraints of the research firms matter; if the drug firm could make the research firm pay it back after termination, it would do so, and we could again achieve the first best. So I should expect termination rights to show up particularly for undercapitalized research firms. Lerner and Malmendier create a database from contract data collected by a biotech consulting firm, and show that both of these predictions appear to be borne out. I read these results as in the style of Maskin and Tirole: even when I can’t fully specify all the states of the world in a contract, I can still do a good bit of conditioning.

2008 Working paper (IDEAS version). Final paper in AER 2010. Malmendier will certainly be a factor in the upcoming Clark medal discussion, as she turns 40 this year. Problematically, Nick Bloom (who, says his CV, did his PhD part time?!) also turns 40, and both absolutely deserve the prize. If I were a betting man, I would wager that the just-published-in-the-QJE Does Management Matter in the Third World paper will be the one that puts Bloom over the top, as it’s really the best development paper in many years. That said, I am utterly confused that Finkelstein won last year given that Malmendier and Bloom are both up for their last shot this year. Finkelstein is a great economist, no doubt, but she works in a very similar field to Malmendier, and Malmendier trumps her by any conceivable metric (citations, top cited papers, overall impact, etc.). I thought they switched the Clark Medal to an every-year affair just to avoid such a circumstance, such as when Athey, List and Melitz were all piled up in 2007.

I’m curious what a retrospective Clark Medal would look like, taking into account only research that was done as of the voting year, but allowing us to use our knowledge of the long-run impact of that research. Since 2001, Duflo 2010 and Acemoglu 2005 are locks. I think Rabin keeps his in 2001. Guido Imbens takes Levitt’s spot in 2003. List takes 2007, with Melitz and Athey just missing out (though both are supremely deserving!). Saez keeps 2009. Malmendier takes 2011. Bloom takes 2012. Raj Chetty takes 2013 – still young, but already an obvious lock to win. What’s interesting about this list is just how dominant young folks have been in micro (especially empirical and applied theory); these are essentially the best people working in that area, whereas macro and metrics are still by and large dominated by an older generation.

“Inefficient Hiring in Entry-Level Labor Markets,” A. Pallais (2012)

It’s job market season again. I’m just back from a winter trip in Central Europe (though, being an economist, I skipped the castles and cathedrals, instead going to Schumpeter’s favorite Viennese hiking trail and von Neumann’s boyhood home in Budapest) and have a lot of papers to post about, but given that Pallais’ paper is from 2011’s job market, I should clear it off the docket. Her paper was, I thought, a clever use of a field experiment (and I freely admit my bias in favor of theoretically sound field experiments rather than laboratory exercises when considering empirical quantities).

Here’s the basic theoretical problem. There are a bunch of candidates for a job, some young and some old. The old workers have had their productivity revealed to some extent by their past job experience. For young workers, employers can only see a very noisy signal of their productivity. There is a small cost to hiring workers: they must be trained, etc. In equilibrium, firms will hire young workers whose expected productivity is above the firm’s cost. Is this socially efficient? No, because of a simple information externality. The social planner would hire all young workers whose productivity plus the value of information revealed during their young tenure is above the firm’s cost. That is, private firms do not take into account that their hiring of a worker creates a positive externality via information that allows for better worker-firm matches in future periods. If young workers could pay firms for the opportunity to work, this might fix the problem to some extent, though in general such arrangements are not legal (though on this point, see my comment in the final paragraph). Perhaps this might explain the high levels of unemployment among the young, and the fact that absence from the labor market at the start of a career is particularly damaging?
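
A minimal numerical sketch of the externality, with all numbers invented for illustration: the planner’s hiring threshold sits below the market’s by exactly the information value of a first job, so some efficient hires never happen.

```python
# Information externality in entry-level hiring: firms hire a young worker when
# expected productivity exceeds the hiring cost, while the planner also counts
# the value of the information a first job reveals. Numbers are illustrative.

hiring_cost = 0.5
info_value = 0.15   # hypothetical social value of learning a young worker's type

# Expected productivity of ten young applicants, given their noisy signals.
expected_productivity = [0.30, 0.38, 0.42, 0.47, 0.52, 0.58, 0.63, 0.70, 0.81, 0.90]

privately_hired = [p for p in expected_productivity if p > hiring_cost]
socially_hired = [p for p in expected_productivity if p + info_value > hiring_cost]

# Workers with expected productivity in (0.35, 0.5] are efficient hires that
# the market leaves unemployed.
missed = sorted(set(socially_hired) - set(privately_hired))
```

The wedge is the whole story: any worker whose expected productivity falls between cost minus the information value and cost is an efficient hire the market forgoes.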

How important is this? It’s tough in a lot of real world data to separate the benefit to workers of having their underlying ability revealed by early job experience from workers upgrading their skills during their first job. It is also tough to see the general equilibrium effects: if the government assists some young workers in getting hired, does this lead to less unemployment among young workers in future periods, or do these assisted workers simply crowd out others who would have been hired in the absence of the intervention? Empirical data on past interventions is somewhat ambiguous. Pallais therefore uses an online job market similar to Mechanical Turk. Basically, on the site you can hire workers to perform small tasks like data entry. They request a wage and you can hire them or not. Previous hires are public, as are optional ratings and comments by the employers.

Pallais hires a huge number of workers to do data entry. She randomly divides the applicants into three groups: those she doesn’t hire, those she hires and gives only minimal feedback, and those she hires and provides detailed comments on. The task is ten hours of simple data entry with no training, so it’s tough to imagine anyone would infer the workers’ underlying human capital has improved. Other employers can see that Pallais has made a hire as soon as the contract begins, but the comments are added later; there is no effect on workers’ job offers until after the comments appear. And the effect appears substantial. Just being hired and getting a brief comment has a small impact on workers’ future wages and employment. A longer, positive comment has what looks like an enormous impact on a worker’s future employment and wages. Though the treatment does lower wages received by other people on data entry jobs by increasing the supply of certified workers, the overall increase in welfare from more hiring of young workers trumps the lower wages.

Interesting, but two comments. First, for some reason the draft of this paper I read seems to suggest, if only between the lines, that this sorting is good for workers. But it needn’t be so! A simple model: all firms are identical, and have cost .4 of hiring a worker. Workers have skills drawn from a uniform [0,1] distribution. No signals are received in the first period. Therefore, all workers have expected skill .5, and all are hired at wage .1 (by the zero-profit condition in a competitive labor market). After the first hiring, skill level is completely revealed. Therefore, only 60% of workers are hired in the second period, at a wage equal to their skill minus .4. A policy that revealed the skill of young workers ex ante would have decreased employment among young workers by 40 percent! Note that this would be the efficient outcome, so a social planner who cares about total welfare would still want to reveal the skill, even though a social planner who cares only about employment would not.
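
The arithmetic of that toy model checks out directly:

```python
# Checking the arithmetic of the toy model in the text: skills Uniform[0,1],
# hiring cost 0.4, competitive zero-profit wages.

cost = 0.4
expected_skill = 0.5                  # mean of Uniform[0,1]

# Period 1: no signals, so every worker is hired at E[skill] - cost.
period1_wage = expected_skill - cost  # 0.1
period1_employment = 1.0              # everyone is hired

# Period 2: skills revealed; only workers with skill above cost are hired.
period2_employment = 1 - cost         # P(skill > 0.4) = 0.6

# Revealing skill ex ante would cut young employment from 100% to 60%.
```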

Second, to the extent that skill revelation is important, young workers with private information about their skills ought to self-select. Those who believe themselves to be high types should choose jobs which frequently throw off public signals about their underlying quality (i.e., firms that promote good young workers quickly, industries like sales with easily verifiable output, etc.). Those who believe themselves to be low types should select into jobs without such signals. If everyone is rational and knows their own type, you can see some unraveling will happen here. What has the empirical career concerns literature learned about such selection?

November 2011 working paper (No IDEAS version). I see on her CV that this paper is currently R&Red at AER.

Learning and Liberty Ships, P. Thompson

(Note: This post refers to “How Much Did the Liberty Shipbuilders Learn? New Evidence for an Old Case Study” (2001) and “How Much Did the Liberty Shipbuilders Forget?” (2007), both by Peter Thompson.)

It’s taken for granted now that organizations “learn” as their workers gain knowledge while producing, and “forget” when not actively involved in some project. Identifying the importance of such learning-by-doing and organizational forgetting is quite a challenging empirical task. We would need a case where an easily measurable final product was produced over and over by different groups using the same capital and technology, with data fully recorded. And a 1945 article by a man named Searle found just such an example: the US Navy Liberty Ships. These standardized ships were produced by the thousand by a couple dozen shipyards during World War II. Searle showed clearly that organizations get better at making ships as they accumulate experience, and that the productivity gain from such learning-by-doing is enormous. His data was used in a more rigorous manner by researchers in the decades afterward, generally confirming the learning-by-doing and also showing that shipyards which stopped producing Liberty ships for a month or two very quickly saw their productivity plummet.

But rarely is the real world so clean. Peter Thompson, in this pair of papers (as well as a third published in the AER and discussed here), throws cold water both on the claim that organizations learn rapidly and on the claim that they forget just as rapidly. The problem is twofold. First, capital at the shipyards was assumed to be roughly constant. In fact, it was not. Almost all of the Liberty shipyards took some time to gear up their equipment when they began construction. Peter dug up some basic information on capital at each yard from deep in the national archives. Indeed, the terminal capital stock at each yard was three times the initial capital on average. Including a measure of capital in the equation estimating learning-by-doing reduces the importance of learning-by-doing by half.

It gets worse. Fractures were found frequently, in more than 60% of ships built at the sloppiest yard. Speed was encouraged by contract, and hence some of the “learning-by-doing” may simply have been learning how to get away with low quality welding and other tricks. Thompson adjusts the time it took to build each ship to account for an estimate of the average repair time required at each yard at each point in time. Fixing this measurement error further reduces productivity growth due to learning-by-doing by six percent. The upshot? Organizational learning is real, but the magnitudes everyone knows from the Searle data are vastly overstated. This matters: Bob Lucas, in his well-known East Asian growth miracle paper, notes that worldwide innovation, human capital and physical capital are not enough to account for sustained 6-7% growth like we saw in places like Korea in the 70s and 80s. He suggests that learning-by-doing as firms move up the export-goods quality ladder might account for such rapid growth. But such a growth miracle requires quite rapid on-the-job productivity increases. (The Lucas paper is also great historical reading: he notes that rapid growth in Korea and the other tigers – in 1991, as rich as Mexico and Yugoslavia, what a miracle! – will continue, except, perhaps, in the sad case of Hong Kong!)
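
The omitted-capital point is easy to see in synthetic data; the numbers below are made up, not Thompson’s, but the mechanism is the same: if capital grows alongside cumulative experience, a learning curve estimated without capital loads the capital effect onto “learning.”

```python
import numpy as np

# Synthetic illustration of the omitted-capital problem in learning-curve
# estimates (made-up data, not Thompson's). Log labor requirements fall with
# cumulative experience AND with capital; since yards built up capital while
# accumulating experience, omitting capital loads its effect onto "learning".

rng = np.random.default_rng(0)
n = 500
log_experience = np.log(np.arange(1, n + 1))
log_capital = 0.5 * log_experience + 0.1 * rng.standard_normal(n)  # grows with experience
true_learning, capital_elasticity = -0.20, -0.30
log_hours = (true_learning * log_experience
             + capital_elasticity * log_capital
             + 0.05 * rng.standard_normal(n))

def ols(X, y):
    # Least-squares coefficients via numpy's lstsq.
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b_omit = ols(np.column_stack([ones, log_experience]), log_hours)[1]
b_full = ols(np.column_stack([ones, log_experience, log_capital]), log_hours)[1]

# b_omit is roughly -0.35 (learning looks too strong); b_full recovers
# something near the true -0.20.
```

The bias is the textbook omitted-variable formula: the learning coefficient absorbs the capital elasticity times the projection of capital on experience, which is exactly the flavor of correction Thompson’s archival capital data makes possible.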

Thompson also investigates organizational forgetting. Old estimates using Liberty ship data find worker productivity on Liberty ships falling a full 25% per month when the workers were not building Liberty ships. Perhaps this is because the shipyards’ “institutional memory” was insufficient to transmit the tricks that had been learned, or because labor turnover meant good workers left in the interim. The mystery of organizational forgetting in Liberty yards turns out to have a simpler explanation: measurement error. Yards would work on Liberty ships, then break for a few months to work on a special product or custom ship of some kind, then return to the Liberty. But actual production was not so discontinuous: some capital and labor transitioned back to the Liberty ships with delay, in a way not noticed before. This appears in the data as decreased productivity right after a return to Liberty production, followed by rapid “learning” to get back to the frontier. Any estimate of such a nonlinear quantity is bound to be vague, but Peter’s specifications give organizational forgetting in Liberty ship production of 3-5% per month, and find little evidence that this is related to labor turnover. This estimate is similar to other recent estimates of production line forgetting, such as that in Benkard’s 2000 AER on the aircraft industry.

How Much did the Liberty Shipbuilders Learn? (final published version) (IDEAS page). Final version published in JPE 109.1, 2001.

How Much did the Liberty Shipbuilders Forget? (2005 working paper) (IDEAS page). Final paper in Management Science 53.6, 2007.

“When the Levee Breaks: Black Migration and Economic Development in the American South,” R. Hornbeck and S. Naidu (2012)

Going back at least to Marx, surplus labor, particularly in the countryside, has been considered the enemy of labor-saving technological progress. With boundless countryside labor, whether because of force (serfdom, slavery, etc.) or because of limited opportunities for migration, landowners can lack the incentive to adopt labor-substituting technologies that they might otherwise adopt. This story anecdotally applies to the American South. From 1940 to 1970, a second “Great Migration” of African-Americans fled the South toward industrial cities in the North with high labor demand. Simultaneously, the South began adopting farming technology that had long been much more common in the North and Midwest. Before 1940, these African-American workers were often bound up in paternalistic relationships with their employers that imposed relatively large moving costs on potential migrants. But is there any cause and effect here? Was the industrial boom in the heartland the cause of modernization in the South?

Naidu and Hornbeck (two of the best young economic historians in the world; more on this shortly) examine this by looking at the 1927 flood of the Mississippi river. During this flood, large numbers of black workers in the Delta were forced to move to Red Cross camps, where networks formed that led many of the workers to head to cities like Chicago; blatant abuse of the Red Cross system by white planters certainly served as an additional incentive. Before the flood, the use of mules, and of tractors for transporting cotton to the gin, was very limited in the Delta.

Imagine that the cost of black labor increases, as happened during the flood due to the ease of moving North from the aid camps. In a simple model where black labor, white labor and capital are substitutes, the one-time increase in black wages increases capital use, decreases land value (due to the loss of exploitable black labor paid less than its marginal product because of moving costs), and increases white labor (which is assumed to already be part of a national labor market). The authors examine this model using a difference-in-differences design comparing flooded counties to non-flooded counties in the Delta.
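The identifying logic is the classic two-by-two comparison. Here is a minimal sketch with made-up county means (the actual paper uses county panels with many controls; the 0.14 log-point gap below is invented to echo the 14% population decline reported later):

```python
# Hypothetical mean log black population, flooded vs. non-flooded Delta
# counties, before and after the 1927 flood. Numbers are illustrative only.
means = {
    ("flooded", "pre"): 10.00, ("flooded", "post"): 9.85,
    ("control", "pre"): 10.10, ("control", "post"): 10.09,
}

def diff_in_diff(m):
    """(Post-minus-pre change in flooded counties) minus the same change
    in control counties, netting out any Delta-wide trend."""
    treated_change = m[("flooded", "post")] - m[("flooded", "pre")]
    control_change = m[("control", "post")] - m[("control", "pre")]
    return treated_change - control_change

effect = diff_in_diff(means)  # about -0.14 log points
```

The key assumption, as always, is parallel trends: absent the flood, flooded and non-flooded counties would have evolved similarly.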

Flooded counties lost 14% of their black population after the flood. Flooded counties adopt mules and horses at a higher rate than non-flooded counties by 1930, and quickly replace these farm animals with tractors. The use of tractors causes average farm size to rise in flooded counties over the next 30 years; large average farm size, worldwide, is highly correlated with productive farming. One large landowner with accessible records sees no change in profits despite the modernization of inputs. There are many robustness checks, but overall this is a convincing case that the South modernized when labor costs rose from their artificially low pre-flood level.

August 2012 NBER Working Paper (IDEAS page)

(P.S.: If this type of work interests you, take a quick peek at some of the other work the coauthors have been doing. Hornbeck’s recent AER shows in great detail the slow economic adjustment to the Dust Bowl’s short-run effects, with great relevance to current climate change policy, and his 2010 JPE with Greenstone and Moretti, which uses large industrial plant “contests” to study local knowledge spillovers, is the state of the art on the question. Naidu has a forthcoming AER on how pseudo-slavery relations (roughly, labor contracts enforced in criminal courts in 19th century Britain) were used to smooth labor market risk, as well as a great 2011 paper showing clear-as-day evidence that someone aware of secret coup plotting in the US during the Cold War was using that knowledge to profit in the stock market. Naidu also does some cool evolutionary game theory work with the always-great Sam Bowles.)

“Training Contracts, Worker Overconfidence, and the Provision of Firm-Sponsored General Training,” M. Hoffman (2011)

It’s job market season again, and I’m always curious to see what type of work by the Young Turks is popular in any given year. The present paper, by Berkeley’s Mitch Hoffman, strikes me as a nice example in the genre “Behavioral Second Best,” a genre best exemplified by Ben Handel’s medical insurance paper, which I see is now R&R at AER (and which surely will be accepted in the end, right?). The classic theory of the Second Best says that, in the presence of two market distortions, fixing one may decrease social welfare. For instance, reducing the tax burden of a firm that pollutes leads to less distortionary investment across market sectors, but more investment into a sector with large negative externalities. The Behavioral Second Best (BSB) is similar: the presence of a market distortion plus a behavioral bias can mean that correcting one of these decreases welfare. In Ben’s medical insurance paper, the reluctance of workers to switch their health plans over time had the side benefit of letting the insurance market offer multiple price/feature bundles without the usual unraveling of a separating equilibrium.

Unsurprisingly, this is a popular genre, so what does it take to write a good paper along these lines? I think there’s no way to do it without a first-rate dataset plus some empirical analysis that convincingly shows an important, real-world effect. Merely pointing out a theoretical curiosity related to BSB is simply uninteresting at this stage. So surely Hoffman’s bevy of top-flight flyouts is related to some careful data work. The question he addresses is an old one: why would firms pay for general training when workers can just leave the firm once they’ve been trained, or demand higher pay once trained, as in a classic hold-up problem? Workers can’t always pay for training themselves due to credit constraints. Labor market frictions surely explain some of the puzzle – it is not always so easy to take one’s talents to a new employer – but it is tough to imagine these frictions accounting for training valued in the tens of thousands of dollars, which is not unusual in many industries. And certainly this is something firms worry about: witness the recent scandal in which Apple, Google and other tech companies had a secret do-not-compete-for-labor pact based precisely on the worry that workers they train will flee once training is finished. (As an aside, how is it that some of these executives, particularly Eric Schmidt, are not facing criminal charges here? The quotes documented look like bald-faced admissions of criminal activity to me!)

Hoffman proposes a reason why firms may be willing to pay for training in piece-rate industries even when workers can leave whenever they wish: overconfidence. As a trucker, say, I am paid by how many miles I drive. The trucking firm agrees to hire and train me, but if I leave before date X, I must pay back the cost of my training. After my training, I am overconfident about the number of miles I will be able to drive. Overconfidence that is not attenuated (or only slowly attenuated) by learning makes me less likely to quit, since the (perceived) value of keeping my trucking job is higher, at any given piece rate, than my best outside option. This makes firms more willing to train me in the first place. In theory, this might improve outcomes for everyone: the firm gets more profit from better-trained workers, and the workers can perhaps extract a higher wage. Teaching workers to be less confident might make things worse.
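The mechanism fits in a one-period sketch. All numbers below are hypothetical, and this is emphatically not Hoffman's structural model, just the quit margin it turns on:

```python
def stays(piece_rate, believed_miles, outside_wage, quit_penalty):
    """Worker stays iff perceived inside pay beats the outside wage net of
    the training-repayment penalty, under the worker's own beliefs."""
    return piece_rate * believed_miles >= outside_wage - quit_penalty

# Hypothetical parameters: $/mile, $/week outside wage, $ penalty for quitting early
piece_rate, outside_wage, quit_penalty = 0.40, 1150.0, 50.0
actual_miles = 2500       # true expected weekly miles
believed_miles = 3000     # overconfident belief

overconfident_stays = stays(piece_rate, believed_miles, outside_wage, quit_penalty)
calibrated_stays = stays(piece_rate, actual_miles, outside_wage, quit_penalty)
# The overconfident worker stays (making the firm's training investment
# viable); a worker with correct beliefs would quit at these parameters.
```

Debiasing the worker here flips the quit decision, which is the sense in which removing the behavioral bias can destroy the training arrangement entirely.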

Hoffman has a fantastic data set of payroll data from a large trucking company, indicating actual miles driven per week, plus the relevant training contract of each employee, plus weekly subjective reports on how many miles the employee expected to drive. Those reports were secret as far as the employer was concerned, so there is little reason for workers to lie, but Hoffman also runs a side experiment where he pays workers small bonuses for guessing their miles driven correctly; such incentives do not significantly change the reported expectations. The average worker is terribly overconfident, and his overconfidence attenuates as he gains experience, but only slowly. Overconfidence is linked to lower quit probabilities at any stage of the training contract, as you would expect. Running a counterfactual structural model, Hoffman examines how various contracts affect quit probabilities, and therefore firm and worker welfare (though for all the usual reasons, you should be skeptical of welfare estimates involving behaviorally biased agents). Eliminating overconfidence massively harms the trucking firm’s profits, as quit rates increase and training becomes less viable. A government ban on penalties for quitting after training may actually improve worker welfare for BSB reasons: though these penalties allow for more cases where worker training is possible, overconfidence also means workers are willing to accept huge penalties for quitting in exchange for tiny increases in post-training wages.

My only real quibble here is how the outside option is defined, theoretically. The usual worry with worker training is not hold-up, but the increase in the outside option. The model here is really quite specific to firm-specific training in piece-rate industries. My prior is that such industries are a rather small share of all industries where worker training occurs: examples in the paper involving MBA education surely don’t qualify. That said, I imagine you can tell a Behavioral Second Best story in the more general case, as long as the overconfidence retains some firm-specific character, or as long as there is some divergence between the impact of overconfidence on perceived marginal product within the firm and outside it. There are many ways to do this: labor search frictions are one. (November 2011 working paper)

A Note on Rogoff and Inequality

Generally, I restrict this weblog solely to discussions of new economics research, with a bias toward theory; there is no shortage of good economics writing on the internet about policy, or the “economics of everyday life”, or economic statistics.

That said, I understand that theory is often esoteric. How, you may wonder, do some of these results apply to “real” economics, or to the “real” world? An editorial by a top-notch empirical economist, Ken Rogoff, is making the rounds on the internet today, and I think this is a great example to show the value of theory.

Essentially, Rogoff argues that growing inequality will in some sense self-correct. This is because “simply put, the greater the premium for highly skilled workers, the greater the incentive to find ways to economize on employing their talents.” Rogoff is considered quite liberal, and supports efforts to expand educational opportunity, but “it would be foolish, if not dangerous, to infer rising inequality in relative incomes in the coming decades by extrapolating from recent trends.”

Unfortunately, theory suggests this argument is not totally correct. As discussed on this blog earlier this year, Samuelson pointed out decades ago that, from the perspective of a firm in general equilibrium, there is no such thing as a cheap or an expensive factor of production. This is true even if the firm is a monopolist; all we require is free entry in factor markets. The argument is simple: each factor of production is paid its marginal product. A firm has no greater need to economize on high-skilled labor than it does on low-skilled labor or on capital: at the margin, lowering costs is lowering costs.
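The "no cheap or expensive factors" point can be checked numerically. The sketch below uses a Cobb-Douglas technology with hypothetical elasticities and wages (my parameterization, not Samuelson's or Rogoff's): at the cost-minimizing mix, marginal product per dollar is equalized across factors, so the firm gains nothing by singling out the highly paid skilled workers to economize on.

```python
# Hypothetical Cobb-Douglas: y = s^alpha * u^beta
alpha, beta = 0.3, 0.7    # output elasticities of skilled and unskilled labor
w_s, w_u = 60.0, 20.0     # skilled and unskilled wages (skilled is "expensive")

u = 100.0                               # pick an unskilled employment level...
s = (alpha / beta) * (w_u / w_s) * u    # ...and the cost-minimizing skilled level

def output(s, u):
    return s ** alpha * u ** beta

# Numerical marginal products
eps = 1e-6
mp_s = (output(s + eps, u) - output(s, u)) / eps
mp_u = (output(s, u + eps) - output(s, u)) / eps

# Marginal product per dollar spent on each factor
bang_s, bang_u = mp_s / w_s, mp_u / w_u
# bang_s == bang_u (up to numerical error): at the optimum, a dollar saved on
# skilled labor costs exactly as much output as a dollar saved on unskilled.
```

Since this equalization holds at any factor prices, a rising skill premium by itself creates no differential incentive to "economize on talent", which is the gap in Rogoff's argument.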

Constructively, what does this mean for inequality trends? It means that to decrease inequality, either the marginal products of different workers need to equalize (perhaps implying more equality of educational opportunity), or factor markets without free entry need to loosen. The second argument strikes me as the important one. Intellectual property, occupational licensing requirements such as those in medicine, skilled immigration restrictions that are biased across sectors of the economy, and high-paying patronage jobs are a few among many distortions on the factor side. Theory tells us that the market will not self-correct these distortions, and therefore, contra Rogoff, we should be worried about long-run growth in inequality.

“Search with Multilateral Bargaining,” M. Elliott (2010)

Matthew Elliott is another job market candidate making the rounds this year, and he presented this nice paper on matching today here at Northwestern. In the standard bilateral search model (due to Hosios), firms and workers choose whether or not to enter the job market (paying a cost), then meet sequentially with some probability and bargain over the wage. In these models, there can be either too much entry or too little: an additional unemployed worker entering makes it easier for firms to find an acceptable worker, but harder for other unemployed workers to find a job. This famous model (itself an extension of Diamond-Mortensen-Pissarides, they of the 2010 Nobel) has been extended to allow for search costs, on-the-job search and trilateral bargaining, where two firms fight over one worker. Extending it to the most general case – n firms and m workers, with match opportunities varying by worker and firm and formed stochastically – required much more advanced tools from network theory.

Elliott provides those results, remaining in the framework of negotiated rather than posted wages; as he notes, this theory is perhaps most applicable to high-skill labor markets, where wages are not publicly posted and workers are heterogeneous. Workers and firms simultaneously decide whether to enter the job market (paying a cost) and how hard to search (in an undirected manner). Workers match stochastically with firms (each of which desires to hire one worker), with match probabilities depending on the level of search. Matched firms and workers then negotiate how to split the surplus a match would generate, and hiring occurs.

If this sounds like Shapley-Shubik assignment to you, you’re on the right track. Because we’re in a Shapley-Shubik world, pairwise stability of the final assignment places us in the core; there are no deviations, even coalitional deviations, available. In a companion paper, Elliott shows that the assignment can be decomposed into the actual assigned links and the “best outside option” link for each agent. The minimum pairwise stable payoff can be found by adding and subtracting the values of each agent’s chain of outside option links.

The results for the labor market are these: there is never too much entry, search can sometimes be too heavy (though never too light), and the labor market is “fragile”: it can unravel quickly. Entry is efficient because a new entrant only changes payoffs if he forces an old link to sever. By the definition of pairwise stability, the firm and the worker from that link must collectively be getting a higher payoff if they sever, since otherwise they would just reform their old link. That is, new entrants only thicken the market. Unlike in Hosios, since entering firms in Elliott must bid up the wage of a worker in order to “steal” him from his current match, entry imposes no uncompensated negative externality on other firms: entrants pay for the externality they cause. The same argument in reverse applies to workers. Search is too heavy because having more outside options allows you, in some sense, to negotiate away more of the surplus from your current match. Labor market fragility arises because, mathematically, everyone is getting payoffs that result from a weighted, connected graph. If one agent decides not to enter (say his entry costs rise by epsilon), the outside options of other agents are lowered. Their current partners are therefore willing to give them less of the surplus. Because of this, they may choose not to enter, and so on down the line. That is, minor shocks to the search process can create the amplification seen in the business cycle.
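The unraveling mechanism can be made concrete in a toy chain economy. This is not Elliott's model (which works through minimum pairwise stable payoffs on a general graph); it is a deliberately simple illustration with made-up payoffs, where each agent's negotiated payoff is higher when the next agent along the chain is present to serve as an outside option:

```python
def survivors(n, pay_with_option=6.0, pay_without=4.0, entry_cost=5.0, shocked=None):
    """Iterate exit decisions to a fixed point. Agent n-1 has an external
    outside option; every other agent i relies on agent i+1 for theirs.
    All payoff numbers are hypothetical."""
    present = [True] * n
    if shocked is not None:
        present[shocked] = False       # an epsilon cost shock knocks this agent out
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not present[i]:
                continue
            has_option = present[i + 1] if i + 1 < n else True
            pay = pay_with_option if has_option else pay_without
            if pay < entry_cost:       # payoff no longer covers the entry cost
                present[i] = False
                changed = True
    return sum(present)

baseline = survivors(10)                # no shock: all 10 agents enter
after_shock = survivors(10, shocked=5)  # agents 0-4 unravel; only 4 remain
```

A single agent's exit propagates down the chain because each departure degrades a neighbor's bargaining position, which is the amplification logic in the paper, stripped to its skeleton.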

It would be nice to extend this type of model of the labor market to a dynamic setting – it’s not clear to me how sensible it is to talk about labor markets unraveling when all choices are made simultaneously. Nonetheless, this paper provides further evidence of the usefulness of network theory and matching for a wide range of economic problems. The operations research types no doubt still have a lot to teach economists.

(EDIT: Forgot the link to the working paper initially.)

