Category Archives: Information Econ

The 2018 Fields Medal and its Surprising Connection to Economics!

The Fields Medal and Nevanlinna Prizes were given out today. They represent the highest honor possible for young mathematicians and theoretical computer scientists, and are granted only once every four years. The mathematics involved is often very challenging for outsiders. Indeed, the most prominent of this year’s winners, the German Peter Scholze, is best known for his work on “perfectoid spaces”, and I honestly have no idea how to begin explaining them aside from saying that they are useful in a number of problems in algebraic geometry (the lovely field linking algebra – which numbers solve y=2x? – and geometry – the observation that those solutions to y=2x form a line). Two of this year’s prizes, however, the Fields given to Alessio Figalli and the Nevanlinna to Constantinos Daskalakis, have a very tight connection to an utterly core question in economics. Indeed, both of those men have published work in economics journals!

The problem of interest concerns how best to sell an object. If you are a monopolist hoping to sell one item to one consumer, where the consumer’s valuation of the object is only known to the consumer but commonly known to come from a distribution F, the mechanism that maximizes revenue is of course the Myerson auction from his 1981 paper in Math OR. The solution is simple: make a take it or leave it offer at a minimum price (or “reserve price”) which is a simple function of F. If you are selling one good and there are many buyers, then revenue is maximized by running a second-price auction with the exact same reserve price. In both cases, no potential buyer has any incentive to lie about their true valuation (the auction is “dominant strategy incentive compatible”). And further, seller revenue and expected payments for all players are identical to the Myerson auction in any other mechanism which allocates goods the same way in expectation, with minor caveats. This result is called “revenue equivalence”.
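For concreteness, here is a minimal numeric sketch of that one-buyer solution (my own toy code, not anything from Myerson’s paper; the lognormal F below is an arbitrary stand-in for whatever distribution values are drawn from). The reserve price solves phi(r) = r - (1 - F(r))/f(r) = 0, where phi is Myerson’s “virtual value”, and for a regular F this agrees with brute-force maximization of expected revenue r * (1 - F(r)).

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

# Toy computation of the Myerson reserve price for one buyer whose value
# is drawn from F (a lognormal chosen purely for illustration). The
# optimal take-it-or-leave-it price solves the virtual value equation
#   phi(r) = r - (1 - F(r)) / f(r) = 0.
F = stats.lognorm(s=0.5)

def virtual_value(v):
    return v - (1 - F.cdf(v)) / F.pdf(v)

reserve = brentq(virtual_value, 0.05, 20.0)
print(f"reserve from virtual value: {reserve:.3f}")

# Sanity check: expected revenue r * (1 - F(r)) peaks at the same point
# when F is regular (i.e., the virtual value is monotone).
grid = np.linspace(0.01, 10.0, 2000)
revenue = grid * (1 - F.cdf(grid))
print(f"reserve by brute force:     {grid[np.argmax(revenue)]:.3f}")
```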

The Myerson paper is an absolute blockbuster. The revelation principle, the revenue equivalence theorem, and a solution to the optimal selling mechanism problem all in the same paper? I would argue it’s the most important result in economics since Arrow-Debreu-McKenzie, with the caveat that many of these ideas were “in the air” in the 1970s with the early ideas of mechanism design and Bayesian game theory. The Myerson result is also really worrying if you are concerned with general economic efficiency. Note that the reserve price means that the seller is best off sometimes not selling the good to anyone, in case all potential buyers have private values below the reserve price. But this is economically inefficient! We know that there exists an allocation mechanism which is socially efficient even when people have private information about their willingness to pay: the Vickrey-Clarke-Groves mechanism. This means that market power plus asymmetric information necessarily destroys social surplus. You may be thinking we know this already: an optimal monopoly price in classic price theory generates deadweight loss. But recall that a perfectly-price-discriminating monopolist sells to everyone whose willingness-to-pay exceeds the seller’s marginal cost of production, hence the only reason monopoly generates deadweight loss in a world with perfect information is that we constrain the monopolist to a “mechanism” called a fixed price. Myerson’s result is much worse: even letting a monopolist use any mechanism, and price discriminate however they like, asymmetric information necessarily destroys surplus!

Despite this great result, there remain two enormous open problems. First, how should we sell a good when we will interact with the same buyer(s) in the future? Recall the Myerson auction involves bidders truthfully revealing their willingness to pay. Imagine that tomorrow, the seller will sell the same object. Will I reveal my willingness to pay truthfully today? Of course not! If I did, tomorrow the seller would charge the bidder with the highest willingness-to-pay exactly that amount. Ergo, today bidders will shade down their bids. This is called the “ratchet effect”, and despite a lot of progress in dynamic mechanism design, we have still not fully solved for the optimal dynamic mechanism in all cases.

The other challenging problem is one seller selling many goods, where willingness to pay for one good is related to willingness to pay for the others. Consider, for example, selling cable TV. Do you bundle the channels together? Do you offer a menu of possible bundles? This problem is often called “multidimensional screening”, because you are attempting to “screen” buyers such that those with high willingness to pay for a particular good actually pay a high price for that good. The optimal multidimensional screen is a devil of a problem. And it is here that we return to the Fields and Nevanlinna prizes, because they turn out to speak precisely to this problem!

What could possibly be the connection between high-level pure math and this particular pricing problem? The answer comes from the 18th century mathematician Gaspard Monge, founder of the Ecole Polytechnique. He asked the following question: what is the cheapest way to move mass from X to Y, such as moving apples from a bunch of distribution centers to a bunch of supermarkets? It turns out that without convexity or linearity assumptions, this problem is very hard, and it was not solved until the late 20th century. Leonid Kantorovich, the 1975 Nobel winner in economics, paved the way for this result by showing that there is a “dual” problem where instead of looking for the map from X to Y, you look for the probability that a given mass in Y comes from X. This dual turns out to be useful in that there exists an object called a “potential” which characterizes the solution of the optimal transport problem in a much more tractable way than searching across every possible map.
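For intuition about what Monge and Kantorovich are doing, here is a tiny sketch of the discrete problem, with supplies, demands, and costs all invented for illustration. The Kantorovich relaxation is just a linear program over shipment quantities, and the dual variables on the supply and demand constraints are exactly the “potentials” mentioned above.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete Monge-Kantorovich: ship apples from 2 distribution centers to
# 3 supermarkets at minimum total cost. Decision variable gamma[i, j] is
# the quantity shipped from center i to market j; rows must sum to
# supplies and columns to demands. All numbers are made up.
supply = np.array([5.0, 3.0])
demand = np.array([2.0, 4.0, 2.0])
cost = np.array([[1.0, 3.0, 2.0],
                 [4.0, 1.0, 5.0]])

n, m = cost.shape
A_eq, b_eq = [], []
for i in range(n):                    # supply constraints (row sums)
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(m):                    # demand constraints (column sums)
    col = np.zeros(n * m); col[j::m] = 1.0
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
print("optimal shipping plan:\n", res.x.reshape(n, m))
print("total cost:", res.fun)
```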

Note the link between this problem and our optimal auction problem above, though! Instead of moving mass most cheaply from X to Y, we are looking to maximize revenue by assigning objects Y to people with willingness-to-pay drawn from X. So no surprise, the solution to the optimal transport problem when X has a particular structure and the solution to the revenue maximizing mechanism problem are tightly linked. And luckily for us economists, many of the world’s best mathematicians, including 2010 Fields winner Cedric Villani, and this year’s winner Alessio Figalli, have spent a great deal of effort working on exactly this problem. Ivar Ekeland has a nice series of notes explaining the link between the two problems in more detail.

In a 2017 Econometrica, this year’s Nevanlinna winner Daskalakis and his coauthors Alan Deckelbaum and Christos Tzamos show precisely how to use strong duality in the optimal transport problem to solve the general optimal mechanism problem when selling multiple goods. The paper is very challenging, requiring some knowledge of measure theory, duality theory, and convex analysis. That said, the conditions they give to check an optimal solution, and the method to find the optimal solution, involve a reasonably straightforward series of inequalities. In particular, the optimal mechanism involves dividing the hypercube of potential types into (perhaps infinitely many) regions whose members get assigned the same prices and goods (for example, “you get good A and good B together with probability p at price X”, or “if you are unwilling to pay p1 for A, p2 for B, or p for both together, you get nothing”).

This optimal mechanism has some unusual properties. Remember that the Myerson auction for one buyer is “simple”: make a take it or leave it offer at the reserve price. You may think that if you are selling many items to one buyer, you would likewise choose a reserve price for the whole bundle, particularly when the number of goods with independently distributed values becomes large. For instance, if there are 1000 cable channels, and a buyer has value distributed uniformly between 0 and 10 cents for each channel, then by a limit theorem type argument it’s clear that the willingness to pay for the whole bundle is quite close to 50 bucks. So you may think, just price at a bit lower than 50. However, Daskalakis et al show that when there are sufficiently many goods with i.i.d. uniformly-distributed values, it is never optimal to just set a price for the whole bundle! It is also possible to show that the best mechanism often involves randomization, where buyers who report that they are willing to pay X for item a and Y for item b will only get the items with probability less than 1 at a specified price. This is quite contrary to my intuition, which is that in most mechanism problems, we can restrict attention to deterministic assignment. It was well-known that multidimensional screening has weird properties; for example, Hart and Reny show that an increase in buyer valuations can cause seller revenue from the optimal mechanism to fall. The techniques Daskalakis and coauthors develop allow us to state exactly what we ought to do in situations the literature previously could not handle, such as when we know we need mechanisms more complicated than “sell the whole bundle at price p”.
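The limit-theorem claim in the cable example is easy to check by simulation. A quick sketch, my own code, using the invented numbers from the paragraph above:

```python
import numpy as np

# 1000 channels, each valued i.i.d. uniform on [0, 10] cents: willingness
# to pay for the grand bundle concentrates very tightly near $50.
rng = np.random.default_rng(0)
values = rng.uniform(0, 10, size=(10_000, 1000))   # cents, one row per buyer
bundle_wtp = values.sum(axis=1) / 100              # dollars

print(f"mean WTP ${bundle_wtp.mean():.2f}, std dev ${bundle_wtp.std():.2f}")

# Revenue from a single bundle price p is p * P(WTP >= p): pricing just
# below the mean captures nearly the whole surplus, which is what makes
# the "never optimal to pure-bundle" result above so striking.
for p in (48.0, 49.0, 49.5):
    share = (bundle_wtp >= p).mean()
    print(f"price ${p}: sell prob {share:.3f}, revenue ${p * share:.2f}")
```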

The history of economics has been a long series of taking tools from the frontier of mathematics, from the physics-based analogues of the “marginalists” in the 1870s, to the fixed point theorems of the early game theorists, the linear programming tricks used to analyze competitive equilibrium in the 1950s, and the tropical geometry recently introduced to auction theory by Elizabeth Baldwin and Paul Klemperer. We are now making progress on pricing issues that have stumped some of the great theoretical minds in the history of the field. Multidimensional screening is an incredibly broad topic: how ought we regulate a monopoly with private fixed and marginal costs, how ought we tax agents who have private costs of effort and opportunities, how ought a firm choose wages and benefits, and so on. Knowing the optimum is essential when it comes to understanding when we can use simple, nearly-correct mechanisms. Just in the context of pricing, using tricks related to those of Daskalakis, Gabriel Carroll showed in a recent Econometrica that bundling should be avoided when the principal has limited knowledge about the correlation structure of types, and my old grad school friend Nima Haghpanah has shown, in a paper with Jason Hartline, that firms should only offer high-quality and low-quality versions of their products if consumers’ values for the high-quality good and their relative value for the low versus high quality good are positively correlated. Neither of these results is trivial to prove. Nonetheless, a hearty cheers to our friends in pure mathematics who continue to provide us with the tools we need to answer questions at the very core of economic life!


“Eliminating Uncertainty in Market Access: The Impact of New Bridges in Rural Nicaragua,” W. Brooks & K. Donovan (2018)

It’s NBER Summer Institute season, when every bar and restaurant in East Cambridge, from Helmand to Lord Hobo, is filled with our tribe. The air hums with discussions of Lagrangians and HANKs and robust estimators. And the number of great papers presented, discussed, or otherwise floating around is inspiring.

The paper we’re discussing today, by Wyatt Brooks at Notre Dame and Kevin Donovan at Yale SOM, uses a great combination of dynamic general equilibrium theory and a totally insane quasi-randomized experiment to help answer an old question: how beneficial is it for villages to be connected to the broader economy? The fundamental insight requires two ideas that are second nature for economists, but are incredibly controversial outside our profession.

First, going back to Nobel winner Arthur Lewis if not much earlier, economists have argued that “structural transformation”, the shift out of low-productivity agriculture to urban areas and non-ag sectors, is fundamental to economic growth. Recent work by Hicks et al is a bit more measured – the individuals who benefit from leaving agriculture generally already have, so Lenin-type forced industrialization is a bad idea! – but nonetheless barriers to that movement are still harmful to growth, even when those barriers are largely cultural, as in the forthcoming JPE by Melanie Morten and the well-named Gharad Bryan. What’s so bad about the ag sector? In the developing world, it tends to be small-plot, quite-inefficient, staple-crop production, unlike the growth-generating, positive-externality-filled, increasing-returns-type sectors (on this point, Romer 1990). There are zero examples of countries becoming rich without their labor force shifting dramatically out of agriculture. The intuition of many in the public, that Gandhi was right about the village economy and that structural transformation just means dreadful slums, is the intuition of people who lack respect for individual agency. The slums may be bad, but look how they fill up everywhere they exist! Ergo, how bad must the alternative be?

The second, related misunderstanding of the public is that credit is unimportant. For folks near subsistence, the danger of economic shocks pushing you near that dangerous cutpoint is so fundamental that it leads to all sorts of otherwise odd behavior. Consider the response of my ancestors (and presumably the ancestors of the author of today’s paper, given that he is a Prof. Donovan) when potato blight hit. Potatoes are an input to growing more potatoes tomorrow, but near subsistence, you have no choice but to eat your “savings” away after bad shocks. This obviously causes problems in the future, prolonging the famine. But even worse, to avoid getting in a situation where you eat all your savings, you save more and invest less than you otherwise would. Empirically, Karlan et al QJE 2014 show large demand for savings instruments in Ghana, and Cynthia Kinnan shows why insurance markets in the developing world are incomplete despite large welfare gains. Indeed, many countries, including India, make it illegal to insure oneself against certain types of negative shocks, as Mobarak and Rosenzweig show. The need to save for low probability, really negative, shocks may even lead people to invest in assets with highly negative annual returns; on this, see the wonderfully-titled Continued Existence of Cows Disproves Central Tenets of Capitalism? This is all to say: the rise of credit and insurance markets unlocks much more productive activity, especially in the developing world, and these markets are not merely dens of exploitative lenders.

Ok, so insurance against bad shocks matters, and getting out of low-productivity agriculture may matter as well. Let’s imagine you live in a tiny village which is often geographically separated from bigger towns. What would happen if you somehow lowered the cost of reaching those towns? Well, we’d expect goods-trade to radically change – see the earlier post on Dave Donaldson’s work, or the nice paper on Brazilian roads by Morten and Oliveira. But the benefits of reducing isolation go well beyond just getting better prices for goods.

Why? In the developing world, most people have multiple jobs. They farm during the season, work in the market on occasion, do construction, work as a migrant, and so on. Imagine that in the village, most jobs are just farmwork, and outside, there is always the chance of day work at a fixed wage. In autarky, I just work on the farm, perhaps my own. I need to keep a bunch of savings because sometimes farms get a bunch of bad shocks: a fire burns my crops, or an elephant stomps on them. Running out of savings risks death, and there is no crop insurance, so I save precautionarily. Saving means I don’t have as much to spend on fertilizer or pesticide, so my yields are lower.

If I can access the outside world, then when my farm gets bad shocks and my savings runs low, I leave the village and take day work to build them back up. Since I know I will have that option, I don’t need to save as much, and hence I can buy more fertilizer. Now, the wage for farmers in the village (including the implicit wage that would keep me on my own farm) needs to be higher, since some of these ex-farmers will go work in town, shifting village labor supply left. This higher wage pushes the amount of fertilizer I will buy down, since high wages reduce the marginal productivity of farm improvements. Whether fertilizer use goes up or down is therefore an empirical question, but at least we can say that those who use more fertilizer, those who react more to bad shocks by working outside the village, and those whose savings drop the most should be the same farmers. Either way, the village winds up richer, both for the direct reason of having an outside option, and for the indirect reason of being able to reduce precautionary savings. That is, the harm of isolation comes not just from the first moment, the average shock to agricultural productivity, but also from the second moment, its variance.
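Here is a deliberately toy simulation of that precautionary-savings logic, with parameters invented for illustration rather than anything calibrated from the paper. A risk-averse farmer splits wealth between fertilizer and savings; a “bridge” puts a day-labor floor under income in disaster states, and the optimal response is to save less and fertilize more:

```python
import numpy as np

# Farmer with CRRA (gamma = 3) utility splits wealth between fertilizer
# and precautionary savings. Yields are hit by a disaster shock with
# probability 0.2; a bridge lets the farmer earn a day-labor wage floor
# after bad shocks. All parameters are invented for illustration.
gamma, wealth, p_disaster, wage_floor = 3.0, 1.0, 0.2, 0.2

def crra(x):
    return x ** (1 - gamma) / (1 - gamma)

def expected_utility(fert, bridge):
    savings = wealth - fert
    harvests = np.array([0.1, 1.2]) * np.sqrt(fert)  # disaster vs. normal yield
    if bridge:
        # Day labor is taken only when it beats the harvest, i.e., in disasters.
        harvests = np.maximum(harvests, wage_floor)
    incomes = savings + harvests
    return p_disaster * crra(incomes[0]) + (1 - p_disaster) * crra(incomes[1])

grid = np.linspace(0.01, 0.99, 99)
for bridge in (False, True):
    best = max(grid, key=lambda f: expected_utility(f, bridge))
    print(f"bridge={bridge}: fertilizer {best:.2f}, savings {wealth - best:.2f}")
```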

How much does this matter in practice? Brooks and Donovan worked with an NGO that physically builds bridges in remote areas. In Nicaragua, floods during the harvest season are common, isolating villages for days at a time when the riverbed along the path to market turns into a raging torrent. In this area, bridges are unnecessary when the riverbed is dry: the land is fairly flat, and the bridge barely reduces travel time when the riverbed isn’t flooded. These floods generally occur exactly during the growing season, after fertilizer is bought, but before crops are harvested, so the goods market in both inputs and outputs is essentially unaffected. And there is nice quasirandom variation: of 15 villages which the NGO selected as needing a bridge, 9 were ruled out after a visit by a technical advisor found the soil and topography unsuitable for the NGO’s relatively inexpensive bridge.

The authors survey villages the year before and the two years after the bridges are built, as well as surveying a subset of villagers with cell phones every two weeks in a particular year. Although N=15 seems worrying for power, the within-village differences in labor market behavior are sufficient that properly bootstrapped estimates can still infer interesting effects. And what do they find? In villages with bridges, many men shift from working in the village to working outside in a given week; the percentage of women working outside nearly doubles, with most of those women newly entering the labor force; wages inside the village rise while wages outside the village do not; the use of fertilizer rises; village farm profits rise 76%; and all of these effects are most pronounced for poorer households physically close to the bridge.

All this is exactly in line with the dynamic general equilibrium model sketched out above. If you assumed that bridges were just about market access for goods, you would have missed all of this. If you assumed the only benefit was additional wages outside the village, you would miss a full 1/3 of the benefit: the general equilibrium effect of shifting out workers who are particularly capable of working outside the village causes wages to rise for the farm workers who remain at home. These particular bridges show an internal rate of return of nearly 20% even though they do nothing to improve market access for either inputs or outputs! And there are, of course, further utility benefits from reducing risk, even when that risk reduction does not show up in income through the channel of increased investment.

November 2017 working paper, currently R&R at Econometrica (RePEc IDEAS version). Both authors have a number of other really interesting drafts, of which I’ll mention two. Brooks, in a working paper with Joseph Kaboski and Yao Li, identifies a really interesting harm of industrial clusters, but one that Adam Smith would have surely identified: they make collusion easier. Put all the firms in an industry in the same place, and establish regular opportunities for their managers to meet, and you wind up with much less variance in markups among the firms induced to locate in these clusters! Donovan, in a recent RED with my friend Chris Herrington, calibrates a model to explain why both college attendance and the relative cognitive ability of college grads rose during the 20th century. It’s not as simple as you might think: a decrease in costs, through student loans or otherwise, only affects marginal students, who are cognitively worse than the average existing college student. It turns out you also need a rising college premium and more precise signals of high schoolers’ academic abilities to get both patterns. Models doing work to extract insight from data – as always, this is the fundamental reason why economics is the queen of the social sciences.

Nobel Prize 2016 Part II: Oliver Hart

The Nobel Prize in Economics was given yesterday to two wonderful theorists, Bengt Holmstrom and Oliver Hart. I wrote a day ago about Holmstrom’s contributions, many of which are simply foundational to modern mechanism design and its applications. Oliver Hart’s contribution is more subtle and hence more of a challenge to describe to a nonspecialist; I am sure of this because no concept gives my undergraduate students more headaches than Hart’s “residual control right” theory of the firm. Even stranger, much of Hart’s recent work repudiates the importance of his most famous articles, a point that appears to have been entirely lost on every newspaper discussion of Hart that I’ve seen (including otherwise very nice discussions like Appelbaum’s in the New York Times). A major reason he has changed his beliefs, and his research agenda, so radically is not simply the whims of age or the pressures of politics, but rather the impact of a devastatingly clever, and devastatingly esoteric, argument made by the Nobel winners Eric Maskin and Jean Tirole. To see exactly what’s going on in Hart’s work, and why there remain many very important unsolved questions in this area, let’s quickly survey what economists mean by “theory of the firm”.

The fundamental strangeness of firms goes back to Coase. Markets are amazing. We have wonderful theorems going back to Hurwicz about how competitive market prices coordinate activity efficiently even when individuals only have very limited information about how various things can be produced by an economy. A pencil somehow involves graphite being mined, forests being explored and exploited, rubber being harvested and produced, the raw materials brought to a factory where a machine puts the pencil together, ships and trains bringing the pencil to retail stores, and yet this decentralized activity produces a pencil costing ten cents. This is the case even though not a single individual anywhere in the world knows how all of those processes up the supply chain operate! Yet, as Coase pointed out, a huge amount of economic activity (including the majority of international trade) is not coordinated via the market, but rather through top-down Communist-style bureaucracies called firms. Why on Earth do these persistent organizations exist at all? When should firms merge and when should they divest themselves of their parts? These questions make up the theory of the firm.

Coase’s early answer is that something called transaction costs exist, and that they are particularly high outside the firm. That is, market transactions are not free. Firm size is determined at the point where the problems of bureaucracy within the firm overwhelm the benefits of reducing transaction costs from regular transactions. There are two major problems here. First, who knows what a “transaction cost” or a “bureaucratic cost” is, and why they differ across organizational forms: the explanation borders on tautology. Second, as the wonderful paper by Alchian and Demsetz in 1972 points out, there is no reason we should assume firms have some special ability to direct or punish their workers. If your supplier does something you don’t like, you can keep them on, or fire them, or renegotiate. If your in-house department does something you don’t like, you can keep them on, or fire them, or renegotiate. The problem of providing suitable incentives – the contracting problem – does not simply disappear because some activity is brought within the boundary of the firm.

Oliver Williamson, who won a recent Nobel jointly with Elinor Ostrom, has a more formal transaction cost theory: some relationships generate joint rents higher than could be generated if we split ways, unforeseen things occur that make us want to renegotiate our contract, and the cost of that renegotiation may be lower if workers or suppliers are internal to a firm. “Unforeseen things” may include anything which cannot be measured ex-post by a court or other mediator, since that is ultimately who would enforce any contract. It is not that everyday activities have different transaction costs, but that the negotiations which produce contracts themselves are easier to handle in a more persistent relationship. As in Coase, the question of why firms do not simply grow to an enormous size is largely dealt with by off-hand references to “bureaucratic costs” whose nature was largely informal. Though informal, the idea that something like transaction costs might matter seemed intuitive and had some empirical support – firms are larger in the developing world because weaker legal systems mean more “unforeseen things” will occur outside the scope of a contract, hence the differential costs of holdup or renegotiation inside and outside the firm are first order when deciding on firm size. That said, the Alchian-Demsetz critique, and the question of what a “bureaucratic cost” is, are worrying. And as Eric van den Steen points out in a 2010 AER, can anyone who has tried to order paper through their procurement office versus just popping in to Staples really believe that the reason firms exist is to lessen the cost of intrafirm activities?

Grossman and Hart (1986) argue that the distinction that really makes a firm a firm is that it owns assets. They retain the idea that contracts may be incomplete – at some point, I will disagree with my suppliers, or my workers, or my branch manager, about what should be done, either because a state of the world has arrived not covered by our contract, or because it is in our first-best mutual interest to renegotiate that contract. They retain the idea that there are relationship-specific rents, so I care about maintaining this particular relationship. But rather than rely on transaction costs, they simply point out that the owner of the asset is in a much better bargaining position when this disagreement occurs. Therefore, the owner of the asset will get a bigger percentage of rents after renegotiation. Hence the person who owns an asset should be the one whose incentive to improve the value of the asset is most sensitive to that future split of rents.

Baker and Hubbard (2004) provide a nice empirical example: when on-board computers to monitor how long-haul trucks were driven began to diffuse, ownership of those trucks shifted from owner-operators to trucking firms. Before the computer, if the trucking firm owns the truck, it is hard to contract on how hard the truck will be driven or how poorly it will be treated by the driver. If the driver owns the truck, it is hard to contract on how much effort the trucking firm dispatcher will exert ensuring the truck isn’t sitting empty for days, or following a particularly efficient route. The computer solves the first problem, meaning that only the trucking firm is taking actions relevant to the joint relationship which are highly likely to be affected by whether they own the truck or not. In Grossman and Hart’s “residual control rights” theory, then, the introduction of the computer should mean the truck ought be owned by the trucking firm. If these residual control rights are unimportant – there is no relationship-specific rent and no incompleteness in contracting – then the ability to shop around for the best relationship is more valuable than the control rights asset ownership provides. Hart and Moore (1990) extends this basic model to the case where there are many assets and many firms, suggesting critically that sole ownership of assets which are highly complementary in production is optimal. Asset ownership affects outside options when the contract is incomplete by changing bargaining power, and splitting ownership of complementary assets gives multiple agents weak bargaining power and hence little incentive to invest in maintaining the quality of, or improving, the assets. Hart, Shleifer and Vishny (1997) provide a great example of residual control rights applied to the question of why governments should run prisons but not garbage collection. (A brief aside: note the role that bargaining power plays in all of Hart’s theories. We do not have a “perfect” – in a sense that can be made formal – model of bargaining, and Hart tends to use bargaining solutions from cooperative game theory like the Shapley value. After Shapley’s prize alongside Roth a few years ago, this makes multiple prizes heavily influenced by cooperative games applied to unexpected problems. Perhaps the theory of cooperative games ought still be taught with vigor in PhD programs!)

There are, of course, many other theories of the firm. The idea that firms in some industries are big because there are large fixed costs to enter at the minimum efficient scale goes back to Marshall. The agency theory of the firm going back at least to Jensen and Meckling focuses on the problem of providing incentives for workers within a firm to actually profit maximize; as I noted yesterday, Holmstrom and Milgrom’s multitasking is a great example of this, with tasks being split across firms so as to allow some types of workers to be given high powered incentives and others flat salaries. More recent work by Bob Gibbons, Rebecca Henderson, Jon Levin and others on relational contracting discusses how the nexus of self-enforcing beliefs about how hard work today translates into rewards tomorrow can substitute for formal contracts, and how the credibility of these “relational contracts” can vary across firms and depend on their history.

Here’s the kicker, though. A striking blow was dealt to all theories which rely on the incompleteness or nonverifiability of contracts by a brilliant paper of Maskin and Tirole (1999) in the Review of Economic Studies. Theories relying on incomplete contracts generally just hand-waved that there are always events which are unforeseeable ex-ante or impossible to verify in court ex-post, and hence there will always be scope for disagreement about what to do when those events occur. But, as Maskin and Tirole correctly point out, agents don’t care about anything in these unforeseeable/unverifiable states except for what the states imply about our mutual valuations from carrying on with a relationship. Therefore, every “incomplete contract” should just involve the parties deciding in advance that if a state of the world arrives where you value keeping our relationship in that state at 12 and I value it at 10, then we should split that joint value of 22 at whatever level induces optimal actions today. Do this same ex-ante contracting for all future profit levels, and we are done. Of course, there is still the problem of ensuring incentive compatibility – why would the agents tell the truth about their valuations when that unforeseen event occurs? I will omit the details here, but you should read the original paper, where Maskin and Tirole show a (somewhat convoluted but still working) mechanism that induces truthful revelation of private values by each agent. Taking the model’s insight seriously but the exact mechanism less seriously, the paper basically suggests that incomplete contracts don’t matter if we can truthfully figure out ex-post who values our relationship at what amount, and there are many real-world institutions like mediators who do precisely that. If, as Maskin and Tirole prove (and Maskin described more simply in a short note), incomplete contracts aren’t a real problem, we are back to square one – why have persistent organizations called firms?

What should we do? Some theorists have tried to fight off Maskin and Tirole by suggesting that their precise mechanism is not terribly robust to, for instance, assumptions about higher-order beliefs (e.g., Aghion et al (2012) in the QJE). But these quibbles do not contradict the far more basic insight of Maskin and Tirole, that situations we think of empirically as “hard to describe” or “unlikely to occur or be foreseen”, are not sufficient to justify the relevance of incomplete contracts unless we also have some reason to think that all mechanisms which split rent on the basis of future profit, like a mediator, are unavailable. Note that real world contracts regularly include provisions that ex-ante describe how contractual disagreement ex-post should be handled.

Hart’s response, and this is both clear from his CV and from his recent papers and presentations, is to ditch incompleteness as the fundamental reason firms exist. Hart and Moore’s 2007 AER P&P and 2006 QJE are very clear:

Although the incomplete contracts literature has generated some useful insights about firm boundaries, it has some shortcomings. Three that seem particularly important to us are the following. First, the emphasis on noncontractible ex ante investments seems overplayed: although such investments are surely important, it is hard to believe that they are the sole drivers of organizational form. Second, and related, the approach is ill suited to studying the internal organization of firms, a topic of great interest and importance. The reason is that the Coasian renegotiation perspective suggests that the relevant parties will sit down together ex post and bargain to an efficient outcome using side payments: given this, it is hard to see why authority, hierarchy, delegation, or indeed anything apart from asset ownership matters. Finally, the approach has some foundational weaknesses [pointed out by Maskin and Tirole (1999)].

To my knowledge, Oliver Hart has written zero papers since Maskin-Tirole was published which attempt to explain any policy or empirical fact on the basis of residual control rights and their necessary incomplete contracts. Instead, he has been primarily working on theories which depend on reference points, a behavioral idea that when disagreements occur between parties, the ex-ante contracts are useful because they suggest “fair” divisions of rent, and induce shading and other destructive actions when those divisions are not given. These behavioral agents may very well disagree about what the ex-ante contract means for “fairness” ex-post. The primary result is that flexible contracts (e.g., contracts which deliberately leave lots of incompleteness) can adjust easily to changes in the world but will induce spiteful shading by at least one agent, while rigid contracts do not permit this shading but do cause parties to pursue suboptimal actions in some states of the world. This perspective has been applied by Hart to many questions over the past decade, such as why it can be credible to delegate decision making authority to agents; if you try to seize it back, the agent will feel aggrieved and will shade effort. These responses are hard, or perhaps impossible, to justify when agents are perfectly rational, and of course the Maskin-Tirole critique would apply if agents were purely rational.

So where does all this leave us concerning the initial problem of why firms exist in a sea of decentralized markets? In my view, we have many clever ideas, but still do not have the perfect theory. A perfect theory of the firm would need to be able to explain why firms are the size they are, why they own what they do, why they are organized as they are, why they persist over time, and why interfirm incentives look the way they do. It almost certainly would need its mechanisms to work if we assumed all agents were highly, or perfectly, rational. Since patterns of asset ownership are fundamental, it needs to go well beyond the type of hand-waving that makes up many “resource” type theories. (Firms exist because they create a corporate culture! Firms exist because some firms just are better at doing X and can’t be replicated! These are outcomes, not explanations.) I believe that there are reasons why the costs of maintaining relationships – transaction costs – endogenously differ within and outside firms, and that Hart is correct in focusing our attention on how asset ownership and decision making authority affects incentives to invest, but these theories even in their most endogenous form cannot do everything we wanted a theory of the firm to accomplish. I think that somehow reputation – and hence relational contracts – must play a fundamental role, and that the nexus of conflicting incentives among agents within an organization, as described by Holmstrom, must as well. But we still lack the precise insight to clear up this muddle, and give us a straightforward explanation for why we seem to need “little Communist bureaucracies” to assist our otherwise decentralized and almost magical market system.

Nobel Prize 2016 Part I: Bengt Holmstrom

The Nobel Prize in Economics has been announced, and what a deserving prize it is: Bengt Holmstrom and Oliver Hart have won for the theory of contracts. The name of this research weblog is “A Fine Theorem”, and it would be hard to find two economists whose work is more likely to elicit such a description! Both are incredibly deserving; more than five years ago on this site, I discussed how crazy it was that Holmstrom had yet to win! The only shock is the combination: a more natural prize would have been Holmstrom with Paul Milgrom and Robert Wilson for modern applied mechanism design, and Oliver Hart with John Moore and Sandy Grossman for the theory of the firm. The contributions of Holmstrom and Hart are so vast that I’m splitting this post into two, so as to properly cover the incredible intellectual accomplishments of these two economists.

The Finnish economist Bengt Holmstrom did his PhD in operations research at Stanford, advised by Robert Wilson, and began his career at my alma mater, the tiny department of Managerial Economics and Decision Sciences at Northwestern’s Kellogg School. To say MEDS struck gold with their hires in this era is an extreme understatement: in 1978 and 1979 alone, they hired Holmstrom and his classmate Paul Milgrom (another Wilson student from Stanford), hired Nancy Stokey, promoted Nobel laureate Roger Myerson to Associate Professor, and tenured an adviser of mine, Mark Satterthwaite. And this list doesn’t even include other faculty in the late 1970s and early 1980s like eminent contract theorist John Roberts, behavioralist Colin Camerer, mechanism designer John Ledyard or game theorist Ehud Kalai. This group was essentially put together by two senior economists at Kellogg, Nancy Schwartz and Stanley Reiter, who had the incredible foresight to realize both that applied game theory was finally showing promise of tackling first-order economic questions in a rigorous way, and that the folks with the proper mathematical background to tackle these questions were largely going unhired since they often did their graduate work in operations or mathematics departments rather than traditional economics departments. This market inefficiency, as it were, allowed Nancy and Stan to hire essentially every young scholar in what would become the field of mechanism design, and to develop a graduate program which combined operations, economics, and mathematics in a manner unlike any other place in the world.

From that fantastic group, Holmstrom’s contribution lies most centrally in the area of formal contract design. Imagine that you want someone – an employee, a child, a subordinate division, an aid contractor, or more generally an agent – to perform a task. How should you induce them to do this? If the task is “simple”, meaning the agent’s effort and knowledge about how to perform the task most efficiently is known and observable, you can simply pay a wage, cutting off payment if effort is not being exerted. When only the outcome of work can be observed, if there is no uncertainty in how effort is transformed into outcomes, knowing the outcome is equivalent to knowing effort, and hence optimal effort can be achieved via a bonus payment made on the basis of outcomes. All straightforward so far. The trickier situations, which Holmstrom and his coauthors analyzed at great length, are when effort is unobservable and outcomes are only noisy signals of that effort.

Consider paying a surgeon. You want to reward the doctor for competent, safe work. However, it is very difficult to observe perfectly what the surgeon is doing at all times, and basing pay on outcomes has a number of problems. First, the patient outcome depends on the effort of not just one surgeon, but on others in the operating room and prep table: team incentives must be provided. Second, the doctor has many ways to shift the balance of effort between reducing costs to the hospital, increasing patient comfort, increasing the quality of the medical outcome, and mentoring young assistant surgeons, so paying on the basis of one or two tasks may distort effort away from other harder-to-measure tasks: there is a multitasking problem. Third, the number of medical mistakes, or the cost of surgery, that a hospital ought expect from a competent surgeon depends on changes in training and technology that are hard to know, and hence a contract may want to adjust payments for its surgeons on the performance of surgeons elsewhere: contracts ought take advantage of relevant information when it is informative about the task being incentivized. Fourth, since surgeons will dislike risk in their salary, the fact that some negative patient outcomes are just bad luck means that you will need to pay the surgeon very high bonuses to overcome their risk aversion: when outcome measures involve uncertainty, optimal contracts will weigh “high-powered” bonuses against “low-powered” insurance against risk. Fifth, the surgeon can be incentivized either by payments today or by keeping their job tomorrow, and worse, these career concerns may cause the surgeon to waste the hospital’s money on tasks which matter to the surgeon’s career beyond the hospital.

Holmstrom wrote the canonical paper on each of these topics. His 1979 paper in the Bell Journal of Economics shows that any information which reduces the uncertainty about what an agent actually did should feature in a contract, since by reducing uncertainty, you reduce the risk premium needed to incentivize the agent to accept the contract. It might seem strange that contracts in many cases do not satisfy this “informativeness principle”. For instance, CEO bonuses are often not indexed to the performance of firms in the same industry. If oil prices rise, essentially all oil firms will be very profitable, and this is true whether or not a particular CEO is a good one. Bertrand and Mullainathan argue that this is because many firms with diverse shareholders are poorly governed!
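A back-of-the-envelope simulation, with invented numbers, shows why the informativeness principle pushes toward indexing: under CARA utility and a linear contract with bonus slope b, the risk premium the firm must pay is 0.5 * r * b^2 * Var(measure), and benchmarking out the common industry shock shrinks that variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

r, b = 2.0, 0.5                             # risk aversion and bonus slope (toy)
industry = rng.normal(0.0, 1.0, n)          # common oil-price-type shock
own = rng.normal(0.0, 0.5, n)               # CEO-specific contribution

raw = own + industry                        # own-firm performance alone
peers = industry + rng.normal(0.0, 0.1, n)  # peer firms' average performance
indexed = raw - peers                       # benchmark out the common shock

# Under CARA preferences and a linear contract, the compensating risk
# premium is 0.5 * r * b**2 * Var(measure): indexing cuts it sharply.
for name, measure in [("raw", raw), ("indexed", indexed)]:
    premium = 0.5 * r * b**2 * measure.var()
    print(f"{name:8s} Var {measure.var():.2f} -> risk premium {premium:.3f}")
```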

The simplicity of contracts in the real world may have more prosaic explanations. In the famous “multitasking” paper with Paul Milgrom, published in the JLEO in 1991, Holmstrom notes that contracts shift incentives across different tasks in addition to serving as risk-sharing mechanisms and as methods for inducing effort. Since bonuses on task A will cause agents to shift effort away from hard-to-measure task B, it may be optimal to avoid strong incentives altogether (just pay teachers a salary rather than a bonus based only on test performance) or to split job tasks (pay bonuses to teacher A who is told to focus only on math test scores, and pay a salary to teacher B who is meant to serve as a mentor).

That outcomes are generated by teams also motivates simpler contracts. Holmstrom’s 1982 article on incentives in teams, published in the Bell Journal, points out that if both my effort and yours are required to produce a good outcome, then the marginal products of our efforts are each equal to the entire value of what is produced, hence there is not enough output to pay each of us our marginal product. What can be done? Alchian and Demsetz had noticed this problem in 1972, arguing that firms exist to monitor the effort of individuals working in teams. With perfect knowledge of who does what, you can simply pay the workers a wage sufficient to induce the optimal effort, then collect the residual as profit. Holmstrom notes that the monitoring isn’t the important bit: rather, even shareholder-controlled firms where shareholders do no monitoring at all are useful. The reason is that shareholders can be residual claimants for profit, and hence there is no need to fully distribute profit to members of the team. Free-riding can therefore be eliminated by simply paying team members a wage of X if the team outcome is optimal, and 0 otherwise. Even a slight bit of shirking by a single agent drops their payment precipitously (which is impossible if all profits generated by the team are shared by the team), so the agents will not shirk; a minimal numeric check of this logic appears below. Of course, when there is uncertainty about how team effort transforms into outcomes, this harsh penalty will not work, and hence incentive problems may require team sizes to be smaller than that which is first-best efficient.

A third justification for simple contracts is career concerns: agents work hard today to try to signal to the market that they are high quality, and do so even if they are paid a fixed wage. This argument had been made less formally by Fama, but Holmstrom (in a 1982 working paper finally published in 1999 in RESTUD) showed that this concern about the market only completely mitigates moral hazard if outcomes within a firm are fully observable to the market, or the future is not discounted at all, or there is no uncertainty about agents’ abilities. Indeed, career concerns can make effort provision worse; for example, agents may take actions to signal quality to the market which are negative for their current firm! A final explanation for simple contracts comes from Holmstrom’s 1987 paper with Milgrom in Econometrica. They argue that simple “linear” contracts, with a wage and a bonus based linearly on output, are more “robust” methods of solving moral hazard because they are less susceptible to manipulation by agents when the environment is not perfectly known. Michael Powell, a student of Holmstrom’s now at Northwestern, has a great set of PhD notes providing details of these models.
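As promised, a minimal numeric check of the teams logic, with made-up payoffs (two agents, binary effort, deterministic output):

```python
# Two agents each choose effort in {0, 1}; team output is 4 * (e1 + e2)
# and effort costs each agent 3. First-best is both working.
def output(e1, e2):
    return 4 * (e1 + e2)

cost = 3

# Budget-balanced sharing (split output 50/50): shirking while the other
# agent works pays 2 > 1, so the first best unravels into free-riding.
print("equal split, work :", 0.5 * output(1, 1) - cost)   # 1.0
print("equal split, shirk:", 0.5 * output(0, 1) - 0)      # 2.0

# Holmstrom's budget breaker: a residual claimant pays each agent 3.5
# only if output hits the team optimum of 8, and nothing otherwise. Any
# shirking now drops pay to zero, so working (net 0.5 > 0) is an
# equilibrium, and the claimant keeps 8 - 7 = 1 without monitoring anyone.
print("budget breaker, work :", 3.5 - cost)               # 0.5
print("budget breaker, shirk:", 0.0)
```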

These ideas are reasonably intuitive, but the way Holmstrom answered them is not. Think about how an economist before the 1970s, like Adam Smith in his famous discussion of the inefficiency of sharecropping, might have dealt with these problems. These economists had few tools to deal with asymmetric information, so although economists like George Stigler analyzed the economic value of information, the question of how to elicit information useful to a contract could not be discussed in any systematic way. These economists would have been burdened by the fact that the number of contracts one could write is infinite, so beyond saying that a contract of type X does not equate marginal cost to marginal revenue, the question of which “second-best” contract is optimal is extraordinarily difficult to answer in the absence of beautiful tricks like the revelation principle, partially developed by Holmstrom himself. To develop those tricks, a theory of how individuals would respond to changes in their joint incentives over time was needed; the ideas of Bayesian equilibria and subgame perfection, developed by Harsanyi and Selten, were unknown before the 1960s. The accretion of tools developed by pure theory finally permitted, in the late 1970s and early 1980s, an absolute explosion of developments of great use to understanding the economic world. Consider, for example, the many results in antitrust provided by Nobel winner Jean Tirole, discussed here two years ago.

Holmstrom’s work has provided me with a great deal of understanding of why innovation management looks the way it does. For instance, why would a risk neutral firm not work enough on high-variance moonshot-type R&D projects, a question Holmstrom asks in his 1989 JEBO “Agency Costs and Innovation”? Four reasons. First, in Holmstrom and Milgrom’s 1987 linear contracts paper, optimal risk sharing leads to more distortion by agents the riskier the project being incentivized, so firms may choose lower expected value projects even if they themselves are risk neutral. Second, firms build reputation in capital markets just as workers do with career concerns, and high variance output projects are more costly in terms of the future value of that reputation when the interest rate on capital is lower (e.g., when firms are large and old). Third, when R&D workers can potentially pursue many different projects, multitasking suggests that workers should be given small and very specific tasks so as to lessen the potential for bonus payments to shift worker effort across projects. Smaller firms with fewer resources may naturally have limits on the types of research a worker could pursue, which surprisingly makes it easier to provide strong incentives for research effort on the remaining possible projects. Fourth, multitasking suggests agents’ tasks should be limited, and that high variance tasks should be assigned to the same agent, which provides a role for decentralizing research into large firms providing incremental, safe research, and small firms performing high-variance research. That many aspects of firm organization depend on the swirl of conflicting incentives the firm and the market provide is a topic Holmstrom has also discussed at length, especially in his beautiful paper “The Firm as an Incentive System”; I shall reserve discussion of that paper for a subsequent post on Oliver Hart.

Two final light notes on Holmstrom. First, he is the source of one of my favorite stories about Paul Samuelson, the greatest economic theorist of all time. Samuelson was known for having a steel trap of a mind. At a light trivia session during a house party for young faculty at MIT, Holmstrom snuck in a question, as a joke, asking for the name of the third President of independent Finland. Samuelson not only knew the name, but apparently was also able to digress on the man’s accomplishments! Second, I mentioned at the beginning of this post the illustrious roster of theorists who once sat at MEDS. Business school students are often very hesitant to deal with formal models, partially because they lack a technical background but also because there is a trend of “dumbing down” in business education whereby many schools (of course, not including my current department at The University of Toronto Rotman!) are more worried about student satisfaction than student learning. With perhaps Stanford GSB as an exception, it is inconceivable that any school today, Northwestern included, would gather such an incredible collection of minds working on abstract topics whose applicability to tangible business questions might lie years in the future. Indeed, I could name a number of so-called “top” business schools who have nobody on their faculty who has made any contribution of note to theory! There is a great opportunity for a Nancy Schwartz or Stan Reiter of today to build a business school whose students will have the ultimate reputation for rigorous analysis of social scientific questions.

Yuliy Sannikov and the Continuous Time Approach to Dynamic Contracting

The John Bates Clark Award, given to the best economist in the United States under 40, was given to Princeton’s Yuliy Sannikov today. The JBC has, in recent years, been tilted quite heavily toward applied empirical microeconomics, but the prize for Sannikov breaks that streak in striking fashion. Sannikov, it can be fairly said, is a mathematical genius and a high theorist of the first order. He is one of a very small number of people to win three gold medals at the International Math Olympiad – perhaps only Gabriel Carroll, another excellent young theorist, has an equally impressive mathematical background in his youth. Sannikov’s most famous work is in the pure theory of dynamic contracting, which I will spend most of this post discussing, but the methods he has developed turn out to have interesting uses in corporate finance and in macroeconomic models that wish to incorporate a financial sector without using linearization techniques that rob such models of much of their richness. A quick warning: Sannikov’s work is not for the faint of heart, and certainly not for those scared of an equation or two. Economists – and I count myself among this group – are generally scared of differential equations, as they don’t appear in most branches of economic theory (with exceptions, of course: Romer’s 1986 work on endogenous growth, the turnpike theorems, the theory of evolutionary games, etc.). As his work is incredibly technical, I will do my best to provide an overview of his basic technique and its uses without writing down a bunch of equations, but there really is no substitute for going to the mathematics itself if you find these ideas interesting.

The idea of dynamic contracting is an old one. Assume that a risk-neutral principal can commit to a contract that pays an agent on the basis of observed output, with that output being generated this year, next year, and so on. A risk-averse agent takes an unobservable action in every period, which affects output subject to some uncertainty. Payoffs in the future are discounted. Take the simplest possible case: there are two periods, an agent can either work hard or not, output is either 1 or 0, and the probability it is 1 is higher if the agent works hard than otherwise. The first big idea in the dynamic moral hazard of the late 1970s and early 1980s (in particular, Rogerson 1985 Econometrica, Lambert 1983 Bell J. Econ, Lazear and Moore 1984 QJE) is that the optimal contract will condition period 2 payoffs on whether there was a good or bad outcome in period 1; that is, payoffs are history-dependent. The idea is that you can use payoffs in period 2 to induce effort in period 1 (because continuation value increases) and in period 2 (because there is a gap between the payment following good or bad outcomes in that period), getting more bang for your buck. Get your employee to work hard today by dangling a chance at a big promotion opportunity tomorrow, then actually give them the promotion if they work hard tomorrow.

The second big result is that dynamic moral hazard (caveat: at least in cases where saving isn’t possible) isn’t such a problem. In a one-shot moral hazard problem, there is a tradeoff between risk aversion and high powered incentives. I either give you a big bonus when things go well and none if things go poorly (in which case you are induced to work hard, but may be unhappy because much of the bonus is based on things you can’t control), or I give you a fixed salary and hence you have no incentive to work hard. The reason this tradeoff disappears in a dynamic context is that when the agent takes actions over and over and over again, the principal can, using a Law of Large Numbers type argument, figure out exactly the frequency at which the agent has been slacking off. Further, when the agent isn’t slacking off, the uncertainty in output each period is just i.i.d., hence the principal can smooth out the agent’s bad luck, and hence as the discount rate goes to zero there is no tradeoff between providing incentives and the agent’s dislike of risk. Both of these results hold even in infinite period models, where we just need to realize that all the agent cares about is her expected continuation value following every action, and hence we can analyze infinitely long problems in a very similar way to two period problems (Spear and Srivastava 1987).
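The Law of Large Numbers step is worth seeing concretely. In this sketch (made-up probabilities), output each period is 1 with probability 0.8 under effort and 0.6 under shirking; one period reveals almost nothing, but the per-period average separates the two cases essentially perfectly as the horizon grows:

```python
import numpy as np

rng = np.random.default_rng(3)

# Misclassification at a cutoff of 0.7 dies off like exp(-2 * T * 0.1**2)
# by Hoeffding's inequality, so a long-run review is a near-perfect monitor.
for T in (10, 100, 10_000):
    worker = rng.binomial(1, 0.8, T).mean()    # hard worker's average output
    shirker = rng.binomial(1, 0.6, T).mean()   # shirker's average output
    print(f"T={T:6d}: worker avg {worker:.3f}, shirker avg {shirker:.3f}")
```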

Sannikov revisited this literature by solving for optimal or near-to-optimal contracts when agents take actions in continuous rather than discrete time. Note that the older literature generally used dynamic programming arguments and took the discount rate to a limit of zero in order to get interesting results. These dynamic programs generally were solved using approximations that formed linear programs, and hence precise intuition for why the model was generating particular results in particular circumstances wasn’t obvious. Comparative statics in particular were tough – I can tell you whether an efficient contract exists, but it is tough to know how that efficient contract changes as the environment changes. Further, situations where discounting is positive are surely of independent interest – workers generally get performance reviews every year, contractors generally do not renegotiate continuously, etc. Sannikov wrote a model where an agent continuously controls the drift of output, which evolves as a Brownian motion (a nice analogue of the discrete-time setting where each period’s output depends on the agent’s action and some random term). The agent has the usual decreasing marginal utility of income, so as the agent gets richer over time, it becomes tougher to incentivize the agent with a few extra bucks of payment.

Solving for the optimal contract essentially involves solving two embedded dynamic optimization problems. The agent optimizes effort over time given the contract the principal committed to, and the principal chooses an optimal dynamic history-dependent contract given what the agent will do in response. The space of possible history-dependent contracts is enormous. Sannikov shows that you can massively simplify, and solve analytically for the optimal contract, using a four-step argument.

First, as in the discrete time approach, we can simplify things by noting that the agent only cares about their continuation value following every action they take. The continuation value turns out to be a martingale (conditional on history, my expectation of the continuation value tomorrow is just my continuation value today), and is basically a ledger of the promises I have made to the agent about the future on the basis of what happened in the past. Therefore, to solve for the optimal contract, I should just solve for the optimal stochastic process that determines the continuation value over time. The Martingale Representation Theorem tells me exactly and uniquely what that stochastic process must look like, under the constraint that the continuation value accurately “tracks” past promises. This stochastic process turns out to have a particular analytic form with natural properties (e.g., if you pay flow utility today, you can promise less tomorrow) that depend on the actions the agent takes. Second, plug the agent’s incentive compatibility constraint into our equation for the stochastic process that determines the continuation value over time. Third, maximize profits for the principal given the stochastic process determining continuation payoffs that must be given to the agent. The principal’s problem determines an HJB equation which can be solved using Ito’s rule plus some effort checking boundary conditions – I’m afraid these details are far too complex for a blog post. But the basic idea is that we wind up with an analytic expression for the optimal way to control the agent’s continuation value over time, and we can throw all sorts of comparative statics right at that equation.
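For readers who want the flavor of the formulas, here is my hedged transcription (up to notation; consult the paper for the exact statement). In Sannikov's 2008 model, output follows $dX_t = a_t\,dt + \sigma\,dZ_t$, and the Martingale Representation Theorem delivers a law of motion for the continuation value $W_t$:

$$dW_t = r\big(W_t - u(c_t) + h(a_t)\big)\,dt + r Y_t \big(dX_t - a_t\,dt\big),$$

where $r$ is the discount rate, $u(c_t)$ is flow utility from consumption, $h(a_t)$ is the effort cost, and $Y_t$ is the sensitivity of the agent's promised value to output surprises. Incentive compatibility then requires that $a_t$ maximize $Y_t a - h(a)$ at each instant, pinning down $Y_t = h'(a_t)$ for interior effort: the whole contracting problem collapses to controlling the single state variable $W_t$.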

What does this method give us? Because the continuation value and the flow payoffs can be constructed analytically even for positive discount rates, we can actually answer questions like: should you rely more on long-term incentives (continuation value) or short-term incentives (flow payoffs) when, e.g., your workers have a good outside option? What happens as the discount rate increases? What happens if the uncertainty in the mapping between the agent’s actions and output increases? Answering questions of this type is very challenging, if not impossible, in a discrete time setting.

Though I’ve presented the basic Sannikov method in terms of incentives for workers, dynamic moral hazard – where certain unobservable actions control prices, or output, or other economic parameters, and hence where various institutions or contracts affect those unobservable actions – is a widespread problem. Brunnermeier and Sannikov have a nice recent AER which builds on the intuition of Kiyotaki-Moore models of the macroeconomy with financial acceleration. The essential idea is that small shocks in the financial sector may cause bigger real-economy shocks due to deleveraging. Brunnermeier and Sannikov use the continuous-time approach to show important nonlinearities: minor financial shocks don’t do very much, since investors and firms can rely on their existing wealth, but major shocks off the steady state require capital sales which further depress asset prices and lead to further fire sales. A particularly interesting result is that if exogenous risk is low – the economy isn’t very volatile – then there isn’t much precautionary saving, and so a shock that hits the economy will cause major harmful deleveraging and hence endogenous risk. That is, the very calmness of the world economy since 1983 may have made the eventual recession in 2008 worse due to endogenous choices of cash versus asset holdings. Further, capital requirements may actually be harmful if they aren’t relaxed following shocks, since those very capital requirements will force banks to deleverage, accelerating the downturn started by the shock.

Sannikov’s entire oeuvre is essentially a graduate course in a new technique, so if you find the results described above interesting, it is worth digging deep into his CV. He is a great choice for the Clark medal, particularly given the deep and rigorous applications of his theory in recent years. There really is no simple version of his results, but his 2012 survey, his recent working paper on moral hazard in labor contracts, and his dissertation work published in Econometrica in 2007 are most relevant. In related work, we’ve previously discussed on this site David Rahman’s model of collusion with continuous-time information flow, a problem very much related to work by Sannikov and his coauthor Andrzej Skrzypacz, as well as Aislinn Bohren’s model of reputation, which is related to the single longest theory paper I’ve ever seen, Faingold and Sannikov’s Econometrica on the possibility of “fooling people” by pretending to be a type that you are not. I also like that this year’s JBC makes me look like a good prognosticator: Sannikov is one of a handful of names I’d listed as particularly deserving just two years ago when Gentzkow won!

“The Contributions of the Economics of Information to Twentieth Century Economics,” J. Stiglitz (2000)

There have been three major methodological developments in economics since 1970. First, following the Lucas Critique, we are reluctant to accept policy advice which is not the result of optimizing behavior on the part of individuals and firms. Second, developments in game theory have made it possible to reformulate questions like “why do firms exist?”, “what will result from regulating a particular industry in a particular way?”, and “what can I infer about the state of the world from an offer to trade?”, among many others. Third, imperfect and asymmetric information was shown to be of first-order importance for analyzing economic problems.

Why is information so important? Prices, Hayek taught us, solve the problem of asymmetric information about scarcity. Knowing the price vector is a sufficient statistic for knowing everything about production processes in every firm, as far as generating efficient behavior is concerned. The simple existence of asymmetric information, then, is not obviously a problem for economic efficiency. And if asymmetric information about big things like scarcity across society does not obviously matter, then how could imperfect information about minor things matter? A shopper, for instance, may not know exactly the price of every car at every dealership. But “Natura non facit saltum”, Marshall once claimed: nature does not make leaps. Tiny deviations from the assumptions of general equilibrium do not have large consequences.

But Marshall was wrong: nature does make leaps when it comes to information. The search model of Peter Diamond, most famously, showed that arbitrarily small search costs lead to firms charging the monopoly price in equilibrium, hence a welfare loss completely out of proportion to the search costs. That is, information costs and asymmetries, even very small ones, can theoretically be very problematic for the Arrow-Debreu welfare properties.
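The deviation argument behind Diamond's result is worth spelling out (a textbook reconstruction with unit demand and valuations above the monopoly price, not Diamond's original notation). Suppose all firms charge some $p$ below the monopoly price $p^m$, and consumers must pay a search cost $s>0$ to visit another firm. A firm deviating to $p+\varepsilon$ keeps all $q$ of its customers, since nobody will pay $s$ to save $\varepsilon$:

$$\pi(p+\varepsilon) = (p+\varepsilon)\,q \;>\; p\,q = \pi(p) \qquad \text{for all } 0 < \varepsilon < s.$$

Hence every price below $p^m$ unravels, and the unique equilibrium has all firms charging $p^m$, no matter how small $s$ is.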

Even more interesting, we learned that prices are more powerful than we’d believed. They convey information about scarcity, yes, but also information about other people’s own information or effort. Consider, for instance, efficiency wages. A high wage is not merely a signal of scarcity for a particular type of labor, but is simultaneously an effort inducement mechanism. Given this dual role, it is perhaps not surprising that general equilibrium is no longer Pareto optimal, even if the planner is as constrained informationally as each agent.

How is this? Decentralized economies may, given information cost constraints, exert too much effort searching, or generate inefficient separating equilibria that unravel trades. The beautiful equity/efficiency separation of the Second Welfare Theorem does not hold in a world of imperfect information. A simple example on this point: it is often useful to allow agents suffering from moral hazard to “buy the firm”, mitigating the incentive problem, but limited liability means this may not happen unless those particular agents begin with a large endowment. That is, an endowment where the agents suffering extreme moral hazard problems begin with more money, and are able to “buy the firm”, leads to more efficient production (potentially in a Pareto sense) than an endowment where those workers must be provided with information rents in an economy-distorting manner.

It is a strange fact that many social scientists feel economics to some extent stopped progressing by the 1970s. All the important basic results were, in some sense, known. How untrue this is! Imagine labor without search models, trade without monopolistically competitive equilibria, IO or monetary policy without mechanism design, finance without formal models of price discovery and equilibrium noise trading: all would be impossible given the tools we had in 1970. The explanations that preceded modern game-theoretic and information-laden explanations are quite extraordinary: Marshall observed that managers have interests different from owners, yet nonetheless are “well-behaved” in running firms in a way acceptable to the owner. His explanation was to credit British upbringing and morals! As Stiglitz notes, this is not an explanation we would accept today. Rather, firms have used a number of intriguing mechanisms to structure incentives in a way that limits agency problems, and we now possess the tools to analyze these mechanisms rigorously.

Final 2000 QJE (RePEc IDEAS)

“Optimal Contracts for Experimentation,” M. Halac, N. Kartik & Q. Liu (2013)

Innovative activities have features not possessed by more standard modes of production. The eventual output, and its value, are subject to a lot of uncertainty. Effort can be difficult to monitor – it is often the case that the researcher knows more than management about what good science should look like. The inherent skill of the scientist is hard to observe. Output is generally only observed in discrete bunches.

These features make contracting for researchers inherently challenging. The classic reference here is Holmstrom’s 1989 JEBO, which applies his great 1980s incentive contract papers to innovative activities. Take a risk-neutral firm. It should just work on the highest expected value project, right? Well, if workers are risk averse and supply unobserved effort, the optimal contract balances moral hazard (I would love to just pay you based on your output) and risk insurance (I would have to pay you extra to bear risk about the eventual output of the project). It turns out that the more uncertainty a project has, the more inefficient the information-constrained optimal contract becomes, so that even risk-neutral firms are biased toward relatively safe, lower expected value projects. Incentives within the firm matter in many other ways, as Holmstrom also points out: giving an employee multiple tasks when effort is unobserved makes it harder to provide proper incentives because the opportunity cost of a given project goes up, firms with a good reputation in capital markets will be reluctant to pursue risky projects since the option value of variance in reputation is lower (a la Doug Diamond’s 1989 JPE), and so on. Nonetheless, the first-order problem of providing incentives for a single researcher on a single project is hard enough!
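The risk/incentive tradeoff is easiest to see in the textbook linear-CARA model (a Holmstrom-Milgrom-style sketch with assumed parameters, not the 1989 JEBO model itself). With output x = a + ε, ε ~ N(0, σ²), wage w = α + βx, CARA coefficient R, and effort cost ka²/2, the optimal slope is β* = 1/(1 + Rkσ²), and the principal's profit falls with project risk even though the principal is risk neutral:

```python
# Linear-CARA moral hazard sketch: riskier projects get lower-powered
# incentives and can yield lower profit despite a higher mean.
# All parameter values are hypothetical.

def principal_profit(mu, sigma2, R=2.0, k=1.0):
    beta = 1.0 / (1.0 + R * k * sigma2)   # optimal incentive slope
    a = beta / k                           # agent's effort response
    # with a binding participation constraint, the principal keeps
    # the project mean plus total surplus net of the risk premium
    surplus = a - k * a**2 / 2 - R * beta**2 * sigma2 / 2
    return mu + surplus

safe = principal_profit(mu=1.0, sigma2=0.1)    # lower mean, low risk
risky = principal_profit(mu=1.2, sigma2=3.0)   # higher mean, high risk
print(f"safe project profit : {safe:.3f}")     # ~1.417
print(f"risky project profit: {risky:.3f}")    # ~1.271, lower despite higher mean
```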

Holmstrom’s model doesn’t have any adverse selection, however: both employer and employee know what expected output will result from a given amount of effort. Nor is Holmstrom’s problem dynamic. Marina Halac, Navin Kartik and Qingmin Liu have taken up the unenviable task of solving the dynamic researcher contracting problem under adverse selection and moral hazard. Let a researcher be either a high type or a low type. In every period, the researcher can work on a risky project at cost c, or shirk at no cost. The project is either feasible or not, with prior probability b of feasibility. If the employee shirks, or the project is bad, there will be no invention this period. If the employee works and the project is feasible, the project succeeds with probability L1 if the employee is a high type, and with probability L2&lt;L1 if a low type. Note that as time goes on, if the employee works on the risky project without success, they continually revise their belief about b downward. If enough time passes without an invention, the belief about b becomes low enough that everyone (efficiently) stops working on the risky project. The firm’s goal is to get employees to exert optimal effort for the optimal number of periods given their type.

Here’s where things really get tricky. Who, in expectation and assuming efficient behavior, stops working on the risky project earlier conditional on not having finished the invention: the high type or the low type? On the one hand, for any belief about b, the high type is more likely to invent, hence since costs are identical for both types, the high type should expect to keep working longer. On the other hand, the high type learns more quickly whether the project is bad, and hence his belief about b declines more rapidly, so he ought to expect to work for less time. That either case is possible makes solving for the optimal contract a real challenge, because I need to write the contracts for each type such that the low type does not ever prefer the high type’s payoffs and vice versa. To know whether these contracts are incentive compatible, I have to know what agents will do if they deviate to the “wrong” contract. The usual trick here is to use a single crossing result along the lines of “for any contract with properties P, action Y is more likely for higher types”. In the dynamic researcher problem, since efficient stopping times can vary nonmonotonically with researcher type, the single crossing trick doesn’t look so useful.
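A quick numerical sketch makes the ambiguity concrete. With made-up parameters (the general model is Halac, Kartik and Liu's; the numbers here are mine), the efficient rule is to keep working while the expected flow benefit b_t·λ·V covers the cost c, where b_t is the posterior that the project is feasible after t failures. Either type can be the one who stops first:

```python
# Efficient stopping with learning about project feasibility.
# Posterior after another failure while working:
#   b' = b*(1-lam) / (b*(1-lam) + 1 - b)
# Keep working while the expected flow benefit b*lam*V covers cost c.
# Parameter values are hypothetical.

def stopping_time(lam, b0, c, V=1.0, max_t=100):
    periods, b = 0, b0
    while b * lam * V >= c and periods < max_t:
        periods += 1
        fail = b * (1 - lam)            # joint prob: feasible but no success
        b = fail / (fail + (1 - b))     # Bayes update after a failure
    return periods

# Case 1: the high type works longer (prints 2 vs 1)...
print(stopping_time(lam=0.9, b0=0.5, c=0.05),
      stopping_time(lam=0.1, b0=0.5, c=0.05))
# Case 2: ...but with a high prior and costly effort, the high type's
# belief crashes after one failure and he stops first (prints 1 vs 3).
print(stopping_time(lam=0.95, b0=0.9, c=0.3),
      stopping_time(lam=0.5,  b0=0.9, c=0.3))
```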

The “simple” (where simple means a 30 page proof) case is when the higher types efficiently work longer in expectation. The information-constrained optimum involves inducing the high type to work efficiently, while providing the low type too little incentive to work for the efficient amount of time. Essentially, the high type is willing to work for less money per period if only you knew who he was. Asymmetric information means the high type can extract information rents. By reducing the incentive for the low type to work in later periods, the high type information rent is reduced, and hence the optimal mechanism trades off lower total surplus generated by the low type against lower information rents paid to the high type.

This constrained-optimal outcome can be implemented by paying scientists up front, and then letting them choose either a contract with progressively increasing penalties for lack of success each period, or a contract with a single large penalty if no success is achieved by the socially efficient high-type stopping time. Such “penalty contracts” are nice because they remain optimal even if scientists can keep their results secret: since secrecy just means paying more penalties, everyone has an incentive to reveal their invention as soon as they create it. The proof is worth going through if you’re into dynamic mechanism design; essentially, the authors use a clever set of relaxed problems where a form of single crossing will hold, then show that the resulting mechanism is feasible even under the actual problem constraints.

Finally, note that if there is only moral hazard (scientist type is observable) or only adverse selection (effort is observable), the efficient outcome is easy. With moral hazard, just make the agent pay the expected surplus up front, and then provide a bonus to him each period equal to the firm’s profit from an invention occurring then; we usually say in this case that “the firm is sold to the employee”. With adverse selection, we can contract on optimal effort, using total surplus to screen types as in the correlated information mechanism design literature. Even though the “distortion only at the bottom” result looks familiar from static adverse selection, the rationale here is different.

Sept 2013 working paper (No RePEc IDEAS version). The article appears to be under R&R at ReStud.

“Competition in Persuasion,” M. Gentzkow & E. Kamenica (2012)

How’s this for fortuitous timing: I’d literally just gone through this paper by Gentzkow and Kamenica yesterday, and this morning it was announced that Gentzkow is the winner of the 2014 Clark Medal! More on the Clark in a bit, but first, let’s do some theory.

This paper is essentially the multiple-sender version of the great Bayesian Persuasion paper by the same authors (discussed on this site a couple years ago). There is a group of experts who can (under commitment to sending only true signals) send costless signals about the realization of the state. Given the information received, the agent makes a decision, and each expert gets some utility depending on that decision. For example, the senders might be a prosecutor and a defense attorney who know the guilt of a suspect, and the agent a judge. The judge convicts if p(guilty)&gt;=.5, the prosecutor wants to maximize convictions regardless of underlying guilt, and vice versa for the defense attorney. Here’s the question: if we have more experts, or less collusive experts, or experts with less aligned interests, is more information revealed?

A lot of our political philosophy is predicated on more competition in information revelation leading to more information actually being revealed, but this is actually a fairly subtle theoretical question! For one, John Stuart Mill and others of his persuasion would need some way of describing how people competing to reveal information strategically interact, and to the extent that this strategic interaction admits multiple equilibria, they would need a way of ordering sets of potentially revealed information. We are lucky in 2014, thanks to our friends Nash and Topkis, to be able to deal nicely with each of those concerns.

The trick to solving this model (basically every proof in the paper comes down to algebra and some simple results from set theory; they are clever but not technically challenging) is the main result from the Bayesian Persuasion paper. Draw a graph with the agent’s posterior belief on the x-axis, and the utility (call this u) the sender gets from actions based on each posterior on the y-axis. Now draw the smallest concave function (call it V) that is everywhere weakly greater than u. If V is strictly greater than u at the prior p, then a sender can improve her payoff by revealing information. Take the case of the judge and the prosecutor. If the judge’s prior is that everyone brought before them is guilty with probability .6, then the prosecutor never reveals information about any suspect, and the judge always convicts (giving the prosecutor utility 1 rather than 0 from an acquittal). If, however, the judge’s prior is that everyone is guilty with probability .4, then the prosecutor can judiciously reveal information such that 80 percent of suspects are convicted. How? Take 2/3 of the innocent people and all of the guilty people, and send a signal that leaves the judge with posterior p(guilty)=.5 for each of them, while revealing the other 1/3 of innocent people to be innocent with probability 1. This scheme is Bayes plausible: the posteriors average back to the prior. The judge will convict all of the folks where p(guilty)=.5, meaning 80 percent of all suspects are convicted. If you draw the graph described above with u=1 when the judge convicts and u=0 otherwise, it is clear that V&gt;u if and only if 0&lt;p&lt;.5, hence information is only revealed in that case.
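Here is that arithmetic as a few lines of Python, just to verify the scheme is Bayes plausible and delivers the 80 percent conviction rate:

```python
# Numerical check of the prosecutor example: prior guilt 0.4, judge
# convicts iff p(guilty) >= 0.5.
PRIOR = 0.4

# Scheme: all guilty suspects and 2/3 of innocents get signal "g";
# the remaining 1/3 of innocents get signal "i".
p_g = PRIOR * 1.0 + (1 - PRIOR) * (2 / 3)       # mass receiving "g"
posterior_g = PRIOR * 1.0 / p_g                  # Bayes: p(guilty | "g")
posterior_i = 0.0                                # only innocents get "i"

print(f"p(signal g)   = {p_g:.2f}")              # 0.80
print(f"p(guilty | g) = {posterior_g:.2f}")      # 0.50 -> judge convicts
print(f"convictions   = {p_g:.2f}")              # 80% of all suspects
# Bayes plausibility: posteriors average back to the prior (0.40)
print(f"avg posterior = {p_g * posterior_g + (1 - p_g) * posterior_i:.2f}")
```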

What about when there are multiple senders with different utilities u? It is somewhat intuitive: more information is always weakly valuable for the agent (remember Blackwell!). If there is any sender who can improve their payoff by revealing information given what has been revealed thus far, then we are not in equilibrium, and some sender has the incentive to deviate by revealing more. Therefore, adding more senders increases the amount of information revealed and “shrinks” the set of beliefs that the agent might wind up holding (and, further, the authors show that any Bayes plausible set of beliefs from which no sender can profitably reveal further information is an equilibrium). We still have a number of technical details concerning multiplicity of equilibria to deal with, but the authors show that these results hold in a set order sense as well. This theorem is actually great: to check equilibrium information revelation, I only need to check where V and u diverge sender by sender, without worrying about complex strategic interactions. Because of that simplicity, it ends up being very easy to show that removing collusion among senders, or increasing the number of senders, will improve information revelation in equilibrium.

September 2012 working paper (IDEAS version). A brief word on the Clark medal. Gentzkow is a fine choice, particularly for his Bayesian persuasion papers, which are already very influential. I have no doubt that 30 years from now, you will still see the 2011 paper on many PhD syllabi. That said, the Clark medal announcement is very strange. It focuses very heavily on his empirical work on newspapers and TV, and mentions his hugely influential theory as a small aside! This means that five of the last six Clark medal winners, everyone but Levin and his relational incentive contracts, have been cited primarily for MIT/QJE-style theory-light empirical microeconomics. Even though I personally am primarily an applied microeconomist, I still see this as a very odd trend: no prizes for Chernozhukov or Tamer in metrics, or Sannikov in theory, or Farhi and Werning in macro, or Melitz and Costinot in trade, or Donaldson and Nunn in history? I understand these papers are harder to explain to the media, but it is not a good thing when the second most prominent prize in our profession is essentially ignoring 90% of what economists actually do.

“Wall Street and Silicon Valley: A Delicate Interaction,” G.-M. Angeletos, G. Lorenzoni & A. Pavan (2012)

The Keynesian Beauty Contest – is there any better example of an “old” concept in economics that, when read in its original form, is just screaming out for a modern analysis? You’ve got coordination problems, higher-order beliefs, signal extraction about underlying fundamentals, and optimal policy response by a planner who is herself informationally constrained: all of these, of course, are problems that have consumed micro theorists over the past few decades. The general problem with irrational exuberance, once we start modeling things formally, is that it turns out to be very difficult to generate “irrational” actions by rational, forward-looking agents. Angeletos et al have a very nice model that can generate irrational-looking asset price movements even when all agents are perfectly rational, based on the idea of information frictions between the real and financial sectors.

Here is the basic plot. Entrepreneurs get an individual signal and a correlated signal about the “real” state of the economy (the correlation in errors about fundamentals may be a reduced-form measure of previous herding, for instance). The entrepreneurs then make a costly investment. In the next period, some percentage of the entrepreneurs have to sell their asset on a competitive market. This may represent, say, idiosyncratic liquidity shocks, but really it is just in the model to abstract away from the finance sector learning about entrepreneur signals based on the extensive-margin choice of whether to sell or not. The price paid for the asset depends on the financial sector’s beliefs about the real state of the economy, which come from a public noisy signal and the traders’ observations of how much investment was made by entrepreneurs. Note that the price traders pay is partially a function of trader beliefs about the state of the economy derived from the total investment made by entrepreneurs, and the total investment made is partially a function of the price at which entrepreneurs expect to be able to sell capital should a liquidity crisis hit a given firm. That is, higher-order beliefs of both the traders and entrepreneurs about what the other aggregate class will do determine equilibrium investment and prices.

What does this imply? Capital investment is higher in the first stage if either the state of the world is believed to be good by entrepreneurs, or the price paid in the following period for assets is expected to be high. Traders will pay a high price for an asset if the state of the world is believed to be good. These traders look at capital investment and essentially see another noisy signal about the state of the world. When an entrepreneur sees a correlated signal that is higher than his private signal, he increases investment due to a rational belief that the state of the world is better, but then increases it even more because of an endogenous strategic complementarity among the entrepreneurs, all of whom prefer higher investment by the class as a whole since that leads to more positive beliefs by traders and hence higher asset prices tomorrow. Of course, traders understand this effect, but a fixed point argument shows that even accounting for the aggregate strategic increase in investment when the correlated signal is high, aggregate capital can be read by traders precisely as a noisy signal of the actual state of the world. This means that when entrepreneurs invest partially on the basis of a signal correlated among their class (i.e., there are information spillovers), investment is based too heavily on noise. This overweighting of public signals in a type of coordination game is right along the lines of the lesson in Morris and Shin (2002). Note that the individual signals for entrepreneurs are necessary to keep the traders from being able to completely invert the information contained in capital production.
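To see the Morris-Shin overweighting in its simplest form (this is the standard beauty-contest benchmark, not the Angeletos-Lorenzoni-Pavan model itself), suppose each agent plays a_i = (1−r)E_i[θ] + rE_i[ā] after seeing a private signal of precision α_x and a public signal of precision α_y. The linear equilibrium puts weight α_x(1−r)/(α_x(1−r)+α_y) on the private signal, strictly below the Bayesian weight α_x/(α_x+α_y):

```python
# Morris-Shin (2002) beauty-contest weights: in equilibrium the public
# signal is overweighted relative to pure Bayesian updating.
# Parameter values are illustrative.

def weights(ax, ay, r):
    bayes = ax / (ax + ay)                         # weight in E_i[theta]
    kappa = ax * (1 - r) / (ax * (1 - r) + ay)     # equilibrium weight
    return bayes, kappa

bayes, kappa = weights(ax=1.0, ay=1.0, r=0.5)
print(f"Bayesian weight on private signal    : {bayes:.3f}")   # 0.500
print(f"Equilibrium weight on private signal : {kappa:.3f}")   # 0.333
# kappa < bayes: actions load too heavily on the common signal, so
# correlated noise moves aggregates more than fundamentals warrant.
```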

What can a planner who doesn’t observe these signals do? Consider taxing investment as a function of asset prices, where high taxes appear when the market gets particularly frothy. This is good on the one hand: entrepreneurs build too much capital following a high correlated signal because other entrepreneurs will be doing the same and therefore traders will infer the state of the world is high and pay high prices for the asset. Taxing high asset prices lowers the incentive for entrepreneurs to shade capital production up when the correlated signal is good. But this tax will also lower the incentive to produce more capital when the actual state of the world, and not just the correlated signal, is good. The authors discuss how taxing capital and the financial sector separately can help alleviate that concern.

Proving all of this formally, it should be noted, is quite a challenge. And the formality is really a blessing, because we can see what is necessary and what is not if a beauty contest story is to explain excess aggregate volatility. First, we require some correlation in signals in the real sector to get the Morris-Shin effect operating. Second, we do not require the correlation to be on a signal about the real world; it could instead be correlation about a higher order belief held by the financial sector! The correlation merely allows entrepreneurs to figure something out about how much capital they as a class will produce, and hence about what traders in the next period will infer about the state of the world from that aggregate capital production. Instead of a signal that correlates entrepreneur beliefs about the state of the world, then, we could have a correlated signal about higher-order beliefs, say, how traders will interpret how entrepreneurs interpret how traders interpret capital production. The basic mechanism will remain: traders essentially read from aggregate actions of entrepreneurs a noisy signal about the true state of the world. And all this beauty contest logic holds in an otherwise perfectly standard Neokeynesian rational expectations model!

2012 working paper (IDEAS version). This paper used to go by the title “Beauty Contests and Irrational Exuberance”; I prefer the old name!

“The Limits of Price Discrimination,” D. Bergemann, B. Brooks and S. Morris (2013)

Rakesh Vohra, who much to the regret of many of us at MEDS has recently moved on to a new and prestigious position, pointed out a clever paper today by Bergemann, Brooks and Morris (the first and third names you surely know, the second is a theorist on this year’s market). Beyond some clever uses of linear algebra in the proofs, the results of the paper are in and of themselves very interesting. The question is the following: if a regulator, or a third party, can segment consumers by willingness-to-pay and provide that information to a monopolist, what are the effects on welfare and profits?

In a limited sense, this is an old question. Monopolies generate deadweight loss as they sell at a price above marginal cost. Monopolies that can perfectly price discriminate remove that deadweight loss but also steal all of the consumer surplus. Depending on your social welfare function, this may be a good or bad thing. When markets can be segmented (i.e., third degree price discrimination) with no chance of arbitrage, we know that monopolist profits are weakly higher since the uniform monopoly price could be maintained in both markets, but the effect on consumer surplus is ambiguous.

Bergemann et al provide two really interesting results. First, if you can choose the segmentation, it is always possible to segment consumers such that monopoly profits are just the profits gained under the uniform price, but the quantity sold is nonetheless efficient. Further, there exist segmentations such that producer surplus P is anything between the uniform price profit P* and the perfect price discrimination profit P**, and such that producer plus consumer surplus P+C is anything between P* and P**! This seems like magic, but the method is actually pretty intuitive.

Let’s generate the first case, where producer profit is the uniform price profit P* and consumer surplus is maximal, C=P**-P*. In any segmentation, the monopolist can always charge P* to every segment. So if we want consumers to capture all of the surplus, there can’t be “too many” high-value consumers in a segment, since otherwise the monopolist would raise their price above P*. Let there be 3 consumer types, with the total market uniformly distributed across the three, such that valuations are 1, 2 and 3. Let marginal cost be constant at zero. The profit-maximizing price is 2, earning the monopolist 2*(2/3)=4/3. But what if we tell the monopolist that consumers can either be Class A or Class B. Class A consists of all consumers with willingness-to-pay 1 and exactly enough consumers with WTP 2 and 3 that the monopolist is just indifferent between choosing price 1 or price 2 for Class A. Class B consists of the rest of the types 2 and 3 (and since the relative proportion of type 2 and 3 in this Class is the same as in the market as a whole, where we already know the profit maximizing price is 2 with only types 2 and 3 buying, the profit maximizing price remains 2 here). Some quick algebra shows that if Class A consists of all of the WTP 1 consumers and exactly 1/2 of the WTP 2 and 3 consumers, then the monopolist is indifferent between charging 1 and 2 to Class A, and charges 2 to Class B. Therefore, it is an equilibrium for all consumers to buy the good, the monopolist to earn uniform price profits P*, and consumer surplus to be maximized. The paper formally proves that this intuition holds for general assumptions about (possibly continuous) consumer valuations.
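The arithmetic of that segmentation is easy to check. Here is a short Python verification (with ties in Class A broken in favor of the lower price, as the construction requires):

```python
# Numerical check of the three-type segmentation: values 1, 2, 3 with
# mass 1/3 each, marginal cost zero.
def profit(price, masses):
    return price * sum(m for v, m in masses.items() if v >= price)

market = {1: 1/3, 2: 1/3, 3: 1/3}
print("uniform profits:", {p: round(profit(p, market), 3) for p in (1, 2, 3)})
# price 2 is optimal in the unsegmented market: P* = 4/3

class_A = {1: 1/3, 2: 1/6, 3: 1/6}   # all of type 1, half of types 2 and 3
class_B = {2: 1/6, 3: 1/6}           # the other halves of types 2 and 3
print("Class A profits:", {p: round(profit(p, class_A), 3) for p in (1, 2, 3)})
print("Class B profits:", {p: round(profit(p, class_B), 3) for p in (2, 3)})

# Monopolist charges 1 in A (indifferent between 1 and 2) and 2 in B,
# so every consumer buys and quantity is efficient.
total_profit = profit(1, class_A) + profit(2, class_B)
total_surplus = sum(v * m for seg in (class_A, class_B) for v, m in seg.items())
print("profit          :", round(total_profit, 3), "= P* =", round(4/3, 3))
print("consumer surplus:", round(total_surplus - total_profit, 3),
      "= P** - P* =", round(2 - 4/3, 3))
```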

The other two “corner cases” for bundles of consumer and producer surplus are also easy to construct. Maximal producer surplus P** with consumer surplus 0 is simply the case of perfect price discrimination: the producer knows every consumer’s exact willingness-to-pay. Uniform price producer surplus P* with consumer surplus 0 is constructed by mixing the very low WTP consumers with all of the very high types (along with some subset of consumers with less extreme valuations), such that the monopolist is indifferent between charging the uniform monopoly price and charging the high-type price at which everyone below the high type does not buy. Then mix the next-highest WTP types with low but not quite as low WTP types, and continue iteratively. A simple argument based on the properties of convex sets delivers every mixture of P and C between these corner cases; Rakesh has provided an even more intuitive proof than the one given in the paper.

Now how do we use this result in policy? At a first pass, since information is always (weakly) good for the seller and ambiguous for the consumer, a policymaker should be particularly worried about bundlers providing information about willingness-to-pay that is expected to drastically lower consumer surplus while only improving rent extraction by sellers a small bit. More work needs to be done in specific cases, but the mathematical setup in this paper provides a very straightforward path for such applied analysis. It seems intuitive that precise information about consumers with willingness-to-pay below the monopoly price is unambiguously good for welfare, whereas information bundles that contain a lot of high-WTP consumers but also a relatively large number of lower-WTP consumers will lower total quantity sold and hence social surplus.

I am also curious about the limits of price discrimination in the oligopoly case. In general, the ability to price discriminate (even perfectly!) can be very good for consumers under oligopoly. The intuition is that under uniform pricing, I trade-off stealing your buyers by lowering prices against earning less from my current buyers; the ability to price discriminate allows me to target your buyers without worrying about the effect on my own current buyers, hence the reaction curves are steeper, hence consumer surplus tends to increase (see Section 7 of Mark Armstrong’s review of the price discrimination literature). With arbitrary third degree price discrimination, however, I imagine mathematics similar to that in the present paper could prove similarly elucidating.

2013 Working Paper (IDEAS version).
