Today’s Nobel Prize to Paul Milgrom and Robert Wilson is the capstone of an incredibly fruitful research line which began in the 1970s in a few small departments of Operations Research. Game theory, or the mathematical study of strategic interaction, dates back to work by Zermelo, Borel and von Neumann in the early 20th century. The famed book by von Neumann and Morgenstern was published in 1944, and widely regarded as one of the most important social scientific works of the century. And yet, it would be three decades before applications of game theory revolutionized antitrust, organizational policy, political theory, trade, finance, and more. Alongside the “credibility revolution” of causal econometrics, and to a lesser extent behavioral economics, applied game theory has been the most important development in economics in the past half century. The prize to Milgrom and Wilson is likely the final one that will be given for early applied game theory, joining those in 1994, 2005, 2007, 2014 and 2016 that elevated the abstractions of the 1940s into today’s rigorous interpretations of so many previously disparate economic fields.
Neither Wilson nor Milgrom was trained in a pure economics department. Wilson came out of the decision sciences program of Howard Raiffa at Harvard, and Milgrom was a student of Wilson’s at Stanford Business School. However, the link between operations research and economics is a long one, with the former field often serving as a vector for new mathematical tools before the latter field was quite ready to accept them. In the middle of the century, the mathematics of optimal control and dynamic programming – how to solve problems where today’s action affects tomorrow’s possibilities – were applied to resource allocation by Kantorovich in the Soviet Union and to market economics problems in the West by Koopmans, Samuelson, Solow, and Dorfman. Luce and Raiffa explained how the theory of games and the ideas of Bayesian decision theory apply to social scientific problems. Stan Reiter’s group first at Purdue, then later with Nancy Schwartz at Kellogg MEDS, formally brought operations researchers and economists into the same department to apply these new mathematics to economic problems.
The real breakthrough, however, was the arrival of Bayesian games and subgame perfection from Harsanyi (1967-68) and Selten (1965, 1975). These tools in combination allow us to study settings where players signal, make strategic moves, bluff, attempt to deter, and so on. From the perspective of an institutional designer, they allow us, alongside Myerson’s revelation principle, to follow Hayek’s ideas formally and investigate how we should organize an economic activity given the differing information and possible actions of each player. Indeed, the Wilson Doctrine argues that practical application of game theory requires attention to these informational features. There remains a more complete intellectual history to be written here, but Paul Milgrom and Al Roth’s mutual interview in the JEP provides a great sense of the intellectual milieu of the 1970s as they developed their ideas. Wilson, the Teacher, and Milgrom, the Popularizer, were at the heart of showing just how widely these new tools in game theory could be applied.
Let us begin with the Popularizer. Milgrom was born and raised in Michigan, taking part in anti-war and anti-poverty protests as a radical student in Ann Arbor in the late 1960s. The 1960s were a strange time, and so Milgrom went straight from the world of student activism to the equally radical world of…working as an actuary for an insurance company. After enrolling in the MBA program at Stanford in the mid-1970s, he was invited to pursue a PhD under his co-laureate Robert Wilson, who, as we shall see, was pursuing an incredibly lucrative combination of operations research and economics with his students. It is hard to overstate how broad Milgrom’s contributions have been, both theoretically and in practice. But we can get a good taste by looking at four: the multitasking problem and the no-trade theorem on the theoretical side, and medieval guilds and modern spectrum auctions on the applied side.
It is perhaps surprising that Milgrom’s most-cited paper was published in the JLEO, well into his career. But the famed multitasking paper is simply that informative. The idea is simple: you can motivate someone either with direct rewards or by changing their opportunity cost. For instance, if you want a policeman to walk the beat more often, then make their office particularly dull and full of paperwork. Workers generally have many tasks they can work on, however, which vary in their relative costs. For example, a cop can slack off, arrest people on nonsense charges, or solve murders. Making their office dull will cause them to sit at their office desk for fewer hours, but it likely won’t cause them to solve murders rather than arrest people on nonsense charges. Why not just pay for the solved murders directly? Often it is impossible to measure or observe everything you want done.
If you “reward A while hoping for B”, as Steven Kerr’s famous management paper puts it, you are likely to get a lot of A. If you pay rewards for total arrests, your cops will cite people for riding bikes with no lights. So what can be done? Holmstrom and Milgrom give a simple model where workers exert effort, do some things you can measure and some which you cannot, and you get a payoff depending on both. If a job has some things you care about which are hard to measure, you should use weaker incentives on the things you can measure: by paying cops for arrests, you raise the opportunity cost of solving murders for the cops who like doing so, because every time they work a murder they give up the reward they would have gotten from arresting the bicyclists! Further, you should give workers on hard-to-measure tasks little job flexibility. The murder cop paid on salary should need to show her face in the office, while the meter maid getting paid based on how many tickets she gives already has a good reason not to shirk while on the clock. Once you start thinking about multitasking and the interaction of incentives with opportunity costs, you start seeing perverse incentives absolutely everywhere.
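The crowding-out logic can be sketched numerically. The toy model below is my own stylized illustration, not the actual Holmstrom-Milgrom setup: a worker splits effort between a measurable task A (arrests, paid a bonus b per unit) and an unmeasured task B (solving murders, intrinsically valued at m per unit), with a convex cost of total effort so the two tasks compete for the worker's time.

```python
# Stylized two-task model (my assumption, not Holmstrom-Milgrom's exact model):
# the worker chooses efforts (eA, eB) to maximize
#   b*eA + m*eB - 0.5*(eA + eB)**2
# where b is the bonus on the measurable task and m the intrinsic value of the
# unmeasured task. Because cost depends on *total* effort, the tasks are
# substitutes: paying more for A raises the opportunity cost of B.

def best_efforts(b, m=1.0, step=0.05, max_e=2.0):
    """Grid-search the worker's optimal effort pair for bonus rate b."""
    best, best_u = (0.0, 0.0), float("-inf")
    n = int(max_e / step) + 1
    for i in range(n):
        for j in range(n):
            eA, eB = i * step, j * step
            u = b * eA + m * eB - 0.5 * (eA + eB) ** 2
            if u > best_u:
                best, best_u = (eA, eB), u
    return best

# Weak incentives on arrests: the worker still works murders.
print(best_efforts(b=0.5))   # all effort goes to the unmeasured task
# Strong incentives on arrests: murder work is crowded out entirely.
print(best_efforts(b=1.5))   # effort on the unmeasured task drops to zero
```

The point of the sketch is that the bonus b never enters the worker's taste for task B directly; it only changes what an hour of murder work costs in forgone arrest rewards.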
Milgrom’s writings on incentives within organizations are without a doubt the literature I draw on most heavily when teaching strategic management. It is a shame that the textbook written alongside John Roberts never caught on. For a taste of their basic view of management, check out “The Firm as an Incentive System”, which lays out formal incentives, asset ownership, and task assignments as a system of complements which make organizations function well. The field now known as organizational economics has grown to incorporate ideas like information transmission (Garicano 2000 JPE) and the link between relational contracts and firm culture (e.g., Gibbons and Henderson 2011). Yet there remain many questions on why firms are organized the way they are which are open to an enterprising graduate student with a good theoretical background.
Multitasking has a similar feel to many of Milgrom’s great papers: they provide frameworks that improve our intuition about some effect in the world, rather than just showing a mathematical curiosity. The same is true of his most famous finance paper, the “no-trade theorem” developed with Nancy Stokey. The idea is ex-post obvious but ex-ante incredibly surprising. Imagine that in the market for corn, there is free exchange, and all trades anyone wants to make (to mitigate risk, for use, to try to trade on private information, etc.) have been made. A farmer one day notices a blight on his crop, and suspects this blight is widespread in the region. Therefore, the supply of corn will fall. Can he profit from this insight? Milgrom-Stokey’s answer is no!
How could this be? Even if everyone had identical prior beliefs about corn supply, the farmer, conditional on seeing the blight, definitely has a higher posterior belief about the corn price come harvest season than everyone else. However, we assumed that before the farmer saw the blight, all gains from trade had been exhausted, and that it was common knowledge that this was so. The farmer offering to buy corn at a higher price is therefore itself informative: it reveals that he has learned something. If the former price was $5/bushel, and the farmer offers you $7, then you know that he has received private information that the corn will be worth even more than $7, hence you should not sell him any. Now, of course there is trade on information all the time; indeed, there are huge sums spent collecting information so that it can be traded on! However, Milgrom-Stokey makes clear just how careful we have to be about what causes the “common knowledge that all gains from trade were exhausted” assumption to fail. Models with “noise” traders, or models with heterogeneous prior beliefs (a very subtle philosophical issue), have built on Milgrom-Stokey to understand everything from asset bubbles to the collapse in trade in mortgage-backed securities in 2008.
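The updating logic can be made concrete with a Monte Carlo sketch. The setup below is my own toy version, not Milgrom-Stokey's model: the harvest value of corn is uniform, the informed farmer offers to buy at $7 only when he knows the trade profits him, and we ask what the corn is worth conditional on him making that offer.

```python
import random

# Toy adverse-selection sketch of the no-trade logic (my stylized numbers):
# corn's harvest value V is uniform on [0, 10], so the prior mean is 5.
# An informed farmer offers to buy at p = 7 only when he knows V > 7.
# The uninformed seller should evaluate V *conditional on receiving the offer*.

random.seed(0)
accepted_values = []
for _ in range(200_000):
    v = random.uniform(0, 10)
    if v > 7:                  # the farmer only offers when the trade profits him
        accepted_values.append(v)

cond_mean = sum(accepted_values) / len(accepted_values)
print(f"E[V | farmer offers $7] = {cond_mean:.2f}")  # about 8.5, well above 7
# Selling at $7 loses about $1.50 per bushel in expectation once the offer's
# information content is priced in, so the rational seller refuses: no trade.
```

The simulation is just the updating step of the argument: any price the informed farmer is willing to pay is, by that very willingness, evidence the corn is worth more.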
When it comes to practical application, Milgrom’s work on auctions is well-known, and formed the basis of his Nobel citation. How did auctions become so “practical”? There is no question that the rise of applied auction theory, with the economist as designer, has its roots in the privatization wave of the 1990s that followed the end of the Cold War. Governments held valuable assets: water rights, resource tracts, spectrum that was proving important for new technologies like the cell phone. Who was to be given these assets, and at what price? Milgrom’s 1995 Churchill lectures formed the basis for a book, “Putting Auction Theory to Work”, which is now essential reading alongside Klemperer’s “Auctions: Theory and Practice”, for theorists and practitioners alike. Where it is unique is in its focus on the practical details of running auctions.
This focus is no surprise. Milgrom’s most famous theoretical work is his 1982 Econometrica with Robert Weber on auctions where values are partly common and partly private. That is, consider selling a house, where some of the value is your idiosyncratic taste, and some of the value is whether the house has mold. Milgrom and Weber show a seller should reduce uncertainty as much as possible about the “common” part of the value. If the seller does not know this information or can’t credibly communicate it, then unlike in auctions without that common component, it matters a lot how you run the auction. For instance, with a first-price auction, you may bid low even though you like the house because you worry about winning when other bidders noticed the mold and you didn’t. In a second-price auction, the price you pay incorporates in part that information from others, and hence leads to more revenue for the homeseller.
In practical auctions more broadly, complements across multiple goods being sold separately, private information about common components, the potential to collude or form bidder rings, and the regularity with which auctions are held and hence the number of expected bidders are all incredibly important to auctioneer revenue and efficient allocation of the object being sold. I omit further details of precisely what Milgrom did in the many auctions he consulted on, as the popular press will cover this aspect of his work well, but it is not out of the question to say that the social value of better allocation of things like wireless spectrum is on the order of tens of billions of dollars.
One may wonder why we care about auctions at all. Why not just assign the item to whoever we wish, and then let the free market settle things such that the person with the highest willingness-to-pay winds up with the good? It seems natural to think that how the good is allocated matters for how much revenue the government earns – selling the object is better on this count than giving it away – but it turns out that the free market will not in general allocate goods efficiently when sellers and buyers are uncertain about who is willing to pay how much for a given object.
For instance, imagine you own a car, and you think I am willing to pay somewhere between $10,000 and $20,000 to buy it from you. I think you are willing to give up the car for somewhere between $5,000 and $15,000. I know my own valuation, so let’s consider the case where I am willing to pay exactly $10,000. If you are willing to sell for $8,000, it seems reasonable that we can strike a deal. This is not the case: since all you know is that I am willing to pay somewhere between $10,000 and $20,000, you do know you can always get a $2,000 profit by selling at $10,000, but also that it’s incredibly unlikely that I will say no if you charge $10,001, or $11,000, or even more. You therefore will be hesitant to strike the deal to sell for $10,000 flat. This essential tension is the famed Myerson-Satterthwaite Theorem, and it occurs precisely because the buyer and seller do not know each other’s value for the object. A government auctioning off an object initially, however, can do so efficiently in a much wider set of contexts (see Maskin 2004 JEL for details). The details of auction design cannot be fixed merely by letting the market sort things out ex-post: the post-Cold War asset sales had issues not just of equity, but also efficiency. Since auctions today are used to allocate everything from the right to extract water to carbon emissions permits at the heart of global climate change policy, ensuring we get their design right is not just a minor theoretical concern!
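The car example can be run with actual numbers under one simple bargaining protocol, which is my assumption for illustration: the seller, who values the car at $8,000 and believes my willingness to pay is uniform on [$10,000, $20,000], posts a single take-it-or-leave-it price, and I accept any price at or below my true value.

```python
# Numeric sketch of the Myerson-Satterthwaite tension, under a posted-price
# protocol (my assumption): the seller's cost is $8,000 and she believes the
# buyer's value is uniform on [$10,000, $20,000].

def expected_profit(price, cost=8_000, lo=10_000, hi=20_000):
    """Seller's expected profit from posting `price`: the margin times the
    probability a uniform-[lo, hi] buyer accepts."""
    accept_prob = max(0.0, min(1.0, (hi - price) / (hi - lo)))
    return (price - cost) * accept_prob

best_price = max(range(10_000, 20_001), key=expected_profit)
print(best_price, expected_profit(best_price))
# The optimum is well above $10,000, because the expected profit there beats
# the sure $2,000 from pricing at $10,000 and selling with certainty.

# But if my true value happens to be $10,000, I walk away from that price:
# $2,000 of gains from trade evaporate, exactly the inefficiency in the text.
```

The seller is not being irrational; given her uncertainty about my value, leaving some mutually beneficial trades unconsummated is the profit-maximizing gamble, and Myerson-Satterthwaite show no bargaining protocol fully escapes this.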
The problem of auction design today is, partly because of Milgrom’s efforts, equally prominent in computer science. Many allocation problems are computational, with players being algorithms. This is true of electricity markets in practice, as well as the allocation of online advertisements, the design of blockchain-like mechanisms for decentralized exchange and record-keeping, and methods for preventing denial of service attacks while permitting legitimate access to internet-connected servers. Even when humans remain in the loop to some extent, we need to guarantee not just an efficient algorithm, but a practically-computable equilibrium. Leyton-Brown, Milgrom and Segal discuss this in the context of a recent spectrum auction. The problem of computability turns out to be an old one: Robert Wilson’s early work was on precisely the problem of computing equilibria. Nonetheless, given its importance in algorithmic implementation of mechanisms, it would be no surprise to see many important results in applied game theory come from computer scientists and not just economists and mathematicians in coming years. This pattern of techniques flowing from their originating field to the one where they have important new applications looks a lot like the trail of applied game theory arriving in economics by way of operations research, does it not?
That deep results in game theory can inform the real world goes beyond cases like auctions, where the economic setting is easy to understand. Consider the case of the long distance trade in the Middle Ages. The fundamental problem is that of the Yuan dynasty folk song: when “heaven is high and the emperor is far away”, what stops the distant city you arrive in from confiscatory taxation, or outright theft, of your goods? Perhaps the threat that you won’t return to trade? This is not enough – you may not return, but other traders will be told, “we had to take the goods from the last guy because he broke some rules, but of course we will treat you fairly!” It was quite common for confiscation to be targeted only at one group – the Genoans in Constantinople, the Jews in Sicily – with all other traders being treated fairly.
The theory of repeated games can help explain what to do. It is easiest to reach efficiency when you punish not only the cheaters, but also punish those who do not themselves punish cheaters. That is, the Genoans need to punish not just the Turks by withdrawing business, but also punish the Saracens who would try to make up the trade after the Genoans pull out. The mechanism to do so is a merchant guild, a monopoly which can enforce boycotts in distant cities by taking away a given merchant’s rights in their own city. Greif, Milgrom and Weingast suggest that because merchant guilds allow cities to credibly commit to avoid confiscation, they benefit the cities themselves by increasing the amount of trade. This explains why cities encouraged the formation of guilds – you do not normally encourage your sellers to form a monopsony!
Enough on the student – let us turn to Milgrom’s advisor, Robert Wilson. Wilson was born in the tiny hamlet of Geneva, Nebraska. As discussed above, his doctoral training at Harvard was from Howard Raiffa and the decision theorists, after which he was hired at Stanford, where he has spent his career. As Milgrom is now also back at Stanford, their paths are so intertwined that the two men now quite literally live on the same street.
Wilson is most famous for his early work applying the advances of game theory in the 1970s to questions in auction design and reputation. His 3-page paper written in 1966 and published in Management Science in 1969 gives an early application of Harsanyi’s theory of Bayesian games to the “winner’s curse”. The winner’s curse arises in auctions for a good with a “common value” – for instance, a tract of land that either has oil or does not – where the winner tends to be the bidder with the most optimistic estimate; a bidder in a first-price auction must therefore optimally bid less than what they believe the good to be worth, or else lose money on average.
One benefit of being an operations researcher is that there is a tight link in that field between academia and industry. Wilson consulted with the Department of the Interior on oil licenses, and with private oil companies on how they bid in these auctions. What he noticed was that managers often shaded down their engineers’ best estimates of the value of an oil tract. The reason why is, as the paper shows, very straightforward. Assume we both get a signal uniformly distributed on [x-1,x+1] about the value of the tract, where x is the true value. Unconditionally, my best estimate of the value of the plot is exactly my signal. However, conditional on winning the auction, my signal was higher than my rival’s. Indeed, if I knew my rival’s signal, my best estimate of the value would be exactly halfway between the two signals. Of course, I don’t know her signal. But since my payoff is 0 if I don’t win, and my payoff is x minus my bid if I win, there is a straightforward formula, which depends on the distribution of the signals, for how much I should shade my bid. Many teachers have drawn on Bob’s famous example of the winner’s curse by auctioning off a jar of coins in class, the winner inevitably being the poor student who doesn’t realize they should have shaded their bid!
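The selection effect driving the shading is easy to verify by simulation. The sketch below uses the two-bidder uniform-signal example from the paragraph above; the particular true value and sample size are arbitrary choices of mine.

```python
import random

# Monte Carlo of the two-signal winner's curse example: the tract's true value
# is x, and each of two bidders observes a signal uniform on [x-1, x+1].
# Each signal is an unbiased estimate of x unconditionally, but the *highest*
# of the two signals is biased upward.

random.seed(0)
x = 10.0            # true value (arbitrary; the size of the bias doesn't depend on it)
n = 200_000
winner_errors = []
for _ in range(n):
    s1 = x + random.uniform(-1, 1)
    s2 = x + random.uniform(-1, 1)
    winner_errors.append(max(s1, s2) - x)

bias = sum(winner_errors) / n
print(f"winning signal exceeds the true value by {bias:.3f} on average")  # about +1/3
# A bidder who naively bids her signal therefore overpays, on average, exactly
# when she wins; shading the bid down corrects for this selection effect.
```

With two uniform signals the expected overshoot of the higher signal is 1/3, which is why the simulated bias lands near 0.333; with more bidders the selection, and hence the required shading, grows.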
Wilson not only applied these new game theoretic tools, but also developed many of them. This is particularly true in 1982, when he published all three of his most cited papers: a resolution of the “Chain store paradox”, the idea of sequential equilibria, and the “Gang of Four” reputation paper with Kreps, Roberts, and Milgrom. To understand these, we need to understand the problem of non-credible threats.
The chain store paradox goes like this. Consider Walmart facing a sequence of potential competitors. If they stay out, Walmart earns monopoly profits in the town. If they enter, Walmart can either fight (in which case both make losses) or accept the entry (in which case they both earn duopoly profits, lower than what Walmart made as a monopolist). It seems intuitive that Walmart should fight a few early potential competitors to develop a reputation for toughness. Once they’ve done it, no one will enter. But if you think through the subgame perfect equilibrium here, the last firm who could enter knows that after they enter, Walmart is better off accepting the entry. Hence the second-to-last firm reasons that Walmart won’t benefit from establishing a reputation for deterrence, and so won’t fight it. And likewise for the third-to-last entrant and so on up the line: Walmart never fights because it can’t “credibly” threaten to fight future entrants regardless of what it did in the past.
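The unraveling argument is mechanical enough to write out as backward induction. The payoffs below are my own stylized choices, picked only to satisfy the ordering in the paragraph (fighting loses money, duopoly beats staying out, monopoly beats duopoly).

```python
# Backward induction in a toy chain-store game (stylized payoffs, my choice):
# in each of N towns, in sequence, an entrant chooses enter/stay out; upon
# entry, Walmart chooses fight/accommodate. Per-town payoffs:
#   stay out:            entrant 0,  Walmart 10 (monopoly)
#   enter + fight:       entrant -1, Walmart -1
#   enter + accommodate: entrant 3,  Walmart 3  (duopoly)
# With complete information, play in one town changes no payoff elsewhere.

WALMART_FIGHT, WALMART_ACCOMMODATE = -1, 3
ENTRANT_IF_FOUGHT, ENTRANT_IF_ACCOMMODATED = -1, 3

N = 20
plan = []
for town in reversed(range(N)):      # solve from the last potential entrant backward
    # At the moment of entry, fighting pays -1 and accommodating pays 3, and no
    # future payoff depends on the choice, so Walmart accommodates.
    walmart = "fight" if WALMART_FIGHT > WALMART_ACCOMMODATE else "accommodate"
    # Anticipating that, the entrant compares entering with staying out (0).
    entry_payoff = ENTRANT_IF_FOUGHT if walmart == "fight" else ENTRANT_IF_ACCOMMODATED
    entrant = "enter" if entry_payoff > 0 else "stay out"
    plan.append((town + 1, entrant, walmart))
plan.reverse()

print(plan[0], plan[-1])  # every town unravels to ('enter', 'accommodate')
```

The key step is that Walmart's stage decision is evaluated in isolation: because nothing downstream depends on it under complete information, the threat to fight never influences any entrant, in any town, however many towns remain.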
This seems odd. Kreps and Wilson (JET 1982) make an important contribution to reputation building by assuming there are two types of Walmart CEOs: a tough one who enjoys fighting, and a weak one with the normal payoffs above. Competitors don’t know which Walmart they are facing. If there is even a small chance the rivals think Walmart is tough, then even the weak Walmart may want to fight early rivals by “pretending” to be tougher than they are. Can this work as an equilibrium? We really need a new equilibrium concept, because we want play to be both perfect – at any point, players play a Nash equilibrium from then forward – and Bayesian – players hold beliefs about the others’ types and update those beliefs according to the hypothesized equilibrium play. Kreps and Wilson show how to do this in their Econometrica paper introducing sequential equilibria. The idea is that an equilibrium specifies strategies and beliefs at every node of the game tree, with the two being consistent along the equilibrium path. Beyond having the nice property of allowing us to explicitly examine beliefs at any node, even off the equilibrium path, sequential equilibria are much simpler to compute than similar ideas like trembling hand perfection. Looking both back to Wilson’s early work on how to compute Nash equilibria, and forward to Milgrom’s work on practical mechanism design, it is not surprising to see the idea of practical tractability appear even back in 1982.
This type of reputation-building applies even to cooperation – or collusion, as cooperating when it is in your interest to cheat and colluding when it is in your interest to undercut are the same mathematical problem. The Gang of Four paper by Kreps, Wilson, Milgrom, and Roberts shows that in finite prisoner’s dilemmas, you can get quite a bit of cooperation just with a small probability that your rival is an irrational type who always cooperates as long as you do so. Indeed, the Gang of Four show precisely how close to the end of the game players will cooperate for a given small belief that a rival is the naturally-cooperative type. Now, one may worry that allowing types in this way gives too much leeway for the modeler to justify any behavior, and indeed this is so. Nonetheless, the 1982 papers kicked off an incredibly fruitful search for justifications for reputation building – and given the role of reputation in everything from antitrust to optimal policy from central banks, a rigorous justification is incredibly important to understanding many features of the economic world.
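A simple calculation shows why even a committed conditionally-cooperative type changes the finite game. This is not the Gang of Four equilibrium itself, only a best-response sketch under my own standard payoff numbers: we compute a rational player's payoff from "cooperate for k rounds, then defect forever" against an opponent committed to tit-for-tat.

```python
# Best response to a committed tit-for-tat type in a T-round prisoner's
# dilemma. Per-round payoffs (standard values, my choice): both cooperate 3;
# defect on a cooperator 5; both defect 1; cooperate against a defector 0.

def payoff_vs_titfortat(k, T):
    """Total payoff from cooperating for k rounds and then always defecting:
    3 per mutual-cooperation round, 5 for the first defection (tit-for-tat
    still cooperates that round), then 1 per mutual-defection round."""
    if k >= T:
        return 3 * T                     # cooperate for the whole game
    return 3 * k + 5 + 1 * (T - k - 1)

T = 10
payoffs = {k: payoff_vs_titfortat(k, T) for k in range(T + 1)}
best_k = max(payoffs, key=payoffs.get)
print(best_k, payoffs[best_k], payoffs[0])
# Cooperating until the final round (k = T-1) strictly beats defecting at
# once (k = 0): against this type, cooperation pays for most of a long game.
```

In the full Gang of Four analysis the rival is only a *possibly* committed type, and the rational player weighs this calculation by her belief; but the sketch shows the engine of the result, namely that early defection destroys a cooperation surplus that grows with the rounds remaining.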
I introduced Robert Wilson as The Teacher. This is not meant to devalue his pure research contributions, but rather to emphasize just how critical he was in developing students at the absolute forefront of applied games. Bengt Holmstrom did his PhD under Wilson in 1978, went to Kellogg MEDS after a short detour in Finland, then moved to Yale and MIT before winning the Nobel Prize. Al Roth studied with Wilson in 1974, was hired at the business school at Illinois, then Pittsburgh, then Harvard and Stanford before winning a Nobel Prize. Paul Milgrom was a 1979 student of Wilson’s, beginning also at MEDS before moving to Yale and Stanford, and winning his own Nobel Prize. This is to say nothing of the students Wilson developed later, including the great organizational theorist Bob Gibbons, or his earliest students like Armando Ortega Reichert, whose unpublished 1969 dissertation contains important early results in auction theory and was an important influence on the analysis of limit pricing under incomplete information in Milgrom and Roberts (1982). It is one thing to write papers of Nobel quality. It is something else altogether to produce (at least!) three students who have done the same. And as any teacher is proud of their successful students, surely little is better than winning a Nobel alongside one of them!