
Operations Research and the Rise of Applied Game Theory – A Nobel for Milgrom and Wilson

Today’s Nobel Prize to Paul Milgrom and Robert Wilson is the capstone of an incredibly fruitful research line which began in the 1970s in a few small departments of Operations Research. Game theory, or the mathematical study of strategic interaction, dates back to work by Zermelo, Borel and von Neumann in the early 20th century. The famed book by von Neumann and Morgenstern was published in 1944, and widely hailed as one of the most important social scientific works of the century. And yet, it would be three decades before applications of game theory revolutionized antitrust, organizational policy, political theory, trade, finance, and more. Alongside the “credibility revolution” of causal econometrics, and to a lesser extent behavioral economics, applied game theory has been the most important development in economics in the past half century. The prize to Milgrom and Wilson is likely the final one that will be given for early applied game theory, joining those in 1994, 2005, 2007, 2014 and 2016 that elevated the abstractions of the 1940s into today’s rigorous interpretations of so many previously disparate economic fields.

Neither Wilson nor Milgrom was trained in a pure economics department. Wilson came out of the decision sciences program of Howard Raiffa at Harvard, and Milgrom was a student of Wilson’s at Stanford Business School. However, the link between operations research and economics is a long one, with the former field often serving as a vector for new mathematical tools before the latter was quite ready to accept them. In the middle of the century, the mathematics of optimal control and dynamic programming – how to solve problems where today’s action affects tomorrow’s possibilities – were applied to resource allocation by Kantorovich in the Soviet Union and to market economics problems in the West by Koopmans, Samuelson, Solow, and Dorfman. Luce and Raiffa explained how the theory of games and the ideas of Bayesian decision theory apply to social scientific problems. Stan Reiter’s group, first at Purdue and later with Nancy Schwartz at Kellogg MEDS, formally brought operations researchers and economists into the same department to apply these new mathematics to economic problems.

The real breakthrough, however, was the arrival of Bayesian games and subgame perfection from Harsanyi (1967-68) and Selten (1965, 1975). These tools in combination allow us to study settings where players signal, make strategic moves, bluff, attempt to deter, and so on. From the perspective of an institutional designer, they allow us, alongside Myerson’s revelation principle, to follow Hayek’s ideas formally and investigate how we should organize an economic activity given the differing information and possible actions of each player. Indeed, the Wilson Doctrine argues that practical application of game theory requires attention to these informational features. There remains a more complete intellectual history to be written here, but Paul Milgrom and Al Roth’s mutual interview in the JEP provides a great sense of the intellectual milieu of the 1970s as they developed their ideas. Wilson, the Teacher, and Milgrom, the Popularizer, were at the heart of showing just how widely these new tools in game theory could be applied.

Let us begin with the Popularizer. Milgrom was born and raised in Michigan, taking part in anti-war and anti-poverty protests as a radical student in Ann Arbor in the late 1960s. The 1960s were a strange time, and so Milgrom went straight from the world of student activism to the equally radical world of…working as an actuary for an insurance company. After enrolling in the MBA program at Stanford in the mid-1970s, he was invited to pursue a PhD under his co-laureate Robert Wilson, who, as we shall see, was pursuing an incredibly lucrative combination of operations research and economics with his students. It is hard to overstate how broad Milgrom’s contributions have been, both theoretically and in practice. But we can get a good taste by looking at four: the multitasking problem and the no-trade theorem on the theoretical side, and medieval guilds and modern spectrum auctions on the applied side.

It is perhaps surprising that Milgrom’s most-cited paper was published in the JLEO, well into his career. But the famed multitasking paper is incredibly informative. The idea is simple: you can motivate someone either with direct rewards or by changing their opportunity cost. For instance, if you want a policeman to walk the beat more often, make their office particularly dull and full of paperwork. Workers generally have many tasks they can work on, however, and these tasks vary in their relative costs. For example, a cop can slack off, arrest people on nonsense charges, or solve murders. Making their office dull will cause them to sit at their desk for fewer hours, but it likely won’t cause them to solve murders rather than arrest people on nonsense charges. Why not just pay for the solved murders directly? Because often it is impossible to measure or observe everything you want done.

If you “reward A while hoping for B”, as Steven Kerr’s famous management paper puts it, you are likely to get a lot of A. If you pay rewards for total arrests, your cops will cite people for riding bikes with no lights. So what can be done? Milgrom and Holmstrom give a simple model where workers exert effort, do some things you can measure and some which you cannot, and you get a payoff depending on both. If a job has some things you care about which are hard to measure, you should use weaker incentives on the things you can measure: by paying cops for arrests, you raise the opportunity cost of solving murders for the cops who like doing so, because every time they work a murder they are giving up the reward they would have gotten from arresting the bicyclists! Further, you should give workers on hard-to-measure tasks little job flexibility. The murder cop paid on salary should need to show her face in the office, while the meter maid paid based on how many tickets she gives already has a good reason not to shirk while on the clock. Once you start thinking about multitasking and the interaction of incentives with opportunity costs, you start seeing perverse incentives absolutely everywhere.
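
To see the crowding-out mechanically, here is a minimal numerical sketch in Python, with stylized functional forms of my own choosing (square-root output on each task) rather than the paper’s model. A cop splits one unit of time between arrests, which pay a piece rate b, and solving murders, which she values at rate m but which cannot be rewarded directly:

```python
def murder_time_share(b: float, m: float = 1.0) -> float:
    """Share of time spent on murders when output on each task is
    sqrt(time spent): maximizing b*sqrt(1-s) + m*sqrt(s) over the share s
    gives the interior optimum s = m^2 / (m^2 + b^2)."""
    return m**2 / (m**2 + b**2)

for b in (0.0, 0.5, 1.0, 2.0):
    print(f"arrest piece rate {b}: time on murders = {murder_time_share(b):.2f}")
```

With no piece rate on arrests, all time goes to murders; raising the piece rate on the measurable task steadily crowds out the hard-to-measure one, which is the core multitasking intuition.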

Milgrom’s writings on incentives within organizations are without a doubt the literature I draw on most heavily when teaching strategic management. It is a shame that the textbook written alongside John Roberts never caught on. For a taste of their basic view of management, check out “The Firm as an Incentive System”, which lays out formal incentives, asset ownership, and task assignments as a system of complements which make organizations function well. The field now known as organizational economics has grown to incorporate ideas like information transmission (Garicano 2000 JPE) and the link between relational contracts and firm culture (e.g., Gibbons and Henderson 2011). Yet there remain many questions on why firms are organized the way they are which are open to an enterprising graduate student with a good theoretical background.

Multitasking has a similar feel to many of Milgrom’s great papers: they provide a framework improving our intuition about some effect in the world, rather than just showing a mathematical curiosity. The same is true of his most famous finance paper, the “no-trade theorem” developed with Nancy Stokey. The idea is ex-post obvious but ex-ante incredibly surprising. Imagine that in the market for corn, there is free exchange, and all trades anyone wants to make (to mitigate risk, for use, to try to trade on private information, etc.) have been made. A farmer one day notices a blight on his crop, and suspects this blight is widespread in the region. Therefore, the supply of corn will fall. Can he profit from this insight? Milgrom-Stokey’s answer is no!

How could this be? Even if everyone had identical prior beliefs about corn supply, conditional on getting this information, the farmer definitely has a higher posterior belief about the corn price come harvest season than everyone else. However, we assumed that before the farmer saw the blight, all gains from trade had been exhausted, and that it was common knowledge that this was so. The farmer offering to buy corn at a higher price is informative that the farmer has learned something. If the prevailing price was $5/bushel, and the farmer offers you $7, then you know that he has received private information that the corn will be worth even more than $7, hence you should not sell him any. Now, of course there is trade on information all the time; indeed, there are huge sums spent collecting information so that it can be traded on! However, Milgrom-Stokey makes clear just how careful we have to be about what causes the “common knowledge that all gains from trade were exhausted” assumption to fail. Models with “noise” traders, or models with heterogeneous prior beliefs (a very subtle philosophical issue), have built on Milgrom-Stokey to understand everything from asset bubbles to the collapse in trade in mortgage backed securities in 2008.
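
Here is a toy numerical version of that logic (my own illustration, with made-up numbers, not Milgrom-Stokey’s model). Suppose corn’s harvest value v per bushel is commonly believed to be uniform on [3,7], the farmer privately observes v, and he offers to buy your corn at price p whenever v > p:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(3, 7, 1_000_000)  # harvest value per bushel, common prior

for p in (5.0, 6.0, 6.9):
    # Conditional on the farmer offering p, you know v > p, so the corn is
    # worth (p + 7)/2 > p to you on average -- selling to him loses money.
    expected_value_given_offer = v[v > p].mean()
    print(p, round(expected_value_given_offer - p, 3))
```

At every offer price, your expected loss from accepting is strictly positive: the offer itself is the bad news, which is exactly why trade unravels.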

When it comes to practical application, Milgrom’s work on auctions is well-known, and formed the basis of his Nobel citation. How did auctions become so “practical”? There is no question that the rise of applied auction theory, with the economist as designer, has its roots in the privatization wave of the 1990s that followed the end of the Cold War. Governments held valuable assets: water rights, resource tracts, spectrum that was proving important for new technologies like the cell phone. Who was to be given these assets, and at what price? Milgrom’s 1995 Churchill lectures formed the basis for a book, “Putting Auction Theory to Work”, which is now essential reading, alongside Klemperer’s “Auctions: Theory and Practice”, for theorists and practitioners alike. Where it is unique is in its focus on the practical details of running auctions.

This focus is no surprise. Milgrom’s most famous theoretical work is his 1982 Econometrica with Robert Weber on auctions where values are partly common and partly private. That is, consider selling a house, where some of the value is your idiosyncratic taste, and some of the value is whether the house has mold. Milgrom and Weber show a seller should reduce uncertainty as much as possible about the “common” part of the value. If the seller does not know this information or can’t credibly communicate it, then unlike in auctions without that common component, it matters a lot how you run the auction. For instance, with a first-price auction, you may bid low even though you like the house because you worry about winning when other bidders noticed the mold and you didn’t. In a second-price auction, the price you pay incorporates in part that information from others, and hence leads to more revenue for the homeseller.

In practical auctions more broadly, complements across multiple goods being sold separately, private information about common components, the potential to collude or form bidder rings, and the regularity with which auctions are held and hence the number of expected bidders are all incredibly important to auctioneer revenue and efficient allocation of the object being sold. I omit further details of precisely what Milgrom did in the many auctions he consulted on, as the popular press will cover this aspect of his work well, but it is not out of the question to say that the social value of better allocation of things like wireless spectrum is on the order of tens of billions of dollars.

One may wonder why we care about auctions at all. Why not just assign the item to whoever we wish, and then let the free market settle things such that the person with the highest willingness-to-pay winds up with the good? It seems natural to think that how the good is allocated matters for how much revenue the government earns – selling the object is better on this count than giving it away – but it turns out that the free market will not in general allocate goods efficiently when sellers and buyers are uncertain about who is willing to pay how much for a given object.

For instance, imagine you own a car, and you think I am willing to pay somewhere between $10,000 and $20,000 to buy it from you. I think you are willing to give up the car for somewhere between $5,000 and $15,000. I know my own valuation, so let’s consider the case where I am willing to pay exactly $10,000. If you are willing to sell for $8,000, it seems reasonable that we can strike a deal. But we may not: since all you know is that I am willing to pay somewhere between $10,000 and $20,000, you know you can always get a $2,000 profit by selling at $10,000, but also that it’s quite unlikely I will say no if you charge $10,001, or $11,000, or even more. You therefore will be hesitant to strike the deal to sell for 10 flat. This essential tension is the famed Myerson-Satterthwaite Theorem, and it occurs precisely because the buyer and seller do not know each other’s value for the object. A government auctioning off an object initially, however, can do so efficiently in a much wider set of contexts (see Maskin 2004 JEL for details). The details of auction design cannot be fixed merely by letting the market sort things out ex-post: the post-Cold War asset sales had issues not just of equity, but also efficiency. Since auctions today are used to allocate everything from the right to extract water to carbon emissions permits at the heart of global climate change policy, ensuring we get their design right is not just a minor theoretical concern!
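
A quick numerical version of the car example (illustrative numbers only) shows how severe the problem is. Suppose you value your car at $8,000 and believe my value is uniform on [$10,000, $20,000], so every possible deal is mutually beneficial:

```python
import numpy as np

prices = np.linspace(10_000, 20_000, 1001)
prob_buyer_accepts = (20_000 - prices) / 10_000   # P(my value >= posted price)
expected_profit = (prices - 8_000) * prob_buyer_accepts

best = expected_profit.argmax()
print(prices[best], prob_buyer_accepts[best])     # ~14,000 and 0.6
```

Your optimal posted price is about $14,000, and trade fails 40% of the time even though gains from trade exist with probability one: Myerson-Satterthwaite in miniature.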

The problem of auction design today is, partly because of Milgrom’s efforts, equally prominent in computer science. Many allocation problems are computational, with players being algorithms. This is true of electricity markets in practice, as well as the allocation of online advertisements, the design of blockchain-like mechanisms for decentralized exchange and record-keeping, and methods for preventing denial of service attacks while permitting legitimate access to internet-connected servers. Even when humans remain in the loop to some extent, we need to guarantee not just an efficient algorithm, but a practically-computable equilibrium. Leyton-Brown, Milgrom and Segal discuss this in the context of a recent spectrum auction. The problem of computability turns out to be an old one: Robert Wilson’s early work was on precisely the problem of computing equilibria. Nonetheless, given its importance in algorithmic implementation of mechanisms, it would be no surprise to see many important results in applied game theory come from computer scientists and not just economists and mathematicians in coming years. This pattern of techniques flowing from their originating field to the one where they have important new applications looks a lot like the trail of applied game theory arriving in economics by way of operations research, does it not?

That deep results in game theory can inform the real world goes beyond cases like auctions, where the economic setting is easy to understand. Consider the case of the long distance trade in the Middle Ages. The fundamental problem is that of the Yuan dynasty folk song: when “heaven is high and the emperor is far away”, what stops the distant city you arrive in from confiscatory taxation, or outright theft, of your goods? Perhaps the threat that you won’t return to trade? This is not enough – you may not return, but other traders will be told, “we had to take the goods from the last guy because he broke some rules, but of course we will treat you fairly!” It was quite common for confiscation to be targeted only at one group – the Genoans in Constantinople, the Jews in Sicily – with all other traders being treated fairly.

The theory of repeated games can help explain what to do. It is easiest to reach efficiency when you punish not only the cheaters, but also punish those who do not themselves punish cheaters. That is, the Genoans need to punish not just the Turks by withdrawing business, but also punish the Saracens who would try to make up the trade after the Genoans pull out. The mechanism to do so is a merchant guild, a monopoly which can enforce boycotts in distant cities by taking away a given merchant’s rights in their own city. Greif, Milgrom and Weingast suggest that because merchant guilds allow cities to credibly commit to avoid confiscation, they benefit the cities themselves by increasing the amount of trade. This explains why cities encouraged the formation of guilds – one does not normally encourage one’s sellers to form a monopsony!

Enough on the student – let us turn to Milgrom’s advisor, Robert Wilson. Wilson was born in the tiny hamlet of Geneva, Nebraska. As discussed above, his doctoral training at Harvard was from Howard Raiffa and the decision theorists, after which he was hired at Stanford, where he has spent his career. As Milgrom is now also back at Stanford, their paths are so intertwined that the two men now quite literally live on the same street.

Wilson is most famous for his early work applying the advances of game theory in the 1970s to questions in auction design and reputation. His three-page paper, written in 1966 and published in Management Science in 1969, gives an early application of Harsanyi’s theory of Bayesian games to the “winner’s curse”. The winner’s curse refers to the fact that the winner of an auction for a good with a “common value” – for instance, a tract of land that either has oil or does not – must bid less in a first-price auction than what she believes the good to be worth, or else lose money on average.

One benefit of being an operations researcher is that there is a tight link in that field between academia and industry. Wilson consulted with the Department of the Interior on oil licenses, and with private oil companies on how they bid in these auctions. What he noticed was that managers often shaded down their engineers’ best estimates of the value of an oil tract. The reason why is, as the paper shows, very straightforward. Assume we both get a signal uniformly distributed on [x-1,x+1] about the value of the tract, where x is the true value. Unconditionally, my best estimate of the value of the plot is exactly my signal. Conditional on winning the auction, however, my signal was higher than my rival’s. Therefore, if I knew my rival’s signal, I would have bid exactly halfway between the two. Of course, I don’t know her signal. But since my payoff is 0 if I don’t win, and my payoff is x minus my bid if I win, there is a straightforward formula, which depends on the distribution of the signals, for how much I should shade my bid. Many teachers have drawn on Bob’s famous example of the winner’s curse by auctioning off a jar of coins in class, the winner inevitably being the poor student who doesn’t realize they should have shaded their bid!
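
A minimal simulation makes the shading logic concrete (my illustration of the argument above, not Wilson’s derivation). With true value x and two bidders receiving signals uniform on [x-1, x+1]:

```python
import numpy as np

rng = np.random.default_rng(0)
x, n = 10.0, 200_000
signals = rng.uniform(x - 1, x + 1, size=(n, 2))  # one signal per bidder

print(signals.mean())               # ~10.00: each signal is unbiased...
print(signals.max(axis=1).mean())   # ~10.33: ...but the winning signal is not
print(signals.mean(axis=1).mean())  # ~10.00: the midpoint of both signals is
```

Bidding your raw signal means overpaying by a third on average whenever you win, while the midpoint of the two signals is the right estimate, which is why the optimal bid shades down toward what the rival’s signal might be.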

Wilson not only applied these new game theoretic tools, but also developed many of them. This is particularly true in 1982, when he published all three of his most cited papers: a resolution of the “Chain store paradox”, the idea of sequential equilibria, and the “Gang of Four” reputation paper with Kreps, Roberts, and Milgrom. To understand these, we need to understand the problem of non-credible threats.

The chain store paradox goes like this. Consider Walmart facing a sequence of potential competitors. If they stay out, Walmart earns monopoly profits in the town. If they enter, Walmart can either fight (in which case both make losses) or accept the entry (in which case they both earn duopoly profits, lower than what Walmart made as a monopolist). It seems intuitive that Walmart should fight a few early potential competitors to develop a reputation for toughness. Once they’ve done it, no one will enter. But if you think through the subgame perfect equilibrium here, the last firm who could enter knows that after they enter, Walmart is better off accepting the entry. Hence the second-to-last firm reasons that Walmart won’t benefit from establishing a reputation for deterrence, and hence won’t fight it. And likewise for the third-to-last entrant and so on up the line: Walmart never fights because it can’t “credibly” threaten to fight future entrants regardless of what it did in the past.
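
The backward induction argument is short enough to write out mechanically. Here is a sketch with made-up per-town payoffs (not from the paper), where accommodation always beats fighting in the final town and induction propagates that choice backward:

```python
DUOPOLY, FIGHT = 2, -1  # Walmart's one-shot payoff from each response

def walmart_action(entrants_remaining: int) -> str:
    # Against the last entrant there is no future entry to deter, so only
    # the one-shot payoff matters. Every earlier entrant anticipates this
    # same reasoning, so fighting never deters anyone, and the one-shot
    # comparison applies in every town regardless of entrants_remaining.
    return "accommodate" if DUOPOLY > FIGHT else "fight"

print([walmart_action(k) for k in range(10, 0, -1)])
# Ten accommodations: the threat to fight is never credible.
```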

This seems odd. Kreps and Wilson (JET 1982) make an important contribution to reputation building by assuming there are two types of Walmart CEOs: a tough one who enjoys fighting, and a weak one with the normal payoffs above. Competitors don’t know which Walmart they are facing. If there is even a small chance the rivals think Walmart is tough, then even the weak Walmart may want to fight early rivals by “pretending” to be tougher than they are. Can this work as an equilibrium? We really need a new concept, because we want the game to be both perfect, where at any point in time players play Nash equilibria from then forward, and Bayesian, where players have beliefs about the others’ types and update those beliefs according to the hypothesized equilibrium play. Kreps and Wilson show how to do this in their Econometrica introducing sequential equilibria. The idea here is that equilibria involve strategies and beliefs at every node of the game tree, with the two being consistent along the equilibrium path. Beyond having the nice property of allowing us to explicitly examine the beliefs at any node, even off the equilibrium path, sequential equilibria are much simpler to compute than similar ideas like trembling hand perfection. Looking both back to Wilson’s early work on how to compute Nash equilibria, and forward to Milgrom’s later work on practical mechanism design, it is not surprising to see the idea of practical tractability appear even back in 1982.

This type of reputation-building applies even to cooperation – or collusion, as cooperating when it is in your interest to cheat and colluding when it is in your interest to undercut are the same mathematical problem. The Gang of Four paper by Kreps, Wilson, Milgrom, and Roberts shows that in finite prisoner’s dilemmas, you can get quite a bit of cooperation just with a small probability that your rival is an irrational type who always cooperates as long as you do so. Indeed, the Gang of Four show precisely how close to the end of the game players will cooperate for a given small belief that a rival is the naturally-cooperative type. Now, one may worry that allowing types in this way gives too much leeway for the modeler to justify any behavior, and indeed this is so. Nonetheless, the 1982 papers kicked off an incredibly fruitful search for justifications for reputation building – and given the role of reputation in everything from antitrust to optimal policy from central banks, a rigorous justification is incredibly important to understanding many features of the economic world.

I introduced Robert Wilson as The Teacher. This is not meant to devalue his pure research contributions, but rather to emphasize just how critical he was in developing students at the absolute forefront of applied games. Bengt Holmstrom did his PhD under Wilson in 1978, went to Kellogg MEDS after a short detour in Finland, then moved to Yale and MIT before winning the Nobel Prize. Al Roth studied with Wilson in 1974, was hired at the business school at Illinois, then Pittsburgh, then Harvard and Stanford before winning a Nobel Prize. Paul Milgrom was a 1979 student of Wilson’s, beginning also at MEDS before moving to Yale and Stanford, and winning his own Nobel Prize. This is to say nothing of the students he developed later, including the great organizational theorist Bob Gibbons, or his earliest students like Armando Ortega Reichert, whose unpublished 1969 dissertation contains important early results in auction theory and was an important influence on the limit pricing under incomplete information in Milgrom and Roberts (1982). It is one thing to write papers of Nobel quality. It is something else altogether to produce (at least!) three students who have done the same. And as any teacher is proud of their successful students, surely little is better than winning a Nobel alongside one of them!


The 2018 John Bates Clark: Parag Pathak

The most prestigious award a young researcher can receive in economics, the John Bates Clark medal, has been awarded to the incredibly productive Parag Pathak. His CV is one that could only belong to a true rocket of a career: he was a full professor at MIT 11 years after he started his PhD, finishing the degree in four years before taking the Paul Samuelson route through the Harvard Society of Fellows to become the top young theorist in Cambridge.

Pathak is best known for his work on the nature and effects of how students are allocated to schools. This is, of course, an area where theorists have had incredible influence on public policy, notably via Pathak’s PhD advisor, the Nobel prize winner Al Roth, the group of Turkish researchers including Atila Abdulkadiroglu, Utku Unver, and Tayfun Sonmez, as well as the work of 2015 Clark medal winner Roland Fryer. Indeed, this group’s work on how to best allocate students to schools in an incentive-compatible way – that is, in a way where parents need only truthfully state which schools they like best – was adopted by the city of Boston, to my knowledge the first time this theoretically-optimal mechanism was used by an actual school district. As someone born in Boston’s contentious Dorchester neighborhood, it is quite striking how much more successful this reform was than the busing policies of the 1970s which led to incredible amounts of bigoted pushback.

Consider the old “Boston mechanism”. Parents list their preferred schools in order. Everyone is allocated their first choice if possible. If a school is oversubscribed, a lottery decides who gets a seat; the losers move on to their second choice, and if that is also oversubscribed, their third, and so on. This mechanism gives clear reason for strategic manipulation: you certainly don’t want to list a very popular school as your second choice, since there is almost no chance that it won’t fill up with first choices. The Boston mechanism was replaced by the Gale-Shapley deferred acceptance mechanism, which has the property that it is never optimal for a parent to misrepresent preferences. In theory, not only is this more efficient but it is also fairer: parents who do not have access to sophisticated neighbors helping them game the system are, you might reason, most likely to lose out under the old system. And this is precisely what Pathak and Sonmez show in a theoretical model: sophisticated parents may prefer the old Boston mechanism because it makes them better off at the expense of the less sophisticated! The latter concern is a tough one for traditional mechanism design to handle, as we generally assume that agents act in their self-interest, including taking advantage of the potential for strategic manipulation. There remains some debate about what it means for a mechanism to be “better” when some agents are unsophisticated or when they do not have strict preferences over all options.
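
For concreteness, here is a minimal student-proposing deferred acceptance sketch in Python, with hypothetical toy data; real implementations add school priorities, quotas, and tie-breaking lotteries:

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    held = {school: [] for school in school_prefs}   # tentative admits
    next_choice = {st: 0 for st in student_prefs}    # pointer into pref lists
    free = list(student_prefs)
    while free:
        student = free.pop()
        if next_choice[student] >= len(student_prefs[student]):
            continue                                 # exhausted all choices
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        # The school tentatively holds its favorite applicants so far, up
        # to capacity, and rejects the rest back into the applicant pool.
        held[school].append(student)
        held[school].sort(key=school_prefs[school].index)
        free.extend(held[school][capacity[school]:])
        held[school] = held[school][:capacity[school]]
    return held

students = {"ana": ["east", "west"], "bo": ["east", "west"], "cy": ["east", "west"]}
schools = {"east": ["bo", "ana", "cy"], "west": ["ana", "bo", "cy"]}
print(deferred_acceptance(students, schools, {"east": 1, "west": 2}))
# {'east': ['bo'], 'west': ['ana', 'cy']}
```

Because offers are only held tentatively until the algorithm terminates, listing your true ranking is a dominant strategy for students; there is no popular-school-as-second-choice trap of the kind the Boston mechanism creates.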

Competition for students may also affect the quality of underlying schools, either because charter schools and for-profits compete on a profit-maximizing basis, or because public schools somehow respond to the incentive to get good students. What makes Pathak’s work particularly rigorous is that he notes how critical both the competitive environment and the exact mechanism for assigning students are for the responsiveness of schools. It is not that “charters are good” or “charters are bad” or “test schools produce X outcome”, but rather the conditionality of these statements on how the choice and assignment mechanism works. A purely empirical study found charters in Boston performed much better for lottery-selected students than public schools, and that attending an elite test school in Boston or New York doesn’t really affect student outcomes. Parents appear unable to evaluate school quality except in terms of peer effects, which can be particularly problematic when good peers enroll in otherwise bad schools.

Pathak’s methodological approach is refreshing. He has a theorist’s toolkit and an empiricist’s interest in policy. For instance, imagine we want to know how well charter schools perform. The obvious worry is that charter students are better than the average student. Many studies take advantage of charter lotteries, where oversubscribed schools assign students from the application class by lottery. This does not identify the full effect of charters, however: whether I enter a lottery at all depends on how I ranked schools, and hence participants in a lottery for School A versus School B are not identical. Pathak, Angrist and Walters show how to combine the data from specific school choice mechanisms with the lottery in a way that ensures we are comparing like-to-like when evaluating lotteries at charters versus non-charters. In particular, they find that Denver’s charter schools do in fact perform better.

Indeed, reading through Pathak’s papers this afternoon, I find myself struck by how empirical his approach has become over time: if a given question requires the full arsenal of structural choice modeling, he deploys it, and if it is unnecessary, he estimates things in a completely reduced form way. Going beyond the reduced form often produces striking results. For instance, what would be lost if affirmative action based on race were banned in school choice? Chicago’s tier-based plans, where many seats at test schools were reserved for students from low-SES tiers, work dreadfully: not only do they not actually select low-SES students (high-SES students in low-income neighborhoods are selected), but they would require massively dropping entrance score criteria to get a pre-established number of black and Hispanic students to attend. This is particularly true for test schools on the north side of the city. Answering the question of what racial representation looks like in a counterfactual world where Chicago doesn’t have to use SES-based criteria to indirectly choose students to get a given racial makeup, and the question of whether the problem is Chicago’s particular mechanism or whether it is fundamental to any location-based selection mechanism, requires theory, and Pathak deploys it wonderfully. Peng Shi and Pathak also back-test their theory-based discrete-choice predictions of the impact of a change in Boston’s mechanism meant to reduce travel times, showing that to the extent the model missed, it was because the student characteristics were unexpected, not because there were no underlying structural preferences. If we are going to deploy serious theoretical methods to applied questions, rather than just to thought experiments, this type of rigorous combination of theory, design-based empirics, and back-testing is essential.

In addition to his education research, Pathak has also contributed, alongside Fuhito Kojima, to the literature on large matching markets. The basic idea is the following. In two-sided matching, where both sides have preferences as in a marriage market, there are no stable matching mechanisms where both sides want to report truthfully. For example, if you use the mechanism currently in place in Boston where students rank schools, schools themselves have the incentive to manipulate the outcome by changing how many slots they offer. Pathak and Kojima show that when the market is large, it is (in a particular sense) an equilibrium for both sides to act truthfully; roughly, even if I screw one student I don’t want out of a slot, in a thick market it is unlikely I wind up with a more-preferred student to replace them. There has more recently been a growing literature on what really matters in matching markets: the stability of the matching mechanism, the thickness of the market, the timing, and so on.

This award strikes me as the last remaining award, at least in the near term, from the matching/market design boom of the past 20 years. As Becker took economics out of pure market transactions and into a wider world of rational choice under constraints, the work of Al Roth and his descendants, including Parag Pathak, has greatly expanded our ability to take advantage of choice and local knowledge in situations like education and health where, for many reasons, we do not use the price mechanism. That said, there remains quite a bit to do on understanding how to get the benefits of decentralization without price – I am deeply interested in this question when it comes to innovation policy – and I don’t doubt that two decades from now, continued inquiry along these lines will have fruitfully exploited the methods and careful technique that Parag Pathak embodies.

One final note, and this in no way takes away from how deserving Pathak and other recent winners have been. Yet: I would be remiss if I didn’t point out, again, how unusually “micro” the Clark medal has been of late. There literally has not been a pure macroeconomist or econometrician – two of the three “core” fields of economics – since 1999, and only Donaldson and Acemoglu are even arguably close. Though the prize has gone to three straight winners with a theoretical bent, at least in part, the prize is still not reflecting our field as a whole. Nothing for Emi Nakamura, or Victor Chernozhukov, or Emmanuel Farhi, or Ivan Werning, or Amir Sufi, or Chad Syverson, or Marc Melitz? These folks are incredibly influential on our field as a whole, and the Clark medal is failing to reflect the totality of what economists actually do.

“The Efficiency of Dynamic Post-Auction Bargaining: Evidence from Wholesale Used-Auto Auctions,” B. Larsen (2013)

Another job market season is in the books, as Brad Larsen was the last job talk I made it to this year. Here’s something you may not have known: there is an industry which sells lots of used cars at auctions. Each year, they sell eighty billion dollars’ worth of cars at these auctions! I’ve written auction theory papers before, and yet have never even heard of this industry. The complexity of the economy never fails to impress, but this is good; it means there is always something interesting for us economists to study.

Now when you hear “inefficient bargaining” and “autos”, you probably immediately think of Akerlof’s lemons model. But selection really isn’t a big issue in these used-auto auctions. Car rental agencies that participate, for instance, sell off an entire season of cars all at once. The real inefficiency is of the Myerson-Satterthwaite type. In that classic result, if your private value for a good and my private value for a good are drawn from overlapping distributions – and “draw” and “distribution” may be over the set of all possible buyers or sellers, as in Harsanyi purification – then no bargaining mechanism can guarantee that we transact whenever there are gains from trade. (You must know Myerson-Satterthwaite! Among other things, it absolutely destroys the insight behind the Coase theorem…) Myerson and Satterthwaite, as well as a handful of others, later fully described the second-best mechanism when first-best trade is impossible. This second-best mechanism turns out to be quite complicated. Almost all of the auto auction houses use a seemingly strange mechanism instead. First, sellers set a secret reserve which is told to the auctioneer. Then the car is sold in an English auction. If the auction ends below the reserve price, then sellers and potential buyers can opt to bargain individually over the phone, perhaps asynchronously. At the end of these bargains, there are still many cars that go unsold.

This brings up the obvious question: how close to the second-best optimum is this strange selling mechanism? Larsen got access to some amazing data, which included outcomes of the auction, all of the sequential offers made over the phone, seller secret reserve prices, and tons of data at the level of the car and the buyer. The problem, of course, is that we don’t have access to the seller and buyer types (i.e., their private valuations of the car). Indeed, we don’t even have knowledge of the distribution of their types. If you are a regular reader of this blog, you know what is coming next – we can perhaps back out those distributions, but only if theory is applied in a clever way. (And, wow, how are job market speakers not permanently nervous? It’s one thing to stretch the Myerson-Satterthwaite framework to its limits, but it’s another thing to do so when Mark is sitting in the front row of the seminar room, as was the case here at Kellogg. Luckily, he’s a nice guy!)

The high bid in the auction helps us get buyer valuations. This high bid is, if bidders bid truthfully, equivalent to the second-order statistic of bidder values. The theory of order statistics gives us a precise relationship between an underlying distribution and the distribution of its second-order statistic from N draws. Larsen knows exactly how many people were at each auction house, and can use this to roughly estimate how many bidders could potentially have bid on each car. Homogenizing each car using observable characteristics, therefore, the order statistic method can be used to recover the underlying distribution of buyer values for each car.
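
To see the trick in miniature, here is a toy version of the order-statistic inversion (my sketch, not Larsen’s estimator). With N truthful bidders drawing values iid from F, the second-highest of N draws has cdf G(x) = N*F(x)^(N-1)*(1-F(x)) + F(x)^N, which we can invert pointwise:

```python
import numpy as np
from scipy import optimize, stats

N = 5
rng = np.random.default_rng(0)
values = rng.normal(10.0, 2.0, size=(100_000, N))  # pretend homogenized cars
high_bid = np.sort(values, axis=1)[:, -2]          # second-highest value

def recovered_F(x: float) -> float:
    G_hat = (high_bid <= x).mean()                 # empirical cdf of high bid
    g = lambda F: N * F**(N - 1) * (1 - F) + F**N - G_hat
    return optimize.brentq(g, 0.0, 1.0)            # invert G at this point

for x in (8.0, 10.0, 12.0):
    print(x, round(recovered_F(x), 3), round(stats.norm(10, 2).cdf(x), 3))
```

The recovered values track the normal cdf used to simulate the data, which is the sense in which the high bid alone, plus a bidder count, identifies the value distribution.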

On the seller side, it is the secret reserve price that really helps us estimate the bargaining game. If the post-auction bargaining never results in a price lower than the last bid, then bidders have a dominant strategy to bid truthfully. Costly bargaining in addition to truthful bidding by buyers means that the optimal seller reserve price must be strictly increasing in the seller’s type. And what does strict monotonicity give us? Invertibility. We now know there is a link between the seller reserve price and the seller type. The structure of post-auction bargaining helps us to bound the relationship between seller type and the reserve. Finally, multiple rounds of bargaining let us bound the disutility costs of bargaining, which turn out to be quite small.

So how does the auto auction bargaining mechanism do? In this market, given the estimated type distributions, the Myerson-Satterthwaite second-best gives you 98% of the surplus of a first-best (informationally impossible) mechanism. Real-world dynamic bargaining captures 88-96% of that second-best surplus, with the most inefficiency coming from small sellers where, perhaps, screening problems result. That’s still leaving money on the table, though. Larsen, among others, suggests that these types of strange dynamic mechanisms may persist because in two-sided markets they are roughly equivalent to committing to give buyers a large chunk of the gains from trade, and we know that attracting additional potential buyers is the most important part of successful auctions.

January 2013 working paper (No IDEAS version).

“On the Equivalence of Bayesian and Dominant Strategy Implementation,” A. Gershkov et al (2012)

Let’s take a quick break from the job market. Bayesian incentive compatibility is not totally satisfying when trying to implement some mechanism, as each agent must care about the rationality and actions of other agents. Dominant strategy incentive compatibility (DSIC) is much cleaner: no matter what other agents do, the action that makes me best off is just to state my preferences truthfully. For example, the celebrated Vickrey-Clarke-Groves mechanism is dominant strategy incentive compatible. Even if other agents lie or make mistakes, the transfers I pay (or receive) are set so that I have just enough incentive to report my true type.
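
A quick simulation shows what this dominance means in practice (my own sketch). In a second-price auction, truthful bidding is weakly best however the rival bids were generated; no beliefs about rival rationality are needed:

```python
import numpy as np

rng = np.random.default_rng(0)
my_value = 0.7
rival_high_bid = rng.uniform(0, 1, 100_000)  # rivals bidding arbitrarily

for my_bid in (0.5, 0.7, 0.9):               # shade, tell the truth, overbid
    win = my_bid > rival_high_bid
    payoff = np.where(win, my_value - rival_high_bid, 0.0).mean()
    print(my_bid, round(payoff, 4))          # truth-telling maximizes payoff
```

Shading to 0.5 forfeits profitable wins; overbidding to 0.9 picks up losing wins where the price exceeds my value; bidding my true 0.7 is optimal whatever the distribution of rival bids.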

Many clever mechanisms are not DSIC, however. Consider the Cremer-McLean auction with correlated types. There are a bunch of bidders with correlated private values for an oil field. If the values were independent, in the optimal mechanism, the winning bidder gets an information rent. Consider a first price auction, where all of our values are distributed uniformly on [0,1]. I draw .7 for my value. I won’t actually bid .7, because bidding slightly lower increases my profit when I win, but only slightly decreases my probability of winning. If the auctioneer wants to use a scheme that reveals my true value, he better give me some incentive to reveal; the second-price auction does just that, by making me pay only the second-highest bid if I win.

Cremer-McLean does the following. If I want to enter the auction, I need to make a side bet that gives me zero expected payoff if both I and everyone else report our true types, and a very negative payoff if somebody lies. In particular, charge me an entry fee that depends only on what I think other people’s bids will be. Conditional on everyone else telling the truth, I am perfectly happy as a Bayesian to report truthfully. My willingness to accept the bet reveals, because of correlated private values, something about my own private value. But now the auctioneer knows our true values, and hence can extract the full surplus in the auction without paying any information rents. Many, many people – Milgrom and Wilson, famously – find this mechanism rather unsatisfying in the real world. Certainly, it’s tough to think of any social choice mechanism that uses the Cremer-McLean strategy, or even something similar. This may be because it relies heavily on knife-edge Bayesian reasoning and common knowledge of rationality among the players, much stronger conditions than we need for dominant strategy incentive compatibility.
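
The construction can be made concrete in a two-bidder, two-type example (my toy numbers, in the spirit of Cremer-McLean rather than their general theorem). Values are 1 or 2, types match with probability 0.8, and the seller runs a second-price auction plus a mandatory side bet on the opponent’s report:

```python
import numpy as np

beliefs = np.array([[0.8, 0.2],   # P(opponent is type 1, 2 | I am type 1)
                    [0.2, 0.8]])  # P(opponent is type 1, 2 | I am type 2)

# Information rents in the second-price auction (ties split): type 1 earns
# nothing; type 2 earns surplus 1 when facing type 1, which has prob 0.2.
rents = np.array([0.0, 0.2])

# Entry fee c(opponent's report) whose conditional expectation equals each
# type's rent -- solvable exactly because the belief vectors are linearly
# independent, that is, because types are correlated.
fee = np.linalg.solve(beliefs, rents)
print(fee)            # ~[-0.067, 0.267]: paid/received on opponent's report
print(beliefs @ fee)  # [0.0, 0.2]: each type's expected fee eats its rent
```

Since the fee depends only on the opponent’s report, truthful bidding in the auction stage stays dominant, yet every type’s expected rent is exactly extracted. The knife-edge Bayesian reasoning complained about above is doing all the work.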

Manelli and Vincent have a 2010 Econometrica with a phenomenal result: in many simple cases, Bayesian IC gives me nothing that I can’t get from a DSIC mechanism. This makes sense in some ways: consider the equivalence of the Bayesian mechanism of a sealed-bid first-price auction and the dominant strategy mechanism of a second-price auction. What Gershkov et al do is extend that equivalence to a much broader class of social choice implementation. In particular, take any social choice function with one-dimensional independent types and quasi-linear utility for each agent. If there is a Bayesian IC mechanism to implement some social choice, then I can write down allocations and transfers which are DSIC and give the exact same interim (meaning after individuals learn their private types) expected utility for every agent. That is, in any auction with independent private values and linear utility, there is nothing I can do with Bayesian mechanisms that I can’t do with much more plausible mechanisms.

How does this work? Recall that the biggest difference between Bayesian IC and DSIC is that a mechanism is Bayesian IC (on a connected type space) if the interim expected allocation is non-decreasing in my own type, and DSIC if the allocation is non-decreasing in my own type no matter what the types of the other agents. Gershkov et al give the following example. I want to give an object to the agent with the highest value, as long as her value is not more than .5 higher than the other agent’s. Both agents’ values are independently drawn from U[0,1]. If the difference between the two is more than .5, I want to allocate the good to no one. Giving an agent the good with probability 1 if his type is higher than the other agent’s report but lower than the other agent’s report plus .5 is Bayesian incentive compatible (the interim allocation probability is nondecreasing in my type, so there must exist transfer payments that implement it), but not DSIC: if the other agent reports his type minus .1, then I want to shade by an equal amount. However, consider giving an agent the good with probability equal to the minimum of his report and .5. If my type is .7, then I get the object with probability .5. This is exactly the interim probability with which I would get the object in the Bayesian mechanism. Further, this allocation probability is increasing in my own type no matter what the other agent’s type, so there must exist transfer payments that implement it in dominant strategies. The general proof relies on extending a mathematical result from the early 1990s: if a bounded, non-negative function of several variables generates monotone, one-dimensional marginals, then there must exist a non-negative function with the same bound, and the same marginals, that is monotone in each coordinate. The first function looks a lot like the condition on allocation rules for Bayesian IC, and the second the condition on allocation rules for DSIC…
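
A Monte Carlo check of the example (my sketch) confirms the interim equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
v2 = rng.uniform(0, 1, 1_000_000)  # the other agent's type

for v1 in (0.2, 0.5, 0.7, 0.9):
    # BIC rule: I get the good iff v2 < v1 < v2 + 0.5.
    interim_bic = ((v2 < v1) & (v1 < v2 + 0.5)).mean()
    # DSIC rule: I get the good with probability min(v1, 0.5), whatever v2.
    interim_dsic = min(v1, 0.5)
    print(v1, round(interim_bic, 3), interim_dsic)
```

The two allocation rules give every type the same interim probability of winning the good, so once transfers are chosen appropriately, they deliver the same interim expected utilities.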

Final Econometrica preprint (IDEAS version)

“Learning About the Future and Dynamic Efficiency,” A. Gershkov & B. Moldovanu (2009)

How am I to set a price when buyers arrive over time and I have a good that will expire, such as a baseball ticket or an airplane seat? “Yield management” pricing is widespread in industries like these, but the standard methods tend to assume nonstrategic buyers. A buyer who is not myopic, however, can sometimes profit from strategizing. Consider a home sale. Buyers arrive slowly, and the seller doesn’t know the distribution of potential buyer values. It’s possible that if I report a high value when I arrive first, the seller will Bayesian update about the future and will not sell me the house, since they believe that other buyers also value the house highly. If I report a low value, however, I may get the house.

Consider the following numerical example from Gershkov and Moldovanu. There are two agents, one arriving now and one arriving tomorrow. The seller doesn’t know whether the agent values are IID in [0,1] or IID in [1,2], but puts 50 percent weight on each possibility. With complete information, the dynamically efficient thing to do would be to sell to the first agent if she reports a value in [.5,1]U[1.5,2]. With incomplete information, however, there is no transfer that can simultaneously get the first agent to tell the truth when her value is in [.5,1] and tell the truth when her value is in [1,1.5]. By the revelation principle, then, there can be no dynamically efficient pricing mechanism.
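
The complete-information benchmark is easy to verify mechanically (my arithmetic, following the example): after the first report, the seller knows which distribution is live and should sell today exactly when today’s value beats tomorrow’s expected value:

```python
def efficient_sale_to_first_agent(v1: float) -> bool:
    if v1 <= 1.0:            # the report reveals values are IID U[0,1]
        return v1 >= 0.5     # tomorrow's agent is worth 0.5 on average
    return v1 >= 1.5         # values are IID U[1,2]; tomorrow is worth 1.5

print([(v, efficient_sale_to_first_agent(v)) for v in (0.3, 0.7, 1.2, 1.7)])
# Sell exactly on [.5,1] U [1.5,2] -- the cutoff rule that no transfer
# scheme can implement truthfully under incomplete information.
```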

Consider a more general problem, with N goods with qualities q1, q2, …, qN, and one buyer arriving each period. The buyer has a value x(i) drawn from a distribution F, and he gets utility x(i)*q(j) if he receives good j. Incomplete information by itself turns out not to be a major problem, as long as the seller knows the distribution: just find the optimal history-dependent cutoffs using a well-known result from Operations Research, then choose VCG-style payments to ensure each agent reports truthfully. If the distribution from which buyer values are drawn is unknown, as in the example above, then the seller learns about what the optimal cutoffs should be from the buyers’ reports. Unsurprisingly, we will need something like the following: since cutoffs depend on my report, implementation requires that the cutoffs move by less than one-for-one in my reported type (a derivative less than one). If the derivative is less than one, then the multiplicative nature of buyer utilities means that there will be no incentive to lie about your valuation in order to alter the seller’s beliefs about the buyer value distribution.

http://www.econ2.uni-bonn.de/moldovanu/pdf/learning-about-the-future-and-dynamic-efficiency.pdf (IDEAS version).  Final version published in the September 2009 AER. I previously wrote about a followup by the same authors for the case where the seller does not observe the arrival time of potential buyers, in addition to not knowing the buyer’s values.  

“Clocks and Trees: Isomorphic Dutch Auctions and Centipede Games,” J. C. Cox & D. James (2012)

There has been a substantial amount of effort by experimental economists showing where lab subjects take actions other than Nash equilibrium play in common games. Folks appear to give away money in ultimatum games, bid too high in Dutch auctions, wait too long to end the centipede game, and so on. The most common response to these experimental facts is to propose alternative utility functions (such as forms of social preferences) or alternative equilibrium definitions that incorporate particular types of errors (like quantal response equilibrium).

Cox and James dig back into Selten’s famous Chain Store Paradox paper for an alternative explanation. (An aside: I imagined this had to be the most cited paper ever to appear in Theory and Decision; it is.) Selten points out that the very first thing an agent in a game must do is understand what her options are, what information can be transmitted, and what the consequences of any tuple of actions will be. Such information must generally be deduced by an agent from the description of the game; it is not obvious. When we teach game theoretic reasoning, we generally make such complicated deduction easier through the use of an extensive-form game tree. That is, how well students will do on a game theory exam is absolutely affected by how the information in an identical game is presented.

Cox and James discuss this point in the context of Dutch auctions and centipede games. In a Dutch auction, agents have a private valuation for a good, the auction starts at some amount higher than that valuation, and the price then drops until some agent “bids”, winning the good at that price and getting payoff equal to her valuation minus the bid price. Alternatively, we could write a (discrete-interval) Dutch auction out as a game tree. If you have valuation 6 and you bid when the auction is at “5”, you win the good and earn a dollar. Every possible bid by each player can be written out in the tree, along with the consequences. It turns out that presenting information in such a tree led to actions which are roughly the same as predicted by Nash equilibrium, whereas the traditional declining-clock Dutch auction without a tree led to the overbidding seen in other experiments.

The centipede game is a bit different. Traditional experiments present the game as an extensive-form game tree. But there is a nice way to run a centipede experiment using a clock. Start at state 0, giving player 1 the option to stop the game and win his valuation. If the player does not stop the game within 10 seconds, shift to state 1, where player 2 has the option to stop the game and win her valuation plus 2, giving player 1 nothing. If she does not stop the game after 10 seconds, go to state 2, and so on. The clock version of the centipede game led almost all subjects to stop at state 0, which is the unique Nash equilibrium outcome.

There are two things you might take out of these results. One is that play in games involves both the players’ strategic reasoning and their comprehension of how the game operates. Both are important. It makes me skeptical of some experimental results showing non-equilibrium outcomes. Second, you might imagine that we could incorporate learning about the options available to players as they play repeatedly into an equilibrium definition. The draft of this paper I read did not refer to the 1990s learning-in-games theoretical literature, which absolutely has discussed this point, but perhaps the final Econometrica version does.

Working paper (IDEAS version). Final paper published in March 2012 Econometrica.

“The Herodotus Paradox,” M. Baye, D. Kovenock and C. de Vries (2012)

Famously, the first auction we know of in the historical record is the “Babylonian Wives” auction reported by Herodotus (I know – you likely prefer Didius Julianus buying the entire Roman Empire in an auction!). Every year, Babylonian villages would bring out all the girls of marriageable age, ordering them from the prettiest to the ugliest. Suitors would first bid for all the beautiful women. With the money thus collected, then, men would be paid to marry the uglier women. How romantic.

But beyond being a great history of econ story, the Babylonian auction turns out to have some really interesting game theoretic properties. Consider an auction with two women and two bidders. Each bidder gets a known amount of utility from the beautiful and the ugly maiden: for now, assume each suitor gets H from the beauty and L from the other woman. Whatever bid wins the beautiful maiden is paid to the winner of the ugly maiden. If bids are identical, then each suitor is given the beautiful woman or the ugly woman with equal probability, and no money changes hands. There is a simple equilibrium in pure strategies: each agent bids (H-L)/2. But what of mixed strategy equilibria?

The normal way we find symmetric mixed strategy atomless equilibria is the following. First, identify a strategy distribution F such that payoffs are constant on the support of F, given that the other player also bids F. Second, verify that F is a well-defined, continuous cdf. Third, show that neither player can deviate by bidding off the support of F and increase their payoff, given the opponent bids F. And such a method will, in fact, find the set of atomless mixed strategies without profitable deviations. But this does not guarantee that we have found equilibria when payoffs are potentially unbounded. Why? Fubini’s Theorem.

You may remember Fubini’s theorem from high school calculus: it tells you when you are allowed to change the order of integration in a double integral. Recall that expected utility calculations in a two-player game involve solving a double integral: integrate over the product space of my strategy’s probability distribution and your strategy’s probability distribution. One way to know whether the expectation of a random variable X, such as utility given your and my strategies, is well-defined is to check whether the expectations of max(X,0) and -min(X,0) are both infinite. If they both are, then the expectation does not exist.
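
The textbook counterexample (standard calculus fare, not the auction itself) shows how badly things can go when that condition fails. The function f(x,y) = (x^2-y^2)/(x^2+y^2)^2 on (0,1]^2 has iterated integrals of +pi/4 and -pi/4, because its positive and negative parts each blow up near the origin:

```python
from math import pi
from scipy import integrate

def f(x, y):
    return (x**2 - y**2) / (x**2 + y**2) ** 2

# The integrand is numerically delicate near the origin, so we tell quad
# where the spike sits and trim a vanishingly small corner of the square.
def y_then_x():
    inner = lambda x: integrate.quad(lambda y: f(x, y), 0, 1, points=[x])[0]
    return integrate.quad(inner, 1e-6, 1)[0]

def x_then_y():
    inner = lambda y: integrate.quad(lambda x: f(x, y), 0, 1, points=[y])[0]
    return integrate.quad(inner, 1e-6, 1)[0]

print(y_then_x(), pi / 4)    # ~ +0.785
print(x_then_y(), -pi / 4)   # ~ -0.785
```

Whenever a putative mixed strategy makes expected utility depend on the order of integration like this, ex-ante payoffs are simply not well-defined, which is the fourth step the paper adds to the standard recipe.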

If you solve the Babylonian Wife auction with complete information using the three-step process above, you get that each player earns expected utility equal to L+m, where m can be arbitrarily large. That is, it appears each player can earn infinite payoff. But we can easily see the game is one with a constant sum, since one player earns H from the beautiful wife, one earns L from the ugly wife, and the transfers exactly cancel. The method correctly finds a mixed strategy with no profitable deviation. But the mixed strategy found is one such that calculating ex-ante expected utility from playing it involves solving a double integral whose solution is not well defined. That is, the three-step method needs a fourth step: check whether the joint distribution of putative mixed strategies gives well-defined expected utilities for each player. A similar problem exists in the Babylonian Wife auction with incomplete information, the details of which can be found in the paper. Such a problem is likely rare: the war of attrition, for instance, also has potentially unbounded payoffs, but does not have problematic computation of expected utility.

http://www.ifo.de/portal/pls/portal/docs/1/1185352.PDF (2010 working paper. The 2012 final version in Games and Economic Behavior is substantially shorter.)

“Asynchronous Revision Games with Deadline,” Y. Kamada & T. Sugaya (2011)

Sugaya and Kamada are both Japanese job market stars in theory this year – Sugaya infamously has a job market paper nearly 300 pages long. The present, much shorter paper strikes me as a particularly interesting contribution. Consider a game where we choose actions continuously but only the state of our action choices at time T matters for payoffs. At various times t before T – perhaps given by some Poisson process – we have an opportunity to switch our action choice. How will equilibria differ from the stage game with the same payoff matrix? Such games are called “revision games” and have previously been described in a paper by Kamada with a different coauthor. They actually have quite a few uses: I’ve changed a paper of mine in auction theory to fit the revision game framework, and you might also consider timing games where the timing is endogenous. For example, it may not be obvious ex-post who the Stackelberg leader in a duopoly game is. Or we might want to allow communication with limited commitment using a revision game, where the state before T is a reflection of which suppliers you’ve been in contact with, for example.

The problem, as I found out a few years ago trying to work out my auction, is that such games are very difficult to solve. Kamada and Sugaya consider the case where opportunities to revise strategies arrive for each player i according to a Poisson process with parameter L(i), and let T be arbitrarily large so that each player has arbitrarily many opportunities to revise. They prove that if the game has a unique, strictly Pareto dominant action profile, and payoffs across players are “almost” the same, then the Pareto dominant Nash equilibrium is the unique equilibrium. Further, in 2-player 2×2 games with two strict pure-strategy Nash equilibria (say, Battle of the Sexes), a tiny bit of asymmetry in player payoffs can lead to a unique equilibrium favoring the strongest player. Both of these results may be counterintuitive. The first may be counterintuitive because many refinements of coordination games select the risk-dominant, not the Pareto dominant, equilibrium. Kamada and Sugaya’s model gets around this result because the “preparation” before time T means the only risk of miscoordination is when the other player doesn’t get a chance to revise before T, which happens with probability zero as T goes to infinity. The second result is surprising simply because tiny changes in payoffs radically change the equilibrium set. Let’s look at this result further.

Consider a battle of the sexes game, with payoffs (L,l)=(2+e,1), (R,r)=(1,2), and (0,0) otherwise. In the usual parlance, the husband and wife want to meet somewhere and get no utility if they are apart; given that they are together, the husband prefers the ballgame and the wife the opera. Note that the husband gets e more utility from baseball than the wife gets from Puccini. At time 0, both players choose an action. Let both get a chance to revise their action over time according to Poisson processes with identical parameters, and let T be arbitrarily large. We will show that if e>0, the unique subgame perfect equilibrium is (L,l). I do not prove this here, but if e=0, both (L,l) and (R,r), as well as many mixtures, are equilibria.

The intuition, perhaps, is that the husband has slightly more incentive to wait for the wife to yield and see the ballgame than the wife does to wait for the husband to yield and go to the opera. Note that as we get close to T, both husband and wife will revise to match the other if they have not done so yet; this is simply because the probability that no one gets to revise again is converging to 1, and with no revision, both get payoff 0, whereas by yielding and matching your partner’s preparation, the minimum payoff is 1. Let’s work backward in time, assuming that both players yield to their spouse’s current state if they get a chance to revise after the current time, and find the cutoffs representing the earliest either spouse will yield to the other given an opportunity to revise. The payoff from yielding is 1. At the wife’s cutoff, this must equal her payoff from not yielding, conditional on both players yielding any time they get a chance to revise in the future:

1 = Pr(someone revises before T | current time) × (2+1)/2

The probability is just the chance that anyone will get to revise again, and the (2+1)/2 term reflects that the wife gets 2 half the time (the husband gets the first future chance to revise and yields) and gets 1 half the time (she gets the first future chance and yields). The same equality for the husband has the 2 replaced by 2+e. Plugging Poisson rates into the probability above and solving gives you that the wife has an earlier cutoff to begin yielding than the husband.
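As a quick sanity check, both indifference conditions can be solved in closed form. Here is a minimal sketch in Python, assuming both players’ revision opportunities arrive at a common Poisson rate (the rate lam and the value of e below are illustrative choices of mine, not from the paper):

```python
import math

# Illustrative parameters, not from the paper.
lam, e = 1.0, 0.1

# The merged revision process has rate 2*lam, so the probability that anyone
# gets a revision chance in (t, T) is p(t) = 1 - exp(-2*lam*(T - t)).
# Wife:    1 = p(t) * (2 + 1)/2      =>  T - t_wife    = ln(3) / (2*lam)
# Husband: 1 = p(t) * (2 + e + 1)/2  =>  T - t_husband = ln((3+e)/(1+e)) / (2*lam)
gap_wife = math.log(3) / (2 * lam)
gap_husband = math.log((3 + e) / (1 + e)) / (2 * lam)

print(gap_wife, gap_husband)   # roughly 0.549 vs 0.518 with these parameters
assert gap_wife > gap_husband  # wife's cutoff sits further from T: she yields earlier
```

At e=0 the two gaps coincide; any e>0 pushes the husband’s cutoff strictly closer to the deadline, which is exactly the asymmetry the unraveling argument below exploits.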

But what should the wife’s cutoff t* be? Assume we are at some time t<t*. At t*, the continuation payoff is 1, and we have already shown that the husband will not revise, meaning switch from holding out for baseball to yielding to his wife’s preference for opera, before t*. After t*, the wife’s continuation value is strictly less than one, since the probability that time runs out before either player gets to revise again is increasing. The wife, at any time she is able to revise, can lock in a payoff of 1 by yielding. If she does not yield, her continuation value is strictly less than one: at any chance to revise, she can never lock in more than one, and there is a positive probability neither player will get a chance to revise before T, giving payoff zero. Therefore, the wife will yield at the start of the game. Another way to see this is to note that the continuation value of the game conditional on not yielding is decreasing over time.

The full proof is a bit trickier since we also have to show that given the current states are (L,l) or (R,r), neither player will want to revise again. This is technically difficult to prove, but involves nothing more complicated than pages of algebra. For the theory geeks out there, proving similar results with continuous action spaces is considerably more difficult, because of the need to get around using tiny changes in strategy as a “counter” for non-Markov punishment. What I do in my work here is to restrict attention to equilibria of the game in constant strategies, meaning that on the equilibrium path, no one ever revises their strategy. There are a few reasons why this helps that I’ll get to when I write up the auctions result.

http://www.princeton.edu/~tsugaya/selection_revision_game (July 2010 working paper – currently under revision at TE. A very much related paper by Calcagno and Lovo deals with the same issue of preopening revision of strategies.)

“Promoting School Competition Through School Choice: A Market Design Approach,” J.W. Hatfield, F. Kojima & Y. Narita (2011)

Matching theory has been quite prominent in economics in the past decade: how ought we match organ donors, or match schools to students, or match medical residents to assignments, or husbands to wives? The basic principle is that financial transfers – “side payments” – are often illegal or contrary to well-established custom, so our standard mechanism design tricks won’t work. In school choice, a network of authors centered around Al Roth has developed mechanisms for matching students to schools in New York, Boston and San Francisco, among other cities, in a way that attempts (imperfectly) to deal with schools and students who will try to game the system. Hatfield and coauthors, including the ridiculously productive young scholar Fuhito Kojima, point out in this new working paper that something has been missing in this literature. The motivation for allowing school choice in the first place was so that schools would have an incentive to improve themselves. Previous work assumed school quality was fixed. Something is incongruous.

Let there be schools, each with M slots available to students and a ranking over which students it prefers. Let students also rank which schools they like best. Say a mechanism respects improvements if, when a school becomes “more preferred” by students, that school receives students it prefers more. The authors prove there is no stable, strategy-proof matching mechanism which respects improvements: no matter how we propose to use information, schools sometimes have an incentive to misreport which students they actually like. Restricting the domain of preferences does not help, as the same result applies unless all schools have (almost) the same ranking over which students they prefer. Bronx Science and a School of the Arts surely rank along different criteria.

Why is this? Consider the intuition from the famous Deferred Acceptance algorithm. This algorithm is stable, and the intuition of the example will generalize to any stable matching. Let there be two students (A and B) and two schools (1 and 2). School 1 has 2 spots and School 2 has only 1 spot. Both students prefer school 2. School 1 prefers student A to student B, and School 2 prefers student B to student A. In the student-proposing Deferred Acceptance algorithm, both students apply to school 2 in the first round. School 2 conditionally accepts its preferred student, student B. Since school 2 has only one spot, it has filled its capacity. Student A then proposes to school 1 in the second round, and is accepted. Precisely the same outcome occurs (you can easily check this yourself) if schools propose. A famous theorem says that the student-proposing and school-proposing DA algorithms give the worst-case and best-case stable matchings from the students’ perspective, so since these coincide here, the preferences permit a unique stable matching.

But what if school 1 improves itself, so that student B now prefers school 1 to school 2? Consider student-proposing DA again. In round 1, A will apply to 2 and B to 1. Both are conditionally accepted, and the algorithm terminates. So now school 1 winds up with student B rather than student A, a worse outcome in school 1’s eyes! It has been punished for improving itself in a very real sense. (You can check, very simply, that the stable matching under these preferences is again unique, for exactly the same reason as in the previous paragraph.)
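Both runs are easy to verify mechanically. Here is a minimal sketch of student-proposing deferred acceptance in Python, reproducing the example; the function name and data structures are my own illustration, not from the paper:

```python
def student_proposing_da(student_prefs, school_prefs, capacity):
    """student_prefs: student -> list of schools, best first.
    school_prefs: school -> list of students, best first.
    capacity: school -> number of seats."""
    unmatched = list(student_prefs)              # students still proposing
    next_choice = {s: 0 for s in student_prefs}  # next school on each list
    held = {c: [] for c in school_prefs}         # tentative acceptances
    while unmatched:
        s = unmatched.pop(0)
        school = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[school].append(s)
        # keep the capacity-many most-preferred applicants, reject the rest
        held[school].sort(key=school_prefs[school].index)
        unmatched += held[school][capacity[school]:]
        held[school] = held[school][:capacity[school]]
    return held

capacity = {1: 2, 2: 1}
school_prefs = {1: ['A', 'B'], 2: ['B', 'A']}

# Original preferences: both students prefer school 2.
print(student_proposing_da({'A': [2, 1], 'B': [2, 1]}, school_prefs, capacity))
# -> {1: ['A'], 2: ['B']}: school 1 gets its favorite, student A.

# School 1 improves itself, so B now ranks it first.
print(student_proposing_da({'A': [2, 1], 'B': [1, 2]}, school_prefs, capacity))
# -> {1: ['B'], 2: ['A']}: school 1 is now matched with B, whom it likes less.
```

The second run reproduces the punishment for improvement: raising its standing in B’s eyes costs school 1 its preferred student.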

The authors prove, in a similar manner, that there is no mechanism which is Pareto optimal for students and which respects school quality improvement. Lest we be too distraught by these results, it is at least the case that under stable mechanisms (like student-proposing Deferred Acceptance) in “large” markets, under some conditions, schools almost never need worry that improvements will make them worse off. The intuition is that, in the example above, I wanted a student to rank me low because this created competition at another school he now favors. That other school may then reject a different student whom I particularly like, who will then “settle” for my school. These “chains of rejections” depend on some schools having reached their capacity, because otherwise they will just conditionally accept everyone who applies. The conditions on large markets essentially just say that the rejection chain I want to induce is unlikely to reach me, because the student I am trying to attract has, with high probability, options she prefers which have not yet filled their quotas.

https://360269…Fuhito-Kojima/SchoolCompetition20111111.pdf (November 15, 2011 Working Paper; at a delay of less than 2 days, I am sure this is the “freshest” research ever discussed on this site!)

“Demand Uncertainty and Dynamic Allocation with Strategic Agents,” A. Gershkov & B. Moldovanu (2010)

Consider a social planner, who discounts the future, trying to give away an indivisible object efficiently to agents who arrive over time; in the context of firms, this is often called “dynamic yield management.” Often, papers assume that agents are nonstrategic. But in many cases – consider possible home-buyers – agents will pretend not to be interested in a house if they think the seller will lower the price after seeing little interest among buyers. Strategic action is important. If we knew everyone’s arrival time and valuation, this would be just a relatively straightforward optimal stopping problem. But what if the distribution of arrival times is unknown? In that case, private information about arrival time matters because the first few arrivals give the seller information about the distribution of arrival rates. Can we reach the dynamic first best even if everything is private information?

The answer, it turns out, is yes. This might surprise you if you know Gershkov and Moldovanu’s AER from a couple years back. In that paper, the uncertainty was about the distribution of private values, not arrivals. Generally, there does not exist an efficient mechanism under that kind of uncertainty. The case with uncertainty about the arrival rate is different because agents can only pretend to arrive later, whereas with private values they could pretend to have either lower or higher values. This one-sidedness of possible deviations is important.

Here’s an example. Let bidder values be distributed U[0,1], and arrival times follow a renewal process with time between arrivals distributed either U[1,2] or U[2,3]. Papers from the 1980s show that, if I know the actual arrival times and private values, the optimal dynamic mechanism just sells immediately whenever an agent arrives with a private value above a cutoff, and sells at a price equal to that cutoff. The cutoff x[1,2] when interarrival times are distributed U[1,2] is higher than the cutoff x[2,3] when they are distributed U[2,3].

This introduces an incentive problem. Say the true arrivals are U[1,2], you are the first arrival, and you have a private value higher than x[2,3] but lower than x[1,2]. Since there is at least one unit of time until the next guy arrives, you can “delay” your arrival by up to one unit of time and be sure no one else will arrive. Truthfully revealing your arrival time and value means you are not allocated the object and therefore make zero profit, since your private value is below x[1,2]. If you instead deviate by lying and saying you have an arrival time after t=2, then you are allocated the object (since x[2,3] is now the cutoff, and that is below your private value), and you make positive profit.

Can this be fixed? Sure. Just pay anyone who reports arrival in [1,2] a subsidy of x[1,2]-x[2,3]. It is simple to prove that no one will want to deviate from the truth given these subsidies, so optimal dynamic efficiency can be obtained. You might prefer that payments only be made to or by agents who are allocated an object, though: otherwise there is a big incentive to “just show up” in order to collect the subsidy. In this example, there is no efficient mechanism without subsidies: in order to keep the agent with value between x[2,3] and x[1,2] from lying about his arrival time, he must get “some” payoff from reporting truthfully. That can’t be a subsidy, so he must get the object with some probability. But then we no longer have dynamic efficiency.
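A tiny numeric check of this logic, with made-up cutoffs: the particular values of x12, x23, and v below are mine, chosen only so that x[1,2] > x[2,3] and the agent’s value sits between them:

```python
# Illustrative numbers only: the paper does not pin down these cutoffs.
x12, x23 = 0.6, 0.5     # cutoff under U[1,2] arrivals exceeds cutoff under U[2,3]
subsidy = x12 - x23     # paid to anyone reporting an arrival in [1,2]
v = 0.55                # an early arrival with value between the two cutoffs

payoff_truth = subsidy      # not allocated (v < x12), but collects the subsidy
payoff_deviate = v - x23    # report a late arrival, buy at the lower cutoff
assert payoff_truth >= payoff_deviate   # equivalent to v <= x12: truth weakly wins
print(round(payoff_truth, 2), round(payoff_deviate, 2))  # 0.1 versus 0.05
```

The comparison payoff_truth ≥ payoff_deviate reduces to v ≤ x[1,2], which holds for exactly the agents the truthful cutoff rule would exclude, so the subsidy kills the deviation for all of them at once.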

More generally, with subsidies you can always incentivize everyone to report truthfully (and hence can efficiently allocate the good). The intuition is the same as in the example: pay a subsidy large enough that everyone’s utility is decreasing in their revealed arrival time. In particular, the earlier you arrive, the higher the subsidy you get. This works as long as the subsidy is sufficiently large, because only one-sided deviations in arrival time are allowed: no one can “arrive” before their actual arrival time, so even a huge early-arrival payment can’t incentivize anyone to arrive “too early.” In some circumstances, the required subsidy is zero: this means that only the agent allocated the object will pay or get paid. It also means, roughly, that the efficient mechanism is the optimal dynamic mechanism.

When can we get away with no subsidies? This is quite difficult to characterize. Gershkov and Moldovanu give some classes of distributions which require no subsidies, but the negative result is more intuitive: if arriving later makes the seller more pessimistic about future arrivals, then subsidies are required. The reason is as in the example; that property gives bidders an incentive to wait to “arrive” until the seller lowers the private value cutoff at which the object is sold.

A few comments. First, generally, this paper is not about finding the optimal mechanism (in terms of seller revenue) but is about finding any sort of efficient mechanism. The examples in the introduction do not all have this flavor, unfortunately. Second, and this is a problem in many mechanism design papers, strictly speaking mechanisms can only be run by monopoly firms or social planners. You might think that outside of a monopoly, a firm can take residual demand given other mechanisms as the private values it will face, but this usually can’t be done because the choice of mechanism by the firm will affect the distribution of private values seen. In general, the problem of so-called “competing mechanisms” is very difficult.

http://www.econ2.uni-bonn.de/bmoldovanu/paper/zuckerresfin.pdf (Working paper)
