Category Archives: Auctions/Allocations

“The Efficiency of Dynamic Post-Auction Bargaining: Evidence from Wholesale Used-Auto Auctions,” B. Larsen (2013)

Another job market season is in the books, as Brad Larsen was the last job talk I made it to this year. Here’s something you may not have known: there is an industry which sells lots of used cars at auctions. Each year, they sell eighty billion dollars’ worth of cars at these auctions! I’ve written auction theory papers before, and yet had never even heard of this industry. The complexity of the economy never fails to impress, and this is good; it means there is always something interesting for us economists to study.

Now when you hear “inefficient bargaining” and “autos”, you probably immediately think of Akerlof’s lemons model. But selection really isn’t a big issue in these used-auto auctions. Car rental agencies that participate, for instance, sell off an entire season of cars all at once. The real inefficiency is of the Myerson-Satterthwaite type. In that classic result, if your private value for a good and my private value for it are drawn from overlapping distributions – where “draw” and “distribution” may be over the set of all possible buyers and sellers, as in Harsanyi purification – then no incentive-compatible, individually rational mechanism can guarantee that we trade whenever trade is efficient. (You must know Myerson-Satterthwaite! Among other things, it absolutely destroys the insight behind the Coase theorem…) Myerson and Satterthwaite, as well as a handful of other authors, later fully described the second-best mechanism when first-best trade is impossible. This second-best mechanism turns out to be quite complicated. Almost all of the auto auction houses use a seemingly strange mechanism instead. First, sellers set a secret reserve which is told to the auctioneer. Then the car is sold in an English auction. If the auction ends below the reserve price, then sellers and potential buyers can opt to bargain individually over the phone, perhaps asynchronously. At the end of these bargains, there are still many cars that go unsold.

This brings up the obvious question: how close to the second-best optimum is this strange selling mechanism? Larsen got access to some amazing data, which included outcomes of the auction, all of the sequential offers made over the phone, seller secret reserve prices, and tons of data at the level of the car and the buyer. The problem, of course, is that we don’t have access to the seller and buyer types (i.e., their private valuations of the car). Indeed, we don’t even have knowledge of the distribution of their types. If you are a regular reader of this blog, you know what is coming next – we can perhaps back out those distributions, but only if theory is applied in a clever way. (And, wow, how are job market speakers not permanently nervous? It’s one thing to stretch the Myerson-Satterthwaite framework to its limits, but it’s another thing to do so when Mark Satterthwaite is sitting in the front row of the seminar room, as was the case here at Kellogg. Luckily, he’s a nice guy!)

The high bid in the auction helps us get buyer valuations. This high bid is, if bidders bid truthfully, equivalent to the second-order statistic of bidder values. The theory of order statistics gives us a precise relationship between the underlying distribution and the distribution of its second-order statistic (when the second-highest is taken from a set of N draws from the distribution). Larsen knows exactly how many people were at each auction house, and can use this to roughly estimate how many bidders could potentially have bid on each car. Homogenizing each car using observable characteristics, therefore, the order statistic method can be used to recover the underlying distribution of buyer values for each car.
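
To see the order statistic relationship concretely, here is a quick Monte Carlo sketch (my own illustration, with a U[0,1] parent standing in for the homogenized car values): the second-highest of N draws from a parent CDF F has CDF N·F^(N-1)·(1-F) + F^N.

```python
import random

def second_highest_cdf(parent_cdf, x, n):
    # P(second-highest of n iid draws <= x): either all n draws are <= x,
    # or exactly n-1 are <= x and one is above
    F = parent_cdf(x)
    return n * F ** (n - 1) * (1 - F) + F ** n

random.seed(0)
n, trials = 5, 200_000
second_highest = [sorted(random.random() for _ in range(n))[-2] for _ in range(trials)]
empirical = sum(d <= 0.6 for d in second_highest) / trials
theoretical = second_highest_cdf(lambda x: x, 0.6, n)  # U[0,1] parent
assert abs(empirical - theoretical) < 0.01
```

In the paper the direction is reversed, of course: the econometrician observes draws of the second-highest value and inverts this relationship to recover the parent distribution.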

On the seller side, it is the secret reserve price that really helps us estimate the bargaining game. If the post-auction bargaining never results in a price lower than the last bid, then bidders have a dominant strategy to bid truthfully. Costly bargaining in addition to truthful bidding by buyers means that the optimal seller reserve price must be strictly increasing in the seller’s type. And what does strict monotonicity give us? Invertibility. We now know there is a link between the seller reserve price and the seller type. The structure of post-auction bargaining helps us to bound the relationship between seller type and the reserve. Finally, multiple rounds of bargaining let us bound the disutility costs of bargaining, which turn out to be quite small.

So how does the auto auction bargaining mechanism do? In this market, given the estimated type distributions, the Myerson-Satterthwaite second-best gives you 98% of the surplus of a first-best (informationally impossible) mechanism. Real-world dynamic bargaining captures 88-96% of that second-best surplus, with the most inefficiency coming from small sellers where, perhaps, screening problems result. That’s still leaving money on the table, though. Larsen, among others, suggests that these types of strange dynamic mechanisms may persist because, in two-sided markets, they are roughly equivalent to committing to give buyers a large chunk of the gains from trade, and we know that attracting additional potential buyers is the most important part of successful auctions.

January 2013 working paper (No IDEAS version).

“On the Equivalence of Bayesian and Dominant Strategy Implementation,” A. Gershkov et al (2012)

Let’s take a quick break from the job market. Bayesian incentive compatibility is not totally satisfying when trying to implement some mechanism, as each agent must care about the rationality and actions of other agents. Dominant strategy incentive compatibility (DSIC) is much cleaner: no matter what other agents do, the action that makes me best off is just to state my preferences truthfully. For example, the celebrated Vickrey-Clarke-Groves mechanism is dominant strategy incentive compatible. Even if other agents lie or make mistakes, the transfers I pay (or receive) are set so that I have just enough incentive to report my true type.
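
To make the dominant strategy property concrete, here is a quick check (my own sketch, not from the paper) that in a second-price auction, truthful bidding is never beaten by any deviation, no matter what the rivals bid:

```python
import random
random.seed(3)

def second_price_payoff(value, bid, rival_bids):
    # I win iff my bid beats every rival, and I pay the highest rival bid
    top_rival = max(rival_bids)
    return value - top_rival if bid > top_rival else 0.0

# whatever the rivals do, truthful bidding weakly dominates any deviation
value = 0.7
for _ in range(10_000):
    rivals = [random.random() for _ in range(3)]
    truthful = second_price_payoff(value, value, rivals)
    for deviation in (0.2, 0.5, 0.9):
        assert second_price_payoff(value, deviation, rivals) <= truthful
```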

Many clever mechanisms are not DSIC, however. Consider the Cremer-McLean auction with correlated types. There are a bunch of bidders with correlated private values for an oil field. If the values were independent, in the optimal mechanism, the winning bidder gets an information rent. Consider a first price auction, where all of our values are distributed uniformly on [0,1]. I draw .7 for my value. I won’t actually bid .7, because bidding slightly lower increases my profit when I win, but only slightly decreases my probability of winning. If the auctioneer wants to use a scheme that reveals my true value, he better give me some incentive to reveal; the second-price auction does just that, by making me pay only the second-highest bid if I win.
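
That bid-shading logic can be checked numerically (my own sketch, with a single hypothetical rival who follows the symmetric U[0,1] equilibrium strategy of bidding half her value):

```python
# expected payoff from bidding b with value v in a first-price auction, when the
# single rival has value r ~ U[0,1] and bids r/2: I win iff b > r/2
def expected_payoff(v, b):
    win_prob = min(2 * b, 1.0)
    return (v - b) * win_prob

v = 0.7
bids = [i / 1000 for i in range(701)]  # grid of bids up to my value
best_bid = max(bids, key=lambda b: expected_payoff(v, b))
assert abs(best_bid - v / 2) < 1e-9  # best response: shade down to half my value
```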

Cremer-McLean does the following. If I want to enter the auction, I need to make a side bet that gives me zero expected payoff if both I and everyone else report our true types, and a very negative payoff if somebody lies. In particular, charge me an entry fee that depends only on what I think other people’s bids will be. Conditional on everyone else telling the truth, I am perfectly happy as a Bayesian to report truthfully. My willingness to accept the bet reveals, because of correlated private values, something about my own private value. But now the auctioneer knows our true values, and hence can extract the full surplus in the auction without paying any information rents. Many, many people – Milgrom and Wilson, famously – find this mechanism rather unsatisfying in the real world. Certainly, it’s tough to think of any social choice mechanism that uses the Cremer-McLean strategy, or even something similar. This may be because it relies heavily on knife-edge Bayesian reasoning and common knowledge of rationality among the players, much stronger conditions than we need for dominant strategy incentive compatibility.

Manelli and Vincent have a 2010 Econometrica with a phenomenal result: in many simple cases, Bayesian IC gives me nothing that I can’t get from a DSIC mechanism. This makes sense in some ways: consider the equivalence of the Bayesian mechanism of a first-price sealed-bid auction and the dominant strategy mechanism of a second-price auction. What Gershkov et al do is extend that equivalence to a much broader class of social choice implementation. In particular, take any social choice function with one-dimensional independent types and quasi-linear utility for each agent. If there is a Bayesian IC mechanism to implement some social choice, then I can write down allocations and transfers which are DSIC and give the exact same interim (meaning after individuals learn their private types) expected utility for every agent. That is, in any auction with independent private values and linear utility, there is nothing I can do with Bayesian mechanisms that I can’t do with much more plausible mechanisms.

How does this work? Recall that the biggest difference between Bayesian IC and DSIC is that a mechanism is Bayesian IC (on a connected type space) if expected utility from an allocation rule is non-decreasing in my own type, and DSIC if utility from the allocation rule is non-decreasing in my own type no matter what the types of the other agents. Gershkov et al give the following example. I want to give an object to the agent with the highest value, as long as her value is not more than .5 higher than the other agent’s. Both agents’ values are independently drawn from U[0,1]. If the difference between the two is more than .5, I want to allocate the good to no one. Just giving an agent the good with probability 1 if his type is higher than the other agent’s report and lower than the other agent’s report plus .5 is Bayesian incentive compatible (the interim expected allocation is nondecreasing in my type, so there must exist transfer payments that implement it), but not DSIC: if the other agent reports his type minus .1, then I want to shade mine by an equal amount. However, consider just giving an agent the good with probability equal to the minimum of his report and .5. If my type is .7, then I get the object with probability .5. This is exactly the interim probability I would get the object in the Bayesian mechanism. Further, the allocation probability is increasing in my own type no matter what the other agent’s type, so there must exist transfer payments that implement in dominant strategies. The general proof relies on extending a mathematical result from the early 1990s: if a bounded, non-negative function of several variables generates monotone, one-dimensional marginals, then there must exist a non-negative function with the same bound, and the same marginals, that is monotone in each coordinate. The first function looks a lot like the condition on allocation rules for Bayesian IC, and the second the condition on allocation rules for DSIC…
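
A quick numerical check of this example (my own sketch): averaging the Bayesian rule over the other agent’s U[0,1] type recovers exactly the DSIC rule’s allocation probability min(t, .5).

```python
import random
random.seed(1)

def bayesian_alloc(t1, t2):
    # Bayesian IC rule: agent 1 gets the good iff t2 < t1 < t2 + 0.5
    return 1.0 if t2 < t1 < t2 + 0.5 else 0.0

def dsic_alloc(t1):
    # DSIC rule: agent 1 gets the good with probability min(t1, 0.5), whatever t2 is
    return min(t1, 0.5)

trials = 200_000
for t1 in (0.2, 0.7, 0.95):
    interim = sum(bayesian_alloc(t1, random.random()) for _ in range(trials)) / trials
    assert abs(interim - dsic_alloc(t1)) < 0.01  # same interim allocation probability
```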

Final Econometrica preprint (IDEAS version)

“Learning About the Future and Dynamic Efficiency,” A. Gershkov & B. Moldovanu (2009)

How am I to set a price when buyers arrive over time and I have a good that will expire, such as a baseball ticket or an airplane seat?  “Yield management” pricing is widespread in industries like these, but the standard methods tend to assume nonstrategic agents.  A lack of myopia, however, can sometimes be very profitable for buyers.  Consider a home sale.  Buyers arrive slowly, and the seller doesn’t know the distribution of potential buyer values.  It’s possible that if I report a high value when I arrive first, the seller will Bayesian update about the future and will not sell me the house, since they believe that other buyers also value the house highly. If I report a low value, however, I may get the house.

Consider the following numerical example from Gershkov and Moldovanu.  There are two agents, one arriving now and one arriving tomorrow.  The seller doesn’t know whether the agent values are IID in [0,1] or IID in [1,2], but puts 50 percent weight on each possibility.  With complete information, the dynamically efficient thing to do would be to sell to the first agent if she reports a value in [.5,1]U[1.5,2]. With incomplete information, however, there is no transfer that can simultaneously get the first agent to tell the truth when her value is in [.5,1] and tell the truth when her value is in [1,1.5].  By the revelation principle, then, there can be no dynamically efficient pricing mechanism.
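
To see where a cutoff like .5 comes from, here is a small simulation (my own sketch of the complete-information benchmark, with no discounting): selling to the first agent exactly when her value exceeds the expected value of the second arrival maximizes expected welfare.

```python
import random
random.seed(2)

trials = 200_000
pairs = [(random.random(), random.random()) for _ in range(trials)]  # U[0,1] values

def welfare(cutoff):
    # sell to the first arrival iff her value clears the cutoff;
    # otherwise wait and sell to the second arrival
    return sum(v1 if v1 >= cutoff else v2 for v1, v2 in pairs) / trials

# the welfare-maximizing cutoff is the expected value of the second arrival, 0.5
best_cutoff = max([0.3, 0.4, 0.5, 0.6, 0.7], key=welfare)
assert best_cutoff == 0.5
```

The same logic with U[1,2] values gives the cutoff 1.5, which is why the efficient rule sells to the first agent on [.5,1]U[1.5,2].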

Consider a more general problem, with N goods with qualities q1,q2..qN, and one buyer arriving each period.  The buyer has a value x(i) drawn from a distribution F, and he gets utility x(i)*q(j) if he receives good j.   Incomplete information by itself turns out not to be a major problem, as long as the seller knows the distribution: just find the optimal history-dependent cutoffs using a well-known result from Operations Research, then choose VCG-style payments to ensure each agent reports truthfully.  If the distribution from which buyer values are drawn is unknown, as in the example above, then sellers learn what the optimal cutoffs should be from the buyers’ reports. Unsurprisingly, we will need something like the following: since cutoffs depend on my report, implementation requires that the cutoff change by less than one-for-one in my reported type (a derivative less than one in my type).   If the derivative is less than one, then the multiplicative nature of buyer utilities means that there will be no incentive to lie about your valuation in order to alter the seller’s beliefs about the buyer value distribution.

http://www.econ2.uni-bonn.de/moldovanu/pdf/learning-about-the-future-and-dynamic-efficiency.pdf (IDEAS version).  Final version published in the September 2009 AER. I previously wrote about a followup by the same authors for the case where the seller does not observe the arrival time of potential buyers, in addition to not knowing the buyer’s values.  

“Clocks and Trees: Isomorphic Dutch Auctions and Centipede Games,” J. C. Cox & D. James (2012)

There has been a substantial amount of effort by experimental economists showing where lab subjects take actions other than Nash equilibrium play in common games. Folks appear to give away money in ultimatum games, bid too high in Dutch auctions, wait too long to end the centipede game, and so on. The most common response to these experimental facts is to propose alternative utility functions (such as forms of social preferences) or alternative equilibrium definitions that incorporate particular types of errors (like the Quantal Response).

Cox and James dig back into Selten’s famous Chain Store Paradox paper for an alternative explanation. (An aside: I imagined this had to be the most cited paper ever to appear in Theory and Decision; it is.) Selten points out that the very first thing an agent in a game must do is understand what her options are, what information can be transmitted, and what the consequences of any tuple of actions will be. Such information must generally be deduced by an agent from the description of the game; it is not obvious. When we teach game theoretic reasoning, we generally make such complicated deduction easier through the use of an extensive-form game tree. That is, how well students will do on a game theory exam is absolutely affected by how the information in an identical game is presented.

Cox and James discuss this point in the context of Dutch auctions and centipede games. In a Dutch auction, agents have a private valuation for a good, the auction starts at some amount higher than the highest possible valuation, and then the price drops until some agent “bids”, winning the good at that price and getting payoff equal to her valuation minus the bid price. Alternatively, we could write a (discrete-interval) Dutch auction out as a game tree. If you have valuation 6 and you bid when the auction is at “5”, you win a dollar. Every possible bid by each player can be written out in the tree, along with consequences. It turns out that presenting information in such a tree led to actions which are roughly the same as predicted by Nash equilibrium, whereas the traditional declining-clock Dutch auction without a tree led to the overbidding seen in other experiments.

The centipede game is a bit different. Traditional experiments present the game as an extensive-form game tree. But there is a nice way to run a centipede experiment using a clock. Start at state 0, giving player 1 the option to stop the game and win his valuation. If the player does not stop the game within 10 seconds, shift to state 2, where player 2 has the option to stop the game and win her valuation plus 2, giving player 1 nothing. If she does not stop the game after 10 seconds, go to state 3, and so on. The clock version of the centipede game led almost all subjects to stop at 0, which is the unique Nash equilibrium.

There are two things you might take out of these results. One is that play in games involves both the players’ strategic reasoning and their comprehension of how the game operates. Both are important. It makes me skeptical of some experimental results showing non-equilibrium outcomes. Second, you might imagine that we could incorporate learning about the options available to players as they play repeatedly into an equilibrium definition. The draft of this paper I read did not refer to the 1990s learning-in-games theoretical literature, which absolutely has discussed this point, but perhaps the final Econometrica version does.

Working paper (IDEAS version). Final paper published in March 2012 Econometrica.

“The Herodotus Paradox,” M. Baye, D. Kovenock and C. de Vries (2012)

Famously, the first auction we know of in the historical record is the “Babylonian Wives” auction reported by Herodotus (I know – you likely prefer Didius Julianus buying the entire Roman Empire in an auction!). Every year, Babylonian villages would bring out all the girls of marriageable age, ordering them from the prettiest to the ugliest. Suitors would first bid for all the beautiful women. With the money thus collected, then, men would be paid to marry the uglier women. How romantic.

But beyond being a great history-of-econ story, the Babylonian auction turns out to have some really interesting game theoretic properties. Consider an auction with two women and two bidders. Each bidder gets a known amount of utility from the beautiful and the ugly maiden: for now, assume each suitor gets H from the beauty and L from the other woman. Whatever bid wins the beautiful maiden is paid to the winner of the ugly maiden. If bids are identical, then each suitor is given the beautiful woman or the ugly woman with equal probability, and no money changes hands. There is a simple equilibrium in pure strategies: each agent bids (H-L)/2. But what of mixed strategy equilibria?
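
That pure-strategy equilibrium is easy to verify directly (my own sketch, with hypothetical values H = 10 and L = 4):

```python
H, L = 10.0, 4.0          # hypothetical values of the two maidens to each suitor
bstar = (H - L) / 2       # candidate equilibrium bid

def payoff(my_bid, other_bid):
    # the winning bid for the beauty is paid out to the winner of the other maiden
    if my_bid > other_bid:
        return H - my_bid
    if my_bid < other_bid:
        return L + other_bid
    return (H + L) / 2    # tie: coin flip for the beauty, no money changes hands

eq_payoff = payoff(bstar, bstar)
assert eq_payoff == (H + L) / 2
# no unilateral deviation from (H - L) / 2 is profitable
assert all(payoff(b, bstar) <= eq_payoff
           for b in (0.0, 1.0, 2.0, bstar - 0.1, bstar + 0.1, 6.0, 8.0, 10.0))
```

Bidding above (H-L)/2 wins the beauty but pays too much; bidding below hands the rival the beauty and collects his bid, and both leave you exactly at (H+L)/2.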

The normal way we find symmetric mixed strategy atomless equilibria is the following. First, identify a strategy distribution F such that payoffs are constant on the support of F, given that the other player also bids F. Second, verify that F is a well-defined, continuous cdf. Third, show that neither player can deviate by bidding off the support of F and increasing their payoff, given the opponent bids F. And such a method will, in fact, find the set of atomless mixed strategies without profitable deviations. But this does not guarantee that we have found equilibria when payoffs are potentially unbounded. Why? Fubini’s Theorem.

You may remember Fubini’s theorem from calculus: it tells you when you are allowed to change the order of integration in a double integral. Recall that expected utility calculations in a two-player game involve solving a double integral: integrate over the product space of my strategy’s probability distribution and your strategy’s probability distribution. One way to check whether the expectation of a random variable X, such as utility given your and my strategies, is well-defined is to check whether E[max(X,0)] and E[min(X,0)] are both infinite. If they both are, then the expectation does not exist.
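
A discrete analogue (my own illustration, not from the paper) shows how the order of summation, like the order of integration, can matter once the positive and negative parts are both infinite:

```python
def a(m, n):
    # +1 on the diagonal, -1 just below it, 0 elsewhere: every column sums to
    # zero, but the rows sum to 1 in total
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

N = 200
# each row is complete once n passes m + 1, and each column once m passes n + 1,
# since every tail beyond that point is all zeros
row_totals = sum(sum(a(m, n) for n in range(m + 2)) for m in range(N))
col_totals = sum(sum(a(m, n) for m in range(n + 2)) for n in range(N))
assert (row_totals, col_totals) == (1, 0)  # the order of summation changes the answer
```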

If you solve the Babylonian Wife auction with complete information using the three step process above, you get that each player earns expected utility equal to L+m, where m can be arbitrarily large. That is, it appears each player can earn infinite payoff. But we can easily see the game is one with a constant sum, since one player earns H from the beautiful wife, one earns L from the ugly wife, and the transfers exactly cancel. The method correctly finds a mixed strategy with no profitable deviation. But the mixed strategy found is one such that calculating ex-ante expected utility from playing it involves solving a double integral whose solution is not well-defined. That is, the three-step method needs a fourth step: check whether the joint distribution of putative mixed strategies gives well-defined expected utilities for each player. A similar problem exists in the Babylonian Wife auction with incomplete information, details of which can be found in the paper. Such a problem is likely rare: the war of attrition, for instance, also has potentially unbounded payoffs, but does not have problematic computation of expected utility.

http://www.ifo.de/portal/pls/portal/docs/1/1185352.PDF (2010 working paper. The 2012 final version in Games and Economic Behavior is substantially shorter.)

“Asynchronous Revision Games with Deadline,” Y. Kamada & T. Sugaya (2011)

Sugaya and Kamada are both Japanese job market stars in theory this year – Sugaya infamously has a job market paper nearly 300 pages long. The present, much shorter paper strikes me as a particularly interesting contribution. Consider a game where we choose actions continuously but only the state of our action choices at time T matters for payoffs. At various time t before T – perhaps given by some Poisson process – we have an opportunity to switch our action choice. How will equilibria differ from the stage game with the same payoff matrix? Such games are called “revision games” and have previously been described in a paper by Kamada with a different coauthor. They actually have quite a few uses: I’ve changed a paper of mine in auction theory to fit the revision game framework, and you might also consider timing games where the timing is endogenous. For example, it may not be obvious ex-post who the Stackelberg leader in a duopoly game is. Or we might want to allow communication with limited commitment using a revision game, where the state before T is a reflection of which suppliers you’ve been in contact with, for example.

The problem, as I found out a few years ago trying to work out my auction, is that such games are very difficult to solve. Kamada and Sugaya consider the case where opportunities to revise strategies arrive for each player i according to a Poisson process with parameter L(i), and let T be arbitrarily large so that each player has arbitrarily many opportunities to revise. They prove that if the game has a unique, strictly Pareto dominant action profile, and payoffs across players are “almost” the same, then the Pareto dominant NE is the unique equilibrium. Further, in 2-player 2×2 games with two pure-strategy Nash equilibria (say, Battle of the Sexes), a tiny bit of asymmetry in player payoffs can lead to a unique equilibrium favoring the strongest player. Both of these results may be counterintuitive. The first may be counterintuitive because many refinements of coordination games select the risk-dominant, not the Pareto dominant, equilibrium. Kamada and Sugaya’s model gets around this result because the “preparation” before time T means the only risk of miscoordination is when the other player doesn’t get a chance to revise before T, which happens with probability zero as T goes to infinity. The second result is surprising simply because tiny changes in payoffs radically change the equilibrium set. Let’s look at this result further.

Consider a battle of the sexes game, with payoffs (L,l)=(2+e,1), (R,r)=(1,2) and otherwise (0,0). In the usual parlance, the husband and wife want to meet somewhere and get no utility if they are apart, but given that they are together, the husband prefers the ballgame and the wife the opera. Note that the husband gets epsilon more utility from baseball than the wife from Puccini. At time 0, both players choose an action. Let both get a chance to revise their action over time according to Poisson processes with identical parameters, and let T be arbitrarily large. We will show that if e>0, the unique subgame perfect equilibrium is (L,l). I do not prove this here, but if e=0, both (L,l) and (R,r), as well as many mixtures, are equilibria.

The intuition, perhaps, is that the husband has slightly more incentive to wait for the wife to yield and see the ballgame than the wife does for the husband to yield and go to the opera. Note that as we get close to T, both husband and wife will revise to match the other if they have not done so yet; this is simply because the probability that no one gets to revise again is converging to 1, and with no revision, both get payoff 0, whereas by yielding and matching your partner’s preparation, the minimum payoff is 1. Let’s work backward in time, assuming that both players yield to their spouse’s current state if they get a chance to revise after the current time, and find the cutoffs representing the earliest either spouse will yield to the other given an opportunity to revise. The payoff from yielding is 1. At the cutoff, this must equal the payoff from not yielding, conditional on both players yielding any time they get a chance in the future, which is

Pr(revise before T | current time) * ((1/2)*2 + (1/2)*1)

for the wife. The probability is just the chance that anyone will get to revise again, and the rest of the expression says the wife gets 2 half the time (the husband gets the first future chance to yield) and gets 1 half the time (she yields first in the future). Note that the same equality has the 2 replaced by 2+e for the husband. Plugging Poisson rates for the probability into the equation above and solving gives you that the wife has an earlier cutoff to begin yielding than the husband.
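
That calculation can be sketched numerically (my own illustration, with hypothetical values for the rates, the deadline, and e): with equal rates, the probability that anyone revises in (t,T) is 1-exp(-2λ(T-t)), and solving each spouse’s indifference condition confirms that the wife’s cutoff comes first.

```python
import math

lam, T, e = 1.0, 10.0, 0.1   # hypothetical Poisson rate, deadline, husband's edge

def yield_cutoff(bonus):
    # indifference: 1 = Pr(someone revises before T) * ((1/2)*(2+bonus) + (1/2)*1)
    p = 1.0 / (1.5 + bonus / 2)               # required revision probability
    # with equal rates, Pr(someone revises in (t, T)) = 1 - exp(-2*lam*(T - t))
    return T - (-math.log(1.0 - p)) / (2.0 * lam)

wife, husband = yield_cutoff(0.0), yield_cutoff(e)
assert wife < husband  # the wife starts yielding strictly earlier
```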

But what should the wife’s cutoff t* be? Assume we are at some time t<t*. At t*, the continuation payoff is 1, and we have already shown that the husband will not revise, meaning switch from holding out for baseball to yielding to his wife’s preference for opera, before t*. After t*, the wife’s continuation value is strictly less than one, since the probability that time runs out before either of us gets to revise again is increasing. The wife, at any time she is able to revise, can lock in a payoff of 1 by yielding. If she does not yield, her continuation value is strictly less than one, since at any chance to revise, she never can lock in more than one, and there is a positive probability neither player will get a chance to revise before T, giving payoff zero. Therefore, the wife will yield at the start of the game. Another way to see this is to note that the continuation value of the game conditional on not yielding is decreasing over time.

The full proof is a bit trickier since we also have to show that given the current states are (L,l) or (R,r), neither player will want to revise again. This is technically difficult to prove, but involves nothing more complicated than pages of algebra. For the theory geeks out there, proving similar results with continuous action spaces is considerably more difficult, because of the need to get around using tiny changes in strategy as a “counter” for non-Markov punishment. What I do in my work here is to restrict attention to equilibria of the game in constant strategies, meaning that on the equilibrium path, no one ever revises their strategy. There are a few reasons why this helps that I’ll get to when I write up the auctions result.

http://www.princeton.edu/~tsugaya/selection_revision_game (July 2010 working paper – currently under revision at TE. A very much related paper by Calcagno and Lovo deals with the same issue of preopening revision of strategies.)

“Promoting School Competition Through School Choice: A Market Design Approach,” J.W. Hatfield, F. Kojima & Y. Narita (2011)

Matching theory has been quite prominent in economics in the past decade: how ought we match organ donors, or match schools to students, or match medical residents to assignments, or husbands to wives? The basic principle is that financial transfers – “side payments” – are often illegal or contrary to well-established custom, so our standard mechanism design tricks won’t work. In school choice, a network of authors centered around Al Roth has developed mechanisms for matching students to schools in New York, Boston and San Francisco, among other cities, in a way that attempts (imperfectly) to deal with schools and students who will try to game the system. Hatfield and coauthors, including the ridiculously productive young scholar Fuhito Kojima, point out in this new working paper that something has been missing in this literature. The motivation for allowing school choice in the first place was so that schools would have an incentive to improve themselves. Previous work assumed school quality was fixed. Something is incongruous.

Let there be schools, each with M slots available to students and a ranking over which students they prefer the most. Let students also rank which schools they like best. Say that a mechanism respects improvements if, when a school becomes “more preferred” by students, it gets students it prefers more. The authors prove there is no stable, strategy-proof matching mechanism which respects improvements: no matter how we propose to use information, schools sometimes have an incentive to misreport which students they actually like. Restricting the domain of preferences does not help, as the same result applies unless all schools have (almost) the same ranking over which students they prefer. Bronx Science and a School of the Arts surely rank along different criteria.

Why is this? Consider the intuition from the famous Deferred Acceptance algorithm. This algorithm is stable, and the intuition of the example will generalize to any stable matching. Let there be two students (A and B) and two schools (1 and 2). School 1 has 2 spots and School 2 has only 1 spot. Both students prefer school 2. School 1 prefers student A to student B, and School 2 prefers student B to student A. In the student-proposing Deferred Acceptance algorithm, both students apply to school 2 in the first round. School 2 conditionally accepts its preferred student, student B. Since school 2 has only one spot, it has filled its capacity. Student A then proposes to school 1 in the second round, and is accepted. Precisely the same outcome occurs (you can easily check this yourself) if schools propose. A famous theorem says that the student-proposing and school-proposing DA algorithms give the worst-case and best-case stable matchings from the students’ perspective, so since these coincide, the preferences here permit a unique stable matching.

But what if school 1 improves itself, so that student B now prefers school 1 to school 2? Consider student-proposing DA again. In round 1, A will apply to 2 and B to 1. Both are conditionally accepted, and the algorithm terminates. So now school 1 winds up with student B rather than student A, a worse outcome in school 1’s eyes! It has been punished for improving itself in a very real sense. (You can check here, very simply, that the only stable matching under these preferences is again unique, for exactly the same reason as in the previous paragraph.)
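
Both runs of Deferred Acceptance in this example can be checked directly; here is a minimal sketch (my own implementation of the standard student-proposing algorithm):

```python
def student_proposing_da(student_prefs, school_prefs, capacity):
    # standard student-proposing Deferred Acceptance with conditional acceptances
    unmatched = sorted(student_prefs)         # students still proposing
    next_pick = {s: 0 for s in student_prefs}
    held = {c: [] for c in school_prefs}      # conditionally accepted students
    while unmatched:
        s = unmatched.pop()
        school = student_prefs[s][next_pick[s]]
        next_pick[s] += 1
        held[school].append(s)
        if len(held[school]) > capacity[school]:
            # over capacity: reject the school's least preferred conditional admit
            reject = max(held[school], key=school_prefs[school].index)
            held[school].remove(reject)
            unmatched.append(reject)
    return {c: sorted(v) for c, v in held.items()}

school_prefs = {1: ["A", "B"], 2: ["B", "A"]}
capacity = {1: 2, 2: 1}

# before the improvement: both students rank school 2 first
before = student_proposing_da({"A": [2, 1], "B": [2, 1]}, school_prefs, capacity)
assert before == {1: ["A"], 2: ["B"]}

# after school 1 improves: student B now ranks school 1 first
after = student_proposing_da({"A": [2, 1], "B": [1, 2]}, school_prefs, capacity)
assert after == {1: ["B"], 2: ["A"]}  # school 1 now gets a student it likes less
```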

The authors prove, in a similar manner, that there is no mechanism which is Pareto optimal for students and respects school quality improvement. Lest we be too distraught by these results, it is at least the case that stable matchings (like student-proposing Deferred Acceptance) in “large” markets, under some conditions, almost never need worry that improvements will make the school worse off. The intuition is that, in the example above, I wanted a student to rank me low because this created competition at another school she now favors. That other school then may reject a different student that I particularly like, who will then “settle” for my school. These “chains of rejections” depend on some schools having reached their capacity, because otherwise they will just conditionally accept everyone who applies. The conditions on large markets essentially just say that the rejection chain I want to induce is unlikely to reach me because the student I am trying to attract has, with high probability, options she prefers which have not filled their quota as of yet.

https://360269…Fuhito-Kojima/SchoolCompetition20111111.pdf (November 15, 2011 Working Paper; at a delay of less than 2 days, I am sure this is the “freshest” research ever discussed on this site!)

“Demand Uncertainty and Dynamic Allocation with Strategic Agents,” A. Gershkov & B. Moldovanu (2010)

Consider a social planner, who discounts the future, trying to give away an indivisible object efficiently to agents who arrive over time; in the context of firms, this is often called “dynamic yield management.” Often, papers assume that agents are nonstrategic. But in many cases – consider possible home-buyers – agents will pretend not to be interested in a house if they think the seller will lower the price after seeing little interest among buyers. Strategic action is important. If we knew everyone’s arrival time and valuation, this would just be a relatively straightforward optimal stopping problem. What if the distribution of arrival times is unknown? In that case, private information about arrival times matters because the first few arrivals give the seller information about the distribution of arrival rates. Can we reach the dynamic first best even if everything is private information?

The answer, it turns out, is yes. This might surprise you if you know Gershkov and Moldovanu’s AER from a couple years back. In that paper, the uncertainty was about the distribution of private values, not arrivals. Generally, there does not exist an efficient mechanism under that kind of uncertainty. The case with uncertainty about the arrival rate is different because agents can only pretend to arrive later, whereas with private values they could pretend to have either lower or higher private values. This one-sided deviation is important.

Here’s an example. Let bidder values be distributed U[0,1], and let arrivals follow a renewal process with interarrival times distributed either U[1,2] or U[2,3]. Papers from the 80s show that, if I know the actual arrival process and private values, the optimal dynamic mechanism just sells immediately after any agent arrives with a private value above a cutoff, and sells at a price equal to that cutoff. The cutoff x[1,2] when interarrivals are distributed U[1,2] is higher than the cutoff x[2,3] when they are distributed U[2,3]: faster arrivals make the seller more willing to wait.
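To see why faster arrivals raise the cutoff, here is a rough numerical sketch of my own, not the paper’s (the paper’s solution is non-stationary): with values U[0,1] and exponential discounting at rate r, a stationary reservation value w satisfies w = E[e^{-r*tau}] * E[max(v, w)] = beta*(1 + w^2)/2, where beta is the expected discount factor over one interarrival time.

```python
import math

def discount_factor(r, a, b):
    """beta = E[e^{-r*tau}] for interarrival time tau ~ U[a, b]."""
    return (math.exp(-r * a) - math.exp(-r * b)) / (r * (b - a))

def cutoff(beta, tol=1e-12):
    """Stationary reservation value w solving w = beta*(1 + w^2)/2,
    by fixed-point iteration; sell to an arrival iff value >= w."""
    w = 0.0
    while abs(beta * (1 + w * w) / 2 - w) > tol:
        w = beta * (1 + w * w) / 2
    return w

r = 0.1  # an illustrative discount rate
x12 = cutoff(discount_factor(r, 1, 2))  # interarrivals U[1,2]: beta ~ 0.861
x23 = cutoff(discount_factor(r, 2, 3))  # interarrivals U[2,3]: beta ~ 0.779
print(round(x12, 3), round(x23, 3))     # the U[1,2] cutoff is the higher one
assert x12 > x23
```

The qualitative point survives the simplification: a shorter expected wait raises the continuation value, so the seller holds out for a higher value.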

This introduces an incentive problem. Say the true interarrival distribution is U[1,2], you are the first arrival, and you have a private value higher than x[2,3] but lower than x[1,2]. Since there is at least one unit of time until the next guy arrives, you can “delay” your arrival by up to one unit of time and be sure no one else will arrive in the meantime. Truthfully revealing your arrival time and value means you are not allocated the object and therefore make profit zero, since your private value is below x[1,2]. If you instead deviate by claiming an arrival time after t=2, the seller infers that interarrivals are U[2,3]; you are allocated the object (since x[2,3] is now the cutoff, and that is below your private value), and you make positive profit.

Can this be fixed? Sure. Just pay anyone who reports an arrival in [1,2] a subsidy of x[1,2]-x[2,3]. It is simple to prove that no one will want to deviate from the truth given these subsidies, so optimal dynamic efficiency can be obtained. You might prefer payments to be made or received only by agents who are allocated an object, though: otherwise there is a big incentive to “just show up” in order to collect the subsidy. In this example, there is no efficient mechanism without subsidies: in order to keep the agent with value between x[2,3] and x[1,2] from lying about his arrival time, he must get some positive payoff from reporting truthfully. That can’t be a subsidy, so he must get the object with some probability. But then we no longer have dynamic efficiency.
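To make the arithmetic concrete, here is a tiny numerical sketch with purely illustrative cutoff values of my own choosing (the paper derives the cutoffs endogenously):

```python
# Hypothetical cutoffs with x12 > x23, and a marginal early arrival
# whose value sits between them.
x12, x23 = 0.57, 0.48
v = 0.52                # private value with x23 < v < x12

truthful = 0.0          # report arrival in [1,2]: v < x12, so no sale
deviate = v - x23       # wait past t=2: cutoff drops to x23, buy at x23

subsidy = x12 - x23     # paid to anyone reporting arrival in [1,2]
truthful_with_subsidy = subsidy  # zero from the auction, plus the subsidy

print(truthful, deviate, truthful_with_subsidy)
assert deviate > truthful                # lying pays absent the subsidy
assert truthful_with_subsidy >= deviate  # the subsidy restores truthtelling
```

The same subsidy also keeps agents with v above x[1,2] honest: truthful reporting then yields v - x[1,2] + (x[1,2] - x[2,3]) = v - x[2,3], exactly what delaying would earn.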

More generally, with subsidies you can always incentivize everyone to report truthfully (and hence can efficiently allocate the good). The intuition is the same as in the example: pay a subsidy large enough that everyone’s utility is decreasing in their reported arrival time. In particular, the earlier you arrive, the higher the subsidy you get. This works as long as the subsidy is sufficiently large, because only one-sided deviations in arrival time are possible: no one can “arrive” before their actual arrival time, so even a huge early-arrival payment can’t incentivize anyone to arrive “too early.” In some circumstances, the required subsidy is zero: this means that only the agent allocated the object pays or gets paid. It also means, roughly, that the efficient mechanism is the optimal dynamic mechanism.

When can we get away with no subsidies? This is quite difficult to prove. Gershkov and Moldovanu give some classes of distributions which require no subsidies, but the negative result is more intuitive: if later arrivals make the seller more pessimistic about future arrivals, then subsidies are required. The reason is as in the example: that property gives bidders an incentive to wait to “arrive” until the seller lowers the private-value cutoff at which the object is sold.

A few comments. First, generally, this paper is not about finding the optimal mechanism (in terms of seller revenue) but is about finding any sort of efficient mechanism. The examples in the introduction do not all have this flavor, unfortunately. Second, and this is a problem in many mechanism design papers, strictly speaking mechanisms can only be run by monopoly firms or social planners. You might think that outside of a monopoly, a firm can take residual demand given other mechanisms as the private values it will face, but this usually can’t be done because the choice of mechanism by the firm will affect the distribution of private values seen. In general, the problem of so-called “competing mechanisms” is very difficult.

http://www.econ2.uni-bonn.de/bmoldovanu/paper/zuckerresfin.pdf (Working paper)

“Internet Advertising and the Generalized Second-Price Auction,” B. Edelman, M. Ostrovsky & M. Schwarz (2007)

Consider the Google Adwords Auction. When you search on Google (or Microsoft’s Bing, or Facebook, among others), a series of ads is displayed in your browser. Assume that, when you search for “cookies,” the top ad slot is clicked 3 times per hour, the second slot 2 times per hour and the bottom slot once per hour. How should you sell these?

You could just set a fixed price for every X displays of each ad: this was how internet advertising was done in the 1990s. But we know that a posted price is rarely the optimal mechanism. Instead, why not auction each slot? Since we also have a famous result that adding bidders to the auction is a very good thing indeed, why not sell all three ad slots simultaneously? Have each advertiser offer a bid per click, then give the slot with the most clicks to the bidder who bid highest, the second most clicks to the bidder who bid second highest, and so on. The designers of this auction, Overture (now owned by Yahoo), were likely familiar with some economics literature and knew that second-price auctions had some nice properties: they are dominant-strategy truthful, they are efficient, and with a proper reserve price, they are optimal. Overture took that intuition and started charging the top bidder the second highest bid per click, the second highest bidder the third highest per click and so on. They called this “generalized second price”, or GSP. As of the mid-2000s, Google claimed in ad materials that this auction was very similar in its properties to a classic second-price auction.

Edelman and his coauthors, as well as Varian in a separate 2007 paper, noted that this is, in fact, not so. GSP is not necessarily efficient (let private values be [16,15,14], clickthrough rates be [3,2,1] and bids be [7,9,11]; you can check that this bidding is Nash in the one-shot simultaneous-bid game, yet the highest-value bidder gets the fewest clicks) nor truthful (let private values be [4,3,2] and clickthrough rates be [10,9,0] – truthful bidding gives the bidder with value 4 a payoff of (4-3)*10=10, but shading her bid to 2.5 drops her to the second slot, where she pays the third-highest bid of 2 and earns (4-2)*9=18). What, then, are the properties of the GSP mechanism? And why does anyone use it?
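Both numerical claims are easy to verify mechanically. This sketch (my own code, using the payment rule described above: each winner pays the next-highest bid per click) checks the Nash property on a grid of deviations, and the profitable deviation from truthful bidding:

```python
def gsp_payoffs(values, ctr, bids):
    """Payoffs under generalized second price: slots (ordered by
    clickthrough rate) go to bidders in order of bid, and each winner
    pays the next-highest bid per click (zero for the lowest winner).
    values[i] and bids[i] belong to bidder i; returns payoffs by bidder."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    payoffs = [0] * len(bids)
    for slot, i in enumerate(order):
        price = bids[order[slot + 1]] if slot + 1 < len(order) else 0
        payoffs[i] = (values[i] - price) * ctr[slot]
    return payoffs

# Inefficiency: values [16,15,14], clickthroughs [3,2,1], bids [7,9,11].
values, ctr, eq = [16, 15, 14], [3, 2, 1], [7, 9, 11]
print(gsp_payoffs(values, ctr, eq))  # [16, 16, 15]

# No bidder gains from any deviation on a fine bid grid, so this is Nash,
# yet the highest-value bidder earns its 16 from the bottom slot.
for i in range(3):
    base = gsp_payoffs(values, ctr, eq)[i]
    for dev in [x / 2 for x in range(41)]:  # candidate bids 0, 0.5, ..., 20
        trial = eq[:i] + [dev] + eq[i + 1:]
        assert gsp_payoffs(values, ctr, trial)[i] <= base + 1e-9

# Non-truthfulness: values [4,3,2], clickthroughs [10,9,0].  Shading the
# top bid to 2.5 moves her to slot 2, where she pays the bid below (2).
truthful = gsp_payoffs([4, 3, 2], [10, 9, 0], [4, 3, 2])[0]
shaded = gsp_payoffs([4, 3, 2], [10, 9, 0], [2.5, 3, 2])[0]
print(truthful, shaded)
assert shaded > truthful
```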

For any given parameter values, there are tons of Nash equilibria when GSP is modeled as a one-shot simultaneous bid game. The real auction is dynamic: bidders may change their bids as often as they want. Edelman et al, as well as Varian, refine the Nash set to equilibria that are a stable matching (in the sense of Roth and Sotomayor) between ad slots and bidders. It turns out that all of these “stable” or “locally envy-free” or “symmetric” equilibria give the auctioneer revenue at least as high as the Vickrey-Clarke-Groves mechanism; even though VCG is much simpler for bidders, auctioneers may be using GSP in order to extract higher revenue.
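To see what the envy-free refinement buys, here is my own arithmetic (not from the paper) on the inefficiency example above, using the standard VCG payment formula for position auctions: the Nash profile [7,9,11] is not locally envy-free, and consistently with that, it raises less revenue than VCG.

```python
def vcg_payments(values, ctr):
    """VCG payments in a position auction, with values and clickthrough
    rates both sorted descending: the bidder in slot k pays the externality
    on lower-ranked bidders, sum over j > k of values[j]*(ctr[j-1]-ctr[j])."""
    c = list(ctr) + [0]  # no clicks past the last slot
    n = len(values)
    return [sum(values[j] * (c[j - 1] - c[j]) for j in range(k + 1, n))
            for k in range(n)]

vcg_revenue = sum(vcg_payments([16, 15, 14], [3, 2, 1]))  # 29 + 14 + 0 = 43

# In the Nash profile with bids [7, 9, 11], per-click prices by slot are
# [9, 7, 0] and the value-15 bidder sits in slot 2.
gsp_revenue = 9 * 3 + 7 * 2 + 0 * 1   # 41
own = (15 - 7) * 2                    # value-15 bidder's payoff in slot 2
envy = (15 - 9) * 3                   # her payoff in slot 1 at slot 1's price
print(vcg_revenue, gsp_revenue, own, envy)
assert envy > own                 # she envies slot 1: not locally envy-free
assert gsp_revenue < vcg_revenue  # so the VCG revenue bound need not apply
```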

There have been many extensions of this model, only a few of which I point out. Susan Athey has a paper with Glenn Ellison incorporating consumer search across ad platforms – that is, the value of an ad slot is endogenous – and another paper with Denis Nekipelov adding bidder uncertainty about how the search engine will “quality-score” their bid. My classmates Kane Sweeney and Renato Gomes let bidders be uncertain about other bidders’ values per click, and show that even in the static game an efficient Bayes Nash equilibrium of GSP often fails to exist. I suppose this post mainly serves as notes to myself on this topic – I’m putting final touches on a new paper about GSP in a dynamic context, with the takeaway that GSP is actually particularly bad for the auctioneer when bidders bid sequentially, and that the stable matching/locally envy-free refinement of stage-game Nash in an important sense gives exactly the opposite results of the dynamic subgame perfect equilibrium set. More on this in a couple weeks.

http://www.benedelman.org/publications/gsp-060801.pdf (Final AER version – three cheers to Ben Edelman for posting final versions of his papers on his website!)

“Optimal Auction Design Under Non-Commitment,” V. Skreta (2011)

Consider a government selling an oil or timber tract (and pretend for a second that the relevant uncertainty is over how effectively the buyers can exploit the resource, not over some underlying correlated value of the tract; don’t blame me, I’m just using the example from the paper!). If buyers have independent private values, the revenue-maximizing auction is the Myerson auction, which you can think of as a second-price auction with a reserve price. But if a reserve price is set, then sometimes the auction will end with all bids below the reserve and the government still holding the resource. Can the government really commit not to rehold the auction in that case? Empirical evidence suggests not.
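For a “regular” distribution, the static Myerson reserve r solves psi(r) = r - (1 - F(r))/f(r) = c, where c is the seller’s own value for the good. A quick sketch of my own (not from the paper) for the benchmark case:

```python
def myerson_reserve(F, f, c=0.0, lo=0.0, hi=1.0, tol=1e-9):
    """Solve psi(r) = r - (1 - F(r))/f(r) = c by bisection on [lo, hi].
    Assumes the virtual value psi is increasing (a 'regular' distribution)."""
    psi = lambda r: r - (1 - F(r)) / f(r)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if psi(mid) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Uniform [0,1] values, seller values the good at 0: psi(r) = 2r - 1.
r = myerson_reserve(F=lambda x: x, f=lambda x: 1.0)
print(round(r, 4))  # 0.5, independent of the number of bidders
```

Without commitment, of course, a reserve of 0.5 is only credible if the seller can resist re-auctioning an unsold tract later, which is exactly the problem the paper takes on.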

But without this type of commitment, what does the optimal sequence of auction mechanisms look like for the government? There is a literature beginning in the 80s about this problem. In any dynamic mechanism, the real difficulty comes down to the fact that, in any given stage, buyers do not want to reveal their actual type (where type, in our example, is their private value for the good). If they reveal in period 1, then the seller can fully extract their surplus in period 2 because private values are now public information. So there is a tradeoff often called the “ratchet effect”: mechanism designers can try to learn information in early stages of the dynamic mechanism to separate types, but will probably have to compensate agents in equilibrium for giving up this information. On a technical level, the big problem is that this effect means the revelation principle will not apply. This, you won’t be surprised, is a huge problem: finding an “optimal” mechanism without the revelation principle means searching over arbitrary sequences of message games that imply something about types, whereas with a direct truthful mechanism we need only search over the class of mechanisms where every agent states his type and does not want to mimic some other type.

In a clever new paper, Vasiliki Skreta uses some tricks to actually solve for the optimal sequential auction mechanism. McAfee and Vincent (GEB 1997) proved that if an auctioneer runs a sequence of first-price or second-price auctions, revenue equivalence between the two still holds even without commitment, and as the discount rate goes to 1, seller revenue and optimal reserve prices converge to the static case. This result doesn’t tell us what the optimal mechanism is when discount rates matter, however; oil tract or cellular auctions may be years apart. Skreta shows that the highest perfect Bayesian equilibrium payoff is achieved with Myerson-style reserve prices in each period, and shows how to solve for these (often numerically, given the complexity of the formulas). In the case of symmetric bidders, the optimal mechanism can be written as a series of first-price or second-price auctions, with reserve prices varying by period.

The actual result is probably less interesting than how we reach it. Two particular tricks are worth noting. First, Skreta lets the seller have total flexibility in how much information to reveal to buyers. That is, after period 1, if there is no sale the auctioneer can reveal everyone’s bids to each bidder, or not, or do so partially. Using a result from a great 2009 paper, also by Skreta, it turns out that with independent private values, it does not matter what information the seller reveals to the buyers in each period. That is, when looking for optimal mechanisms, we can just assume either that every bidder knows the full history of bids by every other bidder, or that every bidder knows only their own full history of bids. The proof is difficult, but the intuition is very much along the lines of Milgrom-Stokey no-trade: all agents have common priors about other agents’ types, and auctioneers can lie in their information disclosure, so if a seller is willing to offer some disclosure, it must be that this disclosure has negative expected value for bidders. For buyers to be willing to “offer their type” in an early stage to the auctioneer, they must be compensated positively. Since disclosure can’t be good for both buyer and seller by a Milgrom-Stokey-type argument, any information disclosure rule that is able to gather good information has expected value zero for the seller. Obviously, I’m eliding the difficulties in the proof, but this is the basic idea.

The second useful result is about when you might want to separate types as the auctioneer. Imagine you are committing to holding the auction no more than 6 times. Should you separate early or late? It turns out you want to separate types as late as possible. That is, the optimal auction will, in period 1, sell to the highest bidder above some cutoff. The auction does want to separate types above that cutoff. But below that cutoff, buyers know that by not pooling their types, they are giving the auctioneer information about their private values which will be used in period 2 to extract rents. In equilibrium in period 1, then, the low-value buyers will need to be compensated to not choose actions that pool their types. This compensation turns out to be so expensive that the informational gains to the seller in period 2’s auction are not valuable enough to make it worth the cost.

A few comments: really what we care about here is what happens as the number of auctions which may occur goes to infinity. The principle is proven to be the same – cutoff rules are optimal – but how do those cutoff rules behave as the lack of commitment gets arbitrarily bad? What revenue properties are there? For instance, for various discount rates in the 2-bidder, uniform [0,1] private value auction, does seller revenue approach zero? Or does it asymptote to some fraction of the revenue in the auction with commitment? I think these types of applied theory questions are quite important, and should be answered here. My hunch is that even in simple cases the mathematics becomes too difficult as the number of periods grows large, and this is why Skreta didn’t include it. If the loss to the auctioneer from non-commitment turns out to be really large, we already know the “optimal” auction without commitment is to assign the rights to the auction to some second seller who, perhaps for contractual reasons, actually can commit not to resell. If the loss in some simple numerical examples is no more than 10%, then no big deal.

http://sites.google.com/site/vskreta/research/auctions_rev_3.pdf (WP version, Jan 2011)
