Category Archives: Auctions/Allocations

“Internet Advertising and the Generalized Second-Price Auction,” B. Edelman, M. Ostrovsky & M. Schwarz (2007)

Consider the Google AdWords auction. When you search on Google (or Microsoft, or Facebook, or Bing, among others), a series of ads is displayed in your browser. Assume that, when you search for “cookies,” the top ad slot is clicked 3 times per hour, the second slot 2 times per hour, and the bottom slot once per hour. How should you sell these slots?

You could just set a fixed price for every X displays of each ad: this was how internet advertising was done in the 1990s. But we know that a posted price is rarely the optimal mechanism. Instead, why not auction each slot? Since we also have a famous result that adding bidders to an auction is a very good thing indeed, why not sell all three ad slots simultaneously? Have each advertiser offer a bid per click, then give the slot with the most clicks to the bidder who bid highest, the second most clicks to the bidder who bid second highest, and so on. Overture (now owned by Yahoo) pioneered auctions of this form, selling slots at a price per click equal to each winner’s own bid. The designers at Google were likely familiar with some economics literature and knew that second-price auctions have some nice properties: they are dominant-strategy truthful, they are efficient, and with a proper reserve price, they are optimal. Taking that intuition, Google began charging the top bidder the second highest bid per click, the second highest bidder the third highest bid per click, and so on. They called this “generalized second price”, or GSP. As of the mid-2000s, Google claimed in ad materials that this auction was very similar in its properties to a classic second-price auction.

Edelman and his coauthors, as well as Varian in a separate 2007 paper, noted that this is, in fact, not so. GSP is not necessarily efficient (let private values be [16,15,14], clickthrough rates be [3,2,1] and bids be [7,9,11]; you can check that this bidding is Nash in the one-shot simultaneous bid game, yet the highest-value bidder gets the fewest clicks) nor truthful (let private values be [4,3,2] and clickthrough rates be [10,9,0] – truthful bids give the bidder with value 4 a payoff of (4-3)*10=10, but bidding 2.5 drops her to the second slot, where she pays the next-highest bid of 2 and earns (4-2)*9=18). What, then, are the properties of the GSP mechanism? And why does anyone use it?
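
Since GSP arithmetic is easy to get wrong, here is a minimal sketch (my own check, not from the paper) that verifies both examples. The only rule assumed is the standard GSP one: rank bidders by bid, and charge each winner the next-highest bid per click.

```python
# A minimal sketch (my own check, not from the paper). Standard GSP rule:
# rank bidders by bid; the winner of slot k pays the (k+1)-th highest bid
# per click, or 0 if no lower bid exists.

def gsp_payoffs(values, bids, ctr):
    """Payoff of each bidder under GSP (bidders indexed as given)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])  # rank by bid
    payoffs = [0.0] * len(bids)
    for slot, i in enumerate(order):
        if slot < len(ctr):
            price = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
            payoffs[i] = (values[i] - price) * ctr[slot]
    return payoffs

# Inefficiency: the value-16 bidder sits in the worst slot, yet no
# unilateral deviation helps her (one-shot Nash).
values, bids, ctr = [16, 15, 14], [7, 9, 11], [3, 2, 1]
print(gsp_payoffs(values, bids, ctr))          # [16.0, 16.0, 15.0]
print(gsp_payoffs(values, [10, 9, 11], ctr))   # middle slot: she earns 14.0
print(gsp_payoffs(values, [12, 9, 11], ctr))   # top slot: she earns 15.0

# Non-truthfulness: shading from 4 down to 2.5 raises her payoff 10 -> 18.
values, ctr = [4, 3, 2], [10, 9, 0]
print(gsp_payoffs(values, [4, 3, 2], ctr))     # truthful: she earns 10.0
print(gsp_payoffs(values, [2.5, 3, 2], ctr))   # deviation: she earns 18.0
```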

For any given parameter values, there are tons of Nash equilibria when GSP is modeled as a one-shot simultaneous bid game. The real auction is dynamic: bidders may change their bids as often as they want. Edelman et al, as well as Varian, refine the Nash set to equilibria that correspond to a stable matching (in the sense of Roth and Sotomayor) between ad slots and bidders. It turns out that all of these “stable” or “locally envy-free” or “symmetric” equilibria give the auctioneer revenue at least as high as the Vickrey-Clarke-Groves mechanism (the revenue-minimal such equilibrium yields exactly the VCG outcome); even though VCG is much simpler for bidders, auctioneers may be using GSP in order to extract higher revenue.

There have been many extensions of this model, only a few of which I point out here. Susan Athey has a paper with Glenn Ellison incorporating consumer search across ad platforms – that is, making the value of an ad slot endogenous – and another paper with Denis Nekipelov adding bidder uncertainty about how the search engine will “quality-score” their bids. My classmates Kane Sweeney and Renato Gomes let bidders be uncertain about other bidders’ values per click, and show that even in the static game an efficient Bayes-Nash equilibrium of GSP often fails to exist. I suppose this post mainly serves as notes to myself on this topic – I’m putting final touches on a new paper about GSP in a dynamic context, with the takeaway that GSP is actually particularly bad for the auctioneer when bidders bid sequentially, and that the stable matching/locally envy-free refinement of stage-game Nash gives, in an important sense, exactly the opposite results from the set of dynamic subgame perfect equilibria. More on this in a couple weeks.

http://www.benedelman.org/publications/gsp-060801.pdf (Final AER version – three cheers to Ben Edelman for posting final versions of his papers on his website!)

“Optimal Auction Design Under Non-Commitment,” V. Skreta (2011)

Consider a government selling an oil or timber tract (and pretend for a second that the relevant uncertainty is over how effectively the buyers can exploit the resource, not over some underlying correlated value of the tract; don’t blame me, I’m just using the example from the paper!). If buyers have independent private values, the revenue-maximizing auction is the Myerson auction, which you can think of as a second-price auction with a reserve price. But if a reserve price is set, then sometimes the auction will end with all bids below the reserve and the government still holding the resource. Can the government really commit not to rehold the auction in that case? Empirical evidence suggests not.

But without this type of commitment, what does the optimal sequence of auction mechanisms look like for the government? There is a literature on this problem going back to the 1980s. In any dynamic mechanism, the real difficulty comes down to the fact that, in any given stage, buyers do not want to reveal their actual type (where type, in our example, is their private value for the good). If they reveal in period 1, then the seller can fully extract their surplus in period 2, because private values are now public information. So there is a tradeoff, often called the “ratchet effect”: mechanism designers can try to learn information in early stages of the dynamic mechanism to separate types, but will have to compensate agents in equilibrium for giving up this information. On a technical level, the big problem is that this effect means the revelation principle does not apply. This, you won’t be surprised, is a huge problem: finding an “optimal” mechanism without the revelation principle means searching over every possible way of using messages that reveal something about types in some sequence, whereas with a direct truthful mechanism we need only search the class of mechanisms in which every agent states his type and does not want to mimic some other type.

In a clever new paper, Vasiliki Skreta uses some tricks to actually solve for the optimal sequential auction mechanism. McAfee and Vincent (GEB 1997) proved that if an auctioneer is using a sequence of first-price or second-price auctions, revenue equivalence between the two still holds even without commitment, and as the discount rate goes to 1, seller revenue and optimal reserve prices converge to the static case. This result doesn’t tell us what the optimal mechanism is when discount rates matter, however; oil tract or cellular spectrum auctions may be years apart. Skreta shows that the highest perfect Bayesian equilibrium payoff is achieved with Myerson-style reserve prices in each period, and shows how to solve for them (often numerically, given the complexity of the formulas). In the case of symmetric bidders, the optimal mechanism can be written as a series of first-price or second-price auctions, with reserve prices varying by period.

The actual result is probably less interesting than how we reach it. Two particular tricks are worth noting. First, Skreta lets the seller have total flexibility in how much information to reveal to buyers. That is, after period 1, if there is no sale, the auctioneer can reveal everyone’s bids to each bidder, or not, or do so partially. Using a result from a great 2009 paper, also by Skreta, it turns out that with independent private values, it does not matter what information the seller reveals to the buyers in each period. That is, when looking for optimal mechanisms, we can just assume either that every bidder knows the full history of bids by every other bidder, or that every bidder knows only his own full history of bids. The proof is difficult, but the intuition is very much along the lines of Milgrom-Stokey no-trade: all agents have common priors about other agents’ types, and auctioneers can lie in their information disclosure, so if a seller is willing to offer some disclosure, it must be that this disclosure has negative expected value for bidders. For buyers to be willing to “offer their type” in an early stage to the auctioneer, they must be compensated positively. Since disclosure can’t be good for both buyer and seller, by a type of Milgrom-Stokey result, any information disclosure rule that is able to gather good information has expected value zero for the seller. Obviously, I’m eliding the difficulties in the proof, but this is the basic idea.

The second useful result is about when you might want to separate types as the auctioneer. Imagine you are committing to holding the auction no more than 6 times. Should you separate early or late? It turns out you want to separate types as late as possible. That is, the optimal auction will, in period 1, sell to the highest bidder above some cutoff. The auction does want to separate types above that cutoff. But below that cutoff, buyers know that by not pooling their types, they are giving the auctioneer information about their private values which will be used in period 2 to extract rents. In equilibrium in period 1, then, the low-value buyers will need to be compensated to not choose actions that pool their types. This compensation turns out to be so expensive that the informational gains to the seller in period 2’s auction are not valuable enough to make it worth the cost.

A few comments: really what we care about here is what happens as the number of auctions which may occur goes to infinity. The principle is proven to be the same – cutoff rules are optimal – but how do those cutoff rules behave as the lack of commitment gets arbitrarily bad? What revenue properties are there? For instance, for various discount rates in the 2-bidder, uniform [0,1] private value auction, does seller revenue approach zero? Or does it asymptote to some fraction of the revenue in the auction with commitment? I think these types of applied theory questions are quite important, and should be answered here. My hunch is that even in simple cases the mathematics becomes too difficult as the number of periods grows large, and this is why Skreta didn’t include it. If the loss to the auctioneer from non-commitment turns out to be really large, we already know the “optimal” auction without commitment is to assign the rights to the auction to some second seller who, perhaps for contractual reasons, actually can commit not to resell. If the loss in some simple numerical examples is no more than 10%, then it is no big deal.
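
As a reference point for such numerical examples, the with-commitment benchmark is easy to compute: in the 2-bidder uniform [0,1] case, the Myerson auction is a second-price auction with reserve 1/2, earning 5/12 in expectation against 1/3 with no reserve. A minimal Monte Carlo sketch (mine, not from the paper):

```python
# A minimal Monte Carlo sketch (my benchmark, not from Skreta's paper):
# with-commitment revenue in the 2-bidder uniform[0,1] example, i.e. a
# second-price auction with the Myerson reserve price of 1/2.
import random

def second_price_revenue(reserve, n_bidders=2, trials=200_000):
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        if bids[-1] >= reserve:              # sale occurs
            total += max(bids[-2], reserve)  # pay max(2nd bid, reserve)
    return total / trials

print(second_price_revenue(reserve=0.5))  # ~0.417 = 5/12, with commitment
print(second_price_revenue(reserve=0.0))  # ~0.333 = 1/3, no reserve
```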

http://sites.google.com/site/vskreta/research/auctions_rev_3.pdf (WP version, Jan 2011)

“Persuasion by Cheap Talk,” A. Chakraborty & R. Harbaugh (2010)

This is a nice paper from the recent AER whose result is simple enough that the reader surely wonders, why didn’t I think of this first?

Consider someone – a lawyer, a vendor, whatever – who has state-independent preferences on the actions of another. No matter how good a product is, a salesman wants you to buy it. No matter how guilty a criminal is, a lawyer wants you to acquit. Talk is cheap in these situations: no matter what the salesman says, I know he wants me to buy the product, so I just don’t believe him. The standard ways to get around this problem are to make talk costly (Spence signaling) or to make claims sometimes verifiable (we sometimes call these “persuasion games”).

Chakraborty and Harbaugh describe another situation where cheap talk can have real effects: multidimensional games. Let there be two products a customer might buy. Let the customer’s value of not buying anything be x, distributed uniform on [0,1] and known only to the buyer. Let the goods confer benefits v1 and v2, each distributed uniform [0,1] i.i.d., with the exact values known only to the seller. The customer buys if his expected benefit from buying is more than his value x. Since both goods look identical to the buyer, he expects each of them to confer benefit .5, and he buys (arbitrarily) one of the goods if x<.5, so a sale is made half the time. The seller does not care which good he sells, so he can credibly (in equilibrium) claim that one of the goods is better than the other. The better good, from the perspective of the customer, now has expected value 2/3, and a sale is made 2/3 of the time. That is, the seller is made better off by credibly talking up one of the products and talking down the other.
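
The 2/3 comes from E[max(v1,v2)]=2/3 for two i.i.d. uniform draws. A quick Monte Carlo check (my own, not the paper’s):

```python
# A minimal Monte Carlo check (mine, not the paper's): with v1, v2 ~ U[0,1]
# i.i.d., the buyer's estimate of the recommended good is E[max(v1, v2)] = 2/3.
# Since the outside option x is U[0,1], the sale probability equals this
# expected benefit: 1/2 with no talk, 2/3 once the seller ranks the goods.
import random

trials = 200_000
est_max = sum(max(random.random(), random.random())
              for _ in range(trials)) / trials
print(est_max)  # ~0.667: sale probability with credible comparative talk
```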

More generally, for any utility functions and any distribution of prior beliefs about values unknown to one of the agents, there is always an information-revealing equilibrium in a multidimensional problem; this is true even if the agent with private information wants to increase the other agent’s estimate of one dimension and decrease his estimate of a second dimension. Both agents are strictly better off if the information-revealer has strictly quasiconvex utility. The intuition is simply that influential communication will spread out the decision maker’s estimates of each dimension, and quasiconvexity ensures that this spread is better for the communicator.

Beyond the salesman example, this logic explains a number of other phenomena. If a jury requires unanimity, and half the voters will say “guilty” only if they think the crime actually happened, and half the voters will say “guilty” only if they think the criminal was immoral, then a defense lawyer can (credibly!) spend time discussing only one of culpability or morality. If the lawyer discusses morality, then all jurors will think the potential criminal more likely to have committed the crime, but they will also all consider him more moral; since only one juror needs to vote “innocent”, this spread is good for the defense attorney. Applications to auctions and advertising are also found in the paper – this might be the rare paper that gets to AER on the strength of its examples alone.

http://www.bus.indiana.edu/riharbau/PersuasionByCheapTalk.pdf (final version, published in December 2010 AER)

“All-Pay Contests,” R. Siegel (2009)

Everybody knows the famous all-pay auction, where the highest bidder wins the prize but everybody pays, or perhaps the M highest bidders each win one of M prizes. Political lobbying is the classic example. But many real-life situations do not involve bids, but rather cost functions for each player. Lobbying firm A may be able to generate some interest among politicians without exerting any effort at all, whereas lobbying firm B may have to expend significant resources to generate any interest.

Siegel models this as an “all-pay” contest. Bidders have complete information; everyone knows everyone else’s valuations and cost structures. There are N bidders and M identical prizes, and each bidder i has a value v(i) from winning a prize. Each bidder’s strategy is to select a “score” s at a cost c(s), which may differ across players and which is not necessarily differentiable. There turns out to be a very nice Nash equilibrium in this game. Let the “reach” of a player be the largest score s at which v(i)-c(s) is still non-negative; an assumption on the cost function ensures that bidding infinity is never optimal. Order the players by their reach, and let the reach of the (M+1)-th player be the “threshold”. Let the “power” of a player be v(i)-c(s) when s is precisely the threshold; the (M+1)-th player has power of exactly zero.

Solving for equilibrium strategies is very difficult, but they almost certainly will be mixed strategies. But we can solve for payoffs in equilibrium. The unique expected payoffs (not necessarily unique equilibrium, as different strategies may give the same expected payoffs) give every player the maximum of 0 and their power. With some (very minor) assumptions about the game, at least N-M players will win an object with zero probability; since they can guarantee no worse than zero payoff by simply not participating in the game (by bidding zero), this means that (at least) N-M players will [correction, 11/23/10: either not participate or participate and earn zero payoff in expectation.]
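
To fix ideas, here is a minimal sketch of the payoff characterization under linear costs (the functional form and numbers are my own illustration, not Siegel’s):

```python
# A minimal sketch of the payoff characterization (example numbers mine,
# not Siegel's), with linear costs c_i(s) = a_i * s, so reach_i = v_i / a_i.

def contest_payoffs(values, slopes, n_prizes):
    reach = [v / a for v, a in zip(values, slopes)]
    order = sorted(range(len(values)), key=lambda i: -reach[i])
    threshold = reach[order[n_prizes]]      # reach of the (M+1)-th player
    # power_i = v_i minus the cost of scoring exactly at the threshold
    power = [v - a * threshold for v, a in zip(values, slopes)]
    return [max(0.0, p) for p in power]     # unique equilibrium payoffs

# 2 prizes, 4 lobbyists: two low-cost firms, two high-cost firms
values = [10.0, 8.0, 6.0, 5.0]   # value of winning a prize
slopes = [1.0, 1.0, 1.5, 2.0]    # marginal cost of generating "score"
print(contest_payoffs(values, slopes, n_prizes=2))
# reaches [10, 8, 4, 2.5], threshold 4 -> payoffs [6.0, 4.0, 0.0, 0.0]
```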

The proof is actually surprisingly straightforward mathematically: no crazy functional analysis is necessary. The only tricky bit is guaranteeing that an equilibrium exists at all, since, as with most auction games, the payoff functions are discontinuous in the strategies. Siegel uses a nice application of an old result by Simon and Zame to guarantee existence. Essentially, Simon and Zame note that in many auctions, the only discontinuities we need to worry about occur when players bid the same amount (a “tie”). They then show that there exists some tie-breaking rule such that, when it is added to the game, an equilibrium is guaranteed to exist. Siegel uses this result, then shows that ties occur with probability zero in the all-pay contest, meaning that for a measure-one set of games the exact tie-breaking rule doesn’t matter, but it lets us know an equilibrium exists.

The characterization of all-pay contest equilibrium is really useful in practice. For one, it shows that, no matter what the cost function of the bidders who lose, expected payoffs do not change. That is, a first-price auction, and a first-price auction where losers pay some fraction of their bids, will give precisely the same expected payoffs for all players; this is a weak form of revenue equivalence. Also, recruiting a “weak” additional bidder, meaning a bidder with reach lower than the threshold, will not increase seller revenue since that bidder will bid zero in equilibrium. This literature is still fairly wide-open: there is no Myerson-like result on “optimal contest design,” for instance. I was thinking a bit today about an extension where we let v(i) be a function of all player strategies; consider two fighter jet manufacturers who are lobbying a government to make a purchase. The advertising of the losing bidder may increase the value of winning for the winning bidder if it causes the government to make a larger purchase of jets. These types of “advertising contests” are pretty widespread, but as far as I know, there is no characterization of optimal design or of equilibrium in the literature.

One extension is finished, however: Ron mentioned in a seminar the other day that he has a similar result for games where costs and values are non-monotonic. An example is writing an economics paper: the initial effort is pleasurable, but the revising is costly. A minor modification of the 2009 paper is enough to guarantee existence of an equilibrium and to characterize payoffs.

http://faculty.wcas.northwestern.edu/~rsi665/all-pay-contests.pdf (Final Econometrica version)

“Efficient Mechanism Design,” V. Krishna & M. Perry (2000)

Mechanism design is the source of many of the most elegant results in economics. In this unpublished paper, Krishna and Perry show how powerful the famous Vickrey-Clarke-Groves mechanism is. The standard VCG just says that, to allocate goods, I should give them to the people with the highest value for those goods, and charge each bidder a sum equal to the “externality” his participation imposes on the other bidders. Such a mechanism makes it a dominant strategy for everyone to reveal their true private value, hence is efficient. This is an efficiency result, not a revenue maximization result.

Krishna and Perry extend VCG by showing that VCG with an appropriate “basis” is the revenue-optimal mechanism among all efficient mechanisms, even when VCG is general enough to allow for complementarities among agents or consumption externalities. In particular, the basis is the “worst” surplus for agent i among his possible types, given the mechanism. For instance, in a standard second-price auction, an agent with value 0 for the good gets expected surplus 0, so his basis is 0. The generalized VCG is just the VCG “externality” payment, adjusted so that the basis type defined above is exactly willing to participate. Since every agent is willing to participate even in his worst case, all agents are always willing to participate in the mechanism. From here, a short proof shows that generalized VCG is optimal among all efficient, incentive compatible mechanisms that satisfy participation constraints. Further, there exists an efficient, revenue-optimal mechanism with a balanced budget (the sum of all transfers is zero) iff the sum of payments in VCG is greater than or equal to zero.

As an example, the authors give a simple proof of Mailath and Postlewaite’s 1990 result that no public project mechanism can pay for itself: government must force unwilling people to pay taxes to overcome the information asymmetry. The intuition is that the designer does not know who really wants the project and who does not. He needs to charge the low-value types a small enough amount that they don’t veto the project, but he can’t charge the high-value types so much that they would pretend to be low-value types. Note that the VCG externality payment is only positive if an individual is “pivotal” – that is, if his pretending to be a low type would switch the project from being constructed to not being constructed. Only pivotal agents pay a VCG transfer; a pivotal agent i pays exactly the cost of the project minus the sum of everyone else’s welfare had the project been constructed (call this sum Tj, and call individual i’s utility from completing the project Ti). Then c-Tj<=Ti must hold if the mechanism is efficient, because the project is efficient to enact only if Tj+Ti>=c. We know from Green and Laffont that the VCG mechanism gives transfers summing to less than the cost of the project (you can check this here: let there be two agents, c=1, and Ti=.6 for both. Both agents are pivotal, and the VCG payment is .4 for each, so payments sum to .8<c). But we also know from above that VCG maximizes total payments among the class of efficient mechanisms satisfying participation constraints. So there exists no mechanism for eliciting private values for a public project that will pay for itself.
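
The two-agent check at the end generalizes easily. Here is a minimal sketch (mine, not from the paper) of the pivot payments in the public project problem:

```python
# A minimal sketch (mine, not from the paper): VCG pivot payments for a
# public project of cost c. An agent pays only if pivotal, and then pays
# the cost minus the sum of everyone else's reported values.

def vcg_public_project(values, cost):
    build = sum(values) >= cost
    payments = []
    for i, v in enumerate(values):
        others = sum(values) - v
        pivotal = build and not (others >= cost)
        payments.append(cost - others if pivotal else 0.0)
    return build, payments

# The Green-Laffont check from the text: c = 1, both agents value it at .6.
build, pay = vcg_public_project([0.6, 0.6], cost=1.0)
print(build, pay, sum(pay))  # True [0.4, 0.4] 0.8 -- payments fall short of c
```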

http://econ.la.psu.edu/~vkrishna/papers/vcg20.pdf

“Near Optimality of Second Price Mechanisms in a Class of Asymmetric Auctions,” V. Mares & J. Swinkels (2010)

Auctions are often asymmetric, in that some bidders are preferred by the auctioneer. For instance, government procurement often wishes to give a “bonus” to minority-headed firms, and farm produce purchasers might wish to penalize growers who are shipping from a distance. The optimal mechanism in asymmetric auctions, however, is often quite complicated and hence difficult to implement. Mares and Swinkels show that the “virtual surplus” function common in auction theory has a tight link with a concept called rho-concavity, which measures the degree of concavity of a function at a given point. They then propose a simple mechanism: choose a scale q in (0,1] and give a winning bidder a bonus of qD(i), where D(i) is a measure of how much the auctioneer “prefers” agent i. Depending on the properties of the distribution of bidder costs, this simple mechanism can capture a surprisingly high share of the surplus gains the optimal mechanism achieves over a nondiscriminatory auction; for instance, if the density of costs is monotonically increasing, the simple mechanism captures at least 75% of those gains. That is, the simple mechanism always beats a symmetric auction given some very mild assumptions, and this result holds even when the auctioneer does not know the true cost distribution, but only knows it lies in some class. This suggests that auction designers may want to consider asymmetric auctions: though the optimal versions seem difficult to implement, these simple analogues capture much of the gain over mechanisms that treat all bidders the same.
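
To make the bonus mechanism concrete, here is a sketch in the procurement interpretation; the scoring and “second-score” payment rule below are the standard bid-credit construction and my own illustration, not necessarily the paper’s exact formulation:

```python
# A sketch of the simple mechanism in a procurement setting (my own
# illustration, not necessarily the paper's exact formulation): rank bids
# after subtracting the bonus q*D[i], and pay the winner the price at
# which his adjusted score would tie the runner-up.

def bonus_auction(bids, D, q):
    scores = [b - q * d for b, d in zip(bids, D)]
    order = sorted(range(len(bids)), key=lambda i: scores[i])
    winner, runner_up = order[0], order[1]
    payment = scores[runner_up] + q * D[winner]   # second-score payment
    return winner, payment

# Bidder 0 is a preferred (say, minority-headed) firm with bonus D = 10.
bids = [100.0, 95.0]
D = [10.0, 0.0]
print(bonus_auction(bids, D, q=0.8))  # (0, 103.0): bidder 0 wins despite
                                      # the higher bid, and is paid 103
```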

http://www.kellogg.northwestern.edu/faculty/mares/htm/personal/A2.pdf

“What if Achilles and the Tortoise were to Bargain?,” D. Samet (2010)

The Greek philosopher Zeno famously claimed that motion was impossible: once half the distance desired has been crossed, we must cross half the remaining distance, then half of what still remains, and so on an infinite number of times. As Aristotle informally, and modern mathematicians formally, have discussed, this argument is incorrect, since the problems faced at each step are not precisely the same (the distance remaining becomes shorter), and in particular the infinite sum of times needed to cross is convergent.

However, a similar problem does occur in bargaining games. Consider two agents trying to agree on a policy by reaching the Pareto frontier (for instance, Israelis and Palestinians negotiating on the boundary of their states). In each step of the bargaining process, some proposal is made and accepted such that the agents move closer to the Pareto frontier (for instance, successive rounds of “peace process” negotiations). Given fairly broad assumptions about the nature of the bargaining process, the agents will never reach the Pareto frontier in finite time. The principal difference between bargaining and Zeno’s paradox is that utility is unique only up to affine transformations. That is, when we reach the second stage of the bargaining, we do not have only “half the utility” still to cover; we face exactly the original problem!
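
To see the renormalization point concretely, a toy sketch (my own construction, not Samet’s model):

```python
# A toy illustration (mine, not Samet's model): each round closes half the
# remaining gap to the Pareto frontier. The absolute gap shrinks
# geometrically but never hits zero, and because utility is only defined
# up to affine transformations, rescaling by the remaining gap returns
# exactly the original stage problem.
gap = 1.0
for round_number in range(1, 11):
    gap /= 2                             # half the remaining surplus
    print(round_number, gap, gap / gap)  # renormalized gap is always 1.0
```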

http://www.tau.ac.il/~samet/papers/Achilles%20and%20the%20tortoise.pdf

“What Really Matters in Auction Design?,” P. Klemperer (2002)

Klemperer’s JEP piece on auction design in practice. Though the Revenue Equivalence Theorem claims that, with a fixed, non-collusive group of bidders, auctioneer revenue does not depend on the choice of auction, in practice it is not possible either to stop collusion or to force bidders to enter. For instance, if there is a cost to bidding and bidders perceive one bidder to be stronger, a second-price auction will attain zero revenue, since only the strong bidder will bid. Many interesting examples of real-life “failed” auctions. Notes that sealed-bid auctions and the Klemperer “Anglo-Dutch” auction have nice properties when it comes to avoiding collusion, though perhaps at some loss of efficiency. Argues that regulators should treat antitrust action during auctions as seriously as antitrust action in “normal” markets.

http://www.jstor.org/stable/pdfplus/2696581.pdf
