“Reputational Bargaining under Knowledge of Rationality,” A. Wolitzky (2010)

It’s job market season, and the first candidates are arriving here at my school. Wolitzky and his flat-out ridiculous CV came to present this paper here on Monday. This paper is actually not one of the stronger ones he’s written, but the result is interesting nonetheless, for a reason I’ll get to shortly.

The problem of “reputational bargaining” (which you may recognize from an example in Myerson’s textbook) is a pie-splitting problem in which one or both sides have the ability to commit to a minimum position. Consider a union that claims it won’t take less than $15/hr in average wages. If the union whips up its membership, then management may believe there is a 1 in 1000 chance that the posture is a firm commitment; perhaps there is a 1 in 1000 chance that some union member will kill the union head if he accepts any less. Assume this probability is common knowledge.

A number of papers have looked at equilibria of games like this under standard common knowledge of rationality assumptions. Wolitzky instead makes very unrestrictive epistemic assumptions: we have only first-order knowledge of rationality, meaning I know you are rational, but I don’t know that you know that I am rational, or anything more complex. He then looks for the maxmin payoff: the payoff a player can guarantee herself no matter what beliefs the opponent may hold about her bargaining strategy.
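In symbols, and paraphrasing loosely rather than using the paper’s notation: if we let the opponent play any strategy that is a best response to *some* belief about us, the object of interest is

```latex
% My paraphrase of the maxmin object, not the paper's exact notation.
% \Sigma_2^{R} = strategies of player 2 that best respond to SOME belief
% about player 1 (player 2 rational, beliefs otherwise unrestricted).
\underline{u}_1 \;=\; \sup_{\sigma_1}\; \inf_{\sigma_2 \in \Sigma_2^{R}} u_1(\sigma_1, \sigma_2)
```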

The maxmin turns out to be surprisingly high. If players are negotiating over the division of a pie of size 1, then a 1 in 10 chance that an announced posture is a firm commitment allows the committing player to guarantee herself 30 percent of the pie. Even a 1 in a million chance allows 7 percent of the pie to be recovered. The strategy that attains this maxmin payoff is to announce a posture whereby the player accepts x percent of the pie if agreement is reached at time zero, and accepts only x percent plus a payment for delay if agreement is not reached instantly. This is something along the lines of “prejudgment interest” in civil suits.
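As a sanity check, both of those figures are consistent with the closed form 1/(1 − ln z), where z is the probability that the posture is a real commitment. The formula below is my reconstruction from the two numbers quoted, so check the paper’s theorem before leaning on it:

```python
import math

def maxmin_share(z: float) -> float:
    """Guaranteed share of a unit pie for a player whose announced posture
    is believed to be a real commitment with probability z. The closed form
    1 / (1 - ln z) is my reconstruction: it matches both figures quoted
    above, but see the paper for the exact result."""
    assert 0.0 < z < 1.0
    return 1.0 / (1.0 - math.log(z))

print(round(maxmin_share(0.1), 3))   # 0.303 -- a 1-in-10 chance guarantees ~30%
print(round(maxmin_share(1e-6), 3))  # 0.067 -- even 1-in-a-million recovers ~7%
```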

Why can so much be recovered? The noncommitted player will only continue bargaining if he expects the potentially-committed player to reduce her demand in the near future. If the potentially-committed player simply mimics her committed self, then the noncommitted player will update his beliefs about the probability she is committed quite quickly. Once he believes she is committed with sufficiently high probability, agreement must be reached immediately, since the committed posture demands more and more of the pie as time passes, so waiting only worsens his terms. It turns out the maxmin strategy is unique, even though we allow all sorts of discontinuous, nondifferentiable, etc., strategies for each player; proving this uniqueness involves a lot of algebra which, to be honest, is not worth reading through.
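To see how fast that updating is, here is a toy Bayes calculation of my own (not the paper’s model; the per-period probability q that an uncommitted type backs down is a purely hypothetical number):

```python
def posterior_committed(z: float, q: float, n: int) -> float:
    """Posterior probability that a player is committed after holding her
    posture for n periods, when the observer thinks an uncommitted type
    would back down with probability q each period. By Bayes' rule, the
    committed type never backs down, so only the uncommitted type's
    survival probability (1-q)^n shrinks."""
    survives_if_uncommitted = (1.0 - q) ** n
    return z / (z + (1.0 - z) * survives_if_uncommitted)

# Even a 1-in-1000 prior climbs fast if the uncommitted type is thought
# to cave 30% of the time each period:
for n in (0, 10, 20, 30):
    print(n, round(posterior_committed(0.001, 0.3, n), 3))
# prints 0.001, 0.034, 0.556, 0.978
```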

The particularly interesting aspect of this paper, though, is Section 7. If I give players higher-order rationality and then iteratively delete strictly dominated strategies (the paper calls this “rationalizability”, but that’s not quite right…), the optimal strategies and the maxmin payoffs do not change. That is, first-order knowledge of rationality really is all I need. The proof is fairly straightforward: there is a belief one player may hold under which he thinks the other player will concede almost all of the pie, and vice versa. Since we are looking for maxmin payoffs, the “rationalizable” existence of such beliefs means that more rationality does not allow a player to guarantee any higher payoff, nor does it force her to accept any lower one. I would like to see more game-theoretic models explore whether their results are robust to weaker epistemic conditions, and this paper shows that interesting results along those lines can sometimes be had with very little additional work.
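For concreteness, here is the generic deletion procedure on a toy bimatrix game. This is a textbook routine of mine, nothing from Wolitzky’s (infinite, continuous-time) game, and it checks only pure-strategy dominators:

```python
import numpy as np

def iesds(A: np.ndarray, B: np.ndarray):
    """Iteratively delete strictly dominated pure strategies in a bimatrix
    game (A = row player's payoffs, B = column player's). Note: full
    rationalizability would also allow dominance by mixed strategies;
    this sketch checks pure dominators only."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # delete a row if some other row beats it everywhere
            if any(all(A[r2, c] > A[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:  # likewise for columns, using the column player's payoffs
            if any(all(B[r, c2] > B[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# A dominance-solvable example: deletion cascades down to the single
# profile (row 0, column 0).
A = np.array([[3, 2, 0],
              [2, 1, 5]])
B = np.array([[4, 2, 1],
              [1, 3, 2]])
print(iesds(A, B))  # ([0], [0])
```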

http://econ-www.mit.edu/files/6127 (Working paper – Job Market version)
