I wonder if Crawford and Sobel knew just what they were starting when they wrote their canonical cheap talk paper – it is incredible how much more we know about the value of cheap communication even when agents are biased. Most importantly, it is *not* true that bias or self-interest means we must always require people to “put skin in the game” or perform some costly action in order to prove the true state of their private information. A colleague passed along this paper by Aumann and Hart, which addresses a question that has long bedeviled students of communication in games: why doesn’t the talk end right away? (And fair notice: we once had a full office shrine, complete with votive candles, to Aumann, he of the antediluvian beard and two-volume tome, so you could say we’re fans!)

Take a really simple cheap talk game, where only one agent has any useful information. Row knows what game we are playing, and Column only knows the probability distribution over such games. In the absence of conflict (say, where there are two symmetric games, each of which has one Pareto optimal equilibrium), Row simply tells Column which game is the true one; this is credible, and so Column plays the Pareto optimal action. In other cases, we know from Crawford-Sobel logic that partial revelation may be useful even when there are conflicts of interest: Row tells Column with some probability what the true game is. We can also create new equilibria by using talk to reach “compromise”. Take a Battle of the Sexes, with LL payoff (6,2), RR (2,6) and LR=RL=(0,0). The equilibria of the simultaneous game without cheap talk are LL, RR, or the mixed equilibrium where each player randomizes 3/4 on his preferred location and 1/4 on his opponent’s preferred location. But a new equilibrium is possible if we can use talk to create a public randomization device. We both write down 1 or 2 on a piece of paper, then show the papers to each other. If the sum is even, we both play LL; if the sum is odd, we both play RR. This gives ex-ante payoff (4,4), which is not an equilibrium payoff without the cheap talk.
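As a quick sanity check (my own sketch, not anything from the papers), a few lines of Python verify that the write-down-a-number lottery yields ex-ante payoff (4,4) and that neither player can manipulate it, since the parity of the sum is uniform as long as the other player mixes 50/50:

```python
from itertools import product

# Battle of the Sexes payoffs: (Row, Column).
PAYOFFS = {("L", "L"): (6, 2), ("R", "R"): (2, 6),
           ("L", "R"): (0, 0), ("R", "L"): (0, 0)}

def play(row_msg, col_msg):
    """Map the two announced numbers to coordinated play:
    even sum -> both play L, odd sum -> both play R."""
    action = "L" if (row_msg + col_msg) % 2 == 0 else "R"
    return PAYOFFS[(action, action)]

# Ex-ante expected payoffs when both players randomize 50/50 over {1, 2}.
pairs = list(product([1, 2], repeat=2))
row_avg = sum(play(a, b)[0] for a, b in pairs) / len(pairs)
col_avg = sum(play(a, b)[1] for a, b in pairs) / len(pairs)
print(row_avg, col_avg)  # 4.0 4.0

# Manipulation-proofness: whatever number Row commits to, Column's 50/50
# mix makes the sum's parity uniform, so Row still expects exactly 4.
for row_fixed in (1, 2):
    exp = sum(play(row_fixed, b)[0] for b in (1, 2)) / 2
    print(row_fixed, exp)  # 1 4.0, then 2 4.0
```

This is exactly what "jointly controlled lottery" means: the outcome is 50/50 no matter how one party deviates.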

So how do multiple rounds help us? They allow us to combine these motives for cheap talk. Take an extended Battle of the Sexes, with a third action A available to Column. LL still pays off (6,2), RR still (2,6), and LR=RL=(0,0). If Column plays A, the payoff is (3,0) regardless of Row’s action. Before we begin play, we may be playing extended Battle of the Sexes, or we may be playing a game Last Option that pays off 0 to both players unless Column plays A, in which case both players get 4; both games are equally probable ex-ante, and only Row learns which game we are actually in. Here, we can enforce an ex-ante payoff of (4,4): when the game is actually extended Battle of the Sexes, we randomize between LL and RR as in the previous paragraph, and when the game is Last Option, Column always plays A. But the order in which we publicly randomize and reveal information matters! If we first randomize, then reveal which game we are playing, then whenever the public randomization selects RR (giving Row a payoff of only 2 in Battle of the Sexes), Row will afterwards have an incentive to claim we are actually playing Last Option, since inducing Column to play A gives Row 3 rather than 2. But if Row first reveals which game we are playing, and we then randomize only if we are playing extended Battle of the Sexes, we can indeed enforce ex-ante expected payoff (4,4).
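Again as a sketch of my own (nothing here is from Aumann and Hart beyond the payoffs above), the order-of-moves problem can be checked numerically by comparing Row's truthful and deviation payoffs under each protocol:

```python
# Row's payoff in each game as a function of the coordinated outcome.
# Column playing A pays Row 3 in Battle of the Sexes, 4 in Last Option.
def row_payoff_bos(outcome):
    return {"LL": 6, "RR": 2, "A": 3}[outcome]

def row_payoff_last_option(outcome):
    return {"LL": 0, "RR": 0, "A": 4}[outcome]

# Protocol 1: reveal first, then randomize (only if Row announced BoS).
# A truthful Row in BoS faces the 50/50 LL/RR lottery:
truthful = 0.5 * row_payoff_bos("LL") + 0.5 * row_payoff_bos("RR")  # 4.0
# A deviating Row in BoS claims Last Option, so Column plays A:
deviate = row_payoff_bos("A")  # 3
print(truthful >= deviate)  # True: truth-telling is incentive compatible

# In Last Option, truth (Column plays A) beats claiming BoS, which
# leads to LL or RR and pays Row 0 either way:
truthful_lo = row_payoff_last_option("A")  # 4
deviate_lo = 0.5 * row_payoff_last_option("LL") + 0.5 * row_payoff_last_option("RR")  # 0.0
print(truthful_lo > deviate_lo)  # True

# Protocol 2: randomize first, then reveal. If the lottery already
# selected RR, a Row who is actually in BoS compares:
truthful_after_rr = row_payoff_bos("RR")  # 2
deviate_after_rr = row_payoff_bos("A")    # 3
print(deviate_after_rr > truthful_after_rr)  # True: Row lies, breaking the equilibrium
```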

Aumann and Hart show precisely what can be achieved with arbitrarily long strings of cheap talk, using a clever geometric proof which is far too complex to even summarize here. But a nice example of how really long cheap talk of this fashion can be used is in a paper by Krishna and Morgan called The Art of Conversation. Take a standard Crawford-Sobel model. The true state of the world is drawn uniformly from [0,1]. I know the true state, and get utility which is maximized when you take an action on [0,1] as close as possible to the true state of the world plus .1. Your utility is maximized when you take an action as close as possible to the true state of the world. With this “bias”, there is a partially informative one-shot cheap talk equilibrium: I tell you whether we are in [0,.3] or [.3,1], and you in turn take action either .15 or .65. How might we do better with a string of cheap talk? Try the following: first I tell you whether we are in [0,.2] or [.2,1]. If I say we are in the low interval, you take action .1. If I say we are in the high interval, we perform a public randomization which ends the game (with you taking action .6) with probability 4/9 and continues the game with probability 5/9; for example, to publicly randomize we might both shout out numbers between 1 and 9, and if the difference (mod 9) is 4 or less, we continue. If we continue, I tell you whether we are in [.2,.4] or [.4,1]. If I say [.2,.4], you take action .3; otherwise you take action .7. It is easy to calculate that both players are better off ex-ante than in the one-shot cheap talk game. The probabilities 4/9 and 5/9 are chosen precisely so that the boundary sender type (true state .2, with ideal action .3) is exactly indifferent between reporting the low and the high interval, which is what keeps the first revelation credible.
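"Easy to calculate" can also mean "easy to script." Here is a short check of my own (assuming quadratic loss, as is standard in the Crawford-Sobel literature) computing the ex-ante expected losses under both equilibria and verifying the boundary type's indifference:

```python
# State theta ~ U[0,1]. Receiver loss is (theta - a)^2; sender loss is
# (theta + 0.1 - a)^2, i.e. a player with bias b suffers (theta + b - a)^2.

def interval_loss(lo, hi, a, bias=0.0):
    """Integral of (theta + bias - a)^2 dtheta over [lo, hi]."""
    antideriv = lambda t: (t + bias - a) ** 3 / 3
    return antideriv(hi) - antideriv(lo)

# One-shot equilibrium: report [0,.3] -> action .15, [.3,1] -> action .65.
def one_shot(bias):
    return interval_loss(0, 0.3, 0.15, bias) + interval_loss(0.3, 1, 0.65, bias)

# Long-talk equilibrium: [0,.2] -> .1; [.2,1] -> with prob 4/9 stop at .6,
# with prob 5/9 refine to [.2,.4] -> .3 or [.4,1] -> .7.
def long_talk(bias):
    low = interval_loss(0, 0.2, 0.1, bias)
    stop = interval_loss(0.2, 1, 0.6, bias)
    cont = interval_loss(0.2, 0.4, 0.3, bias) + interval_loss(0.4, 1, 0.7, bias)
    return low + (4 / 9) * stop + (5 / 9) * cont

for bias, who in [(0.0, "receiver"), (0.1, "sender")]:
    print(who, round(one_shot(bias), 6), round(long_talk(bias), 6))
# Both losses fall: receiver 0.030833 -> 0.03, sender 0.040833 -> 0.04.

# Boundary sender type theta = .2 (ideal action .3) is exactly indifferent
# between the report "[0,.2]" (action .1) and "[.2,1]" (the lottery):
say_low = (0.3 - 0.1) ** 2
say_high = (4 / 9) * (0.3 - 0.6) ** 2 + (5 / 9) * (0.3 - 0.3) ** 2
print(abs(say_low - say_high) < 1e-9)  # True: both equal .04
```

The receiver's actions are also sequentially rational: .6 is the mean of [.2,1], and .1, .3, .7 are the midpoints of their respective intervals.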

The usefulness of the lotteries interspersed with the partial revelation is that they let the sender credibly reveal more information. If there were no lottery, but instead we always continued with probability 1, look at what happens when the true state of nature is .19. The sender knows he can say in the first revelation that, actually, we are in [.2,1], and then in the second revelation that, actually, we are in [.2,.4], in which case the receiver plays .3 (which is almost exactly the sender’s ideal point .29). Hence without the lotteries, the sender has an incentive to lie at the first revelation stage. That is, cheap talk can give us jointly controlled lotteries in between successive revelations of information, and in so doing, improve our payoffs.
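One last check of my own on the deviation just described, for the sender type at state .19:

```python
# Sender type theta = .19 has ideal action theta + bias = .29.
ideal = 0.19 + 0.1

def loss(action):
    return (ideal - action) ** 2

# Honest report "[0,.2]" yields action .1:
honest = loss(0.1)

# Lying "[.2,1]" when the 4/9 stop-lottery is in place: with prob 4/9
# the action is .6, with prob 5/9 a second report "[.2,.4]" yields .3.
lie_with_lottery = (4 / 9) * loss(0.6) + (5 / 9) * loss(0.3)

# Without the lottery (continue with probability 1), the lie yields .3 for sure:
lie_no_lottery = loss(0.3)

print(honest < lie_with_lottery)  # True: the lottery deters the lie
print(lie_no_lottery < honest)    # True: without it, type .19 strictly prefers to lie
```

The risk of being stuck with action .6, which is far from this type's ideal point, is exactly what makes honesty about the low interval worthwhile.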

Final published Econometrica 2003 copy (IDEAS). Sequential cheap talk has had many interesting uses. I particularly enjoyed this 2008 AER by Alonso, Dessein and Matouschek. The gist is the following: it is often thought that the tradeoff between decentralized and centralized firms is more local control in exchange for more difficult coordination. But think hard about what information will be transmitted by regional managers who only care about their own division’s profits. As coordination becomes *more* important, the optimal decision in my division is *more* closely linked to the optimal decisions in other divisions. Hence I, the regional manager, have a greater incentive to freely share information with other regional managers than in the situation where coordination is less important. You may therefore prefer centralized decision-making when coordination is *least* important, because this is when individual managers are least likely to freely share useful information with each other.