Kamenica and Gentzkow recently published this gloriously-titled extension of the cheap talk literature in the AER. Recall that in Crawford-Sobel cheap talk, a sender and a receiver differ in their preferred action, and the sender holds better information about the true state. The receiver cannot commit to an action conditional on the message, so this is not a principal-agent problem. Even though both parties know the sender is biased, there are partially informative equilibria, in which the sender credibly reveals which set of states the true state lies in, and the receiver updates on that knowledge. These equilibria are not always good for the sender or the receiver – both may be better off in the “babbling equilibrium,” where the signal is totally ignored.
Kamenica and Gentzkow consider a problem where the sender picks the signal structure and then sends a verifiable signal; this avoids worries about the babbling equilibrium. Their example is a prosecutor, who legally cannot lie but can choose how to collect information. Both judge and prosecutor hold a prior of .3 that the defendant is guilty. The prosecutor earns payoff 1 from a conviction and 0 from an acquittal, regardless of the true state. The judge earns payoff 1 from a correct conviction or acquittal and 0 from an incorrect one. The prosecutor’s bias is known. Nonetheless, the prosecutor can get the defendant convicted 60% of the time! How? Collect two types of evidence: one that generates a posterior of exactly .5 that the defendant is guilty (in which case the expected-utility-maximizing judge convicts), and one that generates a posterior of 1 that the defendant is innocent. That is, half the time the evidence says “guilty,” the defendant is innocent, and whenever the evidence says “innocent,” the defendant is innocent. The evidence then says guilty 60% of the time and innocent 40% of the time, and these posteriors are Bayes-plausible given the prior of .3: they average back to it, since .6×.5 + .4×0 = .3.
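A quick numerical check of the prosecutor arithmetic (the variable names are mine, not the paper's):

```python
# Prosecutor example: prior 0.3, judge convicts iff posterior >= 0.5.
prior = 0.3

# Signal structure: always report "guilty" when the defendant is guilty;
# report "guilty" with probability q when innocent, with q chosen so the
# posterior after a "guilty" report is exactly 0.5.
q = prior / (1 - prior)  # = 3/7

p_report_guilty = prior * 1.0 + (1 - prior) * q  # total prob of a "guilty" report
posterior_guilty = prior / p_report_guilty       # posterior after "guilty"
posterior_innocent = 0.0                         # "innocent" reports only come from innocents

print(round(p_report_guilty, 4))   # 0.6 -> the conviction rate
print(round(posterior_guilty, 4))  # 0.5 -> judge is just willing to convict

# Bayes-plausibility: the induced posteriors average back to the prior.
avg = p_report_guilty * posterior_guilty + (1 - p_report_guilty) * posterior_innocent
print(round(avg, 4))               # 0.3
```

Note the conviction rate of 60% against a prior of only 30% guilt, exactly as claimed above.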
Note what’s going on. From the prosecutor’s perspective, it doesn’t matter whether the judge thinks the defendant is guilty with probability 1, .8, or .55. As long as p(guilty)>=.5, the judge will convict, and the prosecutor will get payoff 1. So collecting really strong evidence is wasteful: better to collect evidence that is just strong enough to get a conviction, because then the prosecutor can present the same evidence for many innocent defendants as well as the truly guilty ones. This principle applies broadly; the authors show that a car dealership wants a buyer to just barely believe the car is a good match for her when she should buy, and to tell everyone else the car is a terrible match.
The math here is interesting. Subgame perfection, plus a short proposition, means that the receiver’s action is a deterministic function of his posterior. The sender’s payoff is a function of the receiver’s action, and hence a function of the induced posterior. Whenever there are convexities in the sender’s payoff as a function of that posterior, the sender should choose signals (or randomize) to “smooth out” the convexity. In the example, the prosecutor’s payoff is 0 if the judge’s posterior of guilt is below .5 and 1 if it is at or above .5; that function has a convexity between 0 and .5. We can verifiably choose signals such that, for any prior p in (0, .5), the overall probability of conviction rises to 2p: just choose evidence such that the fraction p of guilty defendants, plus an equal mass p of innocent defendants, all face the same just-strong-enough evidence against them.
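The smoothing logic can be sketched as a function of the prior (my own illustration of the argument, not code from the paper). The sender's value as a function of the prior is the concave envelope of the 0/1 step payoff: 2p on [0, .5), and 1 above:

```python
# Sender's optimal conviction probability when the judge convicts
# iff the posterior of guilt is >= 0.5 (the concave envelope of the
# 0/1 step payoff over priors).

def conviction_probability(prior):
    if prior >= 0.5:
        return 1.0      # the judge already convicts with no evidence at all
    # Pool all guilty defendants (mass `prior`) with an equal mass of
    # innocents behind the same "just strong enough" evidence: that pool
    # has posterior exactly 0.5 and arises with probability 2 * prior.
    return 2 * prior

for p in (0.1, 0.3, 0.49, 0.6):
    print(p, conviction_probability(p))
```

At the paper's prior of .3 this returns .6, matching the prosecutor example; as the prior approaches .5 the conviction probability approaches 1.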