“Information and Voting: The Wisdom of the Experts versus the Wisdom of the Masses,” J. McMurray (2011)

Consider elections where there is no question of preferences: the sole point of the election is to aggregate information about which candidate is objectively "better," or whether a defendant facing a jury is guilty, etc. Let nature choose which of two candidates is objectively better. Each agent receives a signal about which candidate is better, along with a signal of the quality of that signal. For instance, the signal "A is better" with quality .5 means that A is truly better fifty percent of the time and B fifty percent of the time; the signal "A is better" with quality 1 means that A is truly better one hundred percent of the time.
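
To fix ideas, here is a minimal sketch of that signal structure; the uniform quality distribution and the function names are my own choices for illustration (the model allows a general distribution F on [.5,1]).

```python
import random

def draw_agent(better, rng=random):
    """One agent's draw: a quality q in [.5, 1] and a signal that names the
    truly better candidate with probability q."""
    q = rng.uniform(0.5, 1.0)                  # signal quality (illustrative: uniform F)
    other = "B" if better == "A" else "A"
    signal = better if rng.random() < q else other
    return signal, q

better = random.choice(["A", "B"])             # nature's draw of the objectively better candidate
print(better, [draw_agent(better) for _ in range(3)])
```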

There are two main, contradictory intuitions. First, Condorcet in the 18th century showed in his famous "jury theorem" that, as a matter of statistics, majority voting over independent, partially informative signals picks the better option more often than any single voter does, and does so with probability approaching one as the electorate grows (the proof is simple and can be found on Wikipedia, among other places). On the other hand, Condorcet had no equilibrium concept, and it turns out that the sincere voting behind his jury theorem is not generally a Nash equilibrium. In particular, let there be two signal qualities, one completely uninformative (quality .5) and one perfectly informative (quality 1). Feddersen and Pesendorfer famously proved that the low-quality voters do not vote in equilibrium. The reason is that, when you write out the relevant binomial formulae and condition on casting a pivotal vote that swings the election, the probability of swinging the election by mistakenly electing the wrong candidate is greater than the probability of swinging it toward the better one. Relatedly, the unanimity rule in jury voting is not optimal under this kind of division of information: more voters are not necessarily better.
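
A quick numerical check of Condorcet's statistical point (sincere voting only, no equilibrium reasoning): with independent signals of any quality above one half, the probability that a simple majority picks the better candidate climbs toward one as the electorate grows. The quality value .6 below is just an example I picked.

```python
from math import comb

def p_majority_correct(n, p):
    """Probability that a simple majority of n sincere voters, each correct
    independently with probability p, elects the better candidate (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(p_majority_correct(n, 0.6), 4))   # climbs toward 1 as n grows
```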

McMurray extends Feddersen and Pesendorfer to the case of continuous signal quality on [.5,1] under majority rule voting. This seems the obvious next step, but the problem had proved relatively intractable. McMurray gets around some technical difficulties by letting the number of voters be drawn from a Poisson distribution and using results from Myerson's papers on Poisson population games. In particular, given a (symmetric) strategy profile s and a distribution of signal quality F, the Poisson population assumption means that the numbers of votes for candidates A and B, given that nature chooses candidate A as better, are independent random variables with means n·p(A,A) and n·p(A,B), where n is the Poisson parameter and p(A,x) is the probability that a given voter votes for candidate x when the true state is "A is better," integrating over the distribution of signal quality types under strategy s. That independence means that the probability of any voting outcome is the product of independent Poisson probabilities, and is therefore tractable.
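
The tractability gain is easy to see in code. With some made-up per-voter vote probabilities for the state in which A is better, the probability of any exact vote count is just a product of two Poisson terms; this is a sketch, not McMurray's notation.

```python
from math import exp, factorial

def pois_pmf(k, mu):
    return exp(-mu) * mu ** k / factorial(k)

def prob_outcome(votes_a, votes_b, n, p_a, p_b):
    """With a Poisson(n) population and per-voter probabilities p_a, p_b of
    voting for A and B in a given state, the two vote totals are independent
    Poissons with means n*p_a and n*p_b, so any exact outcome is a product of pmfs."""
    return pois_pmf(votes_a, n * p_a) * pois_pmf(votes_b, n * p_b)

# e.g. the chance of a 13-12 squeaker when the expected electorate is n = 50 and
# the strategy implies p(A,A) = .35, p(A,B) = .25 (illustrative numbers only)
print(prob_outcome(13, 12, n=50, p_a=0.35, p_b=0.25))
```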

With this trick in hand, McMurray shows that the unique symmetric Bayesian equilibrium is a cutoff rule: if your signal quality is above a threshold, you vote for whichever candidate your signal says is better, and otherwise you abstain. More importantly, this cutoff is, for any distribution F, bounded strictly below 1 even as the number of voters goes to infinity. The intuition is the following: as n goes to infinity, the expected margin of victory for the better candidate grows without bound. However, an individual voting decision only matters conditional on the margin of victory being between -1 and 1 votes. As long as the variance of the margin of victory is also growing quickly enough, an agent with arbitrarily high signal quality will continue to believe that, conditional on being pivotal, a number of other agents must be making mistakes. In particular, the numbers of votes for and against a candidate, n+ and n-, are independent Poisson variables, so as the number of potential voters grows large the margin of victory (n+)-(n-), being the difference of two independent Poissons, is asymptotically normal. This limiting normal turns out to have a constant ratio of expected value to variance. Therefore the ratio of the probability of a one-vote defeat to a one-vote victory for the better candidate is just the corresponding ratio of normal densities, roughly exp(-2μ/σ²), a constant strictly between zero and one rather than something that vanishes as n grows. Further, the cutoff turns out to be exactly the cutoff a social planner would choose were she trying to maximize the chance of selecting the right candidate, so there are some nice welfare properties. Of course, if the social planner can design any mechanism, and not just majority vote with a cutoff rule on whether to vote, she can do better: everyone has identical preferences, so the optimal mechanism would simply ask each agent for her signal and its quality and then compute the likelihood function.
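
To see the bounded-ratio claim numerically, here is a sketch comparing the exact probability that the better candidate loses by one vote versus wins by one vote (computed by summing over the two independent Poisson counts) against the normal-density constant exp(-2μ/σ²); the vote probabilities are again made-up illustrations.

```python
from math import exp, lgamma, log

def log_pois_pmf(k, mu):
    # Poisson pmf in log space, stable for large means
    return k * log(mu) - mu - lgamma(k + 1)

def pr_margin(m, mu_plus, mu_minus, kmax=3000):
    """P(n+ - n- = m) with n+ ~ Poisson(mu_plus), n- ~ Poisson(mu_minus)."""
    return sum(exp(log_pois_pmf(k + m, mu_plus) + log_pois_pmf(k, mu_minus))
               for k in range(max(0, -m), kmax))

p_plus, p_minus = 0.35, 0.25   # illustrative per-voter vote shares for/against the better candidate
# exp(-2*mu/sigma^2) with mu = n*(p_plus - p_minus), sigma^2 = n*(p_plus + p_minus); n cancels
normal_ratio = exp(-2 * (p_plus - p_minus) / (p_plus + p_minus))
for n in (50, 200, 800):
    ratio = pr_margin(-1, n * p_plus, n * p_minus) / pr_margin(1, n * p_plus, n * p_minus)
    print(n, round(ratio, 4), round(normal_ratio, 4))   # the ratio stays put instead of vanishing
```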

There are some numerical examples that attempt to convince the reader that this model has nice empirical properties: for instance, if the distribution of quality types is uniform on [.5,1], then 59% of voters vote, which sounds about right for most democracies. I don't find these examples terribly convincing, though. The uniform type distribution gives the winning candidate an expected margin of victory of around 70 points. You can manipulate the distribution of types to get around 50% participation and close elections, of course, but the necessary distributions are pretty ad hoc, and equally reasonable distributions can give you roughly any turnout and margin of victory you want. Certainly the more important part of the paper is showing that with continuous signal quality, equilibria can be computed, and that they never imply full, compulsory voting is optimal.
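
As a rough consistency check of those two numbers (mine, not the paper's computation): if F is uniform on [.5,1] and the reported 59% turnout is read as pinning down the cutoff q*, then turnout is (1-q*)/.5, and since a voter of quality q backs the better candidate with probability q, the winner's expected vote share is E[q | q ≥ q*] = (q*+1)/2.

```python
# Rough consistency check, assuming the reported 59% turnout pins down the cutoff
q_star = 1 - 0.59 * 0.5                  # uniform F on [.5, 1]: turnout = (1 - q*) / 0.5
vote_share = (q_star + 1) / 2            # E[q | q >= q*] under uniform F
print(round(q_star, 3), round(vote_share, 3), round(2 * vote_share - 1, 3))
# cutoff ~.705, expected vote share ~.85, margin ~.70 -- consistent with the reported 70 points
```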

One last brief word: when talking about elections, rather than jury trials or similar, identical preferences and asymmetric information may not be the most satisfying model. It's tough to come up with a story where information aggregation, rather than preference aggregation, is the most compelling reason for holding the election. Some friends here were discussing what happens in this model if you let there be two groups of agents, with identical preferences within each group. The hypothesis is that turnout is still not 100%, but is higher than in the McMurray paper, because you sometimes want to flip a result that is being driven by the aggregation of the opposing group's preferences.

http://econ.byu.edu/Faculty/Joseph%20McMurray/Assets/Research/Turnout.pdf (Current working paper, March 2011)
