Arrow’s (Im)possibility Theorem is, and I think this is universally acknowledged, one of the great social science theorems of all time. I particularly love it because of its value when arguing with Popperians and other anti-theory types: the theorem is “untestable” in that it quite literally does not make any predictions, yet surely all would consider it a valuable scientific insight.

In this post, I want to talk about a couple of new papers using Arrow’s result in unusual ways. First, a philosopher has shown exactly how Arrow’s result is related to the general philosophical problem of choosing which scientific theory to accept. Second, a pair of computer scientists have used AI techniques to generate an interesting new method for proving Arrow.

The philosophical problem is the following. A good theory should satisfy a number of criteria; for Kuhn, these included accuracy, consistency, breadth, simplicity and fruitfulness. Imagine now there is a group of theories (about, e.g., how galaxies form, why birds have wings, etc.), that we ordinally rank each theory on each of these criteria, and that we all agree on the rankings. Which theory ought we accept? Arrow applied to theory choice gives us the worrying result that not only is there no *unique* method of choosing among theories but also that there may *not exist* any such method at all, at least if we want to satisfy unanimity, non-dictatorship and independence of irrelevant alternatives. That is, even if you and I agree about how each theory ranks according to the different desirability criteria, we still don’t have a good, general method of aggregating across criteria.
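The classic illustration of why no such method may exist is the Condorcet cycle, transplanted into theory choice. A minimal sketch, where the criteria and theory names are my own illustration rather than anything from the papers discussed:

```python
# Pairwise-majority aggregation of three perfectly reasonable ordinal
# rankings can fail to produce any order at all: a Condorcet cycle.
rankings = {
    "accuracy":   ["T1", "T2", "T3"],   # illustrative criteria rankings,
    "simplicity": ["T2", "T3", "T1"],   # best theory first
    "breadth":    ["T3", "T1", "T2"],
}

def majority_prefers(a, b):
    # True iff a strict majority of criteria rank theory a above theory b
    wins = sum(r.index(a) < r.index(b) for r in rankings.values())
    return wins > len(rankings) / 2

# Majority voting over pairs yields a cycle T1 > T2 > T3 > T1,
# so no aggregate order exists for this profile.
print(majority_prefers("T1", "T2"),
      majority_prefers("T2", "T3"),
      majority_prefers("T3", "T1"))  # True True True
```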

So what to do? Davide Rizza, in a new paper in Synthese (gated, I’m afraid), discusses a number of solutions. Of course, if we have more than just ordinal information about each criterion, then we can construct aggregated orders. For instance, if we assigned a number to the relative rankings on each criterion, we could just add these up for each theory and hence have an order. Note that this theory choice rule can be applied even if we just have ordinal data: if there are N theories, then on criterion C, give the best theory on that criterion N points, the second best N-1, and so on, then add up the scores. This is the famous Borda Count.
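The Borda rule just described is easy to state in code. A minimal sketch, with illustrative theory names and criteria (not Rizza’s examples):

```python
def borda(rankings):
    """rankings: list of orders (best first) over the same alternatives.
    With N alternatives, the best in each order gets N points, the second
    best N-1, and so on; ties in total score are broken arbitrarily."""
    n = len(rankings[0])
    scores = {alt: 0 for alt in rankings[0]}
    for order in rankings:
        for position, alt in enumerate(order):
            scores[alt] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# three criteria, each ordinally ranking three hypothetical theories
criteria = [
    ["T1", "T2", "T3"],  # e.g., accuracy
    ["T2", "T1", "T3"],  # e.g., simplicity
    ["T2", "T3", "T1"],  # e.g., breadth
]
print(borda(criteria))  # ['T2', 'T1', 'T3']: T2 scores 8, T1 scores 6, T3 scores 4
```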

Why can’t we choose theories by the Borda Count or similar, then? Well, Borda (and any other rule that could construct an aggregate order while satisfying unanimity and non-dictatorship) must be violating the IIA assumption in Arrow. Unanimity, which insists a rule accept a theory if it is considered best along every criterion, and non-dictatorship, under which more than one criterion can at least matter in principle, seem totally unobjectionable. So maybe we ought just toss IIA from our theory choice rule, as perhaps Donald Saari would wish us to do. And IIA is a bit strange indeed. If I rank A>B>C, and if you require me to have transitive preferences, then just knowing the binary rankings A>B and B>C is enough to tell you that I prefer A>C even if you never observed that particular binary relationship. In this case, adding B isn’t “irrelevant”; there is information in the binary pairs generated by transitivity which IIA does not allow me to take advantage of. Some people call the IIA assumption “binary independence” since it aggregates using only binary relations, an odd thing given that the individual orders contain, by virtue of being orders, more than just binary relations. It turns out that there *are* aggregation rules which generate an order if we loosen IIA to an alternative restriction on how to use the information in sequences. IIA, rather than ordinal rankings across criteria, is where Arrow poses a problem for theory choice. Now, Rizza points out that these aggregation rules needn’t be unique, so we can still have situations where we all agree about how different theories rank according to each criterion, and agree on the axiomatic properties we want in an aggregation rule, yet nonetheless disagree about which theory to accept. Still worrying, though not for Kuhn, and certainly not for us crazier Feyerabend and Latour fans!
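That Borda must violate IIA can be checked directly: here are two profiles in which each criterion ranks A versus B identically, yet the aggregate ranking of A versus B flips. A minimal sketch with made-up profiles:

```python
def borda_scores(profile, n=3):
    # best alternative in each order gets n points, next n-1, and so on
    scores = {}
    for order in profile:
        for pos, alt in enumerate(order):
            scores[alt] = scores.get(alt, 0) + n - pos
    return scores

# In both profiles, criterion 1 ranks A>B and criterion 2 ranks B>A;
# only the position of the "irrelevant" alternative C differs.
p1 = [["A", "B", "C"], ["B", "C", "A"]]
p2 = [["A", "C", "B"], ["B", "A", "C"]]
s1, s2 = borda_scores(p1), borda_scores(p2)
# Yet the social A-vs-B ranking flips: IIA is violated.
print(s1["A"] > s1["B"], s2["A"] > s2["B"])  # False True
```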

(A quick aside: How strange is it that Arrow’s Theorem is so heavily associated with voting? That every voting rule is subject to tactical behavior is Gibbard-Satterthwaite, not Arrow, and that result about strategic voting imposes nothing like an IIA assumption. Arrow’s result is about the far more general problem of aggregating orders, a problem which fundamentally has nothing to do with individual behavior. Indeed, I seem to recall that Arrow came up with his theorem while working one summer as a grad student at RAND on the problem of what, if anything, it could mean for a country to have preferences when voting on behalf of its citizens in bodies like the UN. The story also goes that when he showed his advisor – perhaps Hotelling? – what he had been working on over the summer, he was basically told the result was so good that he might as well just graduate right away!)

The second paper today comes from two computer scientists. There are lots of proofs of Arrow’s theorem – the original proof in Arrow’s 1951 book is actually incorrect! – but the CS guys use a technique I hadn’t seen before. Essentially, they first prove with a simple induction that a social welfare function satisfying the Arrow axioms exists for some N>=2 voters and M>=3 options iff one exists for exactly 2 voters and 3 options. This doesn’t actually narrow the problem a great deal: there are still 3!=6 ways to order 3 options, hence 6^2=36 permutations of the joint vote of the 2 voters, hence 6^36 functions mapping the voter orders to a social order. Nonetheless, the problem is small enough to be tackled by a constraint satisfaction algorithm which checks IIA and unanimity and finds only two social welfare functions not violating one of those constraints: the two in which Agent 1 or Agent 2 is a dictator. Their algorithm took one second to run on a standard computer (clearly they are better algorithm writers than the average economist!). Sen’s theorem and Muller-Satterthwaite can also be proven using a similar restriction to the base case followed by algorithmic search.

Of course, algorithmic proofs tend to lack the insight and elegance of standard proofs. But they have benefits as well. Just as you can show that only 2 social welfare functions with N=2 voters and M=3 options satisfy IIA and unanimity, you can also show that only 94 (out of 6^36!) satisfy IIA. That is, it is IIA rather than other assumptions which is doing most of the work in Arrow. Inspecting those 94 remaining social welfare functions by hand can help elucidate alternative sets of axioms which also generate aggregation possibility or impossibility.
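Both counts can be reproduced directly, because IIA forces the social ranking of each pair of options to depend only on the two voters’ rankings of that pair: the whole search collapses to choosing one rule per pair and checking transitivity on all 36 profiles. A minimal sketch of such an exhaustive check (my own brute-force enumeration, not the authors’ actual constraint satisfaction solver):

```python
from itertools import permutations, product

ORDERS = list(permutations((0, 1, 2)))      # the 3! = 6 strict orders, best first
PROFILES = list(product(ORDERS, repeat=2))  # the 6^2 = 36 two-voter profiles
PAIRS = [(0, 1), (0, 2), (1, 2)]
INPUTS = list(product((True, False), repeat=2))

def prefers(order, a, b):
    """True iff the order ranks a above b."""
    return order.index(a) < order.index(b)

def is_strict_order(ab, ac, bc):
    """A complete asymmetric relation on 3 alternatives is a strict
    order iff it contains no 3-cycle."""
    return not (ab and bc and not ac) and not (ac and not ab and not bc)

def count_swfs(require_unanimity):
    count = 0
    # Under IIA the SWF decomposes into one rule per pair of options, mapping
    # the two voters' pairwise preferences to a social pairwise preference.
    for bits in product(product((True, False), repeat=4), repeat=3):
        rule = {pair: dict(zip(INPUTS, b)) for pair, b in zip(PAIRS, bits)}
        if require_unanimity and any(
                not rule[p][(True, True)] or rule[p][(False, False)]
                for p in PAIRS):
            continue  # unanimity pins down the two agreement cases
        if all(is_strict_order(*(rule[p][(prefers(o1, *p), prefers(o2, *p))]
                                 for p in PAIRS))
               for o1, o2 in PROFILES):
            count += 1  # the social outcome is an order on every profile
    return count

# IIA + unanimity leaves only the 2 dictators; IIA alone leaves 94 SWFs.
print(count_swfs(True), count_swfs(False))
```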

(And a third paper, just for fun: it turns out that Kiribati and Nauru actually use Borda counts in their elections, and that there does appear to be strategic candidate nomination behavior designed to take advantage of the non-IIA nature of Borda! IIA looks in many ways like a restriction on tactical behavior *by candidates* or those nominating issues, rather than a restriction on tactical behavior by voters. If you happen to teach Borda counts, this is a great case to give students.)

In discrete choice theory, if utility is defined U = V + eps with V the representative utility, then IIA is equivalent to assuming eps is iid extreme value – i.e., to the logit model that logistic regression estimates. In effect, IIA restricts the distribution of eps to a very special form.

This has an interesting consequence for ‘big data’: since labelled binary data is easy to come by, and logistic regression is a popular way to estimate classifiers, a lot of big data classifiers build in IIA by (unintentional?) design, because the practitioners don’t know this fact.

So, the average ‘big data’ model is a lot more likely to respect IIA than the real world is.
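The commenter’s point can be seen in a few lines: under the multinomial logit model, adding an alternative never changes the odds between two existing alternatives. A minimal sketch with made-up utility values:

```python
import math

def logit_probs(utilities):
    # multinomial logit choice probabilities (iid extreme-value errors)
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

two = logit_probs([1.0, 0.5])
three = logit_probs([1.0, 0.5, 0.5])  # add a third alternative
# IIA: the odds between the first two alternatives are unchanged,
# both ratios equal exp(1.0 - 0.5) ≈ 1.6487
print(two[0] / two[1], three[0] / three[1])
```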

Thanks, John. Discrete choice models in general have similar problems!

On the very same topic, see also Samir Okasha’s paper ‘Theory choice and social choice: Kuhn versus Arrow’:

http://m.mind.oxfordjournals.org/content/120/477/83

Dear Kevin

I’ll use this as an opportunity plug one of my older papers (with Sethuraman and Teo):

‘Integer programming and Arrovian social welfare functions’

which appeared in Mathematics of Operations Research, Volume 28, Issue 2, May 2003, pp. 309–326.

We give a linear inequality description of all SWFs that satisfy Arrow’s conditions. This system admits only dictatorial solutions. However, the same set of inequalities must be satisfied by any SCF that satisfies the conditions of Muller-Satterthwaite. In fact, in the longer, unpublished version of the paper, we pointed out how a number of impossibility results follow the same pattern. In this sense, we give a unified proof of a number of impossibility results by showing that their conditions are characterized by the same set of inequalities.

rakesh

Thanks Rakesh – interesting, and perhaps more intuitive than Reny’s equivalence.

BTW, forgot to mention that the idea that one can reduce Arrow to the case of 2 voters and 3 alternatives appears in Kalai and Muller’s JET paper from the 1970s. Indeed, the point of their paper was to show that a rule is dictatorial with 2 voters iff it is dictatorial with n voters.

I do not know what you mean when you say that the theory is not testable. Give me any social choice function and I predict that it will violate one of Arrow’s conditions. In this way, I can test the theory. I think that the only way you can argue that Arrow’s Theorem is not testable is by showing that it is tautological.

Testability in the usual sense is a function of data. If you read French, some philosophers of economics have a nice post on this point: http://rationalitelimitee.wordpress.com/2014/09/27/tautologies-infalsifiabilite-et-trivialite/

“any other rule that could construct an aggregate order while satisfying unanimity and non-dictatorship) must be violating the IIA assumption in Arrow.”

I am not quite sure whether you agree or disagree with the notion that if we have more than ordinal information, we can (easily) generate aggregate preferences that have no problems with Arrow whatsoever.

Simple example:

Let’s take three voters called A, B and C who need to construct their aggregate preference order between the colors (r)ed, (g)reen and (b)lue. Each voter is given 9 votes which they can divide between the colors as they like. Now, A votes r:8, g:1, b:0. B votes r:5, g:0, b:4. C votes r:3, g:3, b:3. The aggregate preference order is defined simply as the sum of the votes for each color. Then, r=8+5+3=16, g=1+0+3=4 and b=0+4+3=7. So we get r>b>g. As far as I understand, Arrow is not violated.
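For what it’s worth, the arithmetic in the example checks out; a quick verification:

```python
# The commenter's cumulative-voting example: each voter spreads 9 points,
# and the aggregate order is by total points per color.
votes = {
    "A": {"r": 8, "g": 1, "b": 0},
    "B": {"r": 5, "g": 0, "b": 4},
    "C": {"r": 3, "g": 3, "b": 3},
}
totals = {c: sum(v[c] for v in votes.values()) for c in "rgb"}
order = sorted(totals, key=totals.get, reverse=True)
print(totals, order)  # {'r': 16, 'g': 4, 'b': 7} ['r', 'b', 'g']
```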

Yes, I completely agree that using more than ordinal information, we can avoid Arrow. But since Arrow is strictly about aggregating orders, I am not sure why this is interesting. Gibbard-Satterthwaite still tells us that even using cardinal information, no voting system is strategy-proof if it satisfies certain noncontroversial desiderata.