Matching theory has been quite prominent in economics in the past decade: how ought we match organ donors, or match schools to students, or match medical residents to assignments, or husbands to wives? The basic principle is that financial transfers – “side payments” – are often illegal or contrary to well-established custom, so our standard mechanism design tricks won’t work. In school choice, a network of authors centered around Al Roth has developed mechanisms for matching students to schools in New York, Boston and San Francisco, among other cities, in a way that attempts (imperfectly) to deal with schools and students who will try to game the system. Hatfield and coauthors, including the ridiculously productive young scholar Fuhito Kojima, point out in this new working paper that something has been missing in this literature. The motivation for allowing school choice in the first place was so that schools would have an incentive to improve themselves. Previous work assumed school quality was fixed. Something is incongruous.

Let there be schools, each with some number of slots available to students and a ranking over which students it prefers; let each student also rank the schools she likes best. Say that a mechanism respects improvements if, whenever a school becomes "more preferred" by students, it receives students it prefers more. The authors prove there is no stable, strategy-proof matching mechanism which respects improvements: no matter how we propose to use the reported preferences, schools sometimes have an incentive to misreport which students they actually like. Restricting the domain of preferences does not help, as the same result applies unless all schools have (almost) the same ranking over which students they prefer. Bronx Science and a School of the Arts surely rank students along different criteria.

Why is this? Consider the intuition from the famous Deferred Acceptance algorithm. This algorithm produces a stable matching, and the intuition of the example generalizes to any stable matching. Let there be two students (A and B) and two schools (1 and 2). School 1 has two spots and School 2 has only one. Both students prefer school 2. School 1 prefers student A to student B, and School 2 prefers student B to student A. In the student-proposing Deferred Acceptance algorithm, both students apply to school 2 in the first round. School 2 conditionally accepts its preferred student, student B; since school 2 has only one spot, it has filled its capacity. Student A then proposes to school 1 in the second round and is accepted. Precisely the same outcome occurs (you can easily check this yourself) if schools propose. A famous theorem says that the student-proposing and school-proposing DA algorithms give, respectively, the worst-case and best-case stable matchings from the students' perspective, so since these coincide, the preferences here permit a unique stable matching.

But what if school 1 improves itself, so that student B now prefers school 1 to school 2? Consider student-proposing DA again. In round 1, A applies to 2 and B to 1. Both are conditionally accepted, and the algorithm terminates. So now school 1 winds up with student B rather than student A, a worse outcome in school 1's eyes! It has been punished, in a very real sense, for improving itself. (You can check, very simply, that the stable matching under these new preferences is again unique, for exactly the same reason as in the previous paragraph.)
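To make the mechanics concrete, here is a minimal sketch of student-proposing Deferred Acceptance run on the example above, under both the original and the "improved" preferences. This is my own illustration, not code from the paper; the function name and data layout are invented for the example.

```python
# Minimal sketch of student-proposing Deferred Acceptance (illustrative,
# not from the paper). Schools are 1 and 2; students are 'A' and 'B'.

def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Return a dict mapping each school to the set of students it holds."""
    unmatched = list(student_prefs)                 # students still proposing
    next_choice = {s: 0 for s in student_prefs}    # next school each will try
    held = {c: [] for c in school_prefs}           # conditional acceptances

    while unmatched:
        s = unmatched.pop(0)
        school = student_prefs[s][next_choice[s]]  # propose to best remaining
        next_choice[s] += 1
        held[school].append(s)
        # The school keeps its most-preferred applicants up to capacity
        # and rejects the rest back into the proposing pool.
        held[school].sort(key=school_prefs[school].index)
        while len(held[school]) > capacity[school]:
            unmatched.append(held[school].pop())
    return {c: set(students) for c, students in held.items()}

capacity = {1: 2, 2: 1}
school_prefs = {1: ['A', 'B'], 2: ['B', 'A']}

# Original preferences: both students prefer school 2.
before = deferred_acceptance({'A': [2, 1], 'B': [2, 1]}, school_prefs, capacity)
print(before)  # {1: {'A'}, 2: {'B'}} -- school 1 gets its favorite, A

# After school 1 "improves" in B's eyes: B now ranks school 1 first.
after = deferred_acceptance({'A': [2, 1], 'B': [1, 2]}, school_prefs, capacity)
print(after)   # {1: {'B'}, 2: {'A'}} -- school 1 is left with B, whom it likes less
```

Running both cases shows the punishment directly: school 1 held its preferred student A before improving, and holds only B afterward.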

The authors prove, in a similar manner, that there is no mechanism which is Pareto optimal for students and which respects school quality improvements. Lest we be too distraught by these results, it is at least the case that under stable mechanisms (like student-proposing Deferred Acceptance) in "large" markets, under some conditions, a school almost never need worry that improvements will make it worse off. The intuition is that, in the example above, I wanted a student to rank me low because this created competition at another school he now favors. That other school may then reject a different student whom I particularly like, and she will then "settle" for my school. These "chains of rejections" depend on some schools having reached their capacity, because otherwise they will simply conditionally accept everyone who applies. The conditions on large markets essentially say that the rejection chain I want to induce is unlikely to reach me, because the student I am trying to attract has, with high probability, options she prefers whose quotas have not yet been filled.

https://360269…Fuhito-Kojima/SchoolCompetition20111111.pdf (November 15, 2011 Working Paper; at a delay of less than 2 days, I am sure this is the “freshest” research ever discussed on this site!)

I still do not understand what is wrong with gaming the system. To a certain degree, people in a first-price auction are "gaming the system" by shading their bids below their valuations. However, they are still revealing their underlying preferences (thinking here, of course, of a symmetric environment with private values). In addition, there is no evidence that gaming the system actually makes people worse off. In particular, there are few studies on how well real systems actually work, other than theoretical comparisons of real-world mechanisms to the DA mechanism. It would be good to see some real empirics on this topic.
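The bid-shading point can be made concrete. In the textbook symmetric independent-private-values case with n bidders and values drawn uniformly from [0, 1], the equilibrium bid is b(v) = v(n-1)/n: everyone shades, but bids are strictly increasing in value, so the ordering of bids still reveals the ordering of underlying valuations. The numbers below are purely illustrative.

```python
# Illustrative only: equilibrium bidding in a symmetric first-price auction
# with n bidders and values i.i.d. uniform on [0, 1], where b(v) = v(n-1)/n.

def equilibrium_bid(value, n_bidders):
    """Standard equilibrium bid for the uniform symmetric IPV case."""
    return value * (n_bidders - 1) / n_bidders

values = [0.2, 0.5, 0.9]
bids = [equilibrium_bid(v, n_bidders=3) for v in values]
print(bids)  # each bid is 2/3 of the corresponding value

# Shading distorts magnitudes but preserves the ranking of valuations.
assert sorted(bids) == bids
```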

Apologies for the rant, but I do enjoy your blog.