Category Archives: Matching

The 2018 John Bates Clark: Parag Pathak

The most prestigious award a young researcher can receive in economics, the John Bates Clark medal, has been awarded to the incredibly productive Parag Pathak. His CV is one that could only belong to a true rocket of a career: he was a full professor at MIT 11 years after he started his PhD, finishing the degree in four years before going the Paul Samuelson route through the Harvard Society of Fellows to become the top young theorist in Cambridge.

Pathak is best known for his work on the nature and effects of how students are allocated to schools. This is, of course, an area where theorists have had incredible influence on public policy, notably via Pathak’s PhD advisor, the Nobel prize winner Al Roth, the group of Turkish researchers including Atila Abdulkadiroglu, Utku Unver, and Tayfun Sonmez, as well as the work of 2015 Clark medal winner Roland Fryer. Indeed, this group’s work on how to best allocate students to schools in an incentive-compatible way – that is, in a way where parents need only truthfully state which schools they like best – was adopted by the city of Boston, to my knowledge the first time this theoretically-optimal mechanism was used by an actual school district. As someone born in Boston’s contentious Dorchester neighborhood, it is quite striking how much more successful this reform was than the busing policies of the 1970s, which led to incredible amounts of bigoted pushback.

Consider the old “Boston mechanism”. Parents list their preferred schools in order. Everyone is allocated their first choice if possible. If a school is oversubscribed, a lottery determines who gets a seat; the losers move on to their second choice, and if that is also full, their third, and so on. This mechanism gives a clear reason for strategic manipulation: you certainly don’t want to list a very popular school as your second choice, since there is almost no chance it won’t fill up with first choices. The Boston mechanism was replaced by the Gale-Shapley mechanism, which has the property that it is never optimal for a parent to misrepresent preferences. In theory, not only is this more efficient, but it is also fairer: parents who do not have access to sophisticated neighbors helping them game the system are, you might reason, most likely to lose out under the old rules. And this is precisely what Pathak and Sonmez show in a theoretical model: sophisticated parents may prefer the old Boston mechanism because it makes them better off at the expense of the less sophisticated! The latter concern is a tough one for traditional mechanism design to handle, as we generally assume that agents act in their self-interest, including taking advantage of the potential for strategic manipulation. There remains some debate about what it means for a mechanism to be “better” when some agents are unsophisticated or when they do not have strict preferences over all options.
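
To make the round structure concrete, here is a minimal Python sketch of the immediate-acceptance logic (the data layout, school names, and lottery tie-breaking are my own illustrative assumptions, not Boston’s actual implementation). The key feature is that acceptances are permanent, so a popular school listed second is typically full before round 2 even begins:

```python
import random

def boston_mechanism(student_prefs, capacities, seed=0):
    """Immediate-acceptance ("Boston") mechanism with a random lottery
    as the tie-breaker at oversubscribed schools.

    student_prefs: dict mapping student -> ordered list of schools
    capacities:    dict mapping school -> number of seats
    Returns a dict mapping student -> assigned school (or None).
    """
    rng = random.Random(seed)
    seats = dict(capacities)
    assignment = {s: None for s in student_prefs}
    rounds = max(len(p) for p in student_prefs.values())
    for k in range(rounds):
        # Round k: every still-unassigned student applies to the
        # (k+1)-th school on their list.
        applicants = {}
        for student, prefs in student_prefs.items():
            if assignment[student] is None and k < len(prefs):
                applicants.setdefault(prefs[k], []).append(student)
        for school, apps in applicants.items():
            rng.shuffle(apps)  # the lottery
            accepted = apps[:seats[school]]
            for student in accepted:
                assignment[student] = school  # acceptance is permanent
            seats[school] -= len(accepted)
    return assignment

# Hypothetical example: whoever loses the round-1 lottery at Popular skips
# straight past Decent (already filled by carl in round 1) down to Bad,
# which is exactly why truthfully listing Popular first is risky here.
print(boston_mechanism(
    {"ann": ["Popular", "Decent", "Bad"],
     "bob": ["Popular", "Decent", "Bad"],
     "carl": ["Decent", "Bad", "Popular"]},
    {"Popular": 1, "Decent": 1, "Bad": 1}))
```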

Competition for students may also affect the quality of the underlying schools, either because charter schools and for-profits compete on a profit-maximizing basis, or because public schools somehow respond to the incentive to attract good students. Where Pathak’s work is particularly rigorous is in noting how critical both the competitive environment and the exact mechanism for assigning students are for the responsiveness of schools. It is not that “charters are good” or “charters are bad” or “test schools produce X outcome”, but rather that these statements are conditional on how the choice and assignment mechanism works. Purely empirical work found that Boston charters performed much better for lottery-selected students than public schools, and that attending an elite test school in Boston or New York doesn’t really affect student outcomes. Parents appear unable to evaluate school quality except in terms of peer effects, which can be particularly problematic when good peers enroll in otherwise bad schools.

Pathak’s methodological approach is refreshing. He has a theorist’s toolkit and an empiricist’s interest in policy. For instance, imagine we want to know how well charter schools perform. The obvious worry is that charter students are better than the average student. Many studies take advantage of charter lotteries, where oversubscribed schools assign students from the application class by lottery. This does not identify the full effect of charters, however: whether I enter a lottery at all depends on how I ranked schools, and hence participants in a lottery for School A versus School B are not identical. Pathak, Angrist and Walters show how to combine the data from specific school choice mechanisms with the lottery in a way that ensures we are comparing like-to-like when evaluating lotteries at charters versus non-charters. In particular, they find that Denver’s charter schools do in fact perform better.
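
As a stylized illustration of why conditioning on the submitted rankings matters, consider the following simulation (entirely hypothetical numbers and variable names; a sketch of the stratification idea, not the authors’ actual estimator). Students who rank charters first differ in unobservables, so naively comparing charter attendees to everyone else is biased, while comparing lottery winners to losers within a ranking profile recovers the true effect:

```python
import random
from collections import defaultdict
from statistics import mean

random.seed(0)
students = []
for _ in range(10000):
    # Ranking profile is correlated with unobserved ability (assumed).
    ranking = random.choice([("charter", "public"), ("public", "charter")])
    ability = random.gauss(1.0 if ranking[0] == "charter" else 0.0, 1.0)
    # Lottery odds differ across profiles, but winning is random within one.
    won = random.random() < (0.3 if ranking[0] == "charter" else 0.6)
    attends_charter = won and ranking[0] == "charter"
    outcome = ability + (0.5 if attends_charter else 0.0)  # true effect: 0.5
    students.append((ranking, won, outcome, attends_charter))

# Naive: charter attendees vs. everyone else (confounded by ability).
naive = (mean(o for r, w, o, c in students if c)
         - mean(o for r, w, o, c in students if not c))

# Stratified: winners vs. losers *within* the charter-first profile.
by_win = defaultdict(list)
for r, w, o, c in students:
    if r == ("charter", "public"):
        by_win[w].append(o)
stratified = mean(by_win[True]) - mean(by_win[False])

print(round(naive, 2), round(stratified, 2))  # naive biased up; stratified near 0.5
```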

Indeed, reading through Pathak’s papers this afternoon, I find myself struck by how empirical his approach has become over time: if a given question requires the full structural arsenal, he deploys it, and if it is unnecessary, he estimates things in a completely reduced-form way. Going beyond the reduced form often produces striking results. For instance, what would be lost if affirmative action based on race were banned in school choice? Chicago’s tier-based plan, where many seats at test schools are reserved for students from low-SES tiers, works dreadfully: not only does it fail to actually select low-SES students (high-SES students in low-income neighborhoods are selected instead), but it would require massively lowering entrance score criteria to get a pre-established number of Black and Hispanic students to attend. This is particularly true for test schools on the north side of the city. Answering two questions – what racial representation looks like in a counterfactual world where Chicago does not have to use SES-based criteria to indirectly achieve a given racial makeup, and whether the problem lies in Chicago’s particular mechanism or is fundamental to any location-based selection mechanism – requires theory, and Pathak deploys it wonderfully. Peng Shi and Pathak also back-test their theory-based discrete-choice predictions of the impact of a change in Boston’s mechanism meant to reduce travel times, showing that to the extent the model missed, it was because student characteristics changed unexpectedly, not because the underlying structural preferences were misspecified. If we are going to deploy serious theoretical methods to applied questions, rather than just to thought experiments, this type of rigorous combination of theory, design-based empirics, and back-testing is essential.

In addition to his education research, Pathak has also contributed, alongside Fuhito Kojima, to the literature on large matching markets. The basic idea is the following. In two-sided matching, where both sides have preferences as in a marriage market, there are no stable matching mechanisms under which both sides want to report truthfully. For example, if you use the mechanism currently in place in Boston where students rank schools, schools themselves have the incentive to manipulate the outcome by changing how many slots they offer. Pathak and Kojima show that when the market is large, it is (in a particular sense) an equilibrium for both sides to act truthfully; roughly, even if I screw one student I don’t want out of a slot, in a thick market it is unlikely I wind up with a more-preferred student as a replacement. There has more recently been a growing literature on what really matters in matching markets: the stability of the mechanism, the thickness of the market, the timing of offers, and so on.

This award strikes me as the last remaining award, at least in the near term, from the matching/market design boom of the past 20 years. As Becker took economics out of pure market transactions and into a wider world of rational choice under constraints, the work of Al Roth and his descendants, including Parag Pathak, has greatly expanded our ability to take advantage of choice and local knowledge in situations like education and health where, for many reasons, we do not use the price mechanism. That said, there remains quite a bit to do on understanding how to get the benefits of decentralization without price – I am deeply interested in this question when it comes to innovation policy – and I don’t doubt that two decades from now, continued inquiry along these lines will have fruitfully exploited the methods and careful technique that Parag Pathak embodies.

One final note, and this in no way takes away from how deserving Pathak and other recent winners have been. Yet: I would be remiss if I didn’t point out, again, how unusually “micro” the Clark medal has been of late. There literally has not been a winner who is a pure macroeconomist or econometrician – two of the three “core” fields of economics – since 1999, and only Donaldson and Acemoglu are even arguably close. Though the prize has gone to three straight winners with at least a partly theoretical bent, it is still not reflecting our field as a whole. Nothing for Emi Nakamura, or Victor Chernozhukov, or Emmanuel Farhi, or Ivan Werning, or Amir Sufi, or Chad Syverson, or Marc Melitz? These folks are incredibly influential, and the Clark medal is failing to reflect the totality of what economists actually do.


Dale Mortensen as Micro Theorist

Northwestern’s sole Nobel Laureate in economics, Dale Mortensen, passed away overnight; he remained active as a teacher and researcher over the past few years, though I had been hearing word through the grapevine about his declining health over the past few months. Surely everyone knows Mortensen the macroeconomist for his work on search models in the labor market. There is something odd here, though: Northwestern has really never been known as a hotbed of labor research. To the extent that researchers rely on their coworkers to generate and work through ideas, how exactly did Mortensen become such a productive and influential researcher?

Here’s an interpretation: Mortensen’s critical contribution to economics is as the vector by which important ideas in micro theory entered real-world macro; his first well-known paper was literally published in a 1970 book called “Microeconomic Foundations of Employment and Inflation Theory.” Mortensen had the good fortune to be a labor economist working in the 1970s and 1980s at a school with a frankly incredible collection of microeconomic theorists; during those two decades, Myerson, Milgrom, Loury, Schwartz, Kamien, Judd, Matt Jackson, Kalai, Wolinsky, Satterthwaite, Reinganum and many others were associated with Northwestern. And this was a rare condition! Game theory is everywhere today, and the pioneers of the field (von Neumann, Nash, Blackwell, etc.) were active in the middle of the century. Nonetheless, by the late 1950s, game theory in the social sciences was close to dead. Paul Samuelson, the great theorist, wrote essentially nothing using game theory between the early 1950s and the 1990s. Quickly scanning the American Economic Review from 1970-1974, I find, at best, one article per year that can be called game-theoretic.

What is the link between Mortensen’s work and developments in microeconomic theory? The essential labor market insight of search models (an insight which predates Mortensen) is that the number of hires and layoffs is substantial even in the depths of a recession. That is, the rise in the unemployment rate cannot simply be because the marginal revenue product of potential workers is always less than the cost of employing them, since huge numbers of the unemployed are hired during recessions (even as others are fired). Therefore, a model which explains changes in churn rather than changes in the aggregate rate seems qualitatively important if we are to develop policies to address unemployment. This suggests that there might be some use in a model where workers and firms search for each other, perhaps with costs or other frictions. Early models along this line by Mortensen and others were generally one-sided and hence non-strategic: they had the flavor of optimal stopping problems.
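
To see the optimal-stopping flavor, here is the textbook one-sided (McCall-style) problem – an illustration of the genre, not Mortensen’s own specification. An unemployed worker receives benefit $c$ each period, discounts at $\beta$, and draws a wage offer $w$ from distribution $F$; accepting yields $w$ forever. The value function is

$$V(w) = \max\left\{ \frac{w}{1-\beta},\; c + \beta \int V(w')\,dF(w') \right\},$$

and the optimal policy is a reservation wage $w^*$ solving

$$w^* - c = \frac{\beta}{1-\beta} \int_{w^*}^{\infty} (w' - w^*)\,dF(w').$$

The worker accepts any offer above $w^*$: a pure stopping rule, with no strategic interaction on the firm side.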

Unfortunately, Diamond, in a 1971 JET paper, pointed out that Nash equilibrium in two-sided search leads to the conclusion that all workers are paid their reservation wage: all employers pay the reservation wage, workers believe this to be true and hence do not engage in costly search to switch jobs, and hence the belief is accurate and nobody can profitably deviate. Getting around the “Diamond Paradox” involved enriching the model of who searches when and the extent to which old offers can be recalled; Mortensen’s work with Burdett is a nice example. One also might ask whether laissez-faire search is efficient: given the contemporaneous work of micro theorists like Glenn Loury on mathematically similar problems like the patent race, you might imagine that equilibrium search is unlikely to be efficient.

Beyond the efficiency of matches themselves is the question of how to split surplus. Consider a labor market. In the absence of search frictions, Shapley (first with Gale, later with Shubik) had shown in the 1960s and early 1970s the existence of stable two-sided matches even when “wages” are included. It turns out these stable matches are tightly linked to the cooperative idea of a core. But what if this matching is dynamic? Firms and workers meet with some probability over time. A match generates surplus. Who gets this surplus? Surely you might imagine that the firm should have to pay a higher wage (more of the surplus) to workers who expect to get good future offers if they do not accept the job today. Now we have something that sounds familiar from non-cooperative game theory: wage is based on the endogenous outside options of the two parties. It turns out that noncooperative game theory had very little to say about bargaining until Rubinstein’s famous bargaining game in 1982 and the powerful extensions by Wolinsky and his coauthors. Mortensen’s dynamic search models were a natural fit for those theoretic developments.
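
For reference, the canonical result that filled this gap: in Rubinstein’s alternating-offers game over a unit surplus, with discount factors $\delta_1$ and $\delta_2$, the unique subgame-perfect equilibrium gives the first proposer

$$x_1 = \frac{1-\delta_2}{1-\delta_1\delta_2},$$

so a more patient responder (higher $\delta_2$) extracts more of the surplus, and as both players become perfectly patient the split approaches one half each. This is exactly the logic – shares pinned down by patience and endogenous outside options – that dynamic search-and-bargaining models like Mortensen’s needed.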

I imagine that when people hear “microfoundations”, they have in mind esoteric calibrated rational expectations models. But microfoundations in the style of Mortensen’s work are much more straightforward: we simply cannot understand even the qualitative nature of counterfactual policy in the absence of models that account for strategic behavior. Hence there is a role even for high micro theory, which investigates the nature and uniqueness of strategic outcomes (game theory) and the potential for a planner to improve welfare through alternative rules (mechanism design). Powerful tools indeed, and well used by Mortensen.

Nobel 2012, Roth and Shapley

There will be many posts summarizing the modern market design aspect of the work of Roth and Shapley, today’s winners of the Econ Nobel. So here let me briefly discuss certain theoretical aspects of their work, and particularly my reading of the history as it relates to game theory more generally. I also want to point out that the importance of the matching literature goes far beyond the handful of applied problems (school choice, etc.) with which most people are familiar.

Pure noncooperative game theory is insufficient for many real-world problems, because we think that single-person deviations are not the only deviations worth examining. Consider marriage, as in Gale and Shapley’s famous 1962 paper. Let men and women be matched arbitrarily. Do we find such a set of marriages reasonable, meaning an “equilibrium” in some sense? Assuming that every agent prefers being married (to anyone) to being unmarried, any set of marriages is a Nash equilibrium. But we find it reasonable to think that two agents, a man and a woman, can commit to jointly deviate, breaking their marriages and forming a new one. Gale and Shapley prove that there always exists a match that is “pairwise stable”, meaning that no man-woman pair wishes to deviate in this way.
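
In the notation now standard in this literature (my formalization of the sentence above, not Gale and Shapley’s original phrasing): a matching $\mu$ is pairwise stable if there is no blocking pair, that is, no man $m$ and woman $w$ with

$$w \succ_m \mu(m) \quad \text{and} \quad m \succ_w \mu(w),$$

where $\mu(m)$ denotes $m$’s partner under $\mu$. (Given the assumption above that everyone prefers marriage to being single, individual rationality is automatic; in general it is a second requirement.)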

Now, if you know your game theory, you may be thinking that such deviations sound like the domain of cooperative games. After all, cooperative (or coalitional) games involve checking for deviations by groups of agents, who may or may not be able to arbitrarily distribute their joint utility within the coalition; exactly how the coalition coordinates is left unmodeled, in contrast to the noncooperative approach. It turns out (and I believe this is a result due to Mr. Roth, though I’m not sure) that pairwise stable matches are equivalent to the (weak) core of the same cooperative game in one-to-one or many-to-one matching problems. That means that checking deviations by a single pair of marrying partners is equivalent to checking deviations by a group of any size. But more importantly, this link between the core and pairwise stability allows us to apply many results in cooperative game theory, known since the 1950s and 60s, to questions about matching markets.

Indeed, the link between cooperative games and matching, and between cooperative and noncooperative games, allows for a very nice mathematical extension of many well-known general problems: the tools of matching are not restricted solely to school choice and medical residents, but can answer important questions about search in labor markets, about financial intermediation, etc. But to do so requires reframing matching problems as mechanism design problems with heterogeneous agents and indivisibility. Ricky Vohra, of Kellogg and the Leisure of the Theory Class blog, has made a start at giving tools for such a program in his recent textbook; perhaps this post can serve as a siren call across the internet for Vohra and his colleagues to discuss some examples on this point on their blog. The basic point is that mechanism design problems can often be reformulated as linear programs with a particular set of constraints (say, integer solutions, or “fairness” requirements, etc.). The most important constraint, surely, is incomplete information, which allows for strategic lying, as Roth discovered when he began working on “strategic” matching theory in the 1980s.
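
As one standard illustration of the reformulation (a textbook example, not taken from Vohra’s book): the assignment of students $i$ to schools $j$ with match values $u_{ij}$ and capacities $q_j$ can be written as

$$\max_{x \ge 0} \sum_{i,j} u_{ij} x_{ij} \quad \text{s.t.} \quad \sum_j x_{ij} \le 1 \;\;\forall i, \qquad \sum_i x_{ij} \le q_j \;\;\forall j.$$

Because this constraint matrix is totally unimodular, the linear relaxation has integral extreme points (Birkhoff-von Neumann in the one-to-one case), so indivisibility comes for free; it is adding incentive constraints for privately-informed agents that makes the program hard.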

My reading of much of the recent matching literature – and there are obvious exceptions, Roth and Shapley included, as well as younger researchers like Kojima – is that many applied practitioners do not understand how tightly linked matching is to classic results in mechanism design and cooperative games. I have seen multiple examples, published in top journals, of awkward proofs related to matching which completely ignore this historical link. In general, economists are very well trained in noncooperative game theory, but less so in the other two “branches”, cooperative and evolutionary games. Fixing that imbalance is worthwhile.

As for extensions, I offer you a free paper idea, which I would be glad to discuss at greater length. “Repeated” matching has rarely been studied. Consider school choice. Students arrive every period to match, but schools remain in the game every period. In theory, I can promise the schools better matches in the future in exchange for not deviating today. The use of such dynamic but consistent promises is vastly underexplored.

Finally, who is left for future Nobels in the areas of particular interest to this blog, micro theory and innovation? In innovation, the obvious names are Rosenberg, Nelson and Winter; Nelson and Winter’s evolutionary econ book is one of the most cited texts in the history of our field, and that group will hopefully win soon, as they are all rather old. Shapley’s UCLA colleagues Alchian and Demsetz are pioneers of agency theory. I can’t imagine that Milgrom and Holmstrom will be left off the next micro theory prize given their many seminal papers (along with Myerson, they made the “new” game theory of the 70s and 80s possible!), and a joint prize with either Bob Wilson or Roy Radner would be well deserved. An econ history prize related to the Industrial Revolution would have to include Joel Mokyr. There are of course many more who could win, but these five or six prizes seem the most realistic next in line.

“Promoting School Competition Through School Choice: A Market Design Approach,” J.W. Hatfield, F. Kojima & Y. Narita (2011)

Matching theory has been quite prominent in economics in the past decade: how ought we match organ donors, or match schools to students, or match medical residents to assignments, or husbands to wives? The basic principle is that financial transfers – “side payments” – are often illegal or contrary to well-established custom, so our standard mechanism design tricks won’t work. In school choice, a network of authors centered around Al Roth has developed mechanisms for matching students to schools in New York, Boston and San Francisco, among other cities, in a way that attempts (imperfectly) to deal with schools and students who will try to game the system. Hatfield and coauthors, including the ridiculously productive young scholar Fuhito Kojima, point out in this new working paper that something has been missing in this literature. The motivation for allowing school choice in the first place was so that schools would have an incentive to improve themselves. Previous work assumed school quality was fixed. Something is incongruous.

Let there be a set of schools, each with a fixed number of slots available to students and a ranking over which students it prefers. Let students also rank which schools they like best. Say that a mechanism respects improvements if, whenever a school becomes “more preferred” by students, that school receives students it prefers more. The authors prove there is no stable, strategy-proof matching mechanism which respects improvements: no matter how we propose to use information, schools sometimes have an incentive to misreport which students they actually like. Restricting the domain of preferences does not help: the same result applies unless all schools have (almost) the same ranking over which students they prefer, and Bronx Science and a School of the Arts surely rank along different criteria.

Why is this? Consider the intuition from the famous Deferred Acceptance algorithm; it is stable, and the intuition of the example will generalize to any stable matching. Let there be two students (A and B) and two schools (1 and 2). School 1 has 2 spots and School 2 has only 1 spot. Both students prefer school 2. School 1 prefers student A to student B, and School 2 prefers student B to student A. In the student-proposing Deferred Acceptance algorithm, both students apply to school 2 in the first round. School 2 conditionally accepts its preferred student, student B; since school 2 has only one spot, it has filled its capacity. Student A then proposes to school 1 in the second round, and is accepted. Precisely the same outcome occurs (you can easily check this yourself, or run the sketch after the next paragraph) if schools propose. A famous theorem says that the student-proposing and school-proposing DA algorithms give the best-case and worst-case stable matchings, respectively, from the students’ perspective, so since these coincide here, the preferences permit a unique stable matching.

But what if school 1 improves itself, so that student B now prefers school 1 to school 2? Consider student-proposing DA again. In round 1, A will apply to 2 and B to 1. Both are conditionally accepted, and the algorithm terminates. So now school 1 winds up with student B rather than student A, a worse outcome in school 1’s eyes! It has been punished for improving itself in a very real sense. (You can check, very simply, that the stable matching under these new preferences is again unique, for exactly the same reason as in the previous paragraph.)
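
Here is a minimal Python sketch of student-proposing Deferred Acceptance that reproduces both runs of the example (the data layout is my own illustrative choice, not the paper’s formalism):

```python
def student_proposing_da(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> ordered list of schools
    school_prefs:  dict school -> ordered list of students (best first)
    capacities:    dict school -> number of seats
    """
    next_choice = {s: 0 for s in student_prefs}   # next school each student tries
    held = {c: [] for c in school_prefs}          # conditional acceptances
    unassigned = list(student_prefs)
    while unassigned:
        s = unassigned.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted their list
        school = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[school].append(s)
        # Keep only the capacity-many best applicants by the school's ranking.
        held[school].sort(key=school_prefs[school].index)
        while len(held[school]) > capacities[school]:
            unassigned.append(held[school].pop())  # reject the worst held student
    return {c: sorted(ss) for c, ss in held.items()}

# The example: school 1 has two seats, school 2 has one.
capacities = {1: 2, 2: 1}
school_prefs = {1: ["A", "B"], 2: ["B", "A"]}

before = student_proposing_da({"A": [2, 1], "B": [2, 1]}, school_prefs, capacities)
after  = student_proposing_da({"A": [2, 1], "B": [1, 2]}, school_prefs, capacities)
print(before)  # {1: ['A'], 2: ['B']}: school 1 gets its favorite, student A
print(after)   # {1: ['B'], 2: ['A']}: after improving, school 1 gets only B
```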

The authors prove, in a similar manner, that there is no mechanism which is Pareto optimal for students and respects school quality improvements. Lest we be too distraught by these results, it is at least the case that under a stable mechanism (like student-proposing Deferred Acceptance) in a “large” market, under some conditions, a school almost never needs to worry that improvements will make it worse off. The intuition is that, in the example above, I wanted a student to rank me low because this created competition at another school he now favors. That other school may then reject a different student whom I particularly like, who will then “settle” for my school. These “chains of rejections” depend on some schools having reached their capacity, because otherwise they will simply conditionally accept everyone who applies. The conditions on large markets essentially just say that the rejection chain I want to induce is unlikely to reach me, because the student I am trying to attract has, with high probability, options she prefers which have not yet filled their quota.

https://360269…Fuhito-Kojima/SchoolCompetition20111111.pdf (November 15, 2011 Working Paper; at a delay of less than 2 days, I am sure this is the “freshest” research ever discussed on this site!)
