“On the Strategic Stability of Equilibria,” E. Kohlberg and J.-F. Mertens (1986)

(The following discussion also draws heavily on “Quantity Precommitment and Bertrand Competition Yield Cournot Outcomes,” by D. Kreps and J. Scheinkman, 1983. That paper shows that if duopolists first choose capacities (at a cost) and then simultaneously choose prices after observing those capacities, the unique Nash equilibrium gives Cournot prices and quantities. Proving this in general is very difficult, as the pricing subgame has discontinuous payoffs: as in Bertrand, the low-priced seller gets all the sales (up to capacity, of course), and in general discontinuous games may not have any equilibria at all. Uniqueness is striking, since it means the argument doesn’t even depend on subgame perfection, though the equilibrium strategy is subgame perfect. That is, there is more going on than the strategies “Both choose Cournot capacities in period 1, and if either does not, choose price 0 to punish the other player.” The exact details of how to construct this equilibrium are interesting indeed, but for the purposes of the following discussion, just know that Kreps and Scheinkman’s basic point is that the solution to duopoly games depends not only on the variables chosen or on the timing implicit in the game form, but on the combination of the two.)
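
(As a quick refresher, using the standard textbook specification rather than anything from the paper itself: with linear inverse demand and a common constant marginal cost, Cournot and Bertrand give sharply different predictions, which is what makes the Kreps-Scheinkman result interesting.

\[
P(Q) = a - bQ, \qquad Q = q_1 + q_2, \qquad \text{marginal cost } c < a
\]
\[
\text{Cournot: } q_i^C = \frac{a-c}{3b}, \quad P^C = \frac{a+2c}{3} > c; \qquad \text{Bertrand: } P^B = c, \text{ zero profits.}
\]

The capacity-then-price game looks like Bertrand in its second stage, yet delivers the Cournot outcome.)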

Kohlberg and Mertens were writing in the heyday of the equilibrium refinement literature – theirs was one of the last well-known refinements of Nash. Consider constructing a “reasonable set” of equilibria to a game. What properties might you like such a set to have? Best would, of course, be to define a set of axioms on “reasonableness” and then show what they imply about the equilibrium set, but the authors say “we do not yet feel ready for such an approach; we think the discussion…will abundantly illustrate the difficulties involved.” More on this point in the last paragraph of this post.

What non-axiomatic properties, then, might you like? Backwards induction is fairly reasonable, as has been discussed ad nauseam in an earlier philosophy and decision theory literature. Admissibility is another good one, also with roots in decision theory: all equilibria in the players’ reasonable sets should be undominated. Three other properties have to do with the game form itself. Existence of at least one equilibrium is surely a property we want if we restrict ourselves to “reasonable sets” of potential equilibria. Invariance of equilibria to the game form, in the sense of some 1950s authors, also seems reasonable: I don’t want to get different reasonable sets simply by changing the way I write down a game tree if such a change has no effect on the normal form of the game (there is a technical qualification here that I ignore); one argument here is that the equilibria of a game should not depend on whether I give instructions to a computer on how I should play in every situation before the game begins, or whether I play through the game tree myself. Finally, if I delete a dominated strategy from game G to create game G’, I don’t want any equilibria to disappear from the reasonable set.
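
To see why that last property has bite, here is a toy example of my own, not one from the paper. Consider the two-player game

\[
\begin{array}{c|cc}
 & L & R \\ \hline
T & 1,1 & 0,0 \\
B & 0,0 & 0,0
\end{array}
\]

Both (T,L) and (B,R) are Nash equilibria, but B is weakly dominated by T. Delete B and R is no longer a best response, so (B,R) vanishes. Admissibility already frowns on (B,R), but the example shows concretely how deleting a dominated strategy can make an equilibrium disappear, which is exactly what a “reasonable set” containing (B,R) would fail to survive.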

Here Kohlberg and Mertens propose their well-known KM stable set. The proofs are of limited interest except to those of you really up on your differential geometry; I assume theorems like “p-bar is homotopic to a homeomorphism” are not of broad interest to readers of this site. In any case, a KM stable set is a closed set of Nash equilibria such that, for any completely mixed strategy profile, if I perturb each player’s strategy by some small delta toward that completely mixed profile, the perturbed game has an equilibrium epsilon-close to the original set (close in terms of the strategy simplex, as usual). Every game has at least one equilibrium in a KM stable set, but there may be multiple such sets. This definition is hard to work with, of course, but it satisfies all of the desired properties except backward induction. Kohlberg and Mertens note in the conclusion that it would be great if someone could make a small modification such that backward induction were also captured (I believe Hillas did this, though I’ve not read his paper).
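
In symbols (my paraphrase; I am glossing over the requirement that the set be minimal with respect to this property), a closed set S of Nash equilibria of a finite game is stable if

\[
\forall \varepsilon > 0 \;\; \exists \bar\delta > 0 : \text{ for every completely mixed } \sigma = (\sigma_1,\dots,\sigma_n) \text{ and every } \delta \in (0,\bar\delta)^n,
\]
\[
\text{the game in which each strategy } \tau_i \text{ is replaced by } (1-\delta_i)\tau_i + \delta_i \sigma_i \text{ has an equilibrium within } \varepsilon \text{ of } S.
\]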

What is most interesting about this paper is how far afield the refinement literature got itself. I think the problem is evident in the list of non-axioms/properties listed by Kohlberg and Mertens. There are many properties you might think are reasonable for the solution to a game of strategic interaction. Some of them are decision-theoretic (Type 1). Some of them involve robustness to errors of logic, or limited reasoning capacity, or minor mistakes (Type 2). Some of them involve equilibrium definitions that can handle problems in the way the game form is written (Type 3), as the invariance property here attempts to do.

Attempting to do all of the above simultaneously is going to be problematic. We ought first to agree on a canonical game form; given that form, we ought to describe errors explicitly within the game form (Type 2), and then deal axiomatically with the decision-theoretic issues (Type 1). Looking back from 2012, I think that Type 2 has been dealt with suitably, and negatively, by papers discussed previously on this site. Essentially, given a set of Nash equilibria, I can make every one of them a strict Nash equilibrium by altering super-high-order knowledge in a way that is fairly uncontroversial once you accept that reasoning with higher-order knowledge may be limited or that people make mistakes in applying such knowledge. That is, all NE would need to be part of any “reasonable set” if we are to leave the world of perfectly rational agents.

I actually think Type 3 robustness is not fully explored, though, which brings us back to Kreps and Scheinkman. I don’t think their conclusion – that the squabble between Cournot and Bertrand can in some sense be resolved by suitably changing Bertrand so that information about other agents’ potential production is known when I choose my price – is enough. Rather, there is a potential problem with the idea of how Nash equilibrium treats payoffs, a problem made most clear in games with continuous action spaces where the payoffs depend on a system of variables, some of which can be solved for if we know the others. This is a bit opaque, but I hope to have more to say on that point in the near future.

http://www.dklevine.com/archive/refs4445.pdf (Final Econometrica version – big thumbs up to David Levine for his continuing acts of giving a different finger up to copyright maximalism.)
