“Strongly Consistent Self-Confirming Equilibrium,” Y. Kamada (2010)

A good rule of thumb is that if, as an undergraduate, you find a major proof mistake in a well-known paper, the one who is mistaken is probably not the original author, but you. If you find mistakes in every theorem of that paper, then there is really no chance the errors are real. And if the paper is on learning in games, and is written by the people who literally wrote the textbook on learning in games?

It turns out that if you’re Yuichiro Kamada, a Harvard PhD student with an already massive list of papers on his CV, you may actually be right. Fudenberg and Levine introduced their self-confirming equilibrium (SCE) in the early 90s. SCE essentially says that agents optimize given beliefs about opponent play, and those beliefs are required to be correct only at information sets that are actually reached given the equilibrium strategy profile. This is clearly weaker than Nash, where agents optimize given the full opponent strategy. The point, in some sense, is that if we are playing a game and I only observe your behavior along the equilibrium path, then I might never learn what actions you would take, and whether I could do better, were I to try a different strategy.
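
To put this a bit more formally (the notation here is mine, not Fudenberg and Levine’s), write \sigma for the strategy profile and \mu_i for player i’s belief about opponent play. A sketch of the definition:

\sigma_i \in \arg\max_{s_i} u_i(s_i, \mu_i) \quad \text{for every player } i, \qquad
\mu_i = \sigma_{-i} \ \text{at every information set reached with positive probability under } \sigma.

Nash is then the special case where the second condition holds at every information set, on path or off, so that \mu_i = \sigma_{-i} everywhere.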

That said, it would be nice to know what restrictions on SCE ensure that an SCE is Nash. Fudenberg and Levine claimed that two conditions suffice: consistency, meaning beliefs are correct at any information set reachable by my own strategy together with some possible opponent strategy, and unitary independence, meaning beliefs about different opponents are independent and a single belief is used for every strategy I might play (in general, SCE might allow me to pair strategy A with beliefs R but strategy B with beliefs S when maximizing). This is not true: a simple 3-player counterexample is given in Kamada’s paper.
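
In the same rough notation as above, a sketch of the two conditions as I read them:

\text{Consistency: } \mu_i = \sigma_{-i} \ \text{at every information set reachable under } (\sigma_i, s_{-i}) \ \text{for some opponent profile } s_{-i}.
\text{Unitary independence: } \mu_i = \textstyle\prod_{j \neq i} \mu_i^{j}, \ \text{with the same belief } \mu_i \ \text{rationalizing every strategy that } i \ \text{plays}.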

The correct statement involves strengthening the definition of consistency to “strong consistency”. Rather than requiring beliefs to be correct at any information set reachable by my own strategy together with some possible opponent strategy, we ought to require beliefs to be correct at any information set that can be reached without contradicting my own on-equilibrium-path strategy. That is, imagine that player 1 can move Left or Right, then from each of those nodes player 2 can move Up or Down, and if he moves Up, then player 3 selects a number 0 or 1. Imagine that for some beliefs (L,(D,D),0) is an SCE (note that since 2 moves Down, 3 never moves on the equilibrium path), but that this SCE requires player 1 and player 2 to hold different beliefs about what player 3 will do should he actually move. Fudenberg and Levine’s consistency does not require 2 to have correct beliefs about 3, since the SCE above has 2 always playing Down no matter what the other agents do, hence 3 will never move. Kamada’s strong consistency requires correct beliefs for any actions that do not contradict the on-path play (L,D). Since (R,U) does not contradict (L,D), 2 must have correct beliefs about what 3 will do when U is played. Likewise, 1 must also have correct beliefs about 3, hence 1 and 2 will have beliefs that coincide. The formal proof is roughly along the same lines: if an information set is relevant to a set of players, then they will have true beliefs about what will happen at that information set, hence maximizing given beliefs is just maximizing given opponent strategies, as in a standard Nash equilibrium.
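
If it helps to see the reachability distinction spelled out, here is a minimal Python sketch of the example above under my reading of the two definitions; the tree and the profile (L,(D,D),0) follow the description in the paragraph, but the encoding, the single information set for player 3, and all names in the code are my own assumptions rather than anything in Kamada’s paper.

# Minimal sketch (mine, not Kamada's) of which information sets player 2's
# beliefs must be correct at, under consistency vs. strong consistency.

MOVER = {
    (): (1, "root"),        # player 1 chooses L or R
    ("L",): (2, "2L"),      # player 2 after L chooses U or D
    ("R",): (2, "2R"),      # player 2 after R chooses U or D
    ("L", "U"): (3, "3"),   # player 3 chooses 0 or 1 (one info set assumed)
    ("R", "U"): (3, "3"),
}
PROFILE = {"root": "L", "2L": "D", "2R": "D", "3": "0"}   # the SCE (L,(D,D),0)

def on_path_sets(profile):
    """Information sets reached when everyone follows the profile."""
    h, sets = (), set()
    while h in MOVER:
        _, iset = MOVER[h]
        sets.add(iset)
        h += (profile[iset],)
    return sets

def reachable(history, me, constraint):
    """Can play arrive at `history` while my own moves respect `constraint`
    (a dict info_set -> required action)?  Everyone else is unconstrained."""
    h = ()
    for action in history:
        player, iset = MOVER[h]
        if player == me and constraint.get(iset, action) != action:
            return False
        h += (action,)
    return True

path = on_path_sets(PROFILE)          # {"root", "2L"}: player 3 never moves
node_of_3 = ("R", "U")                # a history at which player 3 moves

# Consistency: hold player 2's entire strategy (D,D) fixed.
print(reachable(node_of_3, 2, {"2L": "D", "2R": "D"}))                              # False
# Strong consistency: hold only player 2's on-path action (D after L) fixed.
print(reachable(node_of_3, 2, {s: PROFILE[s] for s in ("2L", "2R") if s in path}))  # True

The first check comes back False: with 2’s whole strategy held fixed at (D,D), play never gets past an Up move, so player 3’s information set is irrelevant to player 2 and consistency leaves his beliefs about 3 unrestricted. The second comes back True: only the on-path Down after L is held fixed, so a deviation to Up at the off-path node after R carries play to player 3, and strong consistency then pins down 2’s (and, by the same argument, 1’s) belief about what 3 will do.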

The moral: economic theory is hard, and even the famous guys slip up once in a while!

http://www.people.fas.harvard.edu/~ykamada/strong_consistency.pdf
