Especially among non-theorists, there is a very common misunderstanding about the restrictiveness of game theory: how, many wonder, are everyone’s payoffs supposed to be common knowledge? And isn’t this common knowledge necessary for equilibrium concepts to make sense?

The answer, as Harsanyi showed in 1967, is no: we just need to define a Bayesian game of incomplete information. One way to do this is to endow players with state-contingent payoffs, a common prior over states, and information partitions which may differ across players. That is, player 1 may be unable to tell apart states 1 and 2 (and hence will maximize his expected payoff given his prior over those two states), while player 2 may always know which state we are in (but will also understand that player 1 does not have this information). Introducing incomplete information is particularly interesting when it *reduces* the set of Nash equilibria. Intuitively, consider a two-player game where, in state 1, player 1 will always take action A. Player 2 can’t tell states 1 and 2 apart, but given his prior and the fact that player 1 always plays A in state 1, player 2 has a unique best response B when he knows we are in either state 1 or state 2. Player 1, who can’t tell states 2 and 3 apart, might then also have a unique best response C when he knows we are in either state 2 or state 3, and so on. This is precisely what happens in Rubinstein’s (1989) electronic mail game. This type of cascade is generally called an *infection*.
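The cascade can be made concrete with a tiny numerical example. The payoffs below are my own invention, not from the paper: three equally likely states; in state 1, Invest is strictly dominant for player 1; elsewhere, Invest pays 1 against an investing opponent and -4/5 otherwise, while Not pays 0. Iterating worst-case best responses cell by cell, the commitment to Invest spreads across the whole state space:

```python
from fractions import Fraction as F

# Toy infection cascade (hypothetical payoffs): 3 states, uniform prior.
# In state 1, Invest is strictly dominant for player 1. Everywhere else,
# Invest pays 1 if the opponent invests and -4/5 if not; Not pays 0.
STATES = [1, 2, 3]
PRIOR = {s: F(1, 3) for s in STATES}
PARTS = {1: [{1}, {2, 3}], 2: [{1, 2}, {3}]}

def cascade():
    # invests[i] = states where player i plays Invest in every
    # rationalizable strategy (grows monotonically until a fixpoint).
    invests = {1: set(), 2: set()}
    changed = True
    while changed:
        changed = False
        for me, other in ((1, 2), (2, 1)):
            for cell in PARTS[me]:
                if cell <= invests[me]:
                    continue  # already committed on this cell
                mass = sum(PRIOR[s] for s in cell)
                # Worst case: opponent invests only where already committed.
                q = sum(PRIOR[s] for s in cell if s in invests[other]) / mass
                eu_invest = q * 1 + (1 - q) * F(-4, 5)
                dominant = (me == 1 and cell == {1})  # the state-1 seed
                if dominant or eu_invest > 0:
                    invests[me] |= cell
                    changed = True
    return invests

print(cascade())  # both players end up investing in every state
```

The dominance seed in state 1 infects player 2's cell {1,2}, which infects player 1's cell {2,3}, which infects player 2's cell {3}: exactly the A-then-B-then-C chain described above.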

Is there a simple way to identify when infections can occur? Morris, Rob and Shin show that, in two-player games with a finite action space, the answer is yes; further, some classmates and I have some intuition that a similar, though more notationally burdensome, proof strategy will work for N-player games with finite action spaces. Let an action pair (A,B) be *p-dominant* if, when each player believes the other will play the given strategy with probability at least p, the specified action is each player’s unique best response. Let Q1(p,E) be the set of states where player 1 believes with probability at least p that player 2 believes with probability at least p that an event E has occurred, and that E has in fact occurred. Likewise define Q2(p,E) with the roles of the players reversed. Q1 and Q2 are just operators on events, so we can apply them iteratively, as in Q1(p,Q1(p,E)). If the finest common coarsening of the players’ information partitions (i.e., the meet of those partitions) is the trivial partition consisting of the whole state space – that is, if the only common-knowledge event is the whole space – then applying Q1 or Q2 some finite number k of times will give the entire state space. Let the belief potential of the game be the largest p such that, for some k, k applications of Q1(p,·) and k applications of Q2(p,·) cover the entire state space no matter what positive-probability event E we start with. One can prove that the belief potential of a game never exceeds 1/2.
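It may help to code up the p-belief machinery directly. The sketch below is a simplification, not MRS’s exact operators: I iterate the monotone map E → E ∪ B1^p(B2^p(E)), where B_i^p(E) is the standard set of states in which player i assigns probability at least p to E, and I use an e-mail-game-like information structure of my own choosing. It is meant only to show when iterated beliefs about beliefs cover the whole state space:

```python
from fractions import Fraction as F

def p_belief(partition, prior, event, p):
    """States where this player assigns probability >= p to `event`,
    conditional on the partition cell containing each state."""
    out = set()
    for cell in partition:
        mass = sum(prior[s] for s in cell)
        hit = sum(prior[s] for s in cell if s in event)
        if mass > 0 and hit / mass >= p:
            out |= cell
    return out

def infect(part1, part2, prior, event, p, max_rounds=50):
    """Iterate E -> E ∪ B1^p(B2^p(E)) to a fixpoint (a simplified,
    monotone stand-in for the Q operators in the text)."""
    E = set(event)
    for _ in range(max_rounds):
        nxt = E | p_belief(part1, prior, p_belief(part2, prior, E, p), p)
        if nxt == E:
            break
        E = nxt
    return E

# E-mail-game-like structure: adjacent cells overlap in one state, so
# the meet of the partitions is trivial and beliefs chain across states.
states = range(6)
prior = {s: F(1, 6) for s in states}
P1 = [{0}, {1, 2}, {3, 4}, {5}]
P2 = [{0, 1}, {2, 3}, {4, 5}]

print(infect(P1, P2, prior, {0}, F(1, 2)))  # spreads to all of 0..5
print(infect(P1, P2, prior, {0}, F(3, 5)))  # stuck at {0}
```

With p = 1/2 the seed event {0} infects the entire state space in three rounds; with p = 3/5, above this structure’s belief potential, the infection never leaves the seed.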

We now have all the tools we need for the main proof: if 1) the game has belief potential p, 2) (A1,A2) is p-dominant in every state, and 3) in at least one state, at least one player finds A(i) to be a strictly dominant action, then (A1,A2) is the unique rationalizable strategy profile in the incomplete information game. This is even stronger than unique Nash! The proof is actually simple. Let R(i) be the set of rationalizable strategies for player i, and let W(i) be the set of states where player i plays action A(i) in every rationalizable strategy. Let E be the event where A(i) is strictly dominant, and hence rationalizable, for player i; E is nonempty by assumption 3. By assumption 2, if a player believes the other will play his part of (A1,A2) with probability at least p, then A(i) is his unique best response, so Q1(p,E) is contained in W(1) and Q2(p,E) in W(2). Since that argument works in every state, k iterated applications of Q1 and Q2 stay inside W(1) and W(2), respectively. By the definition of belief potential, for k sufficiently large those iterates cover the whole state space. So by set inclusion, W(1) and W(2) – the sets of states where A(i) is played in every rationalizable strategy – are the whole state space.
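In symbols (my own shorthand; Ω is the state space, and Q_i^k denotes k applications of Q_i(p,·)), the proof is just a chain of inclusions:

```latex
\begin{align*}
  E &\subseteq W_i
      && \text{($A_i$ strictly dominant on $E$, assumption 3)}\\
  Q_1(p,E) &\subseteq W_1, \quad Q_2(p,E) \subseteq W_2
      && \text{($p$-dominance, assumption 2)}\\
  Q_1^k(p,E) &\subseteq W_1, \quad Q_2^k(p,E) \subseteq W_2
      && \text{(induction on $k$)}\\
  \Omega &= Q_1^k(p,E) \subseteq W_1 \text{ for $k$ large}
      && \text{(belief potential, assumption 1)}
\end{align*}
```

and symmetrically for W_2, so W(1) = W(2) = Ω.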

The conditions for infection turn out to be pretty simple. The proposition proved above can show, for example, why the risk-dominant Nash equilibrium is played uniquely in two-player, two-action games (as in Carlsson and van Damme’s global games paper). Infection arguments are actually critical to understanding things like currency crises or financial meltdowns: an infection essentially occurs when players’ beliefs about other players’ beliefs about other players’ beliefs, and so on, result in a unique action being chosen by each player in equilibrium. These actions are often not Pareto-dominant. Writing this problem down in terms of information partitions, as we did above, makes it really clear how adding information to the game – Lehman goes bankrupt, or the Fed makes an announcement which becomes common knowledge, etc. – can radically change equilibrium behavior under incomplete information. These types of models exist in financial macro – Allen and Gale’s 2000 JPE, for instance – but are not nearly common enough.

http://www.pse.ens.fr/guesnerie/teaching/ehess/morris-rob-shin.pdf (Final Econometrica version)
