## “Bounded Reasoning and Higher-Order Uncertainty,” W. Kets (2010)

Willemien Kets, a post-doc at the Santa Fe Institute and UC Irvine, is another solid job market candidate in theory this year; she presented this paper here at Northwestern last week.

Recall Rubinstein’s electronic mail game (a really strange name in retrospect – everyone knows it as Halpern’s story of the two generals, right?). If we are in state A, neither of two generals (in the same army) wants to attack. In state B, both want to attack. One general learns the true state and sends a message to the other. The other general then responds with a confirmation, and confirmations continue to be sent back and forth. Each message fails to arrive with probability epsilon, arbitrarily small. After one message is sent from General 1 to General 2, General 1 does not know whether 2 received it. After 2 confirms to 1, and 1 receives the confirmation, 2 does not know whether 1 received it. No matter how many confirmations are received, the generals are unable to coordinate their attack. In particular, if attacking is risk-dominated by not attacking, then the Pareto-optimal coordination on attack will not occur; that is, attack will not be a rationalizable strategy, no matter how small epsilon is and no matter how many messages are sent. Rubinstein provides the proof in the linked paper if you are not familiar with it.
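The induction behind the result turns on a simple conditional probability: a general who has sent his n-th message and heard nothing back should think it *more* likely that his own last message was lost (so the other general is one step behind) than that the other’s confirmation was lost. A minimal Monte Carlo sketch of the protocol – my own illustration, not code from the paper:

```python
import random

def run_protocol(eps, rng, max_rounds=1000):
    """Simulate one run of the email protocol: messages bounce back and
    forth until one is lost. Returns (n1, n2), the number of messages
    each general has sent."""
    counts = [0, 0]
    sender = 0
    for _ in range(max_rounds):
        counts[sender] += 1          # sender dispatches a message
        if rng.random() < eps:       # message lost: protocol halts
            break
        sender = 1 - sender          # receiver replies next
    return tuple(counts)

eps = 0.1
rng = random.Random(0)
runs = [run_protocol(eps, rng) for _ in range(200_000)]

# Conditional on General 1 having sent exactly n messages, how likely is
# it that his last message was lost, leaving General 2 "one behind"?
n = 3
behind = sum(1 for a, b in runs if a == n and b == n - 1)
even = sum(1 for a, b in runs if a == n and b == n)
p_behind = behind / (behind + even)

# Analytically this is eps / (eps + (1-eps)*eps) = 1/(2-eps) > 1/2,
# which is what makes attacking risky at every stage of the induction.
print(p_behind, 1 / (2 - eps))
```

Since 1/(2 − epsilon) exceeds one half for any epsilon, each general always puts majority weight on the other being a step behind him, which is exactly the wedge Rubinstein’s induction exploits.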

Rubinstein’s counterintuitive result (as he says, what would you do if you’d received 17 confirmations from your counterpart general?) relies completely on players having infinite depth of reasoning in the standard Harsanyi type space model. That is, I know what I will do, I know that you know what I will do, I know that you know that I know what I will do, and so on up to infinity. In some games, it would be nicer to examine what happens when players have bounded depths of reasoning; for instance, I can reason through seven levels, and you can reason through 28. Colin Camerer, among others, has formalized this as level-k hierarchies of beliefs. The belief hierarchy literature does have some nice predictive power, but it is not obviously connected to the standard type space literature in game theory, and it requires specifying what players with level-0 beliefs (i.e., completely unstrategic players) will do. In the present paper, Kets tries to get around these difficulties.

The basic idea is simple. Instead of type spaces specifying just what players know and what they believe about what other players know – i.e., how many confirmatory messages have been received, and how many such messages the other general thinks have been received – an extended type space also specifies a sigma algebra over the other players’ types. A sigma algebra, essentially, is just a list of sets to which I can assign probabilities. If two types of the other general are in the same element of my sigma algebra, then I “cannot tell them apart”. In particular, it might make sense to say that someone with level-7 beliefs has a sigma algebra that places all higher-order beliefs about others’ types in the same element, and therefore cannot assign probabilities to individual very-high-order types of the opponent. Kets shows that these sigma algebras form filtrations that allow belief hierarchies to be derived directly from the primitives in a recursive manner, and that the standard Harsanyi type space can be embedded in the extended type space notation.
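One way to picture the construction – a hypothetical toy version of my own, not Kets’s formal definition – is to represent each sigma algebra by the partition of opponent depths that generates it. A level-k player distinguishes opponent depths below k but lumps everything deeper into a single cell, and higher levels generate finer partitions, which is the filtration property:

```python
def partition_for_level(k, max_depth):
    """Partition of opponent reasoning depths {0, ..., max_depth} that a
    level-k player can distinguish: singletons below k, one lump above."""
    cells = [{d} for d in range(min(k, max_depth + 1))]
    lump = set(range(k, max_depth + 1))
    if lump:
        cells.append(lump)   # all "too deep" types are indistinguishable
    return cells

def refines(fine, coarse):
    """Check the filtration property: every cell of `fine` sits inside
    some cell of `coarse`."""
    return all(any(cell <= big for big in coarse) for cell in fine)

p3 = partition_for_level(3, 6)   # [{0}, {1}, {2}, {3, 4, 5, 6}]
p7 = partition_for_level(7, 6)   # all singletons: full discrimination

print(refines(p7, p3))   # a deeper reasoner refines a shallower one
print(refines(p3, p7))   # but not the other way around
```

A coarser partition means less ability to tell opponent types apart; as k grows, the partitions refine toward the standard Harsanyi case where every opponent type gets its own probability.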

What happens in the electronic mail game with extended type spaces? Assume there is an arbitrarily small chance that the other player has only a finite depth of reasoning. Then even if both generals actually do have infinite depth of reasoning, after a sufficiently large number of confirmations each will believe that the other general (who, being boundedly rational, cannot make the complex deductions necessary to fall into Rubinstein’s trap) has seen enough messages to coordinate on attack. We don’t, unfortunately, have a definition of “equilibrium” in the extended type space – Kets mentioned she is working on this problem for a followup paper – so the result above uses rationalizability only. This seems fine to me, since the epistemic conditions for Nash equilibrium require full rationality anyway, don’t they?

This paper provides a nice framework, but I’m not totally convinced of its general usefulness. What’s really needed is an easily computable equilibrium definition, like Nash, under conditions of bounded rationality and with arbitrary behavioral assumptions. Hierarchical beliefs and the extension in this paper come close, but they still seem too hard to use in practice for the wholly-empirical types.

http://tuvalu.santafe.edu/~willemien.kets/Kets_JMP.pdf (Working paper – final JMP version)