Aumann (1976) famously proved that with common priors and common knowledge of posteriors, individuals cannot agree to disagree about a fact. Geanakoplos and Polemarchakis (1982) explained *how* one might reach a common posterior, by showing that if two agents repeatedly communicate their posteriors and then re-update, they will converge on a common posterior after finitely many exchanges of information (of course, it might not be the true state of the world that they converge on, but converge they will), and hence will not agree to disagree. This result turns out to generalize to signal functions other than Bayesian posteriors, as in Cave (1983).
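To make the Geanakoplos–Polemarchakis back-and-forth concrete, here is a minimal sketch in Python. The states, partitions, and event below are a toy example of my own, not taken from the paper: two agents share a uniform prior over four states, alternately announce their posterior for an event, and refine their information partitions with each (public) announcement until nothing changes.

```python
from fractions import Fraction

def posterior(cell, event, prior):
    """P(event | cell) under the common prior."""
    return Fraction(sum(prior[w] for w in cell & event),
                    sum(prior[w] for w in cell))

def cell_of(partition, w):
    return next(c for c in partition if w in c)

def refine(partition, f):
    """Split each cell by the announced value f(w) — the listener learns
    which of the speaker's cells are consistent with the announcement."""
    new = []
    for cell in partition:
        groups = {}
        for w in cell:
            groups.setdefault(f(w), set()).add(w)
        new.extend(frozenset(g) for g in groups.values())
    return new

def dialogue(prior, parts, event, truth):
    """Alternate announcements until neither partition refines further."""
    parts = [list(p) for p in parts]
    while True:
        old = [set(p) for p in parts]
        for i in (0, 1):
            f = lambda w, i=i: posterior(cell_of(parts[i], w), event, prior)
            parts[1 - i] = refine(parts[1 - i], f)
        if [set(p) for p in parts] == old:
            return [posterior(cell_of(p, truth), event, prior) for p in parts]

# Toy example: uniform prior on {1,2,3,4}, event A = {1,4}, true state 1.
prior = {1: 1, 2: 1, 3: 1, 4: 1}
P1 = [frozenset({1, 2}), frozenset({3, 4})]
P2 = [frozenset({1, 2, 3}), frozenset({4})]
p1, p2 = dialogue(prior, [P1, P2], frozenset({1, 4}), truth=1)
print(p1, p2)  # → 1/2 1/2: the agents agree, though the event is in fact true
```

Note that the common posterior of 1/2 is not the truth (state 1 is in the event), which is exactly the caveat above: they converge, but not necessarily to the true state of the world.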

One might wonder, then: does this result hold for more than two people? The answer is that it does not, at least not without further conditions. Parikh and Krasucki define a communication protocol among N agents as a sequence of names r(t) and s(t) specifying who is speaking and who is listening in every period t; note that all communication is pairwise. As long as communication is “fair”, meaning that everyone communicates with everyone else, directly or indirectly, infinitely often, and as long as the information update function satisfies a convexity property (Bayesian updating does), beliefs will converge, although, unlike in Aumann, Geanakoplos and Polemarchakis, and Cave, the limit posterior need not be common knowledge.
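The fairness condition is easy to state computationally for the special case of a protocol that repeats a fixed finite schedule forever: every agent must be able to reach every other agent through the directed graph of (speaker, listener) pairs occurring in one period. Here is a small sketch of that check; the function name and the periodic-protocol restriction are my own simplification, not Parikh and Krasucki's general definition.

```python
def is_fair_periodic(period, n):
    """Check fairness for a protocol among agents 0..n-1 that repeats
    `period` (a list of (speaker, listener) pairs) forever: every agent
    must reach every other via the directed graph of pairs in one period."""
    adj = {i: set() for i in range(n)}
    for s, r in period:
        adj[s].add(r)
    def reachable(start):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    return all(reachable(i) == set(range(n)) for i in range(n))

# A round-robin ring is fair: information flows 0 → 1 → 2 → 0 forever.
print(is_fair_periodic([(0, 1), (1, 2), (2, 0)], 3))  # → True
# If agent 0 only ever speaks and never listens, the protocol is unfair.
print(is_fair_periodic([(0, 1), (0, 2)], 3))          # → False
```

The ring example also illustrates why the limit need not be common knowledge: agent 0 never hears what agent 2 concluded directly, only what filters back around the cycle.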

There is no very simple example of beliefs failing to converge, but a long(ish) counterexample is found in the paper. A followup by Houy and Menager notes that even when information updates are Bayesian, different orders of communication can lead beliefs to converge to different points, and proves results about how much information can be gleaned when agents can first discuss the order in which they wish to present their evidence: if it is common knowledge that two groups of agents disagree about which protocol will make them better off (in the sense of giving them the finest information partition after all updates have been done), then any order of communication, together with the knowledge of who “wanted to speak first”, will lead beliefs to converge to the same point. That is, if Jim and Joe both wish to speak second, and this is common knowledge, then no matter who speaks first, beliefs will converge to the same point.

One important point about Parikh and Krasucki: the result is “wrong” in the sense that the method of updating beliefs is problematic. In particular, when agents receive new information, the update method turns out to ignore some valuable information. I will make this statement clearer in a post tomorrow.

This entire line of reasoning makes you wonder whether, under common topologies of communication, we can guarantee convergence of beliefs in groups, or indeed whether we can guarantee that “the boss”, somehow defined, knows at least as much as everyone else. This is the project I’m working on at present, and I hope to have results to share here by the end of the year.

http://web.cs.gc.cuny.edu/~kgb/course/krasucki.pdf

Excellent post! It’s a very interesting line of research indeed. Good luck with your work.

Amazing. This is somewhat surprising to me. I look forward to reading whatever you have to share with us.

In a related note, have you read a paper by Acemoglu, titled “Fragility of Asymptotic Agreement under Bayesian Learning”?

And what about more empirical approaches to convergence (or lack of convergence) of beliefs? This seems to be a hot topic in political science (see this: http://www.themonkeycage.org/2010/04/more_on_epistemic_closure.html)

I know the Acemoglu result – it’s interesting, but I guess you have to care about non-common priors in the first place to care about the failure of convergence under non-common priors. The learning justification he cites for common priors is certainly not the only reason to believe that people have them in the first place…

As for empirical approaches, I don’t really know much about the academic literature. The econ literature doesn’t guarantee that pairs converge “to the truth” just by communicating, however. If A and B gather data (refine their information partitions), as do C and D, and then A-B and C-D talk only to each other, then nothing in Aumann/Geanakoplos keeps A-B and C-D from converging to different beliefs…

I see. Thanks for your reply. If I come up with any further comments, I’ll write here.

Manoel

You might also be interested in: Tsakas and Voorneveld: “On consensus through communication without a commonly known protocol”

http://edocs.ub.unimaas.nl/loader/file.asp?id=1490