Aumann (1976) famously proved that with a common prior and common knowledge of posteriors, individuals cannot agree to disagree about a fact. Geanakoplos and Polemarchakis (1982) showed how a common posterior might be reached: if two agents repeatedly announce their posteriors and update on each other's announcements, then after finitely many exchanges their posteriors converge (not necessarily to the true state of the world, but converge they will), and hence they will not agree to disagree. This result turns out to generalize to signal functions other than Bayesian posteriors, as in Cave (1983).
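For intuition, here is a minimal sketch (my own, not from any of these papers) of the Geanakoplos–Polemarchakis dialogue over a finite state space. The uniform common prior, the particular stopping rule, and all names are illustrative assumptions; each agent's knowledge is a partition of the states, and announcing a posterior lets the listener split apart any states in a cell that would have produced different announcements.

```python
from fractions import Fraction

def posterior(event, info):
    # P(event | info) under a uniform common prior (an assumption of this sketch).
    return Fraction(len(event & info), len(info))

def refine(partition, announce):
    # Split each cell by the speaker's announcement function: states that
    # would have produced different announcements can now be told apart.
    new = []
    for c in partition:
        groups = {}
        for s in c:
            groups.setdefault(announce[s], set()).add(s)
        new.extend(groups.values())
    return new

def dialogue(part1, part2, event, true_state):
    # Agents alternate announcing posteriors; the listener refines.
    # Returns the sequence of posteriors announced at the true state,
    # stopping once two consecutive announcements refine nothing.
    parts = [[set(c) for c in part1], [set(c) for c in part2]]
    history, stable, t = [], 0, 0
    while stable < 2:
        speaker, listener = t % 2, 1 - t % 2
        announce = {s: posterior(event, c) for c in parts[speaker] for s in c}
        history.append(announce[true_state])
        refined = refine(parts[listener], announce)
        stable = stable + 1 if len(refined) == len(parts[listener]) else 0
        parts[listener] = refined
        t += 1
    return history
```

For example, with states {0,1,2,3}, partitions {{0,1},{2,3}} and {{0,1,2},{3}}, event {0,3}, and true state 0, the announced posteriors run 1/2, 1/3, and then settle at 1/2: the agents come to agree, though not on the truth (the event occurred, yet both end at probability 1/2).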
One might wonder, then: does this result hold for more than two people? The answer is that it does not. Parikh and Krasucki (1990) define a communication protocol among N agents as a sequence of names r(t) and s(t) specifying who is speaking and who is listening in each period t; note that all communication is pairwise. As long as communication is “fair” (more on this shortly), meaning that everyone communicates with everyone else, directly or indirectly, an infinite number of times, and as long as the information update function satisfies a convexity property (Bayesian updating does), beliefs will converge. Unlike in Aumann, Geanakoplos and Polemarchakis, and Cave, however, the limit posterior need not be common knowledge.
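To make the protocol concrete, here is a rough sketch (again my own, not Parikh and Krasucki's construction) of one simple fair protocol, a round robin in which agent t mod N speaks to agent (t+1) mod N in period t, so everyone reaches everyone else at least indirectly. The uniform common prior and the stopping rule are simplifying assumptions; only the listener updates at each step, which is the pairwise, private character of the model.

```python
from fractions import Fraction

def posterior(event, info):
    # P(event | info) under a uniform common prior (an assumption of this sketch).
    return Fraction(len(event & info), len(info))

def refine(partition, announce):
    # Split each cell of `partition` by the speaker's announcement function.
    new = []
    for c in partition:
        groups = {}
        for s in c:
            groups.setdefault(announce[s], set()).add(s)
        new.extend(groups.values())
    return new

def round_robin(partitions, event, true_state):
    # Fair pairwise protocol: in period t, agent t mod N speaks to agent
    # (t+1) mod N, and only the listener refines its partition. Returns the
    # announced values at the true state; since partitions can only get
    # finer, stopping once a full cycle changes nothing is safe.
    parts = [[set(c) for c in p] for p in partitions]
    n = len(parts)
    history, unchanged, t = [], 0, 0
    while unchanged < n:
        speaker, listener = t % n, (t + 1) % n
        announce = {s: posterior(event, c) for c in parts[speaker] for s in c}
        history.append(announce[true_state])
        refined = refine(parts[listener], announce)
        unchanged = unchanged + 1 if len(refined) == len(parts[listener]) else 0
        parts[listener] = refined
        t += 1
    return history
```

With three agents this small example converges to a common value; exhibiting beliefs that fail to become common knowledge requires the longer counterexample in the paper itself.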
There is no very simple example of beliefs failing to converge, but a long(ish) counterexample appears in the paper. A followup by Houy and Menager notes that even when information updates are Bayesian, different orders of communication can lead beliefs to converge to different points, and proves results about how much information can be gleaned when agents can first discuss the order in which they wish to share their evidence: if it is common knowledge that two groups of agents disagree about which protocol will make them better off (in the sense of giving them the finest information partition after all updates are done), then any order of communication, along with the knowledge of who “wanted to speak first,” leads beliefs to converge to the same point. That is, if Jim and Joe both wish to speak second, and this is common knowledge, then no matter who speaks first, beliefs will converge to the same point.
One important point about Parikh and Krasucki: the result is “wrong” in the sense that the method of updating beliefs is problematic. In particular, when agents get new information, their update rule turns out to ignore some valuable information. I will make this statement clearer in a post tomorrow.
This entire line of reasoning makes you wonder whether, under common topologies of communication, we can guarantee convergence of beliefs in groups, or indeed whether we can guarantee that “the boss,” however defined, knows at least as much as everyone else. This is the project I am currently working on, and I hope to have results to share here by the end of the year.