Category Archives: Consensus

“Finite Additivity, Another Lottery Paradox, and Conditionalisation,” C. Howson (2014)

If you know the probability theorist Bruno de Finetti, you know him either for his work on exchangeable processes, or for his legendary defense of finite additivity. Finite additivity essentially replaces the Kolmogorov assumption of countable additivity of probabilities. If Pr(i) for i=1 to N is the probability of disjoint event i, then the probability of the union of all i is just the sum of the individual probabilities under either countable or finite additivity; countable additivity requires that property to hold for countably infinite collections of disjoint events as well.

What is objectionable about countable additivity? There are three classic problems. First, countable additivity rules out some very reasonable subjective beliefs. For instance, I might imagine that a Devil is going to pick one of the integers, and that he is equally likely to pick any given number; that is, my prior is uniform over the integers. Countable additivity does not allow this: if the probability of any given number being picked is greater than zero, the sum diverges, and if the probability that any given number is picked is zero, then by countable additivity the probability of the grand set is also zero, violating the usual axiom that the grand set has probability 1. The second problem, loosely related to the first, is that I literally cannot assign probabilities to some objects, such as nonmeasurable sets.
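To see the dichotomy formally (my notation, not Howson's): suppose the prior assigns every integer the same weight e. Then

```latex
\Pr\Big(\bigcup_{n=1}^{\infty}\{n\}\Big)
  \;=\; \sum_{n=1}^{\infty} \Pr(n)
  \;=\; \begin{cases} \infty & \text{if } e > 0,\\[2pt] 0 & \text{if } e = 0, \end{cases}
```

and neither value can equal the required probability 1 for the grand set. Finite additivity only constrains the finite sums, each of which sits weakly below 1 when e = 0, so the uniform prior over the integers is perfectly admissible there.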

The third problem, though, is the really worrying one. To the extent that a theory of probability has epistemological meaning and is not simply a mathematical abstraction, we might want to require that it not contradict well-known philosophical premises. Imagine that every day, nature selects either 0 or 1. Suppose we observe 1 every day up to the present (call this day N). Let H be the hypothesis that nature will select 1 every day from now until infinity. It is straightforward to show that countable additivity requires that, as N grows large, continued observation of 1 forces Pr(H) toward 1. But this is just saying that induction works! And if there is any great philosophical advance in the modern era, it is Hume's (and Goodman's, among others) demolition of the idea that induction is sensible. My own introduction to finite additivity comes from a friend's work on consensus formation and belief updating in economics: we certainly don't want to bake in conclusions about beliefs that rely entirely on countable additivity, given how strongly that assumption militates for induction. Aumann was always very careful on this point.
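A sketch of why (my notation): let H_N be the event that nature selects 1 on days 1 through N, so the events H_N decrease to H. Countable additivity is equivalent to continuity from above, giving Pr(H_N) -> Pr(H), and hence, whenever the prior puts positive weight on H,

```latex
\Pr(H \mid H_N) \;=\; \frac{\Pr(H \cap H_N)}{\Pr(H_N)} \;=\; \frac{\Pr(H)}{\Pr(H_N)} \;\longrightarrow\; \frac{\Pr(H)}{\Pr(H)} \;=\; 1 .
```

Under finite additivity, Pr(H_N) need not converge down to Pr(H), so the inductive conclusion is not forced.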

It turns out that if you simply replace countable additivity with finite additivity, all of these problems (among others) go away. Howson, in a paper in the newest issue of Synthese, asks why, given that clear benefit, anyone still finds countable additivity justifiable. Sure, there are lots of pretty theorems, from Radon-Nikodym on down, that require countable additivity, but if a theorem hinges critically on an unjustifiable assumption, what exactly are we to infer about the justifiability of the theorem itself?

Two serious objections are tougher to deal with for de Finetti acolytes: coherence and conditionalization. Coherence, a principle closely associated with de Finetti himself, says that there should not be “fair bets” given your beliefs where you are guaranteed to lose money. It is sometimes claimed that a uniform prior over the naturals is not coherent: you are willing to take a bet that any given natural number will not be drawn, but the conjunction of such bets for all natural numbers means you will lose money with certainty. This isn’t too worrying, though; if we reject countable additivity, then why should we define coherence to apply to non-finite conjunctions of bets?

Conditionalization is more problematic. It says that given prior P(i), your posterior P(f) of event S after observing event E must satisfy P(f)(S)=P(i)(S|E). This is just “Bayesian updating” off of a prior. Lester Dubins pointed out the following. Let A and B be two mutually exclusive hypotheses, with P(A)=P(B)=.5. Let the random quantity X take positive integer values such that P(X=n|A)=0 for every n (a uniform prior over the naturals conditional on A obtaining, which finite additivity allows), and P(X=n|B)=2^(-n). By the law of total probability, P(X=n)>0 for all n, and therefore by Bayes’ Theorem, P(B|X=n)=1 and P(A|X=n)=0, no matter which n obtains! Something is odd here. Before seeing the realization of X, you would take a fair bet on A obtaining. But once n obtains (no matter which n!), you are guaranteed to lose money by betting on A.
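One way to see the arithmetic is to approximate the finitely additive uniform prior by a uniform prior on {1,…,M} and let M grow; a minimal sketch (my own toy approximation, not from the paper):

```python
from fractions import Fraction

def posterior_A(n, M):
    """P(A | X=n) when P(A) = P(B) = 1/2, the uniform-given-A prior is
    approximated by weight 1/M on {1,...,M}, and P(X=n|B) = 2^(-n)."""
    like_A = Fraction(1, M)       # -> 0 as M grows: the uniform limit
    like_B = Fraction(1, 2 ** n)  # fixed in M
    return like_A / (like_A + like_B)  # equal priors cancel in Bayes' rule

for M in (10, 10**3, 10**6, 10**9):
    print(M, float(posterior_A(3, M)))
# For every fixed n the posterior on A falls to 0: in the finitely
# additive limit P(A|X=n) = 0 whatever n is observed, even though the
# prior P(A) was 1/2.
```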

Here is where Howson tries to save de Finetti with an unexpected tack. The problem in Dubins’ example is not finite additivity, but conditionalization – Bayesian updating from priors – itself! Here’s why. By a principle called “reflection”, if, under a suitable updating rule, your future probability of event A is p with certainty, then your current probability of event A must also be p. By Dubins’ argument, then, P(A)=0 must hold before X realizes. But that means your prior must be 0, which means that whatever independent reasons you had for setting the prior at .5 must be rejected. If we are to give up one of Reflection, Finite Additivity, Conditionalization, Bayes’ Theorem or the Existence of Priors, Howson says we ought to give up conditionalization. Now, there are lots of good reasons why conditionalization is sensible within a utility framework, so at this point I will simply point you toward the full paper and let you decide for yourself whether Howson’s conclusion is sensible. In any case, the problems with countable additivity should be better known by economists.

Final version in Synthese, March 2014 [gated]. Incidentally, de Finetti was very tightly linked to the early econometricians. His philosophy – that probability is a form of logic and hence non-ampliative (“That which is logical is exact, but tells us nothing”) – simply oozes out of the Savage/Aumann/Selten methods of dealing with reasoning under uncertainty. Read, for example, what Keynes had to say about what a probability is, and you will see just how radical de Finetti really was.


“Wall Street and Silicon Valley: A Delicate Interaction,” G.-M. Angeletos, G. Lorenzoni & A. Pavan (2012)

The Keynesian Beauty Contest – is there any better example of an “old” concept in economics that, when read in its original form, is just screaming out for a modern analysis? You’ve got coordination problems, higher-order beliefs, signal extraction about underlying fundamentals, and optimal policy response by a planner who is herself informationally constrained: all problems that have consumed micro theorists over the past few decades. The trouble with modeling irrational exuberance formally, though, is that it turns out to be very difficult to generate “irrational” actions from rational, forward-looking agents. Angeletos et al have a very nice model that generates irrational-looking asset price movements even when all agents are perfectly rational, based on the idea of information frictions between the real and financial sectors.

Here is the basic plot. Entrepreneurs get an individual signal and a correlated signal about the “real” state of the economy (the correlation in errors about fundamentals may be a reduced-form measure of previous herding, for instance). The entrepreneurs then make a costly investment. In the next period, some percentage of the entrepreneurs have to sell their asset on a competitive market. This may represent, say, idiosyncratic liquidity shocks, but really it is just in the model to abstract away from the financial sector learning about entrepreneurs’ signals from the extensive-margin choice of whether to sell or not. The price paid for the asset depends on the financial sector’s beliefs about the real state of the economy, which come from a noisy public signal and the traders’ observation of how much investment entrepreneurs made. Note that the price traders pay is partially a function of trader beliefs about the state of the economy derived from total entrepreneurial investment, and that total investment is partially a function of the price at which entrepreneurs expect to be able to sell capital should a liquidity crisis hit a given firm. That is, the higher-order beliefs of both traders and entrepreneurs about what the other aggregate class will do determine equilibrium investment and prices.

What does this imply? Capital investment is higher in the first stage if either the state of the world is believed by entrepreneurs to be good, or if the price paid in the following period for assets is expected to be high. Traders will pay a high price for an asset if the state of the world is believed to be good. These traders look at capital investment and essentially see another noisy signal about the state of the world. When an entrepreneur sees a correlated signal that is higher than his private signal, he increases investment due to a rational belief that the state of the world is better, but then increases it even more because of an endogenous strategic complementarity among the entrepreneurs, all of whom prefer higher investment by the class as a whole, since that leads to more positive beliefs by traders and hence higher asset prices tomorrow. Of course, traders understand this effect, but a fixed-point argument shows that even accounting for the aggregate strategic increase in investment when the correlated signal is high, aggregate capital can be read by traders precisely as a noisy signal of the actual state of the world. This means that when entrepreneurs invest partially on the basis of a signal correlated among their class (i.e., there are information spillovers), investment is based too heavily on noise. An overweighting of public signals in a type of coordination game is right along the lines of the lesson in Morris and Shin (2002). Note that the individual signals for entrepreneurs are necessary to keep the traders from being able to completely invert the information contained in capital production.
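To make the signal-extraction channel concrete, here is a stripped-down linear-normal simulation of my own devising (emphatically not the authors’ model): entrepreneurs weight a private and a correlated signal about the fundamental theta, with an extra weight lam on the correlated signal standing in for the complementarity motive, and traders then read average investment as a signal of theta whose noise is exactly the correlated error.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 10_000, 200            # simulated economies, entrepreneurs each
s_x, s_y = 1.0, 1.0           # noise std devs: private and correlated

theta = rng.normal(0.0, 1.0, T)                       # fundamental
eta = rng.normal(0.0, s_y, T)                         # correlated error
y = theta + eta                                       # correlated signal
x = theta[:, None] + rng.normal(0.0, s_x, (T, N))     # private signals

def aggregate_investment(lam):
    """Entrepreneur i invests her posterior mean of theta plus an extra
    weight lam on the correlated signal (a stand-in for the strategic
    complementarity); returns average investment, which traders see."""
    px, py = 1.0 / s_x**2, 1.0 / s_y**2    # signal precisions (prior = 1)
    w_x, w_y = px / (1 + px + py), py / (1 + px + py)
    k = w_x * x + (w_y + lam) * y[:, None]
    return k.mean(axis=1)

for lam in (0.0, 0.3, 0.6):
    K = aggregate_investment(lam)
    cov = np.cov(K, theta)
    loading = cov[0, 1] / cov[1, 1]          # how much K moves with theta
    noise_var = (K - loading * theta).var()  # movement driven by eta
    print(f"lam={lam}: loading={loading:.2f}, noise var={noise_var:.3f}")
# The heavier the (endogenous) weight on the correlated signal, the more
# aggregate investment moves on noise rather than fundamentals -- the
# Morris-Shin overweighting at the heart of the excess-volatility story.
```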

What can a planner who doesn’t observe these signals do? Consider taxing investment as a function of asset prices, where high taxes appear when the market gets particularly frothy. This is good on the one hand: entrepreneurs build too much capital following a high correlated signal because other entrepreneurs will be doing the same and therefore traders will infer the state of the world is high and pay high prices for the asset. Taxing high asset prices lowers the incentive for entrepreneurs to shade capital production up when the correlated signal is good. But this tax will also lower the incentive to produce more capital when the actual state of the world, and not just the correlated signal, is good. The authors discuss how taxing capital and the financial sector separately can help alleviate that concern.

Proving all of this formally, it should be noted, is quite a challenge. And the formality is really a blessing, because we can see what is necessary and what is not if a beauty contest story is to explain excess aggregate volatility. First, we require some correlation in signals in the real sector to get the Morris-Shin effect operating. Second, we do not require the correlation to be about the real world; it could instead be correlation about a higher-order belief held by the financial sector! The correlation merely allows entrepreneurs to figure something out about how much capital they as a class will produce, and hence about what traders in the next period will infer about the state of the world from that aggregate capital production. Instead of a signal that correlates entrepreneur beliefs about the state of the world, then, we could have a correlated signal about higher-order beliefs: say, how traders will interpret how entrepreneurs interpret how traders interpret capital production. The basic mechanism will remain: traders essentially read from the aggregate actions of entrepreneurs a noisy signal about the true state of the world. And all this beauty contest logic holds in an otherwise perfectly standard New Keynesian rational expectations model!

2012 working paper (IDEAS version). This paper used to go by the title “Beauty Contests and Irrational Exuberance”; I prefer the old name!

“The Nash Bargaining Solution in Economic Modeling,” K. Binmore, A. Rubinstein & A. Wolinsky (1986)

If we form a joint venture, our two firms will jointly earn a profit of N dollars. If our two countries agree to this costly treaty, total world welfare will increase by the equivalent of N dollars. How should we split the profit in the joint venture case, or the costs in the case of the treaty? There are two main ways of thinking about this problem: the static axiomatic bargaining approach developed first by John Nash, and bargaining outcomes that arise as perfect equilibria of a strategic game, a field that Rubinstein (1982) really opened.

The Nash solution says the following. Let us have some pie of size 1 to divide, and let each of us have a threat point, S1 and S2. Then if certain axioms are satisfied (symmetry, invariance to unimportant transformations of the utility function, Pareto optimality, and something called the IIA condition), the bargain is the one that maximizes (u1(p)-u1(S1))*(u2(1-p)-u2(S2)), where p is the share of the pie that accrues to player 1. So if we both have linear utility, player 1 can leave and collect .3, player 2 can leave and collect 0, and a total of 1 is earned by our joint venture, then the Nash bargaining solution is the p that maximizes (p-.3)*(1-p-0); that is, p=.65. This is pretty intuitive: 1-.3-0=.7 of surplus is generated by the joint venture, and we each get our outside option plus half of that surplus.
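With linear utilities the Nash product has a simple closed form; a quick check of the .65 claim (my sketch):

```python
import numpy as np

def nash_share(s1, s2):
    """Player 1's share maximizing (p - s1)*(1 - p - s2) under linear
    utility: the FOC gives p = (1 + s1 - s2)/2, i.e. your outside
    option plus half the surplus 1 - s1 - s2."""
    return (1 + s1 - s2) / 2

# Numerical confirmation by brute-force grid search over feasible splits:
p = np.linspace(0.3, 1.0, 700_001)
print(p[((p - 0.3) * (1 - p)).argmax()], nash_share(0.3, 0.0))  # both ~0.65
```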

The static outcome is not very compelling, however, as Tom Schelling long ago pointed out. In particular, the threat point looks like a noncredible threat: if player 2 refused to offer player 1 more than .31, then player 1 would accept, given his outside option is only .3. That is, in a one-shot bargaining game, any p between .3 and 1 looks like an equilibrium. It is also not totally clear how we should interpret the utility functions u1 and u2, or the threat points S1 and S2.

Rubinstein bargaining began to fix this. Let players make offers back and forth, with a time period D between each offer; if no agreement is reached after T periods, we both get our outside options. Under some pretty compelling assumptions, there is a unique subgame perfect equilibrium in which player 1 gets p* if he makes the first offer, and p** if player 2 makes the first offer. Roughly, if the time between offers is D, player 1 must offer player 2 a share high enough that player 2 is indifferent between that share today and the amount he could earn when he makes an offer in the next period. Note that the outside options do not come into play unless, say, player 1’s outside option exceeds min{p*,p**}. Note also that as D goes to 0, all of the difference in bargaining power has to do with who is more patient. Binmore et al modify this game so that, instead of discounting the future, there is a small chance that the gains from negotiation will disappear (“breakdown”) between every period; for instance, we may want to form a joint venture to invent some product, but while we negotiate, another firm may swoop in and invent it. It turns out that this model, with von Neumann-Morgenstern utility functions for each player (though perhaps differing levels of risk aversion), is a special case of Rubinstein bargaining.
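For the discounting version, the equilibrium shares have a standard closed form (this is the textbook Rubinstein formula, not anything specific to Binmore et al): with offers every D units of time and discount rates r1 and r2, player 1’s first-mover share is (1-d2)/(1-d1*d2), where di = exp(-ri*D).

```python
import math

def rubinstein_share(r1, r2, D):
    """Player 1's subgame perfect share when moving first, with delay D
    between offers and discount factors d_i = exp(-r_i * D)."""
    d1, d2 = math.exp(-r1 * D), math.exp(-r2 * D)
    return (1 - d2) / (1 - d1 * d2)

for D in (1.0, 0.1, 0.001):
    print(D, round(rubinstein_share(0.05, 0.10, D), 4))
# -> tends to r2/(r1+r2) = 2/3 as D -> 0: only relative patience
#    matters in the limit, and the first-mover advantage vanishes.
#    Note the outside option .3 from the earlier example never enters:
#    with equal patience the limit split is .5/.5, not .65/.35.
```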

Binmore et al prove that as D goes to zero, both strategic cases above have unique perfect equilibria equal to a Nash bargaining solution. But a Nash solution for what utility functions and threat points? The Rubinstein game limits to Nash bargaining where the difference in utilities has to do with time preference, and the threat points S1 and S2 are equal to zero. The breakdown game limits to Nash bargaining where the difference in utilities has to do with risk aversion, and the threat points S1 and S2 are equal to whatever utility we would get from the world after breakdown.

Two important points. First, it was well known that a concave transformation of a utility function leads to a worse outcome in Nash bargaining for that player. But we know from the previous paragraph that this concave transformation is equivalent to a more impatient Rubinstein bargainer: a concave transformation of the utilities in the Nash outcome has to do with changing the patience, not the risk aversion, of players. Second, Schelling was right that the Nash threat points involve noncredible threats. As long as players prefer their Rubinstein equilibrium outcome to their outside option, the outside option does not matter for the bargaining outcome. Take the example above, where one player could leave the joint venture and still earn .3. The limit of Rubinstein bargaining is for each player to earn .5 from the joint venture, not .65 and .35. The fact that one player could leave the joint venture and still earn .3 is totally inconsequential to the negotiation, since the other player knows this threat is not credible whenever the first player earns at least .31 by staying. This point is often wildly misunderstood when people apply Nash bargaining solutions: properly defining the threat point matters!

Final RAND version (IDEAS). There has been substantial work since the 80s on the problem of bargaining, particularly in trying to construct models that generate delay, since Rubinstein guarantees immediate agreement and real-world bargaining rarely ends in one step; unsurprisingly, these newer papers tend to rely on technically difficult models with asymmetric information.

“Being Realistic about Common Knowledge: A Lewisian Approach,” C. Paternotte (2011)

(Site note: apologies for the recent slow rate of posting. In my defense, this is surely the first post in the economics blogosphere to be sent from Somalia, where I am running through a bunch of ministerial and businessman meetings before returning to the US for AEA. The main AEA site is right down the street from my apartment, so if you can’t make it next week, I will be providing daily updates on any interesting presentations I happen across. Of course, I will post some brief thoughts on the Somali economy as well.)

We economists know common knowledge via the mathematical rigor of Aumann, but priority for the idea goes to a series of linguists in the 1960s and to the superfamous philosopher David Lewis and his 1969 book “Convention.” Even within philosophy, the formal presentation of Aumann has proven more influential. But the economic conception of common knowledge is subject to some serious critiques as a standard model of how we should think about knowledge. First, it is equivalent to an infinite series of epistemic iterations: I know X, you know that I know X, and so on. Second, and you may know this argument via Monderer and Samet, the standard claim that “common knowledge is created when something is announced publicly” is surely spurious: how do I know that you heard correctly? Perhaps you were daydreaming. Third, Aumann-style common knowledge is totally predicated on deductive reasoning: every agent correctly deduces the effect of every new piece of information on their own knowledge partition. This is asking quite a bit, to say the least.

The first objection is not too worrying: any student of game theory knows the self-evident event definition of common knowledge, which implies the epistemic iteration definition. Indeed, you can think of the “I know, you know that I know, I know that you know that I know, etc.” iterations as the consequence of knowing some public event. Paternotte gives the great example of any inductive proof in mathematics: knowing that X holds for the first element, and that X holding for element i implies it holds for i+1, is not terribly cognitively demanding, but knowing those two facts implies knowledge of an infinite string of implications. The second objection, fallibility, has been treated by economists using p-belief: assign a probability distribution to the state space, and talk about having .99-common belief rather than common knowledge. The third, it seems, is less readily handled.

But how did Lewis think of common knowledge? And can we formalize his ideas? What is then represented? This paper is similar to Cubitt and Sugden (2003, Economics and Philosophy), though it strikes me as the more interesting take. Lewis said the following:

It is common knowledge among a population that X iff some state of affairs A holds such that
1: Everyone has reason to believe that A holds
2: A indicates to everyone that everyone has reason to believe that A holds, and
3: A indicates to everyone that X.

Note that the Lewisian definition is not susceptible to the three objections noted above. Agents don’t necessarily believe something, but rather just have reason to do so. They know how each other reason, but the method of reasoning is not necessarily deductive. Let’s try to formalize these conditions in a standard state-space world. Let B(p,i)E be the belief operator of agent i: B(.7,John):”It rains today” means John believes with probability .7 that it will rain today. Condition 1 in Lewis looks like claiming that all agents believe with p>.5 that A holds (they have a “reason to believe” A). The phrase “A indicates X” should mean that there is a reasoning function of agent i, f(i), such that if A is believed with p>.5, then so is X (we need some technical conditions here to ensure the function f(i) is defined uniquely for a given reasoning standard).

What is interesting is that this definition is tightly linked to standard Monderer-Samet common p-belief. For every common p-belief with p>.5, there is a set of parameters for which Lewisian common knowledge exists. For every set of parameters where Lewisian common knowledge exists, there is at least .5-common belief. Thus, though Lewisian common knowledge appears not to be that strict, it is in fact, in a strong sense, equivalent to common p-belief, and thus implies any of the myriad results published using that simpler concept. What an interesting result! I take this to mean that many common complaints about common knowledge are not that serious at all, and that p-belief, quite standard these days in economics, is much more broadly applicable than I previously believed.
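For the Monderer-Samet side, the belief operator and common p-belief are easy to compute on a finite state space; a minimal sketch with a made-up example (the states, prior, and partitions are all my own toy choices):

```python
from fractions import Fraction

# Toy state space with a common prior and one partition per agent.
states = range(6)
prior = {w: Fraction(1, 6) for w in states}
partitions = [
    [{0, 1}, {2, 3}, {4, 5}],   # agent 0's information partition
    [{0}, {1, 2}, {3, 4, 5}],   # agent 1's information partition
]

def cell(i, w):
    return next(c for c in partitions[i] if w in c)

def p_believes(i, E, p):
    """B_i^p(E): states where agent i assigns E probability >= p."""
    return {w for w in states
            if sum(prior[v] for v in cell(i, w) & E)
               >= p * sum(prior[v] for v in cell(i, w))}

def common_p_belief(E, p):
    """Largest p-evident event F (F subset of B_i^p(F) for all i)
    inside every B_i^p(E); computed by iterating to a fixed point."""
    F = set(states)
    for i in range(len(partitions)):
        F &= p_believes(i, E, p)
    while True:
        G = set(F)
        for i in range(len(partitions)):
            G &= p_believes(i, F, p)
        if G == F:
            return F
        F = G

E = {0, 1, 2}
print(common_p_belief(E, Fraction(1, 2)))  # states where E is common 1/2-belief
```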

http://www.springerlink.com/content/n81219v23334n610/ (GATED. Philosophy community: you have to do something about the lack of working papers freely accessible! Final version in Synthese 183.2 – if you are a micro theorist, you should definitely be reading this journal, as it is arguably the top journal in philosophy publishing analytic, formal results in theory of knowledge.)

“Centralizing Information in Networks,” J. Hagenbach (2011)

Ah…strategic action on network topologies. Now there is a wily problem. Tons of work has gone into strategic action on networks in the past 15 years, and I think it’s safe to say that the vast majority is either trivial or has proved too hard a problem to say anything useful about at all. This recent paper by Jeanne Hagenbach is a nice exception: it’s not at all obvious, and it addresses an important question.

There is a fairly well-known experimental paper by Bonacich in the American Sociological Review from 1990 in which he examines how communications structure affects the centralizing of information. A group of N players attempt to gather N pieces of information (for example, a 10-digit string of numbers). They each start with one piece. A communication network is endowed on the group. Every period, each player can either share each piece of information they know with everyone they are connected to, or hide their information. When some person collects all the information, a prize is awarded to everybody, and the size of the prize decreases in the amount of time it took to gather the info. The person (or persons) who have all of the information in this last period are awarded a bonus, and if there are multiple solvers in the final period, the bonus is split among them. Assume throughout that the communications graph is undirected and connected.

Hagenbach formalizes this setup as a game, using SPNE rather than Nash as the solution concept in order to avoid the oft-seen problem in network games where “everybody does nothing” is an equilibrium. She proves the following. First, if the maximum game length is at least N-1 periods, then every SPNE involves information being aggregated. Second, in any game where a player i could potentially solve the puzzle first (i.e., the maximum shortest-path distance from player i to the other players is less than the maximum time T the game lasts), there is an SPNE where she does win, and further she wins in the shortest possible amount of time. Third, for a class of communication networks that includes graphs like trees and the complete graph, every SPNE is solved by some player in no more than N-1 periods. Fourth, for other simple graph structures, there are SPNEs in which an arbitrary amount of time passes before some player solves the game.
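The “could potentially solve first” condition is just graph eccentricity: player i can possibly hold everything early only if her maximum shortest-path distance to any other node is small relative to T, since information takes that many periods to reach her. A quick illustration (my example graphs, using networkx):

```python
import networkx as nx

for name, G in [("line of 5", nx.path_graph(5)),
                ("complete on 5", nx.complete_graph(5)),
                ("cycle of 4", nx.cycle_graph(4))]:
    # Eccentricity of a node = max shortest-path distance to any other node.
    print(name, nx.eccentricity(G))
# On the line, the middle player (eccentricity 2) can win far sooner than
# the endpoints (eccentricity 4); on the complete graph every player can
# potentially solve after a single period of sharing.
```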

The intuition for all of these results boils down to the following. Every complete graph involves at least two agents, connected to each other, who will potentially each hold every piece of information the opponent lacks. When this happens, we are in a standard game of chicken. Since the game has a final period T and we are looking for SPNE, in period T the two players play chicken with each other, and chicken has two pure-strategy Nash equilibria: I go straight and you swerve, or you go straight and I swerve. Either way, one of us “swerves”/shares information, and the other player solves the puzzle. The second theorem just relies on the strategy where whichever player we want to solve the puzzle refuses to share ever; every other player can only win a nonzero payoff by getting their information to her, and they want to do so as quickly as possible. The fourth result is pretty interesting as well. Consider a 1000-period game with four players arranged in a square: A talks to B and D, B talks to A and C, C to B and D, and D to A and C. We can be in a situation where B needs what A has, and A needs what B has, but not be in a duel. Why? Because A may be able to get the information he needs from C, and B from D. Consider the following hypothesized SPNE, though: everyone hides until period 999, then everyone passes information on in periods 999 and 1000. In this SPNE, everyone solves the puzzle simultaneously in period 1000 and gets one-fourth of the bonus reward. If any player deviates and, say, shares information before period 999, then the other players all play an easily constructed strategy whereby the three of them solve the puzzle the following period but the deviator does not. If the reward is big enough, then all the discounting we need to get to period 1000 will not be enough to make anyone want to deviate.
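That final-period duel really is textbook chicken; a minimal pure-equilibrium enumeration with toy payoffs of my own choosing (team prize t to everyone if anyone solves, bonus b to the solver, split if simultaneous):

```python
from itertools import product

# Last-period duel: each of two players who could complete the puzzle
# chooses Share (0) or Hide (1).
t, b = 1, 4
payoff = {  # (row action, col action) -> (row payoff, col payoff)
    (0, 0): (t + b / 2, t + b / 2),  # both share: both solve, split bonus
    (0, 1): (t, t + b),              # row shares, col hides: col solves
    (1, 0): (t + b, t),              # mirror image
    (1, 1): (0, 0),                  # both hide: nobody ever solves
}

def is_pure_ne(a_row, a_col):
    u_r, u_c = payoff[(a_row, a_col)]
    return (all(u_r >= payoff[(d, a_col)][0] for d in (0, 1)) and
            all(u_c >= payoff[(a_row, d)][1] for d in (0, 1)))

print([a for a in product((0, 1), repeat=2) if is_pure_ne(*a)])
# -> [(0, 1), (1, 0)]: the two asymmetric chicken equilibria, so in the
#    final period exactly one duelist shares and the other solves.
```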

What does this all mean for social science? Essentially, if I want information to be shared under a scheme with both team and individual bonuses, then no matter how those bonuses are set, the information will be properly aggregated by strategic agents quite quickly as long as I make communication follow something like a hierarchy: every subgame perfect equilibrium involves quick coordination. On the other hand, if the bonuses are not properly calibrated and communication involves cycles, it may take arbitrarily long to coordinate. I think a lot more could be done with these ideas applied to traditional team theory/multitasking.

One caveat: I am not a fan at all of modeling this game as having a terminal period. The assumption that the game ends after T periods is clearly driving the results, and I have a hunch that by using a different equilibrium concept than SPNE and allowing an infinite horizon, you could derive very similar results. If so, that would be much more satisfying. I always find it strange when hold-up or bargaining problems are modeled as necessarily having a firm “explosion date”; this assumes away much of the great complexity of negotiation problems!

http://hal.archives-ouvertes.fr/docs/00/36/78/94/PDF/09011.pdf (2009 WP – final version with nice graphs in GEB 72 (2011). Hagenbach and a coauthor also have an interesting recent ReStud where they model something like Keynes’ beauty contest, allowing cheap-talk communication about the state among agents with slight heterogeneity in preferences.)

“The Temporal Structure of Scientific Consensus Formation,” U. Shwed & P. Bearman (2010)

This great little paper about the mechanics of scientific consensus appeared in the latest issue of the American Sociological Review. The problem is the following: how can we identify when the scientific community – seen here as an actor, following the underrated-among-economists-yet-possibly-crazy philosopher Bruno Latour – achieves consensus on a topic? When do scientists agree that cancer is caused by sun exposure, or that smoking is carcinogenic, or that global warming is caused by man? The perspective here is a school in sociology known as STS that is quite postmodern, so there is certainly no claim that scientific consensus means we have learned the “truth,” however defined. Rather, we just want to know when scientists have stopped arguing about the basics and have moved on to extensions and minor issues. Kuhn would call this consensus “normal science,” but Latour and the STS folks often refer to it as “black boxing”: scientific consensus allows scientists to state something like “smoking causes cancer” without having to defend it. Economics contains many such black-boxed facts in its current paradigm: agents are expected utility maximizers, for example. (Note that “the black box,” in economics, can also refer to growth in multifactor productivity, as in the title of Nate Rosenberg’s book on innovation; this is somewhat, but not entirely, the same concept.)

But how do we identify which facts have been black boxed? Traditionally, sociologists of science have used expert conclusions. For instance, IPCC reports survey experts on climate change: the first IPCC report did not identify climate change as anthropogenic, but all subsequent reports did. The problem is, first, that such expert reports are not available for every question where we might wish to investigate consensus, and second, that it is in some sense “undemocratic” to rely on expert judgments alone. It would be better to have a method whereby an ignorant observer can look down from on high at the world of science and pronounce that “topic A is in a state of consensus and topic B is not.”

This is precisely what Bearman and Shwed do. They construct citation networks, using keyword searches, on a number of potentially contested ideas over time, ranging from those which are traditionally considered to have little epistemic rivalry (coffee causes cancer) to those with well-known scientific debates (the nature of gravitational waves). They use a dynamic method where the sample at time t includes all papers which were cited within the past X years (where X is the median age of papers cited by new research at time t), as well as any older articles in the same field which were cited by those papers. They then examine the modularity of the citation network. A high level of modularity, in network terms, essentially means that the network is made up of relatively distinct communities vis-a-vis a random graph. Since citations to papers viewed favorably are known to be more common, high modularity means there are multiple cliques who cite each other, but do not cite their epistemic rivals.
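In code, the modularity comparison is a few lines; a minimal sketch of the contrast (my toy graphs and parameters, not the authors’ data):

```python
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

def camp_graph(p_within, p_between, n=40, seed=1):
    """Two blocks of n/2 nodes: dense within-camp citation links,
    sparse links across camps."""
    sizes = [n // 2, n // 2]
    probs = [[p_within, p_between], [p_between, p_within]]
    return nx.stochastic_block_model(sizes, probs, seed=seed)

for label, G in [("contested field", camp_graph(0.30, 0.02)),
                 ("consensus field", camp_graph(0.15, 0.15))]:
    comms = greedy_modularity_communities(G)
    print(label, round(modularity(G, comms), 3))
# The contested field scores high modularity (rival cliques citing
# themselves and ignoring each other); the well-mixed field scores
# markedly lower.
```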

With this in hand, the authors show that areas considered by expert studies to have little rivalry do indeed have flat and low levels of modularity. Those traditionally considered to be contentious do indeed show a lot of variance in their modularity, and a high absolute level thereof. The “calibrating” examples show evidence, in the citation network, of consensus being reached before any expert study proclaimed such consensus. In some sense, then, network evaluation can pinpoint scientific consensus faster, and with less specialized knowledge, than expert studies. Applying the methodology to current debates, the authors see little contention over the non-carcinogenicity of cell phones or the lack of a causal relation between MMR vaccines and autism. The methodology could obviously be applied to other fields – literature and philosophy would both be interesting cases to examine.

One final note: this article is published in a sociology journal. I would greatly encourage economists to sign up for eTOC emails from our sister fields, which often publish content on econ-style topics, though often with data, tools and methodologies an economist wouldn’t think of using. In sociology, I get the ASR and AJS, though if you like network theory, you may want to also look at a few of the more quantitative field journals. In political science, APSR is the journal to get. In philosophy, Journal of Philosophy and Philosophical Review, as well as Synthese, are top-notch, and all fairly regularly publish articles on knowledge which would not be out of place in a micro theory journal. I read Philosophy of Science as well, which you might want to take a look at if you like methodological questions. The hardcore econometricians and math theory guys surely would want to look at journals in stats and mathematics; I don’t read these per se, but I often follow citations to interesting (for economics) papers in JASA, Annals of Statistics and Biometrika. I’m sure experimentalists should be reading the psych and anthropology literature as well, but I have both little knowledge and little interest in that area, so I’m afraid I have no suggestions; perhaps a commenter can add some.

http://asr.sagepub.com/content/75/6/817.full.pdf+html (Final version, ASR December 2010. GATED. Not only could I not find an ungated working paper version, I can’t even find a personal webpage at all for either of the authors; there’s nothing for Shwed and only a group page including a subset of Bearman’s work. It’s 2011! And worse, these guys are literally writing about epistemic closure and scientific communities. If anyone should understand the importance of open access for new research, it’s them, right?)

“On Consensus through Communication with a Commonly Known Protocol,” E. Tsakas & M. Voorneveld (2010)

(Site note: I will be down in Cuba until Dec. 24, so posting will be light until then, though I do have a few new papers to discuss. I’m going to meet with some folks there about the recent economic reforms and their effects, so perhaps I’ll have something interesting to pass along on that front.)

A couple weeks ago, I posted about the nice result of Parikh and Krasucki (1990), who show that when communication is pairwise, beliefs can fail to converge under many types of pre-specified orders of communication. In their paper, and in every paper following it that I know of, common knowledge of the order of communication is always assumed. For instance, if Amanda talks with Bob and then Bob talks with Carol, since only common knowledge of the original information partitions is assumed, for Carol to update “properly” she needs to know whether Bob has talked to Amanda previously.
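To fix ideas, here is my own toy version of Parikh-Krasucki style pairwise communication: agents take turns announcing their conditional probability of a fixed event to a designated partner, and the receiver refines his partition by what the message would have been at each state. Note that the `communicate` step hands the receiver the sender’s current partition, which is exactly where knowledge of the protocol sneaks in.

```python
from fractions import Fraction

STATES = set(range(8))      # uniform common prior over 8 states
A = {0, 1, 2, 3}            # the event whose probability is discussed

def cell(part, w):
    return next(c for c in part if w in c)

def prob(part, w):
    c = cell(part, w)
    return Fraction(len(c & A), len(c))   # P(A | cell), uniform prior

def communicate(sender, receiver):
    """Receiver refines each of his cells: states where the sender would
    announce different values of P(A) become distinguishable to him."""
    new = []
    for c in receiver:
        groups = {}
        for w in c:
            groups.setdefault(prob(sender, w), set()).add(w)
        new.extend(frozenset(g) for g in groups.values())
    return new

parts = [                                      # made-up initial partitions
    [frozenset({0, 1, 2, 3}), frozenset({4, 5, 6, 7})],   # Amanda
    [frozenset({0, 1, 4, 5}), frozenset({2, 3, 6, 7})],   # Bob
    [frozenset({0, 2, 4, 6}), frozenset({1, 3, 5, 7})],   # Carol
]
protocol = [(0, 1), (1, 2), (2, 0)]   # fixed (sender, receiver) round-robin

true_state = 3
for _ in range(5):                    # repeat the protocol until stable
    for s, r in protocol:
        parts[r] = communicate(parts[s], parts[r])
print([float(prob(p, true_state)) for p in parts])
# Here the announced values converge; the Tsakas-Voorneveld point is that
# Carol must know Bob has already heard from Amanda to interpret Bob's
# message correctly -- drop that knowledge and consensus can fail.
```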

In a paper pointed out by a commenter, Tsakas and Voorneveld show through counterexample just how strict this requirement is. They expand the state space to include knowledge of the order of communication (using knowledge in the standard Aumann way). It turns out that with all of the necessary conditions of Parikh and Krasucki holding, and with uncertainty about whether a single act of communication occurred, consensus can fail to be reached. What’s worrying here from a modeling perspective is that it is really convenient to model communication as a directed graph, where A links to B if A talks to B infinitely many times. I see the Tsakas and Voorneveld result as giving some pause to that assumption: in their example, all agents have common knowledge of the communication graph, since the only uncertainty concerns a single period of communication and hence not the structure of the graph itself, and yet consensus still fails.

There is no positive result here: we don’t have useful conditions guaranteeing belief convergence under uncertainty about the protocol. In the paper I’m working on, I restrict all results to “regular” communication, meaning the only communication is through formal channels that are used infinitely often, and because of this I only need to assume knowledge of the graph.

http://edocs.ub.unimaas.nl/loader/file.asp?id=1490 (Working paper. Tsakas and Voorneveld also have a 2007 paper on this topic that corrects some erroneous earlier work: https://gupea.ub.gu.se/dspace/bitstream/2077/4576/1/gunwpe0255.pdf. In particular, even if consensus is reached, information only becomes common knowledge among the agents under really restrictive assumptions. This is important if, for instance, you are studying mechanisms on a network, since many results in game theory require common knowledge about what opponents will do: see Dekel and Brandenburger (1987) and Aumann and Brandenburger (1995), for instance. I’ll have more to say about this once I get a few more results proved.)
