“The Temporal Structure of Scientific Consensus Formation,” U. Shwed & P. Bearman (2010)

This great little paper about the mechanics of scientific consensus appeared in the latest issue of the American Sociological Review. The problem is the following: how can we identify when the scientific community – seen here as an actor, following the underrated-among-economists-yet-possibly-crazy philosopher Bruno Latour – achieves consensus on a topic? When do scientists agree that cancer is caused by sun exposure, or that smoking is carcinogenic, or that global warming is caused by man? The perspective here comes from a school of sociology known as STS (science and technology studies) that is quite postmodern, so there is certainly no claim that this scientific consensus means we have learned the “truth”, somehow defined. Rather, we just want to know when scientists have stopped arguing about the basics and have moved on to extensions and minor issues. Kuhn would call this consensus “normal science”, but Latour and the STS guys often refer to it as “black boxing,” in which scientific consensus allows scientists to state something like “smoking causes cancer” without having to defend it. Economics contains many such black-boxed facts in its current paradigm: agents are expected utility maximizers, for example. (Note that “the black box”, in economics, can also refer to growth in multifactor productivity, as in the title of Nathan Rosenberg’s book on innovation; this is somewhat, but not entirely, the same concept).

But how do we identify which facts have been black boxed? Traditionally, sociologists of science have relied on expert conclusions. For instance, the IPCC reports survey experts on climate change. The first IPCC report, in 1990, did not identify climate change as anthropogenic, but every subsequent report did. The problem is, first, that such expert reports are not available for every question where we wish to investigate consensus, and second, that it is in some sense “undemocratic” to rely on expert judgment alone. It would be better to have a method whereby an ignorant observer can look down from on high at the world of science and pronounce that “topic A is in a state of consensus and topic B is not.”

This is precisely what Shwed and Bearman do. They construct citation networks, using keyword searches, on a number of potentially contested ideas over time, ranging from those traditionally considered to have little epistemic rivalry (whether coffee causes cancer) to those with well-known scientific debates (the nature of gravitational waves). They use a dynamic sampling method: the sample at time t includes all papers cited within the past X years (where X is the median age of papers cited by new research at time t), as well as any older articles in the same field cited by those papers. They then examine the modularity of the citation network. A high level of modularity, in network terms, essentially means that the network divides into relatively distinct communities compared with a random graph. Since papers are known to cite work they view favorably more often, high modularity means there are multiple cliques that cite each other but do not cite their epistemic rivals. A rough sketch of what one pass of this computation might look like follows.
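To be clear, the sketch below is my own illustrative reconstruction in Python, not the authors’ actual pipeline: the data structures (papers as (id, year) pairs, citations as (citing, cited) pairs), my reading of the windowing rule, and the use of networkx’s greedy modularity communities are all assumptions on my part.

    # A minimal sketch, NOT the authors' exact method: build the citation
    # network around year t and measure its modularity. `papers` is assumed
    # to be a list of (paper_id, year) tuples and `citations` a list of
    # (citing_id, cited_id) pairs; both are hypothetical structures.
    import statistics
    import networkx as nx
    from networkx.algorithms import community

    def modularity_at(t, papers, citations):
        years = dict(papers)
        # X = median age of papers cited by research published at time t
        ages = [t - years[cited] for citing, cited in citations
                if years.get(citing) == t and cited in years]
        if not ages:
            return 0.0
        window = statistics.median(ages)
        # Sample: papers from the past X years (one reading of the rule)...
        recent = {p for p, y in papers if t - window <= y <= t}
        # ...plus older articles cited by those papers
        older = {cited for citing, cited in citations if citing in recent}
        nodes = recent | older
        G = nx.Graph([(u, v) for u, v in citations
                      if u in nodes and v in nodes and u != v])
        if G.number_of_edges() == 0:
            return 0.0
        # Partition into communities and score how cleanly the graph splits;
        # high modularity = distinct camps that rarely cite each other
        parts = community.greedy_modularity_communities(G)
        return community.modularity(G, parts)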

With this in hand, the authors show that areas considered by expert studies to have little rivalry do indeed have flat and low levels of modularity. Those traditionally considered to be contentious do indeed show a lot of variance in their modularity, and a high absolute level thereof. The “calibrating” examples show evidence, in the citation network, of consensus being reached before any expert study proclaimed such consensus. In some sense, then, network evaluation can pinpoint scientific consensus faster, and with less specialized knowledge, than expert studies. Applying the methodology to current debates, the authors see little contention over the non-carcinogenicity of cell phones or the lack of a causal relation between MMR vaccines and autism. The methodology could obviously be applied to other fields – literature and philosophy would both be interesting cases to examine.
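As a hypothetical usage example of how one might read consensus off the resulting time series (building on the modularity_at sketch above): compute modularity year by year and flag when it settles at a low level. The cutoff and run length here are my own illustrative choices, not numbers from the paper.

    # Hypothetical usage, building on modularity_at above: call a field
    # "consensual" once modularity stays below a cutoff for several years.
    # The 0.3 cutoff and the 3-year run are illustrative assumptions only.
    def consensus_year(year_range, papers, citations, cutoff=0.3, run=3):
        streak = 0
        for t in year_range:
            m = modularity_at(t, papers, citations)
            streak = streak + 1 if m < cutoff else 0
            if streak >= run:
                return t  # consensus plausibly black boxed by year t
        return None  # still contested over the observed window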

One final note: this article is published in a sociology journal. I would greatly encourage economists to sign up for eTOC emails from journals in our sister fields, which often publish content on econ-style topics, though with data, tools and methodologies an economist wouldn’t think of using. In sociology, I get the ASR and AJS, though if you like network theory, you may also want to look at a few of the more quantitative field journals. In political science, APSR is the journal to get. In philosophy, Journal of Philosophy and Philosophical Review, as well as Synthese, are top-notch, and all fairly regularly publish articles on knowledge which would not be out of place in a micro theory journal. I read Philosophy of Science as well, which you might want to take a look at if you like methodological questions. The hardcore econometricians and math theory guys surely would want to look at journals in stats and mathematics; I don’t read these per se, but I often follow citations to interesting (for economics) papers in JASA, Annals of Statistics and Biometrika. I’m sure experimentalists should be reading the psych and anthropology literature as well, but I have both little knowledge and little interest in that area, so I’m afraid I have no suggestions; perhaps a commenter can add some.

http://asr.sagepub.com/content/75/6/817.full.pdf+html (Final version, ASR December 2010. GATED. Not only could I not find an ungated working paper version, I can’t even find a personal webpage at all for either of the authors; there’s nothing for Shwed and only a group page including a subset of Bearman’s work. It’s 2011! And worse, these guys are literally writing about epistemic closure and scientific communities. If anyone should understand the importance of open access for new research, it’s them, right?)
