Category Archives: Philosophy of Science

“What Does it Mean to Say that Economics is Performative?,” M. Callon (2007)

With the last three posts being high mathematical-economic theory, let’s turn 180 degrees and look at this recent essay – actually the introduction to a book – by Michel Callon, one of the deans of actor-network theory (along with Bruno Latour, of course). I know what you’re thinking: a French sociologist of science who thinks objects have agency? You’re probably running away already! But stay; I promise it won’t be so bad. And as Callon mentions, sociologists of science and economic theory have a long connection: Robert K. Merton, legendary author of The Sociology of Science, is the father of Robert C. Merton, the Nobel-winning economist.

The concept here is performativity in economics. An essay by William Baumol and a coauthor in the JEL tried to examine whether economic theory had made any major practical contributions. Of the nine theories they studied (marginalism, Black-Scholes, etc.), only a couple could reasonably be said to have been invented and disseminated by academic economists. Performativity takes a different tack. It suggests that, rather than theories being true or false, they are accepted or not accepted, and that humans and non-humans alike play many roles in this acceptance process. For example, the theory of Black-Scholes could be accepted in academia, but for it to be performed by a broader network, certain technologies were needed (frequent stock quotes), market participants needed to believe the theory, and regulators needed to be persuaded (that, for one, options are not just gambling); this process is reflexive, and the way the theory is performed feeds back into the construction of novel theories. A role exists for economists as scientists across this entire performance.

The above does not simply mean that beliefs matter, or that economic theories are “performed” as self-fulfilling prophecies. Callon again: “The notion of expression is a powerful vaccination against a reductionist interpretation of performativity; a reminder that performativity is not about creating but about making happen.” Not all potential self-fulfilling prophecies are equal: traders did in fact use Black-Scholes, but they never began to use sunspots to coordinate. Sometimes theories outside academia are performed in economics: witness financial chartism. It’s not about “truth” or “falsehood”: Callon’s school of sociology/anthropology is fundamentally agnostic.

There is an interesting link between the jargon of the actor-network theory literature and standard economics. I think you can see it in the following passage:

“In the paper world to which it belongs, marginalist analysis thrives. All it needs are some propositions on decreasing returns, the convexity of utility curves, and so forth. Transported into an electricity utility (for example Electricité de France), it needs the addition of time-of-day meters set up wherever people consume electricity and without which calculations are impossible; introduced into a private firm, it requires analytical accounting and a system of recording and cost assessment that prove to be hardly feasible. This does not mean that marginalist analysis has become false. As everyone knows, it is still true in (most) universities.”

Economists will surely see a quote like the above and think there must be something more to this theory of performance than information economics and technological constraints. But really there isn’t. We economists generally do not model why information is the way it is, or why certain agents get certain signals. A lot of this branch of sociology should be read as an investigation into how agents (including nonhumans, such as firms) get, or search for, information, particularly to the extent that such a search is reflexive to a new economic theory being proposed.

http://halshs.archives-ouvertes.fr/docs/00/09/15/96/PDF/WP_CSI_005.pdf (July 2006 working paper – final version published in MacKenzie et al. (Eds.), Do Economists Make Markets?, Princeton University Press)

“How the Growth of Science Ends Theory Change,” L. Fahrbach (2011)

The realist perspective in philosophy of science makes a seemingly basic claim: our best, most “verified”/”unrefuted”/whatever scientific theories are likely to be true. Many philosophers actually do not agree with this claim. There are two really solid arguments against it.

First, we have copious evidence of “accepted” scientific theories that later turned out to be false: consider the nature of light, which was thought to be a particle, then a wave in ether, then a wave not in ether, then both a wave and a particle, etc. While each of these theories of light was accepted, no evidence convincingly told against it, and note that the revolutions just listed are not simply refinements of a theory but wholesale replacements. Given that our understanding of the nature of light has been overturned four or five times in the last three hundred years, why should we think our current understanding of light is likely to be true in some greater sense? And since the history of science provides many, many examples of such fundamental refutations of a theory (I think this claim is not at all controversial), can we not extend the same pessimistic argument to scientific claims more generally? This argument is called PMI, or pessimistic meta-induction.

Second, there is an argument straight from classical empiricism. Most scientific theories predict many things that are observable (by the senses) or potentially observable in the future, but also predict many things that will never be directly observable by humans. Since there are an infinite number of theories which predict precisely the same things about observables but make different claims about unobservables, we ought to be completely agnostic about unobservables no matter how much evidence is found in favor of some theory. That is, science will never be found to be “true” when it comes to these claims about unobservables. Note that this is not simply an argument against induction: every position discussed in this post accepts the validity of induction in some sense as an “approximate axiom”. Rather than arguing the problems of induction, the empiricist argument simply says that empirical data fundamentally underdetermines the choice among competing theories. There are some meta-arguments against this (you know one: Ockham’s Razor), but I’ve never seen a convincing one.
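One compact way to see the force of this (the notation is mine, not drawn from the post or any cited paper): write O(T) for the set of observable consequences of theory T. If two theories agree on all observables, Bayes’ rule guarantees that no amount of evidence can ever move their relative odds:

```latex
% Underdetermination sketch; the notation O(T), E is illustrative, not the post's.
\[
O(T) = O(T') \;\Longrightarrow\; P(E \mid T) = P(E \mid T')
\quad \text{for every body of evidence } E \subseteq O(T),
\]
\[
\text{and therefore}\qquad
\frac{P(T \mid E)}{P(T' \mid E)}
  = \frac{P(E \mid T)\,P(T)}{P(E \mid T')\,P(T')}
  = \frac{P(T)}{P(T')}.
\]
```

The posterior ratio stays pinned to the prior ratio forever, which is exactly why the only available tie-breakers are extra-empirical priors like Ockham’s Razor.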

Fahrbach argues in this new paper in Synthese that the first argument, PMI, is not supported by the history of science. The argument is simple: the “amount” of science, whether measured by practicing scientists or by research articles published, has been increasing exponentially for probably at least the last three centuries. This means that if the dominant theory of light has been overturned four times in 300 years, but not in the last 80, then there is actually very little reason to infer that we’ll see another refutation in, say, the next hundred years. The exponential growth of science means that a huge majority of all scientific work on light has been done in the last eighty years, and no convincing contrary theories have been found. That is, the probability that a theory is successful forever, given that it has been successful so far, is close to 1. The underdetermination argument may still be valid, however, since the exponential growth of science says nothing about the probability that a theory is “true” conditional on all the empirical data it fits.
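A back-of-the-envelope version of that claim (the doubling time below is my illustrative assumption – growth-of-science estimates often cite doubling every 15–20 years – not a number taken from Fahrbach’s paper):

```python
import math

# Toy version of Fahrbach's growth argument. The doubling time is an
# illustrative assumption, not a number from the paper: cumulative
# scientific output by time t is proportional to exp(k * t).
DOUBLING_TIME = 20.0                  # years (assumed)
k = math.log(2) / DOUBLING_TIME

def share_done_in_last(years: float) -> float:
    """Fraction of all cumulative scientific output produced in the last `years` years."""
    return 1.0 - math.exp(-k * years)

print(f"Share of all science done in the last 80 years: {share_done_in_last(80):.1%}")
# -> 93.8%: if the current theory of light survived that window unrefuted,
# it has survived the overwhelming majority of all scrutiny ever applied
# to any theory of light.
```

Under that assumed doubling time, roughly fifteen-sixteenths of all science ever done happened in the last 80 years, which is what gives the “successful so far, successful forever” conditional its force.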

A few notes: First, whether we should even care at all about the truth of science is actually not a settled question. Two pretty convincing arguments here are that theories only have meaning within the context of a paradigm (i.e., mathematical statements are only true within the context of a given axiomatic system) and that the acquisition of knowledge, not the acquisition of truth, is a better definition of “scientific progress”. Second, the argument against PMI isn’t totally satisfying to me. Let society be a Bayesian who updates beliefs about the truth of a scientific statement using some metatheory like falsification or verification. As evidence in favor of a theory comes in, I become more and more confident the theory is true. But continue the exponential-growth-of-science argument into the future: in every period as t goes to infinity, I receive more and more potential falsifications of a theory. The fact that the theory of light has not really changed in the last 80 years is not that important, because in the next 80 years we are going to see an order of magnitude more tests of the theory. Even if only one in ten “well-established” theories has been overturned in the past 80 years, and even if most science in human history was done in that period, a pessimistic inductionist might still expect many of these theories to be overturned because, after all, every test of a theory ever done up until now will seem an inconsequentially small amount of testing given what will be done in the relatively near future.
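The same toy model makes that worry concrete when run forward instead of backward (again, the 20-year doubling time is my illustrative assumption):

```python
import math

# Same toy model, run forward: under exponential growth, the future dwarfs
# the past in exactly the way the past dwarfs earlier eras.
DOUBLING_TIME = 20.0                   # years (assumed)
k = math.log(2) / DOUBLING_TIME

tests_to_date = 1.0                    # normalize all testing done so far
tests_total_in_80y = math.exp(k * 80)  # cumulative total 80 years from now
future_tests = tests_total_in_80y - tests_to_date

print(f"Tests in the next 80 years, relative to all tests ever run: {future_tests:.0f}x")
# -> 15x: everything a theory has survived so far is a small fraction of
# what it must still survive, so past survival only weakly reassures a
# pessimistic inductionist about the indefinite future.
```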

This post is too long, so I won’t elaborate here, but the question of whether science is actually settling claims is really important for economics. Consider papers like Ben Jones’ arguments about the fishing out of “possible” inventions and discoveries.

http://www.phil-fak.uni-duesseldorf.de/…Version%20pdf%20.pdf (Final online preprint; published in Synthese April 2011)
