The realist perspective in philosophy of science makes a seemingly basic claim: our best, most “verified” or “unrefuted” (pick your favorite criterion) scientific theories are likely to be true. Many philosophers in fact reject this claim, and there are two quite strong arguments against it.
First, we have copious evidence of “accepted” scientific theories that later turned out to be false. Consider the nature of light, which was thought to be a particle, then a wave in the ether, then a wave not in any ether, then both a wave and a particle, and so on. While each theory of light was accepted, no evidence told convincingly against it, and note that the revolutions just listed are not mere refinements of a theory but wholesale replacements. Given that our understanding of the nature of light has been overturned four or five times in the last three hundred years, why should we think our current understanding is likely to be true in any deeper sense? And since the history of science provides many, many examples of such fundamental refutations (a claim that is not at all controversial), can we not extend the same pessimistic argument to scientific claims more generally? This argument is known as the pessimistic meta-induction, or PMI.
Second, there is an argument straight from classical empiricism. Most scientific theories predict many things that are observable (by the senses) or potentially observable in the future, but they also predict many things that will never be directly observable by humans. Since there are infinitely many theories which predict precisely the same things about observables but make different claims about unobservables, we ought to be completely agnostic about unobservables no matter how much evidence is found in favor of some theory. That is, science will never be shown “true” when it comes to claims about unobservables. Note that this is not simply an argument against induction: every theory in this post accepts the validity of induction in some sense as an “approximate axiom”. Rather than rehearsing the problems of induction, this underdetermination argument simply says that scientific data fundamentally underdetermine the choice among competing theories. There are some meta-arguments for preferring one theory anyway (you know one: Ockham’s Razor), but I’ve never seen a convincing one.
Rothbard argues in this new paper in Synthese that the first argument, PMI, is not supported by the history of science. The argument is simple: the “amount” of science, whether measured by the number of practicing scientists or by research articles published, has been growing exponentially for probably at least the last three centuries. This means that if the dominant theory of light has been overturned four times in 300 years, but not in the last 80, there is actually very little reason to infer that we will see another refutation in, say, the next hundred years: because of the exponential growth of science, a huge majority of all scientific work on light has been done in the last eighty years, and no convincing contrary theories have been found. That is, the probability that a theory remains successful forever, given that it has been successful so far, is close to 1. The underdetermination argument may still be valid, however, since the exponential growth of science says nothing about the probability that a theory is “true” given that it is consistent with all empirical data.
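To see the force of the exponential-growth point, here is a minimal back-of-the-envelope sketch. The 15-year doubling time and the 300-year window are illustrative assumptions of mine, not figures from the paper:

```python
import math

def fraction_recent(total_years, recent_years, doubling_time):
    """Fraction of all cumulative scientific output produced in the most
    recent `recent_years`, assuming output grows exponentially with the
    given doubling time (an illustrative assumption)."""
    r = math.log(2) / doubling_time
    total = math.exp(r * total_years) - 1
    older = math.exp(r * (total_years - recent_years)) - 1
    return (total - older) / total

# With a 15-year doubling time over 300 years of work on light, about
# 97.5% of all that work falls in the last 80 years.
print(round(fraction_recent(300, 80, 15), 3))  # -> 0.975
```

On these numbers, a theory that has survived the last 80 years has survived the overwhelming bulk of all scrutiny it has ever faced, which is the intuition behind the anti-PMI claim.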
A few notes. First, whether we should even care about the truth of science is not a settled question. Two fairly convincing arguments here are that theories only have meaning within the context of a paradigm (just as mathematical statements are only true within a given axiomatic system), and that the acquisition of knowledge, not the acquisition of truth, is a better definition of “scientific progress”. Second, the argument against PMI isn’t totally satisfying to me. Let society be a Bayesian who updates its beliefs about the truth of a scientific statement using some metatheory like falsificationism or verificationism. As evidence in favor of a theory comes in, the Bayesian becomes more and more confident the theory is true. But continue the exponential-growth-of-science argument into the future: in every period, as t goes to infinity, the theory faces more and more potential falsifications. The fact that the theory of light has not really changed in the last 80 years is then not that important, because in the next 80 years we are going to see an order of magnitude more tests of the theory. Even if only one in ten “well-established” theories has been overturned in the past 80 years, and even if most science in human history was done in that period, a pessimistic inductionist might still expect many of these theories to be overturned, because every test of a theory done up until now will seem an inconsequentially small amount of testing given what will be done in the relatively near future.
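The pessimist’s counter can be made concrete with the same exponential-growth assumption. Under a constant doubling time d, the ratio of tests in the next 80 years to tests in the previous 80 years is simply 2^(80/d), no matter when you start counting. A sketch, again with a purely illustrative 15-year doubling time:

```python
import math

def cumulative_tests(years, doubling_time=15.0):
    """Cumulative number of tests after `years`, with the testing rate
    growing exponentially (rate normalized to 1 at year 0). The doubling
    time is an illustrative assumption, not a figure from the paper."""
    r = math.log(2) / doubling_time
    return (math.exp(r * years) - 1) / r

past = cumulative_tests(300) - cumulative_tests(220)    # the last 80 years
future = cumulative_tests(380) - cumulative_tests(300)  # the next 80 years

# The coming 80 years contain roughly 40x as many tests as the last 80,
# so the evidential record so far is a small sample of what is to come.
print(round(future / past, 1))  # -> 40.3
```

So the very growth that makes past survival look impressive also guarantees that the past record is dwarfed by the tests still ahead, which is exactly the pessimist’s worry.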
This post is already too long, so I won’t elaborate here, but the question of whether science is actually settling claims is really important for economics. Consider, for example, Ben Jones’s work on the “fishing out” of possible inventions and discoveries.
http://www.phil-fak.uni-duesseldorf.de/…Version%20pdf%20.pdf (Final online preprint; published in Synthese April 2011)
Something about the argument against PMI seems odd to me: while there has clearly been exponential growth in the amount of scientific work being done, there hasn’t necessarily been exponential growth in the number of experiments designed to test traditional theories. Rather, it seems likely that traditional theories are supplemented by a huge number of new studies on subjects that might in principle provide no evidence for or against, say, our existing theory of light.
In some sense, I agree. It really depends on what is meant by “testing a theory”, doesn’t it? Surely the Duhem-Quine thesis is relevant here. This particular paper isn’t really clear on *what* evidence we should take as a refutation, but I think the author is making a somewhat hand-wavy argument that *any* experiment involving light is inherently a test of quantum theory. Neither of us really thinks that is true.