Popper famously argued that good scientific theories are ones that are falsifiable. Unfortunately, his branch of philosophy neglected the heart of economics: that agents can act strategically. Consider a world where we want to encourage true scientific theories, and where there are some “experts” who know about past data and attempt to infer the future from it, as well as “nonexperts” who know nothing about the field. Let a falsifiability contract be a contract where the scientist is paid an amount u (in utils) at time 0, but must repay society d (in utils) if and when his theory is refuted. No matter how high d is, how low u is, and how averse to their own uncertainty nonexperts are, it turns out that whenever the expert finds it worthwhile to propose a theory, the nonexpert will find it worthwhile to randomize over all possible future states to which such a theory might speak and thus propose a bogus theory of his own! When the theories are proposed, society has no way to distinguish between the two! Worse, letting “society’s contract” depend on verification (pay the scientist once the future states appear, to some probability, to verify the theory) gives rise to precisely the same problem. It turns out, however, that there exist classes of refutation contracts (where the scientist pays society back if future data suggest that his theory is, in probability, less likely to be true) that can distinguish between real science and bogus science; of course, falsification is a type of refutation contract, but it is not one of the types that can thus distinguish. The results seem to suggest another reason, beyond those of Kuhn and Lakatos, for downgrading the importance of falsifiability in science.
(Sidenote 1: The format of this paper is nice. The proofs are quite involved, so they are all moved to the appendix; nonetheless, an intuitive proof suitable for the general reader is always included in the main text. This keeps the paper moving without becoming bogged down in a morass of math. Both authors received math PhDs before studying graduate economics, so perhaps they feel less of a need to show off their mathematical prowess!)
(Sidenote 2: There is an interesting meta-philosophy in this paper, in that it uses a particular branch of “proof” (mathematical modeling plus logic) to refute a separate branch of philosophy. This paper is a good example of why economics is not wannabe-physics, as is often claimed: our mathematical models are internally valid proofs, not externally valid hypotheses.)
http://faculty.wcas.northwestern.edu/~wol737/Fals.pdf (Final WP – published version forthcoming in AER)
I just read this paper a few days ago, and got stuck on an assumption about the preferences of the (uninformed) expert. The uninformed are assumed to have maximin preferences over uncertainty, so a deterministic submission of a theory yields the lower-bound payoff no matter which theory is submitted. The informed expert, on the other hand, knows the correct theory (or at worst has a subjective prior belief about it) and is an expected-utility maximizer, and hence some theory submissions yield higher utility than others.
However, the authors then suppose that the uninformed expert can randomize over theories: he considers the expected utility of lotteries over theories, and then takes the maximin over these expected utilities. Since randomizing puts some positive probability on picking the correct theory, a lottery has expected value strictly greater than the lowest payoff, and uniform randomization makes this true no matter what the state of the world is. Thus, the uninformed expert, in the face of uncertainty, is made better off by randomizing his choice, which is a curious result!
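To see the arithmetic, here is a minimal numerical sketch. The finite theory space and the payoff numbers (1 for the correct theory, 0 otherwise) are my own toy assumptions, not the paper’s model, but they capture why maximin over pure submissions gives the floor payoff while maximin over uniform lotteries does not.

```python
# Toy model (hypothetical numbers, not the paper's): N states and N
# candidate theories; theory t is correct in state t. Submitting the
# correct theory pays 1, any wrong theory pays 0.
N = 5

def payoff(theory, state):
    return 1.0 if theory == state else 0.0

# Deterministic submission: under maximin, each theory is evaluated at
# its worst-case state, so every pure submission is worth 0.
pure_maximin = max(min(payoff(t, s) for s in range(N)) for t in range(N))

# Uniform lottery over theories: expected payoff is 1/N in *every*
# state, so the maximin value of the lottery is 1/N > 0.
lottery_maximin = min(sum(payoff(t, s) for t in range(N)) / N for s in range(N))

print(pure_maximin, lottery_maximin)  # 0.0 0.2
```

The key step is that the uniform lottery equalizes the expected payoff across states, so the worst case is no longer the floor.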
I don’t like this assumption because the randomness of the lottery is resolved before the uncertainty is. Thus, once the randomization device has picked a theory, the uninformed expert has no incentive to submit that theory, no matter which theory the device picked! Basically, we have a lottery whose ex post payoff is strictly lower than its ex ante utility, no matter what the realization of the lottery. This appears to be a pathological feature of the Gilboa-Schmeidler theory (or simply a feature, you might say, considering it is an axiom of their model!).
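The objection can be computed in the same toy setup as before (again my own hypothetical payoffs, not the paper’s): ex ante the uniform lottery is worth 1/N under maximin, but every single realization of the lottery, evaluated under maximin once the device has picked it, is worth 0.

```python
import random

# Toy setup (hypothetical, not from the paper): theory t is correct in
# state t; the correct theory pays 1, any wrong theory pays 0.
N = 5

def payoff(theory, state):
    return 1.0 if theory == state else 0.0

# Ex ante: the uniform lottery's expected payoff is 1/N in every state,
# so its maximin value is 1/N.
ex_ante = min(sum(payoff(t, s) for t in range(N)) / N for s in range(N))

# Ex post: once the randomization device has realized some theory t,
# submitting t has worst-case payoff 0 over states -- and this holds
# for every possible realization t.
realized = random.randrange(N)
ex_post = min(payoff(realized, s) for s in range(N))

assert ex_post < ex_ante  # true no matter which theory was drawn
```

This is exactly the dynamic inconsistency: the maximin criterion values the lottery above each of its own outcomes, so the agent would want to re-randomize forever rather than submit.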