Economics has a very strong methodological paradigm, but economists on the whole are incapable of expressing what it is. And this can get us in trouble. Chris Sims and Tom Sargent have both been bouncing around the media echo chamber this past week because they have, by and large, refused to answer questions like “What will happen to the economy?” or “What will be the impact of policy X?” Not having an answer is fine, of course: I’m sure Sims would gladly answer any question about the econometric techniques he pioneered, but not being an expert on the details of policy X, he doesn’t feel it’s his place to give (relatively) uninformed comment on such a policy. Unfortunately, parts of the media have taken his remarks as an excuse to take potshots at “useless” mathematical formalization and axiomatization. What, then, is the point of our models?
Dekel and Lipman answer this question with respect to the most theoretical of all economics: decision theory. Why should we care that, say, the Savage axioms imply subjective expected utility maximization? We all (aside from Savage, perhaps) agree that the axioms are not always satisfied in real life, nor should they necessarily be satisfied on normative grounds. Further, the theory, strictly speaking, makes few if any predictions that the statement “People maximize subjective expected utility” does not.
I leave most of the details of their exposition to the paper, but I found the following very compelling. It concerns Gilboa-Schmeidler preferences, which yield a utility representation in which, facing ambiguity about probabilities, the agent always assumes the worst. Dekel and Lipman:
The importance of knowing we have all the implications is particularly clear when the story of the model is potentially misleading about its predictions. For example, the multiple priors model seems to describe an extraordinarily pessimistic agent. Yet the axioms that characterize behavior in this model do not have this feature. The sufficiency theorem ensures that there is not some unrecognized pessimism requirement.
And this is the point. You might think, seeing only the utility representation, that Gilboa-Schmeidler agents are super pessimistic. This turns out not to be necessary at all: the axioms give seemingly mild conditions on choice under ambiguity which lead to such seeming pessimism. Understanding this gives us a lot of insight into what might be going on when we see Ellsberg-style pessimism in the face of ambiguity.
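For readers who have not seen it, the multiple priors representation (Gilboa and Schmeidler 1989) is worth writing down, since the apparent pessimism sits right in the formula: the agent carries a set of priors rather than a single one and evaluates each act by its worst-case expected utility over that set.

```latex
% Maxmin (multiple priors) expected utility, Gilboa-Schmeidler (1989):
% an act f is evaluated by its worst-case expected utility over a
% closed, convex set C of priors on the state space S.
\[
  V(f) \;=\; \min_{p \in C} \int_S u\bigl(f(s)\bigr)\, dp(s)
\]
```

The Dekel-Lipman point is that the min operator, which looks like an extreme psychological assumption, is characterized by axioms that do not look extreme at all.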
My problem with Dekel and Lipman here, though, is that, like almost all economists, they are implicitly infected by the most damaging economics article ever written: Milton Friedman’s 1953 “The Methodology of Positive Economics.” That essay roughly says that the goal of an economic model is not to be true, but to predict within a limited sphere of things we want to predict. Such a belief suggests that we can “test” models by checking whether predictions in their given sphere are true. I think both of these ideas are totally contrary to how we should use models in economics and to how we do use them; if you like appeals to authority, I should note that philosophers of social science are as dismayed as I am by Friedman ’53.
So how should we judge and use models? My standard is that a model is good if end users of the model find that it helps guide their intuition. You might also say that a model is good if it is “subjectively compelling.” Surely prediction of the future is a nice property a model might have, but it is by no means necessary, nor does “refuting” the predictions implicit in a model mean the model is worthless. What follows is a list of what I would consider subjectively useful uses of a model. How you weight these uses is entirely subjective, but keep in mind that our theories have end users, and we ought to keep some guess about how a model will be used in mind when we write it:
1) Dealing with unforeseen situations. The vast majority of social situations that could be modeled by an economist will not be so modeled. That is, in essentially every situation we make no prediction at all. There are also situations that are inconceivable at the time a paper is written: who knows what the world will care about in 50 years? Does this mean economics is useless in these unforeseen situations? Of course not. Theoretical models can still be useful: Sandeep Baliga has a post at Cheap Talk today where he gains intuition into Pakistan-US bargaining from the Shapiro-Stiglitz model of equilibrium unemployment. The thought experiments, the why of the model, are as relevant as, if not more relevant than, the consequences or predictions of the model. Indeed, look at the introduction, often a summary of results, of your favorite theory paper. Rarely are the theorems stated alone. Instead, the theorem and the basic intuition behind its proof are usually given. If we knew a theorem to be true given its assumptions, but the proof were in a black box, the paper would be judged much less compelling by essentially all economists, even though such a paper could “predict” just as well as a paper with proofs.
2) Justifying identification restrictions and other unfalsifiable assumptions in empirical work. Sometimes these are trivial and do not need to be formally modeled. Sometimes less so: I have an old note, which I’ve mentioned here a few times, that gives an example from health care. A paper found that hospital report cards, mandated at a subset of hospitals and otherwise voluntary, were totally ineffective in changing patient or hospital behavior. A simple game-theoretic model (well known from reputational games) shows that such effects are discontinuous: a sufficiently large share of patients must pay attention to the report cards before I (discontinuously) begin to see real effects (see the stylized sketch after this list). Such theoretical intuition guides the choice of empirical model in many, many cases.
3) Counterfactual analysis. By definition, no “predictions” can or will ever be checked in counterfactual worlds, yet counterfactual analysis is the basis of a ton of policy work. Even if you care about predictions, somehow defined, on a counterfactual space, surely we agree that such predictions cannot be tested. Which brings us to…
4) Model selection. Even within the class of purely predictive theories, it is trivial to create theories which “overfit” the past such that they match past data perfectly (the toy example after this list makes the point concrete). How do I choose among the infinitely large class of models which predict all data seen thus far perfectly? “Intuition” is the only reasonable answer: the explanations in Model A are more compelling than those in Model B. And good economic models can help guide this intuition in future papers. The Quine-Duhem thesis is relevant here as well: when a model of mine is “refuted” by new data, what exactly was wrong with the explanation it proposed? Quine-Duhem essentially says there is no procedure that will answer that question. (I only write this because there are some Popperians left in economics, despite the fact that every philosopher of science after Popper has pointed out how ridiculous his model of how science should work is: it says nothing about prediction in a stochastic world, it says nothing about how to select what questions to work on, etc.)
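To make the overfitting claim in point 4 concrete, here is a minimal sketch with made-up data (I use numpy, though any scientific computing library would do): a degree-(n-1) polynomial matches any n past observations exactly, yet its out-of-sample “prediction” is absurd, and nothing in the in-sample fit itself distinguishes it from the infinitely many other theories that also fit the past perfectly.

```python
# Toy illustration of point 4: perfectly "predicting" the past is cheap.
# The data are fictional; the point is that a degree-(n-1) polynomial
# passes exactly through any n observations, yet extrapolates wildly.
import numpy as np

x = np.arange(6)                                  # six past periods
y = np.array([2.1, 1.4, 3.0, 2.2, 0.5, 1.8])      # fictional observations

coeffs = np.polyfit(x, y, deg=5)                  # interpolating polynomial

in_sample_error = np.abs(np.polyval(coeffs, x) - y).max()
print("max in-sample error:", in_sample_error)    # essentially zero

# Out of sample the "theory" says something absurd, and only intuition
# about the explanation (not the fit) would have warned us.
print("'prediction' two periods ahead:", np.polyval(coeffs, 7))
```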
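And as a stylized illustration of the threshold logic in point 2: the payoff form and every number below are my own illustrative assumptions, not anything from the paper or my note, but they show how a perfectly mild best-response calculation produces a discontinuous effect of report cards.

```python
# Stylized sketch of the report-card threshold from point 2. The payoff
# form and all parameter values are illustrative assumptions.

def hospital_improves_quality(attentive_share: float,
                              fixed_cost: float = 0.3,
                              gain_if_all_attentive: float = 1.0) -> bool:
    """Best response of a hospital deciding whether to invest in quality.

    Investment pays a fixed cost; the benefit scales with the share of
    patients who actually read the report card and would switch.
    """
    benefit = gain_if_all_attentive * attentive_share
    return benefit > fixed_cost

for share in (0.05, 0.15, 0.25, 0.35, 0.50, 0.90):
    outcome = "invests" if hospital_improves_quality(share) else "no effect"
    print(f"share of attentive patients = {share:.2f}: hospital {outcome}")
```

Below the threshold the measured effect of the report card is exactly zero; above it, behavior jumps, which is precisely the sort of discontinuity the empirical specification needs to allow for.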
Obviously these aren’t the only non-predictive uses of theory. Theory helps tie the literature together, letting economics progress as a science rather than stand as a series of independent papers; theory can also serve to check qualitative intuition, since many seemingly obvious arguments turn out to be much less obvious when written down formally (more on this point in Dekel and Lipman). Nonetheless these uses are enough, I hope, to make the point that prediction is but one goal among many in good social science modeling. I think the Friedman view of methodology would be long gone in economics if graduate training required the type of methodology/philosophy course, taught by faculty well read in philosophical issues, that every other social and policy science requires. Would that it were so!
http://people.bu.edu/blipman/Papers/dekel-lipman2.pdf (2009 Working Paper; final version in the 2010 Annual Review of Economics)
Yes, this seems to correspond to my intuition of what a model is about 🙂
On this, see also Sugden’s “credible worlds” paper (http://dx.doi.org/10.1080/135017800362220), though his point is that the model does not necessarily have to fit (or represent) the situation under study in order to be helpful in guiding intuition. (I think!)
“…if you like appeals to authority, I should note that philosophers of social science are equally dismayed as I am by Friedman ’53.”
Care to give any examples of philosophical treatments of Friedman? You’ve piqued my curiosity.
“I think the Friedman idea about methodology would be long gone in economics if graduate training required the type of methodology/philosophy course”
What about history of thought?
1) Alexia: agreed. I think Mary Morgan’s “The World in the Model” (monograph, forthcoming) may be of interest to you as well.
2) Well, broadly, Friedman is incompatible with the major strands of philosophy of science following Kuhn: Lakatos, Feyerabend, etc. look very little like Friedman. But Friedman has 2000+ citations, tons of them by philosophers. The main problem (though not the only one) is discussed, with citations, in Caldwell’s 1983 SEJ piece: “Philosophers of science since the 1940s have been unanimous in their rejection of the notion that the only goal of science is prediction. Even such positivist philosophers as Carl Hempel have claimed that explanation, not prediction, is the goal of science; it was Hempel who with Paul Oppenheim developed the covering law models of scientific explanation [13]. More recent models of the structure and nature of explanation in science admit to even broader definitions of the concept than did the covering law models. Once one takes the position that explanation is the goal of science, the instrumentalist view of theories and theoretical terms is considerably weakened. If science seeks theories that have explanatory as well as predictive powers, then theories that merely predict well may not be satisfactory, and the view that theories are nothing more than instruments for prediction must be rejected.”
3) Michael: there is a lot of overlap between history of thought and methodology, of course, so the two can’t be totally separated. But if I were given dictatorial powers, economic history and pure philosophy of science would be added to a graduate curriculum before history of thought. History of thought is in a bad equilibrium right now: few good graduate students make it their field because there are no jobs, and there are no jobs in the field for exactly that reason!
Perhaps I am being simplistic here, but if theories do not provide predictive capabilities, then are they not just stories based on hindsight bias?
Even predictive theories are just stories based on hindsight bias – we obviously can’t “test” how well the future is predicted at the time the theory is written, and further the choice of what theory to write, and on what topic, is clearly driven by our subjective experience with the past.
Beyond this, though, as I said in the post, not only should we use theories for explanation in addition to prediction, but in practice we do, all the time. There are tons of examples in history of “more false” theories that predicted *something* well (epicycles vs. Copernican heliocentrism, for example), where the explanation provided by the poorly predicting theory was nonetheless useful for helping us think both about improving our theories and about related problems which were never the focus of the original researcher.