Category Archives: Methodology

“The Axiomatic Structure of Empirical Content,” C. Chambers, F. Echenique & E. Shmaya (2013)

Here’s a particularly interesting article at the intersection of philosophy of science and economic theory. Economic theorists have, for much of the twentieth century, linked high theory to observable data using the technique of axiomatization. Many axiomatizations operate by proving that if an agent has such-and-such behavioral properties, their observed actions will satisfy certain other properties, and vice versa. For example, demand functions over convex budget sets satisfy the strong axiom of revealed preference if and only if they are generated by the usual restrictions on preferences.

You may wonder, however: to what extent is the axiomatization interesting when you care about falsification (not that you should care, necessarily, but if you did)? Note first that we only observe partial data about the world. I can observe that you choose apples when apples and oranges are available (A>=B or B>=A, perhaps strictly if I offer you a bit of money as well) but not whether you prefer apples or bananas when those are the only two options. This shows that a theory may be falsifiable in principle (I may observe that you strictly prefer A to B, B to C, and C to A, violating transitivity and falsifying rational preferences) yet still make nonfalsifiable statements (rational preferences also require completeness, yet with only partial data, I can’t observe that you either weakly prefer apples to bananas, or weakly prefer bananas to apples).
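To make the partial-observation point concrete, here is a minimal sketch (my own toy data and function names, nothing from the paper) of how a transitivity violation is detectable from finitely many observed strict comparisons, while completeness never is:

```python
# Toy illustration: with only a partial set of observed strict comparisons,
# a violation of transitivity can show up in the data, but a violation of
# completeness cannot, since an unobserved pair is simply missing data.
from itertools import permutations

observed_strict = {("A", "B"), ("B", "C"), ("C", "A")}  # observed: A>B, B>C, C>A

def transitivity_violations(strict):
    """Return observed 3-cycles x>y, y>z, z>x, each of which falsifies transitivity."""
    items = {x for pair in strict for x in pair}
    return [(x, y, z) for x, y, z in permutations(items, 3)
            if (x, y) in strict and (y, z) in strict and (z, x) in strict]

print(transitivity_violations(observed_strict))  # non-empty: these data reject transitivity
# Completeness ("for all x, y: x>=y or y>=x") is untouched: the pair
# (apples, bananas) may simply never show up in the observed choices.
```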

Note something interesting here, if you know your Popper. The theory of rational preferences (complete and transitive, with strict preference defined as the strict part of the >= relation) is universal in Popper’s sense: these axioms can be written using the “for all” quantifier only. So universality cannot be all we mean if we wish to consider only the empirical content of a theory under partial observation. And partial observability is yet harsher on Popper. Consider the classic falsifiable statement, “All swans are white.” If I can in principle only observe a subset of all of the swans in the world, then part of that statement’s content can never be falsified: no data I could ever gather speaks to the color of the swans I will never observe.

What Chambers et al do is show that you can take any theory (a set of data generating processes which can be examined with your empirical data) and reduce it to stricter and stricter theories, in the sense that any data which would reject the original theory still reject the restricted theory. The strongest restriction has the following property: every axiom is UNCAF, meaning it can be written using only universal (“for all”) quantifiers applied to the negation of a conjunction of atomic formulas. So “for all swans s, the swan is white” is not UNCAF (since it lacks a negation). In economics, the strict preference transitivity axiom “for all x, y, z, not (x>y and y>z and z>x)” is UNCAF, while the completeness axiom “for all x, y, x>=y or y>=x” is not, since it is an “or” statement and cannot be reduced to the negation of a conjunction. It is straightforward to extend this to checking for empirical content relative to a technical axiom like continuity.
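To make the syntactic distinction concrete, here are the two axioms in standard first-order notation (my transcription, using ≻ for strict and ≿ for weak preference); only the first has the UNCAF form of a universal quantifier over the negation of a conjunction of atomic formulas:

```latex
% Transitivity of strict preference: UNCAF (universally quantified negation of a conjunction of atomics)
\forall x\,\forall y\,\forall z\;\; \neg\big( x \succ y \,\wedge\, y \succ z \,\wedge\, z \succ x \big)

% Completeness: universally quantified, but a disjunction, hence not UNCAF
\forall x\,\forall y\;\; \big( x \succsim y \,\vee\, y \succsim x \big)
```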

Proving this result requires some technical complexity, but the result itself is very easy to use for consumers and creators of axiomatizations. Very nice. The authors also note that Samuelson, in his rejoinder to Friedman’s awful ’53 methodology paper, more or less got things right. Friedman claimed that the truth of axioms is not terribly important. Samuelson pointed out that either all of a theory can be falsified, in which case, since the axioms themselves are always implied by a theory, Friedman’s argument is in trouble, or else the theory makes some non-falsifiable claims, in which case attempts to test the theory as a whole are uninformative. Either way, if you care about predictive theories, you ought to choose the weakest theory that generates a given empirical content. In Chambers et al’s result, this means you had better choose theories whose axioms are UNCAF with respect to technical assumptions. (And of course, if you are writing a theory for explanation, or lucidity, or simplicity, or whatever non-predictive goal you have in mind, continue not to worry about any of this!)

Dec 2012 Working Paper (no IDEAS version).

“An Elementary Theory of Comparative Advantage,” A. Costinot (2009)

Arnaud Costinot is one of many young economists doing interesting work in trade theory. In this 2009 Econometrica, he uses a mathematical technique familiar to any auction theorist – log-supermodularity – to derive a number of general results about trade which have long been seen as intractable, using few assumptions other than free trade and immobile factors of production.

Take two standard reasons for the existence of trade. First is differences in factor productivity. Country A ought to produce good 1 and Country B good 2 if A has higher relative productivity in good 1 than B, f(1,A)/f(2,A) > f(1,B)/f(2,B). This is simply Ricardo’s law of comparative advantage. Ricardo showed that comparative advantage in good 1 by country A means that under (efficient) free trade, country A will actually produce more of good 1 than country B. The problem is when you have a large number of countries and a large number of goods; the simple algebra of Ricardo is no longer sufficient. Here’s the trick, then. Note that the 2-country, 2-good condition just says that the production function f is log-supermodular in countries and goods; “higher” countries are relatively more productive producing “higher” goods, under an appropriate ranking (for instance, more educated workforce countries might be “higher” and more complicated products might be “higher”; all that matters is that such an order exists). If the production function is log-supermodular, then aggregate production is also log-supermodular in goods and countries. Why? In this elementary model, each country specializes in producing only one good. If aggregate production is not log-supermodular, then maximizing behavior by countries means the marginal return to factors of production for a “low” good must be high in the “high” countries and low in the “low” countries. This cannot happen if countries are maximizing their incomes, since each country can move factors of production around to different goods as it likes and the production function is log-supermodular. What does this theorem tell me? It tells me that under trade with any number of countries and goods, there is a technology ladder, where “higher” countries produce “higher” goods. The proof is literally one paragraph, but it is impossible without the mathematics of lattices and supermodularity. Nice!
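As a toy illustration (my own numbers and function names, not Costinot’s), log-supermodularity of a productivity matrix is just the requirement that every 2x2 comparison respects the Ricardian relative-productivity ranking:

```python
# Toy check: f(good, country) is log-supermodular if, for all higher goods/countries
# g' >= g and c' >= c, f(g',c') * f(g,c) >= f(g',c) * f(g,c').  In the 2x2 case this
# is exactly the Ricardian condition f(1,A)/f(2,A) >= f(1,B)/f(2,B) from the text.
from itertools import combinations

# Rows: goods ordered low -> high; columns: countries ordered low -> high.
f = [[4.0, 5.0],   # low good
     [2.0, 6.0]]   # high good: the "high" country is relatively better at it

def is_log_supermodular(f):
    n_goods, n_countries = len(f), len(f[0])
    for g, gp in combinations(range(n_goods), 2):          # g < gp
        for c, cp in combinations(range(n_countries), 2):  # c < cp
            if f[gp][cp] * f[g][c] < f[gp][c] * f[g][cp]:
                return False
    return True

print(is_log_supermodular(f))  # True: "higher" countries have comparative advantage in "higher" goods
```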

Consider an alternative, the Heckscher-Ohlin trade model, which suggests that trade is driven by differences in factor endowments rather than by the differences in technological or institutional capability that generate Ricardian comparative advantage. Let the set of factors of production be distributed across countries according to F, and let technology vary across countries but only in a Hicks-neutral way (i.e., “technology” is just a parameter that scales aggregate production up or down, regardless of how that production is created or what that production happens to be). Let the production function, then, be A(c)h(g,p); that is, a country-specific technology parameter A(c) times a log-supermodular function of the goods produced g and the factors of production p. Assume further that factors are distributed such that “high” countries are relatively better endowed with “high” factors of production, according to some order; many common distribution functions will give you this property. Under these assumptions, again, “high” countries produce “high” goods in a technology ladder. Why? Efficiency requires that each country assign “high” factors of production to “high” goods. The distributional assumption tells me that “high” factors are more likely to appear in “high” countries. Hence it can be proven using some simple results from lattice theory that “high” countries produce more “high” goods.
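Roughly, and in my own notation rather than Costinot’s (with φ standing in for the density of the factor distribution F), the two assumptions in this paragraph are:

```latex
% Technology: a Hicks-neutral country shifter times a core that is log-supermodular in (goods, factors)
q(g,c,p) = A(c)\,h(g,p), \qquad
h(g',p')\,h(g,p) \;\ge\; h(g',p)\,h(g,p') \quad \text{for } g' \ge g,\ p' \ge p ,

% Endowments: "high" factors are relatively more abundant in "high" countries
\phi(p',c')\,\phi(p,c) \;\ge\; \phi(p',c)\,\phi(p,c') \quad \text{for } p' \ge p,\ c' \ge c .
```

Both conditions are log-supermodularity statements, which is why the same lattice-theoretic argument delivers a technology ladder here as well.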

There are many further extensions, the most interesting one being that even though the extensions of Ricardo and Heckscher-Ohlin both suggest a ladder of “higher” and “lower” goods, these ladders might not be the same, and hence if both effects are important, we need more restrictive assumptions on the production function to generate interesting results about the worldwide distribution of trade. Costinot also points out that the basic three type (country, good, factor of production) model with log-supermodularity assumptions fits many other fields, since all it roughly says is that heterogeneous agents (countries) with some density of characteristics (goods and factors of production) sort into outcomes according to some payoff function of the three types; e.g., heterogeneous firms may be choosing different financial instruments depending on heterogeneous productivity. Ordinal discussions of which types of productivity lead firms to choose which types of financial instruments (or any similar problem) are often far, far easier using log-supermodularity arguments than using functional forms plus derivatives.

Final 2009 ECTA (IDEAS version). Big thumbs up to Costinot for putting the final, published version of his papers on his website.

“The Flexible Unity of Economics,” M. J. Reay (2012)

Michael Reay recently published this article on the economics profession in the esteemed American Journal of Sociology, and as he is a sociologist, I hope the econ navel-gazing can be excused. What Reay points out is that critical discourse about modern economics entails a paradox. On the one hand, economics is portrayed as a unified, neoliberal-policy-endorsing monolith with great power; on the other hand, in practice economists often disagree with each other, and their memoirs are filled with sighs about how little their advice is valued by policymakers. In my field, innovation policy, there is a wonderful example of this impotence: the US Patent and Trademark Office did not hire a chief economist until – and this is almost impossible to believe – 2010. Lawyers with hugely different analytic techniques (I am being kind here) and policy suggestions ran, and continue to run, the show at every important world venue for patent and copyright policy.

How ought we explain this? Reay interviews a number of practicing economists in and out of academia. Nearly all agree on a core of techniques: mathematical formalism, a focus on incentives at the level of individuals, and a focus on unexpected “general equilibrium” effects. None of these core ideas really has anything to do with “markets” or their supremacy as a form of economic organization, of course; indeed, Reay points out that roughly the same core was used in the 1960s when economists as a whole were much more likely to support various forms of government intervention. Further, none of the core ideas suggest that economic efficiency need be prioritized over concerns like equity, as the technique of mathematical optimization says very little about what is to be optimized.

However, the choice of which questions to work on, and what evidence to accept, is guided by “subframes” that are often informed by local contexts. To analyze the power of economists, it is essential to focus on existing local power situations. Neoliberal economic policy enters certain Latin American countries hand-in-hand with political leaders already persuaded that government involvement in the economy must decrease, whereas it enters the US and Europe in a much more limited way due to countervailing institutional forces. That is, regardless of what modern economic theory suggests on a given topic, policymakers have their priors, and they will frame questions such that the advice their economic advisers give is limited in relation to those frames. Further, regardless of the particular institutional setup, the core ideas about what counts as evidence, shared by all economists, mean that the set of possible policy advice is not unbounded.

One idea Reay should have considered further, and which I think is a useful way for non-economists to understand what we do, is the question of why mathematical formalism is so central a part of the economics core vis-a-vis other social sciences. I suggest that it is the economists’ historic interest in counterfactual policy that implies the mathematical formalism rather than the other way around. A mere collection of data a la Gustav Schmoller can say nothing about counterfactuals; for this, theory is essential. Where theory is concerned, limiting the scope for gifted rhetoricians to win the debate by de facto obfuscation requires theoretical statements to be made in a clear way, and for their deductive consequences to be clear as well. Modern logic, roughly equivalent to the type of mathematics economists use in practice, does precisely that. I find the focus on “quantitative economics” as meaning “numerical data” misleading, as it suggests that the data economists collect and use is the reason certain conclusions (say, neoliberal policy) follow. Rather, much of economics uses no quantitative data at all, and therefore it is the limits of mathematics as logic rather than the limits of mathematics as counting that must provide whatever implicit bias exists.

Final July 2012 AJS version (Note: only the Google Docs Preview allows the full article to be viewed, so I’ve linked to that. Sociologists, get on the open access train and put your articles on your personal websites! It’s 2012!)

“Mathematical Models in the Social Sciences,” K. Arrow (1951)

I have given Paul Samuelson the title of “greatest economist ever” many times on this site. If he is number one, though, Ken Arrow is surely second. And this essay, an early Cowles discussion paper, is an absolute must-read.

Right on the first page is an absolute destruction of every ridiculous statement you’ve ever heard about mathematical economics. Quoting the physicist Gibbs: “Mathematics is a language.” On whether quantitative methods are appropriate for studying human action: “Doubtless many branches of mathematics – especially those most familiar to the average individual, such as algebra and the calculus – are quantitative in nature. But the whole field of symbolic logic is purely qualitative. We can frame such questions as the following: Does the occurrence of one event imply the occurrence of another? Is it impossible that two events should both occur?” This is spot on. What is most surprising to me, wearing my theorist hat, is how little twentieth century mathematics occurs in economics vis-a-vis the pure sciences, not how much. The most prominent mathematics in economics are the theories of probability, various forms of mathematical logic, and existence theorems on wholly abstract spaces, meaning spaces that don’t have any obvious correspondence with the physical world. These techniques tell us little about numbers, but rather help us answer questions like “How does X relate to Y?” and “Is Z a logical possibility?” and “For some perhaps unknown sets of beliefs, how serious a problem can Q cause?” All of these statements look to me to be exactly equivalent to the types of a priori logical reasoning which appear everywhere in 18th and 19th century “nonmathematical” social science.

There is a common objection to mathematical theorizing: that mathematics is limited in nature compared to the directed intuition which a good social scientist can verbalize. This is particularly true compared to the pure sciences: we have very little intuition about atoms, but great intuition about the social world we inhabit. Arrow argues, however, that making valid logical inferences is a difficult task indeed, particularly if we’re using any deductive reasoning beyond the simplest tools in Aristotle. Writing our verbal thoughts as mathematics allows the use of more complicated deductive tools. And the same is true of induction: mathematical model building allows for the use of (what was then very modern) statistical tools to identify relationships. Naive regression identifies correlations, but is rarely able to speak to any more complex relationship in the data.

A final note: if you’re interested in history of thought, there are some interesting discussions of decision theory pre-Savage and game theory pre-Nash and pre-Harsanyi in Arrow’s article. A number of interpretations are given that seem somewhat strange given our current understanding, such as interpreting mixed strategies as “bluffing,” or writing down positive-sum n-person cooperative games as zero-sum n+1 player games where a “fictitious player” eats the negative outcome. Less strange, but still by no means mainstream, is Wald’s interpretation of statistical inference as a zero-sum game against nature, where the statistician with a known loss function chooses a decision function (perhaps mixed) and nature simultaneously chooses a realization in order to maximize the expected loss. There is an interesting discussion of something that looks an awful lot like evolutionary game theory, proposed by Nicolas Rashevsky in 1947; I hadn’t known these non-equilibrium linear ODE games existed that far before Maynard Smith. Arrow, and no doubt his contemporaries, also appear to have been quite optimistic about the possibility of a dynamic game theory that incorporated learning about opponents’ play axiomatically, but I would say that, in 2012, we have no such theory and, for a variety of reasons, a suitable one may not be possible. Finally, Arrow notes an interesting discussion between Koopmans and his contemporaries about methodological individualism; Arrow endorses the idea that, would we have the data, society’s aggregate outcomes are necessarily determined wholly by the actions of individuals. There is no “societal organism”. Many economists, no doubt, agree completely with that statement, though there are broad groups in the social sciences who think that the phrase “would we have the data” is a more serious concern than economists generally allow, and who conceive of non-human social actors. It’s worthwhile to at least know these arguments are out there.

http://128.36.236.35/P/cp/p00a/p0048.pdf (Final version provided thanks to the Cowles Commission’s lovely open access policy)

“David Hume and Modern Economics,” S. Dow (2009)

(This post also deals with a 2011 JEP Retrospective by Schabas and Wennerlind entitled “Hume on Money, Commerce and the Science of Economics.”)

Hume is first and foremost a philosopher and historian, but his social science is not unknown to us economists. Many economists, I imagine, know Hume as the guy who first wrote down the quantity theory of money (more on this shortly). A smaller number, though I hope a nontrivial number, also know Hume as Adam Smith’s best friend, with obvious influence on the Theory of Moral Sentiments and less obvious but still extant influence on The Wealth of Nations. Given Hume’s massive standing in philosophy – was Hume the Paul Samuelson of philosophers, or Samuelson the Hume of economists? – I want to jot down a few notes on particularly interesting comments of his. Readers particularly interested in this topic who have already read the Treatise and the Enquiry might want to pick up the newest edition of Rotwein’s collection of Hume essays on economics, as without such a collection his purely economic content is rather scattered.

First, on money. Hume claims prices are determined by the ratio of circulating currency to the number of goods, and lays out what we now call the specie-flow mechanism: an inflow of specie causes domestic prices to rise, causing imports to become more attractive, causing specie to flow out. He doesn’t say so explicitly, as far as I can tell, but this is basically a long-run equilibrium concept. The problem with Hume as monetarist, as pointed out by basically everyone who has ever written on this topic, is that Hume also has passages where he notes that during a monetary expansion, people are (not become! Remember Hume on causality!) more industrious, increasing the national product. Arguments that Hume is basically modern – money is neutral in the long run and not the short – are not terribly convincing.

Better, perhaps, to note that Hume has a strange understanding of the role of money creation. On many questions of moral behavior, Hume stresses the role of conventions and particularly the role of government in establishing conventions. He therefore treats different types of monetary expansions differently. An exogenous increase in the monetary supply, from a silver discovery or other temporary inflow of specie, does not affect conventions about the worth of money, but an increase in money supply deriving from excess credit creation by banks and sovereigns can affect conventions, hence affecting moral behavior, hence affecting the real economy. The above interpretation of Hume’s monetary writings is by no means universal, but I think, at least, it is an important framework to keep in the back of the mind.

Concerning methodology of social science, Hume makes one particularly striking claim: the human sciences are in a sense easier than the natural sciences. A more common argument – due to Comte, perhaps, though my memory fails me – is that physics is simpler than chemistry, which is simpler than biology, which is in turn simpler than psychology and then the social sciences, because each builds upon the other. I understand how particles work, hence understand physics, but I need to know how they interact to understand chemistry, how molecules affect lifeforms for biology, how the brain operates to understand psychology, and how brains and bodies interact with each other and with history to understand social science. Hume flips this around entirely. He is an empiricist, and notes that to the extent we know anything, it is through our perceptions, and our own accounts as well as those of other humans are biased and distorted. To interpret perceptions of the natural world, we must first generalize about the human mind: “the science of man is the only solid foundation.” Concerning the social world, we are able to observe the actions and accounts of many people during our lives, so if we are to use induction (and this is Hume, so of course we are wary here), we have many examples from which to draw. Interesting.

https://dspace.stir.ac.uk/bitstream/1893/3167/1/2009%20Hume%20and%20Modern%20Economics.pdf (Working paper – final version in Capitalism and Society, 2009)

“On the Creative Role of Axiomatics,” D. Schlimm (2011)

The mathematician Felix Klein: “The abstract formulation is excellently suited for the elaboration of proofs, but it is clearly not suited for finding new ideas and methods; rather, it constitutes the end of a previous development.” Such a view, Dirk Schlimm argues, is common among philosophers of science as well as mathematicians and other practitioners of axiomatic science (like economic theory). But is axiomatics limited to formalization, to consolidation, or can the axiomatic method be a creative act, one that opens up new avenues and suggests new ideas? Given the emphasis on this site on the explanatory value of theory, it will come as no surprise that I see axiomatics as fundamentally creative. The author of the present paper agrees, tracing the interesting history of the mathematical idea of a lattice.

Lattices are wholly familiar to economists at this stage, but it is worth recapping that they can be formulated in two equivalent ways: either as a set of elements plus two operations satisfying commutative, associative and absorption laws, which together ensure the set of elements is a partially ordered set (the standard “axiomatic” definition), or else as a set in which each pair of elements has a well-defined infimum and supremum, from which the meet and join operators can be defined and shown to satisfy the laws mentioned above. We use lattices all the time in economic theory: proofs involving preferences, generally a poset, are an obvious example, but also results using monotone comparative statics, among many others. In mathematics more generally, proofs using lattices unify results in a huge number of fields: number theory, projective geometry, abstract algebra and group theory, logic, and many more.
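For reference, the two formulations (standard textbook material, not notation taken from Schlimm’s paper) look like this:

```latex
% Algebraic definition: a set L with meet (\wedge) and join (\vee) satisfying
x \wedge y = y \wedge x, \qquad x \vee y = y \vee x                                          % commutativity
(x \wedge y) \wedge z = x \wedge (y \wedge z), \qquad (x \vee y) \vee z = x \vee (y \vee z)  % associativity
x \wedge (x \vee y) = x, \qquad x \vee (x \wedge y) = x                                      % absorption

% Order-theoretic definition: a partially ordered set (L, \le) in which every pair of
% elements has an infimum and a supremum; the two definitions are linked by
x \le y \;\iff\; x \wedge y = x \;\iff\; x \vee y = y .
```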

With all these great uses of lattice theory, you might imagine early results proved these important connections between fields, and that the axiomatic definition merely consolidated precisely what was assumed about lattices, ensuring we know the minimum number of things we need to assume. This is not the case at all.

Ernst Schroder, in the late 19th century, noted a mistake in a claim by CS Peirce concerning the axioms of Boolean algebra (algebra with 0 and 1 only). In particular, one of the two distributive laws – say, a+bc=(a+b)(a+c) – turns out to be completely independent from the other standard axioms. In other interesting areas of group theory, Schroder noticed that the distributive axiom was not satisfied, though other axioms of Boolean algebra were. This led him to list what would be the axioms of lattices as something interesting in their own right. That is, work on axiomatizing one area, Boolean algebra, led to an interesting subset of axioms in another area, with the second axiomatization being fundamentally creative.

Dedekind (of the famous cuts), around the same time, also wrote down the axioms for a lattice while considering properties of least common multiples and greatest common divisors in number theory. He listed a set of properties held by lcms and gcds, and noted that the distributive laws did not hold for those operations. He then noted a number of other interesting mathematical structures which are described by those properties if taken as axioms: ideals, fields, points in n-dimensional space, etc. Again, this is creativity stemming from axiomatization. Dedekind was unable to find much further use for this line of reasoning in his own field, algebraic number theory, however.

Little was done on lattices until the 1930s; perhaps this is not surprising, as the set theory revolution hit math after the turn of the century, and modern uses of lattices are most common when we deal with ordered sets. Karl Menger (son of the economist, I believe) wrote a common axiomatization of projective and affine geometries, mentioning that only the 6th axiom separates the two, suggesting that further modification of that axiom may suggest interesting new geometries, a creative insight not available without axiomatization. Albert Bennett, unaware of earlier work, rediscovered the axioms of the lattice, and more interestingly listed dozens of novel connections and uses for the idea that are made clear from the axioms. Oystein Ore in the 1930s showed that the axiomatization of a lattice is equivalent to a partial order relation, and showed that it is in a sense as useful a generalization of algebraic structure as you might get. (Interesting for Paul Samuelson hagiographers: the preference relation foundation of utility theory was really cutting edge math in the late 1930s! Mathematical tools to deal with utility in such a modern way literally did not exist before Samuelson’s era.)

I skip many other interesting mathematicians who helped develop the theory, of which much more detail is available in the linked paper. The examples above, Schlimm claims, essentially filter down to three creative purposes served by axiomatics. First, axioms analogize, suggesting the similarity of different domains, leading to a more general set of axioms encompassing those smaller sets, leading to investigation of the resulting larger domain – Aristotle in Analytica Posteriora 1.5 makes precisely this argument. Second, axioms guide the discovery of similar domains that were not, without axiomatization, thought to be similar. Third, axioms suggest modification of an axiom or two, leading to a newly defined domain from the modified axioms which might also be of interest. I can see all three of these creative acts in economic areas like decision theory. Certainly for the theorist working in axiomatic systems, it is worth keeping an open mind for creative, rather than summary, uses of such a tool.

http://axiom.vu.nl/cmsone/SchlimmOnline.pdf (2009 working paper – final version in Synthese 183)

“Decentralization, Hierarchies and Incentives: A Mechanism Design Perspective,” D. Mookherjee (2006)

Lerner, Hayek, Lange and many others in the middle of the 20th century wrote exhaustively about the possibility for centralized systems like communism to perform better than decentralized systems like capitalism. The basic tradeoff is straightforward: a centralized system can account for distributional concerns, negative externalities, etc., while a decentralized system can more effectively use local information. This type of abstract discussion about ideal worlds actually has great applications even to the noncommunist world: we often have to decide between centralization and decentralization within the firm, or within the set of regulators. I am continually amazed by how often the important Hayekian argument is misunderstood. The benefit of capitalism can’t have much to do with profit incentives per se, since (almost) every employee of a modern firm is not an owner, and hence is incentivized to work hard only by her labor contract. A government agency could conceivably use precisely the same set of contracts and get precisely the same outcome as the private firm (the principal-agent problem is identical in the two cases). The big difference is thus not the profit incentive but the use of dispersed information.

Mookherjee, in a recent JEL survey, considers decentralization from the perspective of mechanism design. What is interesting here is that, if the revelation principle applies, there is no reason to use any decentralized decisionmaking system over a centralized one where the boss tells everyone exactly what they should do. That is, any contract where I subcontract to A, who then subsubcontracts to B, is weakly dominated by a contract where I get both A and B to truthfully reveal their types and then contract with each myself. The same logic applies, for example, to whether a firm should have middle management or not. This suggests that if we want to explain decentralization in firms, we have only two roads to go down: first, show conditions under which decentralization does just as well as centralization, or second, investigate cases where the revelation principle does not apply. In the context of recent discussions on this site of what “good theory” is, I would suggest that this is a great example of a totally nonpredictive theorem (revelation) being quite useful (in narrowing down potential explanations of decentralization) to a specific set of users (applied economic theorists).

(I am assuming most readers of a site like this are familiar with the revelation principle, but if not, it takes just a couple lines of math to prove. Assume agents have information or types a in a set A. If I write them a contract F, an agent of type a will report type G(a)=a’, where G is just the function that, for each a in A, chooses the report a’ maximizing u(F(a’)), with u the utility that type a gets from the outcome the contract F assigns to the report a’. The contract given to an agent of type a, then, leads to outcome F(G(a)). Now just let H be the composed contract F(G(.)). H is a “truthful” contract, since it is in each agent’s interest to simply reveal their true type. That is, the revelation principle guarantees that any outcome from a mechanism, no matter how complicated or involving how many side payments or whatever, can be replicated by a contract where each agent just states what they know truthfully to the principal.)
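In symbols, a compressed restatement of the parenthetical argument above (my notation):

```latex
% Original mechanism: contract F, with each type a \in A best-responding by reporting
G(a) \in \arg\max_{a' \in A} \; u_a\big(F(a')\big),
% so that type a ends up with outcome F(G(a)).

% Direct mechanism: define H = F \circ G. Truth-telling is then optimal under H, since
u_a\big(H(a)\big) = u_a\big(F(G(a))\big) \;\ge\; u_a\big(F(G(\hat a))\big) = u_a\big(H(\hat a)\big)
\quad \text{for all } \hat a \in A,
% and H delivers to every type a the same outcome F(G(a)) as the original contract did.
```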

First, when can decentralization do just as well as centralization even though the revelation principle applies? Consider choosing whether to (case 1) hire A, who also subcontracts some work to B, or (case 2) hire both A and B directly. If A is the only one who knows B’s production costs, then A will need to get informational rents in case 1 unless A and B produce perfectly complementary goods: without such rents, A has an incentive to grab a larger share of production by reporting that B is a high-cost producer. Indeed, A is essentially “extracting” information rents both from B and from the principal by virtue of holding information that the principal cannot access. A number of papers have shown that this problem can be eliminated if A is risk-neutral and faces no limited liability constraint (so I can tax away ex-ante information rents), contracting is top-down (I contract with A before she learns B’s costs), and A’s production quantity is observable (so I can optimally subsidize or tax this production).

More interesting is to consider when revelation fails. Mookherjee notes that the proof of the revelation principle requires 1) noncollusion among agents, 2) absence of communication costs, information processing costs, or contract complexity costs, and 3) no possibility of ex-post contract renegotiation by the principal. I note here that both the present paper and the hierarchy literature in general tend to shy away from ongoing relationships, but these are obviously relevant in many cases, and we know that in dynamic mechanism design the revelation principle will not hold. The restricted message space literature is still rather limited, mainly because mechanism design theory at this point does not give any simple results like the revelation principle when the message space is restricted. It’s impossible to go over every result Mookherjee describes – this is a survey paper after all – but here is a brief summary. Limited message spaces are not a panacea, since the restrictions required for limited message spaces to motivate decentralization, and particularly middle management, are quite strong. Collusion among agents does offer some promise, though. Imagine A and B are next to each other on an assembly line, and B can see A’s effort. The principal just sees whether the joint production is successful or not. For a large range of parameters, Baliga and Sjostrom (1998) proved that delegation is optimal: for example, pay B a wage conditional on output, and let him and A negotiate on the side how to divvy up that payment.

Much more work on the design of organizations is needed, that is for sure.

http://people.bu.edu/dilipm/publications/jeldecsurvrev.pdf (Final working paper – published in June 2006 JEL)

“The F-Twist and the Methodology of Paul Samuelson,” S. Wong (1973)

When reading (many) economists’ take on methodology, I feel the urge to mutter “Popper!” in the tone that Jerry Seinfeld used when Newman walked in the room. His long and wrongheaded shadow casts itself darkly across contemporary economics, and we are the worse because of it. And Popper’s influence is mediated through the two most famous essays on methodology written by economists: Friedman’s 1953 Positive Economics paper, and Paul Samuelson’s response. I linked to the paper in this post, from a 1973 AER, because it gives a nice, simple summary of both (all economists should at least know the outlines of this debate), and argues against Samuelson – this is rare indeed, since it is quite a struggle to find anyone with a philosophic background who supports Friedman’s take. Easier is to find a philosopher who thinks both Samuelson and Friedman are mistaken: more on this shortly. I should note that this post will, in all likelihood, be the only time anything negative will be said about Samuelson (aka, the GOAT).

Friedman is basically an instrumentalist. This means that he sees theories in economics as generators of predictions. Good theories predict what we care about well. Bad theories predict that poorly. Wrong assumptions do not matter, only wrong predictions about what we care about do. (A quick note: Kevin Hoover argues that we should think of Friedman as a “causal realist” rather than a pure instrumentalist. To Hoover, Friedman’s methodological stance is that economists should look for the 1) true 2) causal mechanisms underlying 3) observational phenomena. He is firmly against a priori axioms, and also thinks that the social world is so complex that theories are by necessity limited; these two facts mean that the relevant “assumptions” of a theory are generated in a cyclic process by which data gives rise to assumptions which are held until we get better ones. Testing assumptions is done by testing the data. There’s something to this point, but I think Hoover is missing just how much Friedman emphasizes prediction as the goal in his ’53 essay: prediction is everywhere in that essay!)

Samuelson responded with his famous “F-Twist,” the F being Friedman. Imagine that assumptions A in a theory B lead to conclusions C, where C-, a subset of C, is what we care about. Imagine that C- is true. If that is the case, then a better theory is B-, which uses assumptions A-, a subset of A, to make only the predictions C-. In such a case, B- is a representation theorem, and A- and C- are logically equivalent: A- implies C- and C- implies A-. But then how can one say the realism of assumptions does not matter while the realism of conclusions does? When assumptions and conclusions are logically equivalent, false assumptions necessarily imply false conclusions. Samuelson, like many modern decision theorists, sees economics as a tool for description only. We show that two sets of statements – say, SARP-satisfying revealed preference and ordinal utility theory – are logically equivalent. The best theories are the simple statements that are logically equivalent to some complex social or economic phenomena. Theory is a method of filing. Lawrence Boland, in a few papers, has argued that Samuelson and Friedman are not that far apart. They are both “conventionalists” who do not think the best theory is the “true” one, but rather judge theories based on some other convention, prediction in Friedman’s case and simplicity in Samuelson’s.
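Schematically, and compressing the argument in my own notation rather than Wong’s or Samuelson’s:

```latex
% Friedman: a theory built on assumptions A yields conclusions C, of which only the
% subset C^- that we care about need hold.
A \;\Rightarrow\; C, \qquad C^- \subseteq C .

% Samuelson's F-twist: take the weakest assumptions A^- \subseteq A delivering exactly C^-.
% In that minimal theory, assumptions and conclusions are logically equivalent,
A^- \;\Longleftrightarrow\; C^- ,
% so one cannot maintain that C^- is true while the assumptions A^- are false.
```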

Wong’s point in this essay is that Samuelson’s account ignores difficulties in parsing theories. Going from A to A- can be a tricky thing indeed! And, in Friedman and Popper’s worlds, it is not even that important, since the relevant predictions won’t change. And once you think theory should, in practice, be more than just a logical equivalence, you can see that theories have much more content than their assumptions alone: the truth of A can imply the truth of C, but the truth of C does not necessarily imply the truth of A. To Wong (and presumably Friedman), statements like “any deductive proposition is tautological; we can only say that some are less obvious or less interesting than others” (Solow) miss the point that logical equivalence is not the only theoretical relation.

Now pure instrumentalism is pretty ridiculous. As Harry Johnson noted, “the demand for clarification of the mechanism by which results can be explained is contrary to the methodology of positive economics with its ‘as if’ approach.” Restricting the use of theory to making good predictions goes against everything writers on the methodology of social science have argued since the 19th century! In particular, there are three huge problems. First, many models can predict the past well, and so will be judged equally by an instrumentalist at present. How shall I choose between them? Second, in a stochastic world, all theories will make “wrong” predictions – see Lakatos. Third, “lightning never strikes twice” in social science. The social world is ever changing. To the extent that we think there are deep tendencies guiding human behavior, or the concatenation of human behavior in a market, theory can elucidate the implications of such tendencies even if nothing is explicitly predicted. Equilibrium concepts in games can help guide your thinking about all sorts of social situations without presuming to “predict” what will happen in complex social environments. I’ve got one more methodology post coming shortly, from the Cartwright/Giere style of philosophy of science, which discusses why even pure scientists should (and do!) reject the “one true model” goal, or the search for a true (in a realist sense) description of the world. The one-paragraph argument against the one-true-modelers is Borges’ great parable “Del rigor en la ciencia”.

http://www2.warwick.ac.uk/fac/soc/philosophy/intranet/modules/honours/ph331/polecon/f-twist_wong.pdf (Final AER copy)

“How (Not) to Do Decision Theory,” E. Dekel & B. Lipman (2009)

Economics has a very strong methodological paradigm, but economists on the whole are incapable of expressing what it is. And this can get us in trouble. Chris Sims and Tom Sargent have both been shooting around the media echo chamber the last week because they have, by and large, refused to answer questions like “What will happen to the economy?” or “What will be the impact of policy X?” Not having an answer is fine, of course: I’m sure Sims would gladly answer any question about the econometric techniques he pioneered, but not being an expert on the details of policy X, he doesn’t feel it’s his place to give (relatively) uninformed comment on such a policy. Unfortunately, parts of the media take his remarks as an excuse to take potshots at “useless” mathematical formalization and axiomatization. What, then, is the point of our models?

Dekel and Lipman answer this question with respect to the most theoretical of all economics: decision theory. Why should we care that, say, the Savage axioms imply subjective expected utility maximization? We all (aside from Savage, perhaps) agree that the axioms are not always satisfied in real life, nor should they necessarily be satisfied on normative grounds. Further, the theory, strictly speaking, makes few if any predictions that the statement “People maximize subjective expected utility” does not.

I leave most of the details of their exposition to the paper, but I found the following very compelling. It concerns Gilboa-Schmeidler preferences. These preferences give a utility function where, in the face of ambiguity about probabilities, agents always assume the worst. Dekel and Lipman:

The importance of knowing we have all the implications is particularly clear when the story of the model is potentially misleading about its predictions. For example, the multiple priors model seems to describe an extraordinarily pessimistic agent. Yet the axioms that characterize behavior in this model do not have this feature. The sufficiency theorem ensures that there is not some unrecognized pessimism requirement.

And this is the point. You might think, seeing only the utility representation, that Gilboa-Schmeidler agents are super pessimistic. This turns out not to be necessary at all – the axioms give seemingly mild conditions on choice under ambiguity which lead to such seeming pessimism. Understanding this gives us a lot of insight into what might be going on when we see Ellsberg-style pessimism in the face of ambiguity.
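For concreteness, the multiple-priors (maxmin expected utility) representation being discussed is the standard Gilboa-Schmeidler form, written here in textbook notation rather than Dekel and Lipman’s:

```latex
% Preferences over acts f are represented by worst-case expected utility
% over a closed, convex set C of priors on the state space S:
V(f) \;=\; \min_{p \in C} \int_S u\big(f(s)\big)\, dp(s) .
```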

My problem with Dekel and Lipman here, though, is that, like almost all economists, they are implicitly infected by the most damaging economics article ever written: Milton Friedman’s 1953 Methodology of Positive Economics. That essay roughly says that the goal of an economic model is not to be true, but to predict within a limited sphere of things we want to predict. Such a belief suggests that we can “test” models by checking whether predictions in their given sphere are true. I think both of these claims are totally contrary to how we should use models in economics, and to how we actually use them; if you like appeals to authority, I should note that philosophers of social science are as dismayed as I am by Friedman ’53.

So how should we judge and use models? My standard is that a model is good if end users of the model find that it helps guide their intuition. You might also say that a model is good if it is “subjectively compelling.” Surely prediction of the future is a nice property a model might have, but it is by no means necessary, nor does “refuting” the predictions implicit in a model mean the model is worthless. What follows is a list of what I would consider subjectively useful uses of a model, accepting that how you weight these uses is entirely subjective, but keeping in mind that our theory has end users and we ought to keep some guess at how the model will be used in mind when we write it:

1) Dealing with unforeseen situations. The vast majority of social situations that could be modeled by an economist will not be so modeled. That is, we don’t even claim to make predictions in essentially every situation. There are situations that are inconceivable at the time a paper is written – who knows what the world will care about in 50 years. Does this mean economics is useless in these unforeseen situations? Of course not. Theoretical models can still be useful: Sandeep Baliga has a post at Cheap Talk today where he gains intuition into Pakistan-US bargaining from a Stiglitz-Shapiro model of equilibrium unemployment. The thought experiments, the why of the model, are as relevant, if not more relevant, than the consequence/prediction/etc. of the model. Indeed, look at the introduction – often a summary of results – of your favorite theory paper. Rarely are the theorems stated alone. Instead, the theory and the basic intuition behind the proof are usually given. If we knew a theorem to be true given its assumptions, but the proof was in a black box, the paper would be judged much less compelling by essentially all economists, even though such a paper could “predict” equally well as a paper with proofs.

2) Justifying identification restrictions and other unfalsifiable assumptions in empirical work. Sometimes these are trivial and do not need to be formally modeled. Sometimes less so: I have an old note, which I’ve mentioned here a few times, that gives an example from health care. A paper found that hospital report cards that were mandated at a subset of hospitals and otherwise voluntary were totally ineffective in changing patient or hospital behavior. A simple game theoretic model (well known in reputational games) shows that such effects are discontinuous: I need a sufficiently large number of patients to pay attention to the report cards before I (discontinuously) begin to see real effects. Such theoretical intuition guides the choice of empirical model in many, many cases.

3) Counterfactual analysis. By assumption, no “predictions” can or will ever be checked in counterfactual worlds, yet counterfactual analysis is the basis of a ton of policy work. Even if you care about predictions, somehow defined, on a counterfactual space, surely we agree that such predictions cannot be tested. Which brings us to…

4) Model selection. Even within the class of purely predictive theories, it is trivial to create theories which “overfit” the past such that they match past data perfectly. How do I choose among the infinitely large class of models which predict all data seen thus far perfectly? “Intuition” is the only reasonable answer: the explanations in Model A are more compelling than those in Model B. And good economic models can help guide this intuition in future papers. The Quine-Duhem thesis is relevant here as well: when a model I have is “refuted” by new data, what exactly was wrong with the explanation proposed? Quine-Duhem essentially says there is no procedure that will answer that question. (I only write this because there are some Popperians left in economics, despite the fact that every philosopher of science after Popper has pointed out how ridiculous his model of how science should work is: it says nothing about prediction in a stochastic world, it says nothing about how to select what questions to work on, etc.)

Obviously these aren’t the only non-predictive uses of theory – theory helps tie the literature together, letting economics as a science progress rather than stand as a series of independent papers; theory can serve to check qualitative intuition, since many seemingly obvious arguments turn out to be much less obvious when written down formally (more on this point in Dekel and Lipman). Nonetheless they are enough, I hope, to make the point that prediction is but one goal among many in good social science modeling. I think the Friedman idea about methodology would be long gone in economics if graduate training required the type of methodology/philosophy course, taught by faculty well read in philosophical issues, that every other social and policy science requires. Would that it were so!

http://people.bu.edu/blipman/Papers/dekel-lipman2.pdf (2009 Working Paper; final version in the 2010 Annual Review of Economics)

“What Does it Mean to Say that Economics is Performative?,” M. Callon (2007)

With the last three posts being high mathematical-economic theory, let’s go 180 degrees and look at this recent essay – the introduction of a book, actually – by Michel Callon, one of the deans of actor-network theory (along with Bruno Latour, of course). I know what you’re thinking: a French sociologist of science who thinks objects have agency? You’re probably running away already! But stay; I promise it won’t be so bad. And as Callon mentions, sociologists of science and economic theory have a long connection: Robert K. Merton, legendary author of The Sociology of Science, is the father of Robert C. Merton, the Nobel winning economist.

The concept here is performativity in economics. An essay by William Baumol and a coauthor in the JEL tried to examine whether economic theory had made any major contributions. Of the 9 theories they studied (marginalism, Black-Scholes, etc.), only a couple could reasonably be said to be invented and disseminated by academic economists. But the performativity view does not frame the question that way. Performativity suggests that, rather than theories being true or false, they are accepted or not accepted, and there are many roles to be played in this acceptance process by humans and non-humans alike. For example, the theory of Black-Scholes could be accepted in academia, but to be performed by a broader network, certain technologies were needed (frequent stock quotes), market participants needed to believe the theory, regulators needed to be persuaded (that, for one, options are not just gambling); this process is reflexive and the way the theory is performed feeds back into the construction of novel theories. A role exists for economists as scientists across this entire performance.

The above does not simply mean that beliefs matter, or that economic theories are “performed” as self-fulfilling prophecies. Callon again: “The notion of expression is a powerful vaccination against a reductionist interpretation of performativity; a reminder that performativity is not about creating but about making happen.” Not all potential self-fulfilling prophecies are equal: traders did in fact use Black-Scholes, but they never began to use sunspots to coordinate. Sometimes theories outside academia are performed in economics: witness financial chartism. It’s not about “truth” or “falsehood”: Callon’s school of sociology/anthropology is fundamentally agnostic.

There is an interesting link between the jargon of the actor-network theory literature and standard economics. I think you can see it in the following passage:

“In the paper world to which it belongs, marginalist analysis thrives. All it needs are some propositions on decreasing returns, the convexity of utility curves, and so forth. Transported into an electricity utility (for example Electricité de France), it needs the addition of time-of-day meters set up wherever people consume electricity and without which calculations are impossible; introduced into a private firm, it requires analytical accounting and a system of recording and cost assessment that prove to be hardly feasible. This does not mean that marginalist analysis has become false. As everyone knows, it is still true in (most) universities.”

Economists surely see a quote like the above and think, surely there is something more to this theory of performance than information economics and technological constraints. But really there isn’t. Rather, we economists generally do not model why information is the way it is, or why certain agents get certain signals. A lot of this branch of sociology should be read as an investigation into how agents (including nonhumans, such as firms) get, or search for, information, particularly to the extent that such a search is reflexive to a new economic theory being proposed.

http://halshs.archives-ouvertes.fr/docs/00/09/15/96/PDF/WP_CSI_005.pdf (July 2006 working paper – final version published in MacKenzie et al (Eds.), Do Economists Make Markets?, Princeton University Press)