Category Archives: Methodology

“Aggregation in Production Functions: What Applied Economists Should Know,” J. Felipe & F. Fisher (2003)

Consider a firm that takes heterogeneous labor and capital inputs L1, L2… and K1, K2…, using these to produce some output Y. Define a firm production function Y=F(K1, K2…, L1, L2…) as the maximal output that can be produced using the given vector of inputs – and note the implicit optimization condition in that definition, which means that production functions are not simply technical relationships. What conditions are required to construct an aggregated production function Y=F(K,L) for the firm, or, more broadly, to aggregate across firms into an economy-wide production function Y=F(K,L)? Note that the question is not about the definition of capital per se, since defining “labor” is equally problematic when man-hours are clearly heterogeneous, and this question is also not about the more general capital controversy worries, like reswitching (see Samuelson’s champagne example) or the dependence of the return to capital on the distribution of income which, itself, depends on the return to capital.
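To see the optimization baked into even the single-firm definition, here is a minimal numerical sketch (my own, not from the paper), with made-up Cobb-Douglas techniques: F(K1, K2, L) is the maximum output over all internal allocations of the single labor input between the two capital goods.

```python
import numpy as np

def F(K1, K2, L, grid=10001):
    """Maximal output from (K1, K2, L), optimizing over the internal labor split."""
    l1 = np.linspace(0.0, L, grid)                                   # labor assigned to the first technique
    output = (K1 ** 0.5) * (l1 ** 0.5) + (K2 ** 0.3) * ((L - l1) ** 0.7)
    return output.max()                                              # the "production function" value

print(round(float(F(K1=4.0, K2=2.0, L=10.0)), 3))
```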

(A brief aside: on that last worry, why the Cambridge UK types and their modern day followers are so worried about the circularity of the definition of the interest rate, yet so unconcerned about the exact same property of the object we call “wage”, is quite strange to me, since surely if wages equal marginal product, and marginal product in dollars is a function of aggregate demand, and aggregate demand is a function of the budget constraint determined by wages, we are in an identical philosophical situation. I think it’s pretty clear that the focus on “r” rather than “w” is because of the moral implications of capitalists “earning their marginal product” which are less than desirable for people of a certain political persuasion. But I digress; let’s return to more technical concerns.)

It turns out, and this should be fairly well-known, that the conditions under which factors can be aggregated are ridiculously stringent. If we literally want to add up K or L when firms use different production functions, the condition (due to Leontief) is that the marginal rate of substitution between different types of factors in one aggregate, e.g. capital, does not depend on the level of factors outside that aggregate, e.g. labor. Surely this is a condition that rarely holds: in an example due to Solow, how much I want to use different types of trucks will depend on how much labor I have at hand. A follow-up by Nataf in the 1940s is even more discouraging. Assume every firm uses homogeneous labor, every firm uses capital which, though homogeneous within each firm, differs across firms, and every firm has identical constant returns to scale production technology. When can I now write an aggregate production function Y=F(K,L) summing up the capital in each firm K1, K2…? That aggregate function exists if and only if every firm’s production function is additively separable in capital and labor (in which case the aggregation function is pretty obvious)! Pretty stringent, indeed.

Fisher helps things just a bit in a pair of papers from the 1960s. Essentially, he points out that we don’t want to aggregate for all vectors K and L, but rather we need to remember that production functions measure the maximum output possible when all inputs are used most efficiently. Competitive factor markets guarantee that this assumption will hold in equilibrium. That said, even assuming only one type of labor, efficient factor markets, and a constant returns to scale production function, aggregation is possible if and only if every firm has the same production function Y=F(b(v)K(v),L), where v denotes a given firm and b(v) is a measure of how efficiently capital is employed in that firm. That is, aside from capital efficiency, every firm’s production function must be identical if we want to construct an aggregate production function. This is somewhat better than Nataf’s result, but still seems highly unlikely across a sector (to say nothing of an economy!).

Why, then, do empirical exercises using, say, aggregate Cobb-Douglas seem to give such reasonable parameters, even though the above theoretical results suggest that parameters like “aggregate elasticity of substitution between labor and capital” don’t even exist? That is, when we estimate elasticities or total factor productivities from Y=AK^a*L^b, using some measure of aggregated capital, what are we even estimating? Two things are going on. First, Nelson and Winter, in their seminal book, generate aggregate data which can be fitted almost perfectly by Cobb-Douglas even though their model is completely evolutionary and does not even involve maximizing behavior by firms, so a “good fit” alone is, and this should go without saying, not strong evidence in support of a model. Second, since ex-post output Y must equal the wage bill plus capital payments plus profits, Felipe notes that this accounting identity can be algebraically manipulated into the form Y=AF(K,L), where the form of F depends on the behavior of the factor shares. That is, the good fit of Cobb-Douglas or CES can simply reflect an accounting identity even when nothing is known about micro-level elasticities or similar.
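As an illustration of that last point, here is a minimal simulation (mine, not Felipe and Fisher's): output is generated purely from the accounting identity, with no aggregate production function anywhere, yet an “aggregate Cobb-Douglas” regression fits nearly perfectly and the estimated “elasticities” track the factor shares. All series and parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
t = np.arange(T)

# Arbitrary input and factor-price paths; no production function is ever specified.
L = np.exp(0.010 * t + 0.02 * rng.standard_normal(T))   # labor
K = np.exp(0.030 * t + 0.02 * rng.standard_normal(T))   # "aggregate" capital
w = np.exp(0.020 * t + 0.01 * rng.standard_normal(T))   # wage
r = np.exp(0.005 * t + 0.01 * rng.standard_normal(T))   # return to capital (profits folded in)

Y = w * L + r * K   # nothing but the income accounting identity

# "Estimate" an aggregate Cobb-Douglas with a time trend standing in for TFP growth.
X = np.column_stack([np.ones(T), t, np.log(K), np.log(L)])
beta, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
resid = np.log(Y) - X @ beta

print("capital 'elasticity':", round(beta[2], 2), "vs. average capital share:", round(float((r * K / Y).mean()), 2))
print("labor   'elasticity':", round(beta[3], 2), "vs. average labor share:  ", round(float((w * L / Y).mean()), 2))
print("R^2:", round(float(1 - resid.var() / np.log(Y).var()), 4))
```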

So what to do? I am not totally convinced we should throw out aggregate production functions – it surely isn’t a coincidence that Solow residuals for TFP are estimated to be high in places where our intuition says technological change has been rapid. Because of results like this, it doesn’t strike me that aggregate production functions are measuring arbitrary things. However, if we are using parameters from these functions to do counterfactual analysis, we really ought to know exactly what approximations or assumptions are being baked into the cake, and it doesn’t seem that we are quite there yet. Until we are, a great deal of care should be taken in assigning interpretations to estimates based on aggregate production models. I’d be grateful for any pointers in the comments to recent work on this problem.

Final published version (RePEc IDEAS). The “F. Fisher” on this paper is the former Clark Medal winner and well-known IO economist Franklin Fisher; rare is it to find a nice discussion of capital issues written by someone who is firmly part of the economics mainstream and completely aware of the major theoretical results from “both Cambridges”. Tip of the cap to Cosma Shalizi for pointing out this paper.

“Epistemic Game Theory,” E. Dekel & M. Siniscalchi (2014)

Here is a handbook chapter that is long overdue. The theory of epistemic games concerns a fairly novel justification for solution concepts under strategic uncertainty – that is, situations where what I want to do depends on what other people do, and vice versa. We generally analyze these as games, and have a bunch of equilibrium (Nash, subgame perfection, etc.) and nonequilibrium (Nash bargain, rationalizability, etc.) solution concepts. So which should you use? I can think of four classes of justification for a game solution. First, the solution might be stable: if you told each player what to do, no one person (or sometimes group) would want to deviate. Maskin mentions this justification is particularly worthy when it comes to mechanism design. Second, the solution might be the outcome of a dynamic selection process, such as evolution or a particular learning rule. Third, the solution may be justified by certain axiomatic first principles; the Shapley value is a good example in this class. The fourth class, however, is the one we most often teach students: a solution concept is good because it is justified by individual behavior assumptions. Nash, for example, is often thought to be justified by “rationality plus correct beliefs”. Backward induction is similarly justified by “common knowledge of rationality at all states.”

Those are informal arguments, however. The epistemic games (or sometimes, “interactive epistemology”) program seeks to formally analyze assumptions about the knowledge and rationality of players and what they imply for behavior. There remain many results we don’t know (for instance, I asked around and could only come up with one paper on the epistemics of coalitional games), but the results proven so far are actually fascinating. Let me give you three: rationality and common belief in rationality imply that rationalizable strategies are played, the requirements for Nash are different depending on how many players there are, and backward induction is surprisingly difficult to justify on epistemic grounds.

First, rationalizability. Take a game and remove any strictly dominated strategy for each player. Now in the reduced game, remove anything that is strictly dominated. Continue doing this until nothing is left to remove. The remaining strategies for each player are “rationalizable”. If players can hold any belief they want about what potential “types” opponents may be – where a given (Harsanyi) type specifies what an opponent will do – then as long as we are all rational, we all believe the opponents are rational, we all believe the opponents all believe that we all are rational, ad infinitum, the only possible outcomes to the game are the rationalizable ones. Proving this is actually quite complex: if we take as primitive the “hierarchy of beliefs” of each player (what do I believe my opponents will do, what do I believe they believe I will do, and so on), then we need to show that any hierarchy of beliefs can be written down in a type structure, then we need to be careful about how we define “rational” and “common belief” on a type structure, but all of this can be done. Note that many rationalizable strategies are not Nash equilibria.
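As a concrete companion to that procedure, here is a minimal sketch of iterated deletion of strictly dominated strategies on a made-up 3x2 game. For simplicity it only checks domination by pure strategies; full rationalizability also requires checking domination by mixed strategies, which needs a linear program.

```python
import numpy as np

# Made-up example game: row player payoffs A, column player payoffs B.
A = np.array([[3, 1],
              [2, 4],
              [1, 5]])
B = np.array([[2, 0],
              [3, 1],
              [1, 0]])

rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))

def dominated(payoff, own, other, player):
    """Strategies in `own` strictly dominated by some other pure strategy in `own`."""
    out = set()
    for s in own:
        for d in own:
            if d == s:
                continue
            if player == "row" and all(payoff[d, c] > payoff[s, c] for c in other):
                out.add(s)
            if player == "col" and all(payoff[r, d] > payoff[r, s] for r in other):
                out.add(s)
    return out

while True:
    dr = dominated(A, rows, cols, "row")
    dc = dominated(B, cols, rows, "col")
    if not dr and not dc:
        break
    rows = [s for s in rows if s not in dr]
    cols = [s for s in cols if s not in dc]

print("surviving rows:", rows)   # [0]
print("surviving cols:", cols)   # [0]
```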

So what further assumptions do we need to justify Nash? Recall the naive explanation: “rationality plus correct beliefs”. Nash takes us from rationalizability, where play is based on conjectures about opponents’ play, to an equilibrium, where play is based on correct conjectures. But which beliefs need to be correct? With two players and no uncertainty, the result is actually fairly straightforward: if our first order beliefs are (f,g), we mutually believe our first order beliefs are (f,g), and we mutually believe we are rational, then beliefs (f,g) constitute a Nash equilibrium. You should notice three things here. First, we only need mutual belief (each of us believes X), not common belief (the full infinite hierarchy), in rationality and in our first order beliefs. Second, the result is that our first-order beliefs are that a Nash equilibrium strategy will be played by all players; the result is about beliefs, not actual play. Third, with more than two players, we are clearly going to need assumptions about how my beliefs about our mutual opponent are related to your beliefs; that is, Nash will require more, epistemically, than “basic strategic reasoning”. Knowing these conditions can be quite useful. For instance, Terri Kneeland at UCL has investigated experimentally the extent to which each of the required epistemic conditions is satisfied, which helps us to understand situations in which Nash is harder to justify.

Finally, how about backward induction? Consider a centipede game. The backward induction rationale is that if we reached the final stage, the final player would defect, hence if we are in the second-to-last stage I should see that coming and defect before her, hence if we are in the third-to-last stage she will see that coming and defect before me, and so on. Imagine, however, that player 1 does not defect in the first stage. What am I to infer? Was this a mistake, or am I perhaps facing an irrational opponent? Backward induction requires that I never make such an inference, and hence that I defect in stage 2.
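For concreteness, here is a minimal backward-induction solver for a linear centipede game (my own parametrization, purely illustrative): at stage t the mover can “take,” receiving t+1 while the other player receives t-1, or pass; if nobody takes through stage T, both receive T.

```python
def centipede_backward_induction(T):
    """Payoffs (mover at stage 1, other player) under backward induction."""
    value = (T, T)                            # continuation payoffs if the last mover were to pass
    for t in range(T, 0, -1):
        take = (t + 1, t - 1)                 # take now
        wait = (value[1], value[0])           # pass: roles swap at the next stage
        value = take if take[0] >= wait[0] else wait
    return value

print(centipede_backward_induction(6))   # (2, 0): the first mover takes immediately
```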

Here is a better justification for defection in the centipede game, though. If player 1 doesn’t defect in the first stage, then I “try my best” to retain a belief in his rationality. That is, if it is possible for him to have some belief about my actions in the second stage which rationally justified his first stage action, then I must believe that he holds those beliefs. For example, he may believe that I believe he will continue again in the third stage, hence that I will continue in the second stage, hence he will continue in the first stage and plan to defect in the third stage. Given his beliefs about me, his actions in the first stage were rational. But if that plan to defect in stage three were his justification, then I should defect in stage two. He realizes I will make these inferences, hence he will defect in stage 1. That is, the backward induction outcome is justified by forward induction. Now, it can be proven that rationality and common “strong belief in rationality” as loosely explained above, along with a suitably rich type structure for all players, generate the backward induction outcome. But the epistemic justification is completely based on the equivalence between forward and backward induction under those assumptions, not on any epistemic justification for backward induction reasoning per se. I think that’s a fantastic result.

Final version, prepared for the new Handbook of Game Theory. I don’t see a version on RePEc IDEAS.

Dale Mortensen as Micro Theorist

Northwestern’s sole Nobel Laureate in economics, Dale Mortensen, passed away overnight; he remained active as a teacher and researcher over the past few years, though I’d been hearing word through the grapevine about his declining health over the past few months. Surely everyone knows Mortensen the macroeconomist for his work on search models in the labor market. There is something odd here, though: Northwestern has really never been known as a hotbed of labor research. To the extent that researchers rely on their coworkers to generate and work through ideas, how exactly did Mortensen become such a productive and influential researcher?

Here’s an interpretation: Mortensen’s critical contribution to economics is as the vector by which important ideas in micro theory entered real world macro; his first well-known paper was literally published in a 1970 book called “Microeconomic Foundations of Employment and Inflation Theory.” Mortensen had the good fortune to be a labor economist working in the 1970s and 1980s at a school with a frankly incredible collection of microeconomic theorists; during those two decades, Myerson, Milgrom, Loury, Schwartz, Kamien, Judd, Matt Jackson, Kalai, Wolinsky, Satterthwaite, Reinganum and many others were associated with Northwestern. And such a concentration was rare! Game theory is everywhere today, and pioneers in that field (von Neumann, Nash, Blackwell, etc.) were active in the middle of the century. Nonetheless, by the late 1970s, game theory in the social sciences was close to dead. Paul Samuelson, the great theorist, wrote essentially nothing using game theory between the early 1950s and the 1990s. Quickly scanning the American Economic Review from 1970-1974, I find, at best, one article per year that can be called game-theoretic.

What is the link between Mortensen’s work and developments in microeconomic theory? The essential labor market insight of search models (an insight which predates Mortensen) is that the number of hires and layoffs is substantial even in the depths of a recession. That is, the rise in the unemployment rate cannot simply be because the marginal revenue of potential workers is always less than their cost, since huge numbers of the unemployed are hired during recessions (even as others are fired). Therefore, a model which explains changes in this churn, rather than just the aggregate unemployment rate, seems qualitatively important if we are to develop policies to address unemployment. This suggests that there might be some use in a model where workers and firms search for each other, perhaps with costs or other frictions. Early models along this line by Mortensen and others were generally one-sided and hence non-strategic: they had the flavor of optimal stopping problems.
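To give a flavor of that optimal-stopping structure, here is a minimal McCall-style sketch (my own toy numbers, not Mortensen's model): an unemployed worker draws wage offers from a known distribution each period and accepts anything above a reservation wage, computed by value function iteration.

```python
import numpy as np

beta, b = 0.95, 1.0                                # discount factor, unemployment income
wages = np.linspace(0.5, 3.0, 60)                  # possible wage offers
probs = np.full(wages.size, 1.0 / wages.size)      # uniform offer distribution

v = wages / (1 - beta)                             # initial guess: value of an offer in hand
for _ in range(1000):
    accept = wages / (1 - beta)                    # take the job and keep it forever
    reject = b + beta * probs @ v                  # wait and draw a new offer next period
    v_new = np.maximum(accept, reject)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

reservation_wage = wages[np.argmax(accept >= reject)]   # lowest acceptable offer
print("reservation wage:", round(float(reservation_wage), 3))
```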

Unfortunately, Diamond in a 1971 JET pointed out that Nash equilibrium in two-sided search leads to a conclusion that all workers are paid their reservation wage: all employers pay the reservation wage, workers believe this to be true hence do not engage in costly search to switch jobs, hence the belief is accurate and nobody can profitably deviate. Getting around the “Diamond Paradox” involved enriching the model of who searches when and the extent to which old offers can be recovered; Mortensen’s work with Burdett is a nice example. One also might ask whether laissez faire search is efficient or not: given the contemporaneous work of micro theorists like Glenn Loury on mathematically similar problems like the patent race, you might imagine that efficient search is unlikely.

Beyond the efficiency of matches themselves is the question of how to split surplus. Consider a labor market. In the absence of search frictions, Shapley (first with Gale, later with Shubik) had shown in the 1960s and early 1970s the existence of stable two-sided matches even when “wages” are included. It turns out these stable matches are tightly linked to the cooperative idea of a core. But what if this matching is dynamic? Firms and workers meet with some probability over time. A match generates surplus. Who gets this surplus? Surely you might imagine that the firm should have to pay a higher wage (more of the surplus) to workers who expect to get good future offers if they do not accept the job today. Now we have something that sounds familiar from non-cooperative game theory: wage is based on the endogenous outside options of the two parties. It turns out that noncooperative game theory had very little to say about bargaining until Rubinstein’s famous bargaining game in 1982 and the powerful extensions by Wolinsky and his coauthors. Mortensen’s dynamic search models were a natural fit for those theoretic developments.
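As a small illustration of the kind of result that became available, here is the Rubinstein (1982) alternating-offers split in code (my own toy check, not from Mortensen's papers): with per-period discount factors d1 and d2, the first proposer's equilibrium share of a unit surplus is (1 - d2)/(1 - d1*d2), so the division tracks the players' relative patience.

```python
def rubinstein_share(d1, d2):
    """Equilibrium surplus share of the first proposer, with discount factors d1, d2."""
    return (1 - d2) / (1 - d1 * d2)

print(round(rubinstein_share(0.90, 0.90), 3))   # 0.526: close to an even split with equal patience
print(round(rubinstein_share(0.90, 0.99), 3))   # 0.092: a much more patient responder gets most of the pie
```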

I imagine that when people hear “microfoundations”, they have in mind esoteric calibrated rational expectations models. But microfoundations in the style of Mortensen’s work are much more straightforward: we simply cannot understand even the qualitative nature of counterfactual policy in the absence of models that account for strategic behavior. And thus there is a role even for high micro theory, which investigates the nature and uniqueness of strategic outcomes (game theory) and the potential for a planner to improve welfare through alternative rules (mechanism design). Powerful tools indeed, and well used by Mortensen.

“Price Formation of Fish,” A.P. Barten & L.J. Bettendorf (1989)

I came across this nice piece of IO in a recent methodological book by John Sutton, which I hope to cover soon. Sutton recalls Lionel Robbins’ famous Essay on the Nature and Significance of Economic Science. In that essay, Robbins claims the goal of the empirically-minded economist is to estimate stable (what we now call “structural”) parameters whose stability we know a priori from theory. (As an aside, it is tragic that Hurwicz’ 1962 “On the Structural Form of Interdependent Systems”, from which Robbins’ idea gets its modern treatment, is not freely available online; all I see is a snippet from the conference volume it appeared in here). Robbins gives the example of an empiricist trying to estimate the demand for haddock by measuring prices and quantities each day, controlling for weather and the like, and claiming that the average elasticity has some long-run meaning; this, he says, is a fool’s errand.

Sutton points out how interesting that example is: if anything, fish are an easy good to examine! They are a good with easy-to-define technical characteristics sold in competitive wholesale markets. Barten and Bettendorf point out another interesting property: fish are best described by an inverse demand system, where consumers determine the price paid as a function of the quantity of fish in the market rather than vice versa, since quantity in the short run is essentially fixed. To the theorist, there is no difference between demand and inverse demand, but to the empiricist, that little error term must be added to the exogenous variables if we are to handle statistical variation correctly. Any IO economist worth their salt knows how to estimate common demand systems like AIDS, but how should we interpret parameters in inverse demand systems?

Recall that, in theory, Marshallian demand is homogeneous of degree zero in total expenditure and prices. Using this homogeneity, we have that the quantity demand vector q is a function of P, the fraction of total expenditure paid for each unit of each good. Inverting that function gives P as a function of q. Since inverse demand is the result of a first-order condition from utility maximization, we can restate P as a function of marginal utilities and quantities. Taking the derivative of P, with some judicious algebra, one can state the (normalized) inverse demand as the sum of moves along an indifference surface and moves across indifference surfaces; in particular, dP=gP’dq+Gdq, where g is a scalar and G is an analogue of the Slutsky matrix for inverse demand, symmetric and negative semidefinite. All we need to do now is difference our data and estimate that system (although the authors do a bit more judicious algebra to simplify the computational estimation).

One more subtle step is required. When we estimate an inverse demand system, we may wish to know how substitutable or complementary any two goods are. Further, we want such an estimate to be invariant to arbitrary monotone increasing transformations of the underlying utility function (the form of which is not assumed here). It turns out that Allais (in his 1943 text on “pure economics” which, as far as I know, is yet to be translated!) has shown how to construct just such a measure. Yet another win for theory, and for Robbins’ intuition: it is hopeless to estimate cross-price elasticities or similar measures of substitutability atheoretically, since these parameters are determined simultaneously. It is only as a result of theory (here, nothing more than “demand comes from utility maximizers” is used) that we can even hope to tease out underlying parameters like these elasticities. The huge number of “reduced-form” economists these days who do not understand what the problem is here really need to read through papers of this type; atheoretical training is, in my view, a serious danger to the grand progress made by economics since Haavelmo and Samuelson.

It is the methodology that is important here; the actual estimates are secondary. But let’s state them anyway: the fish sold in the Belgian markets are quite own-price elastic, have elasticities that are consistent with utility-maximizing consumers, and have patterns of cross-price elasticities across fish varieties that are qualitatively reasonable (bottom-feeders are highly substitutable with each other, etc.) and fairly constant across a period of two decades.

Final version in EER (No IDEAS version). This paper was in the European Economic Review, an Elsevier journal that is quickly being killed off since the European Economic Association pulled out of its association with Elsevier to run its own journal, the JEEA. The editors of the main journal in environmental economics have recently made the same type of switch, and of course, a group of eminent theorists made a similar exit when Theoretical Economics began. Jeff Ely has recently described how TE came about; that example makes it quite clear that journals are actually quite inexpensive to run. Even though we economists are lucky to have nearly 100% “green” open access, where preprints are self-archived by authors, we still have lots of work to do to get to a properly ungated world. The Econometric Society, for example, spends about $900,000 for all of its activities aside from physically printing journals, a cost that could still be recouped in an open access world. Much of that is for running conferences, giving honoraria, etc, but let us be very conservative and estimate that no income is received aside from subscriptions to its three journals, including archives. That works out to roughly $300,000 per journal, which suggests that complete open access journals and archives for the 50 most important journals in the field require, very conservatively, revenue of $15 million per year, and probably much less. This seems a much more effective use of NSF and EU moneys than funding a few more graduate research assistants.

“The Axiomatic Structure of Empirical Content,” C. Chambers, F. Echenique & E. Shmaya (2013)

Here’s a particularly interesting article at the intersection of philosophy of science and economic theory. Economic theorists have, for much of the twentieth century, linked high theory to observable data using the technique of axiomatization. Many axiomatizations operate by proving that if an agent has such-and-such behavioral properties, their observed actions will satisfy certain other properties, and vice versa. For example, demand functions over convex budget sets satisfy the strong axiom of revealed preference if and only if they are generated by the usual restrictions on preferences.

You may wonder, however: to what extent is the axiomatization interesting when you care about falsification (not that you should care, necessarily, but if you did)? Note first that we only observe partial data about the world. I can observe that you choose apples when apples and oranges are available (revealing A>=B, perhaps strictly if I offer you a bit of money as well) but not whether you prefer apples or bananas when those are the only two options. This shows that a theory may be falsifiable in principle (I may observe that you strictly prefer A to B, B to C and C to A, violating transitivity, falsifying rational preferences) yet still make nonfalsifiable statements (rational preferences also require completeness, yet with only partial data, I can’t observe that you either weakly prefer apples to bananas, or weakly prefer bananas to apples).
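Here is a tiny sketch of that asymmetry (my own illustration, not from the paper): a finite set of observed strict comparisons can falsify transitivity, by exhibiting a cycle, but can never falsify completeness, since any unobserved pair may be ranked either way.

```python
# Observed strict preferences (made up); pairs we never get to observe are simply absent.
observed = {("apples", "oranges"), ("oranges", "pears"), ("pears", "apples")}

def has_cycle(edges):
    """Depth-first search for a directed cycle among the observed comparisons."""
    graph = {}
    for x, y in edges:
        graph.setdefault(x, set()).add(y)
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, ()))
    return any(visit(x, frozenset()) for x in graph)

print("transitivity falsified:", has_cycle(observed))   # True: a strict preference cycle
# Completeness ("for all x,y: x>=y or y>=x") can never be refuted this way:
# silence about a pair is consistent with either ranking.
```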

Note something interesting here, if you know your Popper. The theory of rational preferences (complete and transitive, with strict preferences defined as the strict part of the >= relation) is universal in Popper’s sense: these axioms can be written using the “for all” quantifier only. So universality under partial observation cannot be all we mean if we wish to consider only the empirical content of a theory. And partial observability is yet harsher on Popper. Consider the classic falsifiable statement, “All swans are white.” If I can in principle only observe a subset of all of the swans in the world, then that statement is not, in fact, falsifiable, since any of the unobserved swans may actually be black.

What Chambers et al do is show that you can take any theory (a set of data generating processes which can be examined with your empirical data) and reduce it to stricter and stricter theories, in the sense that any data which would reject the original theory still reject the restricted theory. The strongest such restriction has the following property: every axiom is UNCAF, meaning it can be written using only universal (“for all”) quantifiers applied to the negation of a conjunction of atomic formulas. So “for all swans s, the swan is white” is not UNCAF (since it lacks a negation). In economics, the strict preference transitivity axiom “for all x,y,z, not (x>y and y>z and z>x)” is UNCAF, while the completeness axiom “for all x,y, x>=y or y>=x” is not, since it is an “or” statement and cannot be reduced to the negation of a conjunction. It is straightforward to extend this to checking for empirical content relative to a technical axiom like continuity.

Proving this result requires some technical complexity, but the result itself is very easy to use for consumers and creators of axiomatizations. Very nice. The authors also note that Samuelson, in his rejoinder to Friedman’s awful ’53 methodology paper, more or less got things right. Friedman claimed that the truth of axioms is not terribly important. Samuelson pointed out that either all of a theory can be falsified, in which case, since the axioms themselves are implied by the theory, Friedman’s arguments are in trouble, or the theory makes some non-falsifiable claims, in which case attempts to test the theory as a whole are uninformative. Either way, if you care about predictive theories, you ought to choose the weakest theory that generates a given empirical content. In Chambers et al’s result, this means you had better be choosing theories whose axioms are UNCAF with respect to technical assumptions. (And of course, if you are writing a theory for explanation, or lucidity, or simplicity, or whatever non-predictive goal you have in mind, continue not to worry about any of this!)

Dec 2012 Working Paper (no IDEAS version).

“An Elementary Theory of Comparative Advantage,” A. Costinot (2009)

Arnaud Costinot is one of many young economists doing interesting work in trade theory. In this 2009 Econometrica, he uses a mathematical technique familiar to any auction theorist – log-supermodularity – to derive a number of general results about trade which have long been seen as intractable, using few assumptions other than free trade and immobile factors of production.

Take two standard reasons for the existence of trade. First is differences in factor productivity. Country A ought produce good 1 and Country B good 2 if A has higher relative productivity in good 1 than B, f(1,A)/f(2,A) > f(1,B)/f(2,B). This is simply Ricardo’s law of comparative advantage. Ricardo showed that comparative advantage in good 1 by country A means that under (efficient) free trade, country A will actually produce more of good 1 than country B. The problem is when you have a large number of countries and a large number of goods; the simple algebra of Ricardo is no longer sufficient. Here’s the trick, then. Note that the 2-country, 2-good condition just says that the production function f is log-supermodular in countries and goods; “higher” countries are relatively more productive producing “higher” goods, under an appropriate ranking (for instance, more educated workforce countries might be “higher” and more complicated products might be “higher”; all that matters is that such an order exists). If the production function is log-supermodular, then aggregate production is also log-supermodular in goods and countries. Why? In this elementary model, each country specializes in producing only one good. If aggregate production is not log-supermodular, then maximizing behavior by countries means the marginal return to factors of production for a “low” good must be high in the “high” countries and low in the “low” countries. This cannot happen if countries are maximizing their incomes, since each country can move factors of production across goods as it likes and the production function is log-supermodular. What does this theorem tell me? It tells me that under trade with any number of countries and goods, there is a technology ladder, where “higher” countries produce “higher” goods. The proof is literally one paragraph, but it is impossible without the mathematics of lattices and supermodularity. Nice!
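Here is a minimal check of the key property (my own toy numbers, not Costinot's): with countries and goods ordered from “low” to “high,” a productivity matrix f is log-supermodular when f(c',g')·f(c,g) >= f(c',g)·f(c,g') for c' >= c and g' >= g, i.e., the higher country's relative productivity advantage is larger in the higher good.

```python
import itertools
import numpy as np

# Rows are countries, columns are goods, both ordered low-to-high; the numbers are made up.
f = np.array([[1.0, 1.2],    # country B ("low")
              [1.5, 2.4]])   # country A ("high")

def log_supermodular(f):
    C, G = f.shape
    for c, cp, g, gp in itertools.product(range(C), range(C), range(G), range(G)):
        if cp >= c and gp >= g and f[cp, gp] * f[c, g] < f[cp, g] * f[c, gp]:
            return False
    return True

print(log_supermodular(f))   # True: A's comparative advantage lies in the "high" good
```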

Consider an alternative model, Heckscher-Ohlin’s trade model which suggests that differences in factor endowments, not differences in technological or institutional capabilities which generate Ricardian comparative advantage, are what drives trade. Let the set of factors of production be distributed across countries according to F, and let technology vary across countries but only in a Hicks-neutral way (i.e., “technology” is just a parameter that scales aggregate production up or down, regardless of how that production is created or what that production happens to be). Let the production function, then, be A(c)h(g,p); that is, a country-specific technology parameter A(c) times a log-supermodular function of the goods produced g and the factors of production p. Assume further that factors are distributed such that “high” countries are relatively more-endowed with “high” factors of production, according to some order; many common distribution functions will give you this property. Under these assumptions, again, “high” countries produce “high” goods in a technology ladder. Why? Efficiency requires that each country assign “high” factors of production to “high” goods. The distributional assumption tells me that “high” factors are more likely to appear in “high” countries. Hence it can be proven using some simple results from lattice theory that “high” countries produce more “high” goods.

There are many further extensions, the most interesting one being that even though the extensions of Ricardo and Heckscher-Ohlin both suggest a ladder of “higher” and “lower” goods, these ladders might not be the same, and hence if both effects are important, we need more restrictive assumptions on the production function to generate interesting results about the worldwide distribution of trade. Costinot also points out that the basic three-type (country, good, factor of production) model with log-supermodularity assumptions fits many other fields, since all it roughly says is that heterogeneous agents (countries) with some density of characteristics (goods and factors of production) sort into outcomes according to some payoff function of the three types; e.g., heterogeneous firms may be choosing different financial instruments depending on heterogeneous productivity. Ordinal discussion of which types of productivity lead firms to choose which types of financial instruments (or any similar problem) is often far, far easier using log-supermodularity arguments than using functional forms plus derivatives.

Final 2009 ECTA (IDEAS version). Big thumbs up to Costinot for putting the final, published version of his papers on his website.

“The Flexible Unity of Economics,” M. J. Reay (2012)

Michael Reay recently published this article on the economics profession in the esteemed American Journal of Sociology, and as he is a sociologist, I hope the econ navel-gazing can be excused. What Reay points out is that critical discourse about modern economics entails a paradox. On the one hand, economics is portrayed as a unified, neoliberal-policy-endorsing monolith with great power; on the other hand, in practice economists often disagree with each other, and their memoirs are filled with sighs about how little their advice is valued by policymakers. In my field, innovation policy, there is a wonderful example of this impotence: the US Patent and Trademark Office did not hire a chief economist until – and this is almost impossible to believe – 2010. Lawyers, with hugely different analytic techniques (I am being kind here) and policy suggestions, ran and continue to run the show at every important world venue for patent and copyright policy.

How ought we explain this? Reay interviews a number of practicing economists in and out of academia. Nearly all agree on a core of techniques: mathematical formalism, a focus on incentives at the level of individuals, and a focus on unexpected “general equilibrium” effects. None of these core ideas really has anything to do with “markets” or their supremacy as a form of economic organization, of course; indeed, Reay points out that roughly the same core was used in the 1960s when economists as a whole were much more likely to support various forms of government intervention. Further, none of the core ideas suggest that economic efficiency need be prioritized over concerns like equity, as the technique of mathematical optimization says very little about what is to be optimized.

However, the choice of which questions to work on, and what evidence to accept, is guided by “subframes” that are often informed by local contexts. To analyze the power of economists, it is essential to focus on existing local power situations. Neoliberal economic policy enters certain Latin American countries hand-in-hand with political leaders already persuaded that government involvement in the economy must decrease, whereas it enters the US and Europe in a much more limited way due to countervailing institutional forces. That is, regardless of what modern economic theory suggests on a given topic, policymakers have their priors, and they will frame questions such that the advice their economic advisers give is limited in relation to those frames. Further, regardless of the particular institutional setup, the core ideas about what counts as evidence, shared by all economists, mean that the set of possible policy advice is not unbounded.

One idea Reay should have considered further, and which I think is a useful way for non-economists to understand what we do, is the question of why mathematical formalism is so central a part of the economics core vis-a-vis other social sciences. I suggest that it is the economists’ historic interest in counterfactual policy that implies the mathematical formalism, rather than the other way around. A mere collection of data a la Gustav Schmoller can say nothing about counterfactuals; for this, theory is essential. Where theory is concerned, limiting the scope for gifted rhetoricians to win the debate by de facto obfuscation requires theoretical statements to be made in a clear way, and their deductive consequences to be clear as well. Modern logic, roughly equivalent to the type of mathematics economists use in practice, does precisely that. I find it misleading to equate “quantitative economics” with “numerical data”, as this suggests that the data economists collect and use is the reason certain conclusions (say, neoliberal policy) follow. Rather, much of economics uses no quantitative data at all, and therefore it is the limits of mathematics as logic, rather than the limits of mathematics as counting, that must provide whatever implicit bias exists.

Final July 2012 AJS version (Note: only the Google Docs Preview allows the full article to be viewed, so I’ve linked to that. Sociologists, get on the open access train and put your articles on your personal websites! It’s 2012!)

“Mathematical Models in the Social Sciences,” K. Arrow (1951)

I have given Paul Samuelson the title of “greatest economist ever” many times on this site. If he is number one, though, Ken Arrow is surely second. And this essay, an early Cowles discussion paper, is an absolute must-read.

Right on the first page is an absolute destruction of every ridiculous statement you’ve ever heard about mathematical economics. Quoting the physicist Gibbs: “Mathematics is a language.” On whether quantitative methods are appropriate for studying human action: “Doubtless many branches of mathematics – especially those most familiar to the average individual, such as algebra and the calculus – are quantitative in nature. But the whole field of symbolic logic is purely qualitative. We can frame such questions as the following: Does the occurrence of one event imply the occurrence of another? Is it impossible that two events should both occur?” This is spot on. What is most surprising to me, wearing my theorist hat, is how little twentieth century mathematics occurs in economics vis-a-vis the pure sciences, not how much. The most prominent mathematics in economics are the theories of probability, various forms of mathematical logic, and existence theorems on wholly abstract spaces, meaning spaces that don’t have any obvious correspondence with the physical world. These techniques tell us little about numbers, but rather help us answer questions like “How does X relate to Y?” and “Is Z a logical possibility?” and “For some perhaps unknown sets of beliefs, how serious a problem can Q cause?” All of these statements look to me to be exactly equivalent to the types of a priori logical reasoning which appear everywhere in 18th and 19th century “nonmathematical” social science.

There is a common objection to mathematical theorizing, that mathematics is limited in nature compared to the directed intuition which a good social scientist can verbalize. This is particularly true compared to the pure sciences: we have very little intuition about atoms, but great intuition about the social world we inhabit. Arrow argues, however, that drawing valid logical implications is a difficult task indeed, particularly if we’re using any deductive reasoning beyond the simplest tools in Aristotle. Writing our verbal thoughts as mathematics allows the use of more complicated deductive tools. And the same is true of induction: mathematical model building allows for the use of (what was then very modern) statistical tools to identify relationships. Naive regression identifies correlations, but is rarely able to uncover any more complex relationships in the data.

A final note: if you’re interested in the history of thought, there are some interesting discussions of decision theory pre-Savage and game theory pre-Nash and pre-Harsanyi in Arrow’s article. A number of interpretations are given that seem somewhat strange given our current understanding, such as interpreting mixed strategies as “bluffing,” or writing down positive-sum n-person cooperative games as zero-sum n+1 player games where a “fictitious player” eats the negative outcome. Less strange, but still by no means mainstream, is Wald’s interpretation of statistical inference as a zero-sum game against nature, where the statistician with a known loss function chooses a decision function (perhaps mixed) and nature simultaneously chooses a realization in order to maximize the expected loss. There is an interesting discussion of something that looks an awful lot like evolutionary game theory, proposed by Nicolas Rashevsky in 1947; I hadn’t known these non-equilibrium linear ODE games existed that far before Maynard Smith. Arrow, and no doubt his contemporaries, also appear to have been quite optimistic about the possibility of a dynamic game theory that incorporated learning about opponents’ play axiomatically, but I would say that, in 2012, we have no such theory, and for a variety of reasons a suitable one may not be possible. Finally, Arrow notes an interesting discussion between Koopmans and his contemporaries about methodological individualism; Arrow endorses the idea that, would we have the data, society’s aggregate outcomes are necessarily determined wholly by the actions of individuals. There is no “societal organism”. Many economists, no doubt, agree completely with that statement, though there are broad groups in the social sciences who think the caveat “would we have the data” is a more serious concern than economists generally allow, and who conceive of non-human social actors. It’s worthwhile to at least know these arguments are out there.

http://128.36.236.35/P/cp/p00a/p0048.pdf (Final version provided thanks to the Cowles Commission’s lovely open access policy)

“David Hume and Modern Economics,” S. Dow (2009)

(This post also deals with a 2011 JEP Retrospective by Schabas and Wennerlind entitled “Hume on Money, Commerce and the Science of Economics.”)

Hume is first and foremost a philosopher and historian, but his social science is not unknown to us economists. Many economists, I imagine, know Hume as the guy who first wrote down the quantity theory of money (more on this shortly). A smaller number, though I hope a nontrivial number, also know Hume as Adam Smith’s best friend, with obvious influence on the Theory of Moral Sentiments and less obvious but still extant influence on The Wealth of Nations. Given Hume’s massive standing in philosophy – was Hume the Paul Samuelson of philosophers, or Samuelson the Hume of economists? – I want to jot down a few notes on particularly interesting comments of his. Readers particularly interested in this topic who have already read the Treatise and the Enquiry might want to pick up the newest edition of Rotwein’s collection of Hume essays on economics, as without such a collection his purely economic content is rather scattered.

First, on money. Hume claims prices are determined by the ratio of circulating currency to the number of goods, and lays out what we now call the specie-flow mechanism: an inflow of specie causes domestic prices to rise, causing imports to become more attractive, causing specie to flow out. He doesn’t say so explicitly, as far as I can tell, but this is basically a long-run equilibrium concept. The problem with Hume as monetarist, as pointed out by basically everyone who has ever written on this topic, is that Hume also has passages where he notes that during a monetary expansion, people are (not become! Remember Hume on causality!) more industrious, increasing the national product. Arguments that Hume is basically modern – money is neutral in the long run and not the short – are not terribly convincing.

Better, perhaps, to note that Hume has a strange understanding of the role of money creation. On many questions of moral behavior, Hume stresses the role of conventions and particularly the role of government in establishing conventions. He therefore treats different types of monetary expansions differently. An exogenous increase in the monetary supply, from a silver discovery or other temporary inflow of specie, does not affect conventions about the worth of money, but an increase in money supply deriving from excess credit creation by banks and sovereigns can affect conventions, hence affecting moral behavior, hence affecting the real economy. The above interpretation of Hume’s monetary writings is by no means universal, but I think, at least, it is an important framework to keep in the back of the mind.

Concerning the methodology of social science, Hume makes one particularly striking claim: the human sciences are in a sense easier than the natural sciences. A more common argument – due to Comte, perhaps, though my memory fails me – is that physics is simpler than chemistry, which is simpler than biology, which in turn is simpler than psychology and then the social sciences, because each builds upon the last. I understand how particles work, hence understand physics, but I need to know how they interact to understand chemistry, how molecules affect lifeforms for biology, how the brain operates to understand psychology, and how brains and bodies interact with each other and history to understand social science. Hume flips this around entirely. He is an empiricist, and notes that to the extent we know anything, it is through our perceptions, and our own accounts as well as those of other humans are biased and distorted. To interpret perceptions of the natural world, we must first generalize about the human mind: “the science of man is the only solid foundation.” Concerning the social world, we are able to observe the actions and accounts of many people during our lives, so if we are to use induction (and this is Hume, so of course we are wary here), we have many examples from which to draw. Interesting.

https://dspace.stir.ac.uk/bitstream/1893/3167/1/2009%20Hume%20and%20Modern%20Economics.pdf (Working paper – final version in Capitalism and Society, 2009)

“On the Creative Role of Axiomatics,” D. Schlimm (2011)

The mathematician Felix Klein: “The abstract formulation is excellently suited for the elaboration of proofs, but it is clearly not suited for finding new ideas and methods; rather, it constitutes the end of a previous development.” Such a view, Dirk Schlimm argues, is common among philosophers of science as well as mathematicians and other practitioners of axiomatic science (like economic theory). But is axiomatics limited to formalization, to consolidation, or can the axiomatic method be a creative act, one that opens up new venues and suggests new ideas? Given the emphasis on this site on the explanatory value of theory, it will come as no surprise that I see axiomatics as fundamentally creative. The author of the present paper agrees, tracing the interesting history of the mathematical idea of a lattice.

Lattices are wholly familiar to economists at this stage, but it is worth recapping that they can be formulated in two equivalent ways: either as a set of elements plus two operations satisfying commutative, associative and absorption laws, which together ensure the set of elements is a partially ordered set (the standard “axiomatic” definition), or else as a partially ordered set in which each pair of elements has a well-defined infimum and supremum, from which the meet and join operators can be defined and shown to satisfy the laws mentioned above. We use lattices all the time in economic theory: proofs involving preferences, generally a poset, are an obvious example, but also results using monotone comparative statics, among many others. In mathematics more generally, proofs using lattices unify results in a huge number of fields: number theory, projective geometry, abstract algebra and group theory, logic, and many more.
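A concrete way to see the equivalence (my own illustration, not from Schlimm's paper): take gcd as the meet and lcm as the join on the positive integers, check the lattice laws on a sample, and recover the induced partial order, which is divisibility.

```python
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

sample = range(1, 31)
for a, b, c in product(sample, repeat=3):
    assert gcd(a, b) == gcd(b, a) and lcm(a, b) == lcm(b, a)        # commutativity
    assert gcd(a, gcd(b, c)) == gcd(gcd(a, b), c)                   # associativity of meet
    assert lcm(a, lcm(b, c)) == lcm(lcm(a, b), c)                   # associativity of join
    assert gcd(a, lcm(a, b)) == a and lcm(a, gcd(a, b)) == a        # absorption

# The order defined by "a <= b iff meet(a, b) == a" is exactly divisibility.
print(all((gcd(a, b) == a) == (b % a == 0) for a, b in product(sample, repeat=2)))  # True
```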

With all these great uses of lattice theory, you might imagine early results proved these important connections between fields, and that the axiomatic definition merely consolidated precisely what was assumed about lattices, ensuring we know the minimum number of things we need to assume. This is not the case at all.

Ernst Schroder, in the late 19th century, noted a mistake in a claim by CS Peirce concerning the axioms of Boolean algebra (algebra with 0 and 1 only). In particular, one of the two distributive laws – say, a+bc=(a+b)(a+c) – turns out to be completely independent from the other standard axioms. In other interesting areas of group theory, Schroder noticed that the distributive axiom was not satisfied, though other axioms of Boolean algebra were. This led him to list what would be the axioms of lattices as something interesting in their own right. That is, work on axiomatizing one area, Boolean algebra, led to an interesting subset of axioms in another area, with the second axiomatization being fundamentally creative.
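A standard concrete witness for this kind of independence (textbook lattice theory, not from Schlimm's paper, and related to though not identical with Schroder's original claim) is the five-element “diamond” M3: it satisfies the lattice axioms, yet the distributive law fails, so distributivity cannot follow from the other laws.

```python
# Elements of M3: bottom "0", three pairwise incomparable atoms "a", "b", "c", top "1".
elements = ["0", "a", "b", "c", "1"]

def meet(x, y):
    if x == y: return x
    if "0" in (x, y): return "0"
    if x == "1": return y
    if y == "1": return x
    return "0"                      # two distinct atoms meet at the bottom

def join(x, y):
    if x == y: return x
    if "1" in (x, y): return "1"
    if x == "0": return y
    if y == "0": return x
    return "1"                      # two distinct atoms join at the top

# Absorption holds on all pairs (the other lattice laws hold too; only absorption is checked here)...
assert all(join(x, meet(x, y)) == x and meet(x, join(x, y)) == x
           for x in elements for y in elements)

# ...but distributivity fails: a meet (b join c) = a, while (a meet b) join (a meet c) = 0.
print(meet("a", join("b", "c")), "vs", join(meet("a", "b"), meet("a", "c")))
```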

Dedekind (of the famous cuts), around the same time, also wrote down the axioms for a lattice while considering properties of least common multiples and greatest common divisors in number theory. He listed a set of properties held by lcms and gcds, and noted that distributive laws did not hold for those operations. He then notes a number of interesting other mathematical structures which are described by those properties if taken as axioms: ideals, fields, points in n-dimensional space, etc. Again, this is creativity stemming from axiomatization. Dedekind was unable to find much further use for this line of reasoning in his own field, algebraic number theory, however.

Little was done on lattices until the 1930s; perhaps this is not surprising, as the set theory revolution hit math after the turn of the century, and modern uses of lattices are most common when we deal with ordered sets. Karl Menger (son of the economist, I believe) wrote a common axiomatization of projective and affine geometries, mentioning that only the 6th axiom separates the two, suggesting that further modification of that axiom may suggest interesting new geometries, a creative insight not available without axiomatization. Albert Bennett, unaware of earlier work, rediscovered the axioms of the lattice, and more interestingly listed dozens of novel connections and uses for the idea that are made clear from the axioms. Oystein Ore in the 1930s showed that the axiomatization of a lattice is equivalent to a partial order relation, and showed that it is in a sense as useful a generalization of algebraic structure as you might get. (Interesting for Paul Samuelson hagiographers: the preference relation foundation of utility theory was really cutting edge math in the late 1930s! Mathematical tools to deal with utility in such a modern way literally did not exist before Samuelson’s era.)

I skip many other interesting mathematicians who helped develop the theory, of which much more detail is available in the linked paper. The examples above, Schlimm claims, essentially filter down to three creative purposes served by axiomatics. First, axioms analogize, suggesting the similarity of different domains, leading to a more general set of axioms encompassing those smaller sets, leading to investigation of the resulting larger domain – Aristotle in Analytica Posteriora 1.5 makes precisely this argument. Second, axioms guide the discovery of similar domains that were not, without axiomatization, thought to be similar. Third, axioms suggest modification of an axiom or two, leading to a newly defined domain from the modified axioms which might also be of interest. I can see all three of these creative acts in economic areas like decision theory. Certainly for the theorist working in axiomatic systems, it is worth keeping an open mind for creative, rather than summary, uses of such a tool.

http://axiom.vu.nl/cmsone/SchlimmOnline.pdf (2009 working paper – final version in Synthese 183)
