“Aggregation in Production Functions: What Applied Economists Should Know,” J. Felipe & F. Fisher (2003)

Consider a firm that takes heterogeneous labor and capital inputs L1, L2… and K1, K2…, using these to produce some output Y. Define a firm production function Y=F(K1, K2…, L1, L2…) as the maximal output that can be produced using the given vector of inputs – and note the implicit optimization condition in that definition, which means that production functions are not simply technical relationships. What conditions are required to construct an aggregate production function Y=F(K,L) for the firm, or, more broadly, to aggregate across firms into an economy-wide production function Y=F(K,L)? Note that the question is not about the definition of capital per se, since defining “labor” is equally problematic when man-hours are clearly heterogeneous, and this question is also not about the more general capital controversy worries, like reswitching (see Samuelson’s champagne example) or the dependence of the return to capital on the distribution of income which, itself, depends on the return to capital.

(A brief aside: on that last worry, it is quite strange to me why the Cambridge UK types and their modern-day followers are so worried about the circularity of the definition of the interest rate, yet so unconcerned about the exact same property of the object we call the “wage”. Surely if wages equal marginal product, and marginal product in dollars is a function of aggregate demand, and aggregate demand is a function of the budget constraint determined by wages, we are in an identical philosophical situation. I think it’s pretty clear that the focus on “r” rather than “w” is because the moral implications of capitalists “earning their marginal product” are less than desirable for people of a certain political persuasion. But I digress; let’s return to more technical concerns.)

It turns out, and this should be fairly well-known, that the conditions under which factors can be aggregated are ridiculously stringent. If we literally want to add up K or L when firms use different production functions, the condition (due to Leontief) is that the marginal rate of substitution between different types of factors in one aggregate, e.g. capital, does not depend on the level of factors outside that aggregate, e.g. labor. Surely this is a condition that rarely holds: how much I want to use different types of trucks – an example due to Solow – will depend on how much labor I have at hand. A follow-up by Nataf in the 1940s is even more discouraging. Assume every firm uses homogeneous labor, every firm uses capital which, though homogeneous within each firm, differs across firms, and every firm has an identical constant returns to scale production technology. When can I now write an aggregate production function Y=F(K,L) summing up the capital in each firm K1, K2…? That aggregate function exists if and only if every firm’s production function is additively separable in capital and labor (in which case the aggregation is pretty obvious)! Pretty stringent, indeed.
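Nataf’s condition can be written compactly; a sketch in my own notation (not the paper’s):

```latex
% Firm v produces y_v = f_v(k_v, l_v). Nataf: an aggregate production
% function exists for all input allocations if and only if each firm's
% technology is additively separable,
f_v(k_v, l_v) = \phi_v(k_v) + \psi_v(l_v),
% in which case the "obvious" aggregation is
Y = \sum_v f_v(k_v, l_v)
  = \underbrace{\sum_v \phi_v(k_v)}_{\equiv K} + \underbrace{\sum_v \psi_v(l_v)}_{\equiv L}
  = K + L .
```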

Fisher helps things just a bit in a pair of papers from the 1960s. Essentially, he points out that we don’t want to aggregate for all vectors K and L, but rather we need to remember that production functions measure the maximum output possible when all inputs are used most efficiently. Competitive factor markets guarantee that this assumption will hold in equilibrium. That said, even assuming only one type of labor, efficient factor markets, and a constant returns to scale production function, aggregation is possible if and only if every firm has the same production function Y=F(b(v)K(v),L), where v denotes a given firm and b(v) is a measure of how efficiently capital is employed in that firm. That is, aside from capital efficiency, every firm’s production function must be identical if we want to construct an aggregate production function. This is somewhat better than Nataf’s result, but still seems highly unlikely across a sector (to say nothing of an economy!).

Why, then, do empirical exercises using, say, aggregate Cobb-Douglas seem to give such reasonable parameters, even though the above theoretical results suggest that parameters like “aggregate elasticity of substitution between labor and capital” don’t even exist? That is, when we estimate elasticities or total factor productivities from Y=AK^a*L^b, using some measure of aggregated capital, what are we even estimating? Two things. First, Nelson and Winter in their seminal book generate aggregate data which can almost perfectly be fitted with a Cobb-Douglas even though their model is completely evolutionary and does not even involve maximizing behavior by firms, so the existence of a “good fit” alone is, and this should go without saying, not great evidence in support of a model. Second, since ex-post output Y must equal the wage bill plus the capital payments plus profits, Felipe notes that this accounting identity can be algebraically manipulated into the form Y=AF(K,L), where the form of F depends on the behavior of the factor shares. That is, the good fit of Cobb-Douglas or CES can simply reflect an accounting identity even when nothing is known about micro-level elasticities or similar.
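The algebra behind Felipe’s point is easy to verify numerically. If factor shares are constant at s and 1-s, the identity Y = wL + rK can be rewritten exactly as Y = B·w^s·r^(1-s)·L^s·K^(1-s), a “Cobb-Douglas” with exponents equal to the factor shares, no matter what the underlying technology looks like. A minimal sketch (all numbers made up, no production function anywhere in sight):

```python
import random

random.seed(0)
s = 0.7                                  # constant labor share (assumed)
B = s ** (-s) * (1 - s) ** (-(1 - s))    # constant implied by the algebra

for t in range(50):
    # arbitrary data: output, labor, capital drawn at random
    L = random.uniform(50, 150)
    K = random.uniform(100, 300)
    Y = random.uniform(500, 1500)
    w = s * Y / L            # wages consistent with a constant labor share
    r = (1 - s) * Y / K      # capital payments take the rest: Y == w*L + r*K
    # the identity delivers a "Cobb-Douglas" with exponents s and 1-s
    Y_cd = B * w ** s * r ** (1 - s) * L ** s * K ** (1 - s)
    assert abs(Y_cd - Y) / Y < 1e-12
```

So if w^s·r^(1-s) happens to move smoothly over time, a regression of log Y on a time trend, log L and log K will “recover” the factor shares and a tidy TFP trend regardless of micro-level technology.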

So what to do? I am not totally convinced we should throw out aggregate production functions – it surely isn’t a coincidence that Solow residuals for TFP are estimated to be high in places where our intuition says technological change has been rapid. Because of results like this, it doesn’t strike me that aggregate production functions are measuring arbitrary things. However, if we are using parameters from these functions to do counterfactual analysis, we really ought to know exactly what approximations or assumptions are being baked into the cake, and it doesn’t seem that we are quite there yet. Until we are, a great deal of care should be taken in assigning interpretations to estimates based on aggregate production models. I’d be grateful for any pointers in the comments to recent work on this problem.

Final published version (RePEc IDEAS). The “F. Fisher” on this paper is the former Clark Medal winner and well-known IO economist Franklin Fisher; rare is it to find a nice discussion of capital issues written by someone who is firmly part of the economics mainstream and completely aware of the major theoretical results from “both Cambridges”. Tip of the cap to Cosma Shalizi for pointing out this paper.

Some Results Related to Arrow’s Theorem

Arrow’s (Im)possibility Theorem is, and I think this is universally acknowledged, one of the great social science theorems of all time. I particularly love it because of its value when arguing with Popperians and other anti-theory types: the theorem is “untestable” in that it quite literally does not make any predictions, yet surely all would consider it a valuable scientific insight.

In this post, I want to talk about a couple of new papers using Arrow’s result in unusual ways. First, a philosopher has shown exactly how Arrow’s result is related to the general philosophical problem of choosing which scientific theory to accept. Second, a pair of computer scientists have used AI techniques to generate an interesting new method for proving Arrow.

The philosophical problem is the following. A good theory should satisfy a number of criteria; for Kuhn, these included accuracy, consistency, breadth, simplicity and fruitfulness. Imagine now there is a group of theories (about, e.g., how galaxies form, why birds have wings, etc.), that we rank them ordinally on each of these criteria, and that we all agree on the rankings. Which theory ought we accept? Arrow applied to theory choice gives us the worrying result that not only is there no unique method of choosing among theories, but there may not exist any such method at all, at least if we want to satisfy unanimity, non-dictatorship and independence of irrelevant alternatives. That is, even if you and I agree about how each theory ranks according to the different desirability criteria, we still don’t have a good, general method of aggregating across criteria.

So what to do? Davide Rizza, in a new paper in Synthese (gated, I’m afraid), discusses a number of solutions. Of course, if we have more than just ordinal information about each criterion, then we can construct aggregate orders. For instance, if we assigned a number to the relative rankings on each criterion, we could just add these up for each theory and hence have an order. Note that a rule of this kind can be applied even if we just have ordinal data – if there are N theories, then on criterion C, give the best theory on that criterion N points, the second best N-1, and so on, then add up the scores. This is the famous Borda Count.
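The Borda Count takes only a few lines to implement. The sketch below also exhibits its non-IIA behavior: dropping a theory that wins on no criterion can flip the winner (the five-criteria profile is my own toy example, not one from the paper):

```python
def borda(profile, candidates):
    """profile: list of rankings (best first), one per criterion.
    The best of N candidates gets N points, the next N-1, and so on."""
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for place, c in enumerate(ranking):
            scores[c] += n - place
    return max(candidates, key=lambda c: scores[c]), scores

# five criteria ranking theories A, B, C
profile = [('A', 'B', 'C'), ('A', 'B', 'C'),
           ('B', 'C', 'A'), ('B', 'C', 'A'),
           ('C', 'A', 'B')]
winner3, _ = borda(profile, ('A', 'B', 'C'))    # B wins (11 vs 10 vs 9)
# drop C, keeping every A-vs-B comparison exactly as before
reduced = [tuple(c for c in r if c != 'C') for r in profile]
winner2, _ = borda(reduced, ('A', 'B'))         # now A wins: IIA fails
```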

Why can’t we choose theories by the Borda Count or similar, then? Well, Borda (and any other rule that constructs an aggregate order while satisfying unanimity and non-dictatorship) must be violating the IIA assumption in Arrow. Unanimity, which insists a rule accept a theory if it is considered best along every criterion, and non-dictatorship, where more than one criterion can at least matter in principle, seem totally unobjectionable. So maybe we ought just toss IIA from our theory choice rule, as perhaps Donald Saari would wish us to do. And IIA is a bit strange indeed. If I rank A>B>C, and if you require me to have transitive preferences, then just knowing the binary rankings A>B and B>C is enough to tell you that I prefer A to C, even without observing that particular binary relationship. In this sense, adding B isn’t “irrelevant”; there is information in the binary pairs generated by transitivity which IIA does not allow me to take advantage of. Some people call the IIA assumption “binary independence”, since it aggregates using only binary relations – an odd thing given that the individual orders contain, by virtue of being orders, more than just binary relations. It turns out that there are aggregation rules which generate an order if we loosen IIA to an alternative restriction on how the information in the full orders may be used. IIA, rather than ordinal rankings across criteria, is where Arrow poses a problem for theory choice. Now, Rizza points out that these aggregation rules needn’t be unique, so we can still have situations where we all agree about how different theories rank according to each criterion, and agree on the axiomatic properties we want in an aggregation rule, yet nonetheless disagree about which theory to accept. Still worrying, though not for Kuhn, and certainly not for us crazier Feyerabend and Latour fans!

(A quick aside: How strange it is that Arrow’s Theorem is so heavily associated with voting? That every voting rule is subject to tactical behavior is Gibbard-Satterthwaite, not Arrow, and this result about strategic voting imposes nothing like an IIA assumption. Arrow’s result is about the far more general problem of aggregating orders, a problem which fundamentally has nothing to do with individual behavior. Indeed, I seem to recall that Arrow came up with his theorem while working one summer as a grad student at RAND on the problem of what, if anything, it could mean for a country to have preferences when voting on behalf of its citizens in bodies like the UN. The story also goes that when he showed his advisor – perhaps Hotelling? – what he had been working on over the summer, he was basically told the result was so good that he might as well just graduate right away!)

The second paper today comes from two computer scientists. There are lots of proofs of Arrow’s theorem – the original proof in Arrow’s 1951 book is actually incorrect! – but the CS guys use a technique I hadn’t seen before. Essentially, they first prove by a simple induction that a social welfare function satisfying the Arrow axioms exists for some N>=2 voters and M>=3 options if and only if one exists for exactly 2 voters and 3 options. This doesn’t, on its own, narrow the problem a great deal: there are still 3!=6 ways to order 3 options, hence 6^2=36 joint profiles of the 2 voters’ orders, hence 6^36 functions mapping the voter orders to a social order. Nonetheless, the problem is small enough to be tackled by a constraint satisfaction algorithm which checks IIA and unanimity and finds only two social welfare functions not violating one of those constraints – precisely the cases where Agent 1 or Agent 2 is a dictator. Their algorithm took one second to run on a standard computer (clearly they are better algorithm writers than the average economist!). Sen’s theorem and Muller-Satterthwaite can also be proven using a similar restriction to the base case followed by algorithmic search.
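The base case is in fact small enough to brute-force in a few lines of ordinary code. The trick is that IIA means the social ranking of each pair depends only on the two voters’ rankings of that pair, so an IIA-respecting rule factors into three pairwise rules (16 choices each, hence 16^3 = 4096 candidates rather than 6^36). A sketch of this search – my own encoding, not the authors’ actual algorithm:

```python
from itertools import permutations, product

options = ('a', 'b', 'c')
orders = list(permutations(options))            # the 6 strict rankings
profiles = list(product(orders, orders))        # 36 profiles for 2 voters
votes = list(product([True, False], repeat=2))  # joint pairwise votes
rules = [dict(zip(votes, out))                  # the 16 possible pairwise rules
         for out in product([True, False], repeat=4)]

def prefers(order, x, y):
    return order.index(x) < order.index(y)

def social_order_everywhere(r_ab, r_ac, r_bc):
    # the three pairwise verdicts must form a transitive order at every profile
    for p1, p2 in profiles:
        ab = r_ab[(prefers(p1, 'a', 'b'), prefers(p2, 'a', 'b'))]
        ac = r_ac[(prefers(p1, 'a', 'c'), prefers(p2, 'a', 'c'))]
        bc = r_bc[(prefers(p1, 'b', 'c'), prefers(p2, 'b', 'c'))]
        if (ab and bc and not ac) or (not ab and not bc and ac):
            return False                        # a cycle: not an order
    return True

iia = [rs for rs in product(rules, repeat=3) if social_order_everywhere(*rs)]
# weak Pareto (unanimity): unanimous pairwise agreement must be respected
arrow = [rs for rs in iia
         if all(r[(True, True)] and not r[(False, False)] for r in rs)]

def dictator(rs, i):
    return all(r[v] == v[i] for r in rs for v in votes)

# Arrow for the base case: only the two dictatorships survive
assert len(arrow) == 2
assert all(dictator(rs, 0) or dictator(rs, 1) for rs in arrow)
```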

Of course, algorithmic proofs tend to lack the insight and elegance of standard proofs. But they have benefits as well. Just as you can show that only 2 social welfare functions with N=2 voters and M=3 options satisfy both IIA and unanimity, you can also show that only 94 (out of 6^36!) satisfy IIA alone. That is, it is IIA rather than the other assumptions which is doing most of the work in Arrow. Inspecting those 94 remaining social welfare functions by hand can help elucidate alternative sets of axioms which also generate aggregation possibility or impossibility.

(And a third paper, just for fun: it turns out that Kiribati and Nauru actually use Borda counts in their elections, and that there does appear to be strategic candidate nomination behavior designed to take advantage of the non-IIA nature of Borda! IIA looks in many ways like a restriction on tactical behavior by candidates or those nominating issues, rather than a restriction on tactical behavior by voters. If you happen to teach Borda counts, this is a great case to give students.)

“Seeking the Roots of Entrepreneurship: Insights from Behavioral Economics,” T. Astebro, H. Herz, R. Nanda & R. Weber (2014)

Entrepreneurship is a strange thing. Entrepreneurs work longer hours, make less money in expectation, and have higher variance earnings than those working for firms; if anyone knows of solid evidence to the contrary, I would love to see the reference. The social value of entrepreneurship – through greater product market competition, new goods, etc. – is very high, so the strange choice of entrepreneurs may well be a net benefit to society. We even encourage it here at UT! Given these facts, why does anyone start a company anyway?

Astebro and coauthors, as part of a new JEP symposium on entrepreneurship, look at evidence from behavioral economics. The evidence isn’t totally conclusive, but it appears entrepreneurs are not any more risk-loving or ambiguity-loving than the average person. They are overoptimistic, but overoptimism can’t be the whole story: you still see entrepreneurs in high-risk, low-performance firms even ten years after founding, by which point surely any overoptimism must have long since been beaten out of them.

It is, however, true that entrepreneurship is much more common among the well-off. If risk aversion can’t explain things, then perhaps entrepreneurship is in some sense consumption: the founders value independence and control. Experimental evidence supports this hypothesis fairly strongly. For many entrepreneurs, it is more about not having a boss than about the small chance of becoming very rich.

This leads to a couple of questions: why so many immigrant entrepreneurs, and what are we to make of the declining rate of firm formation in the US? Pardon me if I speculate a bit here. The immigrant story may just be selection: almost by definition, those who move across borders, especially those who move for graduate school, tend to be quite independent! The declining rate of firm formation may be tied to changes in inequality; to the extent that entrepreneurship involves consumption of a luxury good (control over one’s working life) in addition to standard risk-adjusted cost-benefit analysis, changes in the income distribution will change that consumption pattern. More work is needed on these questions.

Summer 2014 JEP (RePEc IDEAS). As always, a big thumbs up to the JEP for being free to read! It is also worth checking out the companion articles by Bill Kerr and coauthors on experimentation, with some amazing stats using internal VC project evaluation data for which ex-ante projections were basically identical for ex-post failures and ex-post huge successes, and one by Haltiwanger and coauthors documenting the important role played by startups in job creation, the collapse in startup formation and job churn which began well before 2008, and the utter mystery about what is causing this collapse (which we can see across regions and across industries).

“Dynamic Commercialization Strategies for Disruptive Technologies: Evidence from the Speech Recognition Industry,” M. Marx, J. Gans & D. Hsu (2014)

Disruption. You can’t read a book about the tech industry without Clayton Christensen’s Innovator’s Dilemma coming up. Jobs loved it. Bezos loved it. Economists – well, they were a bit more confused. Here’s the story at its most elemental: in many industries, radical technologies are introduced. They perform very poorly initially, and so are ignored by the incumbent. These technologies rapidly improve, however, and the previously ignored entrants go on to dominate the industry. The lesson many tech industry folks take from this is that you ought to “disrupt yourself”. If there is a technology that can harm your most profitable business, then you should be the one to develop it; take Amazon’s “Lab126” Kindle skunkworks as an example.

There are a couple problems with this strategy, however (well, many problems actually, but I’ll save the rest for Jill Lepore’s harsh but lucid takedown of the disruption concept which recently made waves in the New Yorker). First, it simply isn’t true that all innovative industries are swept by “gales of creative destruction” – consider automobiles or pharma or oil, where the major players are essentially all quite old. Gans, Hsu and Scott Stern pointed out in a RAND article many years ago that if the market for ideas worked well, you would expect entrants with good ideas to just sell to incumbents, since the total surplus would be higher (less duplication of sales assets and the like) and since rents captured by the incumbent would be higher (less product market competition). That is, there’s no particular reason that highly innovative industries require constant churn of industry leaders.

The second problem concerns disrupting oneself versus waiting to see which technologies will last. Imagine that investigating potentially disruptive technologies is costly for the incumbent. For instance, selling mp3s in 2002 would have cannibalized existing CD sales at a retailer with a large existing CD business. Early on, the potentially disruptive technology isn’t “that good”, hence it is not in and of itself that profitable. Eventually, some of these potentially disruptive technologies will reveal themselves to actually be great improvements on the status quo. If that is the case, why not just let the entrant make these improvements, drive down costs, and learn about market demand, and then buy them once they reveal that the potentially disruptive product is actually great? Presumably, even by this time, the incumbent still retains its initial advantage in logistics, sales, brand, etc. By waiting and buying instead of disrupting yourself, you can still earn those high profits on the CD business in 2002 even if mp3s had turned out to be a flash in the pan.

This is roughly the intuition in a new paper by Matt Marx – you may know his work on non-compete agreements – Gans and Hsu. Matt has also collected a great dataset from industry journals on every firm that ever operated in automated speech recognition. Using this data, the authors show that a policy by entrants of initial competition followed by licensing or acquisition is particularly common when the entrants come in with a “disruptive technology”. You should see these strategies, where the entrant proves the value of their technology and the incumbent waits to acquire, in industries where ideas are not terribly appropriable (why buy if you can steal?) and entry is not terribly expensive (in an area like biotech, clinical trials and the like are too expensive for very small firms). I would add that you also need complementary assets to be relatively hard to replicate; if they aren’t, the incumbent may well wind up being acquired rather than the entrant should the new technology prove successful!

Final July 2014 working paper (RePEc IDEAS). The paper is forthcoming in Management Science.

“The Rise and Fall of General Laws of Capitalism,” D. Acemoglu & J. Robinson (2014)

If there is one general economic law, it is that every economist worth their salt is obligated to put out twenty pages responding to Piketty’s Capital. An essay by Acemoglu and Robinson on this topic, though, is certainly worth reading. They present three particularly compelling arguments. First, in a series of appendices, they follow Debraj Ray, Krusell and Smith and others in trying to clarify exactly what Piketty is trying to say, theoretically. Second, they show that it is basically impossible to find any effect of the famed r-g on top inequality in statistical data. Third, they claim that institutional features are much more relevant to the impact of economic changes on societal outcomes, using South Africa and Sweden as examples. Let’s tackle these in turn.

First, the theory. It has been noted before that Piketty, despite beginning his career as a very capable economic theorist (hired at MIT at age 22!), is very disdainful of the prominence of theory. He, quite correctly, points out that we don’t even have descriptive data on a huge number of topics of economic interest, inequality being principal among these. But, shades of the Methodenstreit, he then goes on to ignore theory where it is most useful: in helping to understand, and extrapolate from, his wonderful data. It turns out that even in simple growth models, not only does r>g not necessarily hold, but the endogeneity of r and our standard estimates of the elasticity of substitution between labor and capital do not at all imply that capital-to-income ratios will continue to grow (see Matt Rognlie on this point). Further, Acemoglu and Robinson show that even relatively minor movement between classes is sufficient to keep the capital share from skyrocketing. Do not skip the appendices to A and R’s paper – these are what should have been included in the original Piketty book!

Second, the data. Acemoglu and Robinson point out, and it really is odd, that despite the claims of “fundamental laws of capitalism”, there is no formal statistical investigation of these laws in Piketty’s book. A and R look at data on growth rates, top inequality and the rate of return (either on government bonds, or on a computed economy-wide marginal return on capital), and find that, if anything, as r-g grows, top inequality shrinks. All of the data is post WW2, so there is no Great Depression or World War confounding things. How could this be?

The answer lies in the feedback between inequality and the economy. As inequality grows, political pressures change, the endogenous development and diffusion of technology changes, the relative use of capital and labor change, and so on. These effects, in the long run, dominate any “fundamental law” like r>g, even if such a law were theoretically supported. For instance, Sweden and South Africa have very similar patterns of top 1% inequality over the twentieth century: very high at the start, then falling in mid-century, and rising again recently. But the causes are totally different: in Sweden’s case, labor unrest led to a new political equilibrium with a high-growth welfare state. In South Africa’s case, the “poor white” supporters of Apartheid led to compressed wages at the top despite growing black-white inequality until 1994. So where are we left? The traditional explanations for inequality changes: technology and politics. And even without r>g, these issues are complex and interesting enough – what could be a more interesting economic problem for an American economist than diagnosing the stagnant incomes of Americans over the past 40 years?

August 2014 working paper (No IDEAS version yet). Incidentally, I have a little tracker on my web browser that lets me know when certain pages are updated. Having such a tracker follow Acemoglu’s working papers pages is, frankly, depressing – how does he write so many papers in such a short amount of time?

“Epistemic Game Theory,” E. Dekel & M. Siniscalchi (2014)

Here is a handbook chapter that is long overdue. The theory of epistemic games concerns a fairly novel justification for solution concepts under strategic uncertainty – that is, situations where what I want to do depends on what other people do, and vice versa. We generally analyze these as games, and have a bunch of equilibrium (Nash, subgame perfection, etc.) and nonequilibrium (Nash bargain, rationalizability, etc.) solution concepts. So which should you use? I can think of four classes of justification for a game solution. First, the solution might be stable: if you told each player what to do, no one person (or sometimes group) would want to deviate. Maskin mentions this justification is particularly worthy when it comes to mechanism design. Second, the solution might be the outcome of a dynamic selection process, such as evolution or a particular learning rule. Third, the solution may be justified by certain axiomatic first principles; Shapley value is a good example in this class. The fourth class, however, is the one we most often teach students: a solution concept is good because it is justified by individual behavior assumptions. Nash, for example, is often thought to be justified by “rationality plus correct beliefs”. Backward induction is similarly justified by “common knowledge of rationality at all states.”

Those are informal arguments, however. The epistemic games (or sometimes, “interactive epistemology”) program seeks to formally analyze assumptions about the knowledge and rationality of players and what it implies for behavior. There remain many results we don’t know (for instance, I asked around and could only come up with one paper on the epistemics of coalitional games), but the results proven so far are actually fascinating. Let me give you three: rationality and common belief in rationality implies rationalizable strategies are played, the requirements for Nash are different depending on how many players there are, and backward induction is surprisingly difficult to justify on epistemic grounds.

First, rationalizability. Take a game and remove any strictly dominated strategy for each player. Now in the reduced game, remove anything that is strictly dominated. Continue doing this until nothing is left to remove. The remaining strategies for each player are “rationalizable”. If players can hold any belief they want about what potential “types” opponents may be – where a given (Harsanyi) type specifies what an opponent will do – then as long as we are all rational, we all believe the opponents are rational, we all believe the opponents all believe that we all are rational, ad infinitum, the only possible outcomes to the game are the rationalizable ones. Proving this is actually quite complex: if we take as primitive the “hierarchy of beliefs” of each player (what do I believe my opponents will do, what do I believe they believe I will do, and so on), then we need to show that any hierarchy of beliefs can be written down in a type structure, then we need to be careful about how we define “rational” and “common belief” on a type structure, but all of this can be done. Note that many rationalizable strategies are not Nash equilibria.
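The elimination procedure itself is mechanical for finite two-player games. A sketch using pure-strategy strict dominance only (full rationalizability also allows domination by mixed strategies, which this toy version ignores); the 2x3 game is a standard textbook example, not one from the chapter:

```python
def iterated_elimination(rows, cols, u_row, u_col):
    """Iteratively delete pure strategies strictly dominated by another
    pure strategy. u_row[(r, c)] and u_col[(r, c)] give the payoffs."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(u_row[(r2, c)] > u_row[(r, c)] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(u_col[(r, c2)] > u_col[(r, c)] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# a 2x3 game solvable by iterated dominance:
# R is dominated by M for Column, then D by U for Row, then L by M
u_row = {('U', 'L'): 1, ('U', 'M'): 1, ('U', 'R'): 0,
         ('D', 'L'): 0, ('D', 'M'): 0, ('D', 'R'): 2}
u_col = {('U', 'L'): 0, ('U', 'M'): 2, ('U', 'R'): 1,
         ('D', 'L'): 3, ('D', 'M'): 1, ('D', 'R'): 0}
survivors = iterated_elimination(['U', 'D'], ['L', 'M', 'R'], u_row, u_col)
```

Note how the order of reasoning mirrors the epistemics: the first deletion round uses only Column’s rationality, the second uses Row’s belief in Column’s rationality, and so on up the hierarchy.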

So what further assumptions do we need to justify Nash? Recall the naive explanation: “rationality plus correct beliefs”. Nash takes us from rationalizability, where play is based on conjectures about opponent’s play, to an equilibrium, where play is based on correct conjectures. But which beliefs need to be correct? With two players and no uncertainty, the result is actually fairly straightforward: if our first order beliefs are (f,g), we mutually believe our first order beliefs are (f,g), and we mutually believe we are rational, then beliefs (f,g) represent a Nash equilibrium. You should notice three things here. First, we only need mutual belief (I know X, and you know I know X), not common belief, in rationality and in our first order beliefs. Second, the result is that our first-order beliefs are that a Nash equilibrium strategy will be played by all players; the result is about beliefs, not actual play. Third, with more than two players, we are clearly going to need assumptions about how my beliefs about our mutual opponent are related to your beliefs; that is, Nash will require more, epistemically, than “basic strategic reasoning”. Knowing these conditions can be quite useful. For instance, Terri Kneeland at UCL has investigated experimentally the extent to which each of the required epistemic conditions are satisfied, which helps us to understand situations in which Nash is harder to justify.

Finally, how about backward induction? Consider a centipede game. The backward induction rationale is that if we reached the final stage, the final player would defect, hence if we are in the second-to-last stage I should see that coming and defect before her, hence if we are in the third-to-last stage she will see that coming and defect before me, and so on. Imagine that, however, player 1 does not defect in the first stage. What am I to infer? Was this a mistake or am I perhaps facing an irrational opponent? Backward induction requires that I never make such an inference, and hence I defect in stage 2.
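The unraveling logic can be computed directly. A sketch with a made-up parametrization (piles of 4 and 1 that double whenever play continues, the mover who stops takes the large pile, and if the last mover passes the piles double once more and are split in her disfavor):

```python
def solve(stage, last_stage):
    """Backward-induction payoffs (mover, other) at a given stage."""
    big, small = 4 * 2 ** (stage - 1), 1 * 2 ** (stage - 1)
    take = (big, small)                      # stop: mover grabs the large pile
    if stage == last_stage:
        keep_going = (2 * small, 2 * big)    # piles double, split against mover
    else:
        nxt_mover, nxt_other = solve(stage + 1, last_stage)
        keep_going = (nxt_other, nxt_mover)  # the other player moves next
    return take if take[0] >= keep_going[0] else keep_going

# defection unravels all the way to the first move: solve(1, 6) == (4, 1),
# a tiny fraction of what mutual cooperation would have delivered
```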

Here is a better justification for defection in the centipede game, though. If player 1 doesn’t defect in the first stage, then I “try my best” to retain a belief in his rationality. That is, if it is possible for him to have some belief about my actions in the second stage which rationally justified his first stage action, then I must believe that he holds those beliefs. For example, he may believe that I believe he will continue again in the third stage, hence that I will continue in the second stage, hence he will continue in the first stage then plan to defect in the third stage. Given his beliefs about me, his actions in the first stage were rational. But if that plan to defect in stage three were his justification, then I should defect in stage two. He realizes I will make these inferences, hence he will defect in stage 1. That is, the backward induction outcome is justified by forward induction. Now, it can be proven that rationality and common “strong belief in rationality” as loosely explained above, along with a suitably rich type structure for all players, generates a backward induction outcome. But the epistemic justification is completely based on the equivalence between forward and backward induction under those assumptions, not on any epistemic justification for backward induction reasoning per se. I think that’s a fantastic result.

Final version, prepared for the new Handbook of Game Theory. I don’t see a version on RePEc IDEAS.

“The Tragedy of the Commons in a Violent World,” P. Sekeris (2014)

The prisoner’s dilemma is one of the great insights in the history of the social sciences. Why would people ever take actions that make everyone worse off? Because we all realize that if everyone took the socially optimal action, we would each be better off individually by cheating and doing something else. Even if we interact many times, that incentive to cheat will remain in our final interaction, hence cooperation will unravel all the way back to the present. In the absence of some ability to commit or contract, then, it is no surprise we see things like oligopolies who sell more than the quantity which maximizes industry profit, or countries who exhaust common fisheries faster than they would if the fishery were wholly within national waters, and so on.

But there is a wrinkle: the dreaded folk theorem. As is well known, if we play frequently enough, and the probability that any given game is the last is low enough, then any feasible outcome which is better than what players can guarantee themselves regardless of other players’ actions can be sustained as an equilibrium; this, of course, includes the socially optimal outcome. And the punishment strategies necessary to get to that social optimum are often fairly straightforward. Consider oligopoly: if your firm produces more than half the monopoly output, then I produce the Cournot duopoly quantity in the next period. If you think I will produce Cournot, your best response is also to produce Cournot, and we will do so forever. Therefore, if we are setting quantities frequently enough, the benefit to you of cheating today is not enough to overcome the lower profits you will earn in every future period, and hence we are able to collude at the monopoly level of output.
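The "frequently enough" condition in that Cournot-reversion story can be made exact with a back-of-the-envelope calculation. The sketch below uses the textbook linear duopoly (inverse demand P = 1 − Q, zero marginal cost; these parameter choices are mine, purely for illustration) and computes the critical discount factor above which splitting the monopoly output is sustainable:

```python
from fractions import Fraction as F

# Cournot duopoly, inverse demand P = 1 - Q, zero marginal cost.
# Trigger strategy: each firm produces half the monopoly output; any
# deviation triggers Cournot play forever. Collusion is sustainable iff
#   pi_collude / (1 - d) >= pi_deviate + d * pi_cournot / (1 - d).

pi_collude = F(1, 2) * F(1, 4)   # half of the monopoly profit 1/4 -> 1/8
pi_cournot = F(1, 9)             # each firm earns 1/9 in the Cournot equilibrium

# Best one-shot deviation when the rival produces q_m / 2 = 1/4:
q_dev = (1 - F(1, 4)) / 2                    # best response is 3/8
pi_deviate = q_dev * (1 - F(1, 4) - q_dev)   # price 3/8, so profit 9/64

# Rearranging the sustainability condition gives the critical discount factor:
#   d* = (pi_deviate - pi_collude) / (pi_deviate - pi_cournot)
d_star = (pi_deviate - pi_collude) / (pi_deviate - pi_cournot)
print(d_star)  # 9/17
```

So any discount factor above 9/17 (roughly 0.53) sustains monopoly collusion here, which is exactly why "playing frequently enough" (i.e., a per-period discount factor close to one) makes the folk theorem bite.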

Folk theorems are really robust. What if we only observe some random public signal of what each of us did in the last period? The folk theorem holds. What if we only privately observe some random signal of what the other people did last period? No problem, the folk theorem holds. There are many more generalizations. Any applied theorist has surely run into the folk theorem problem – how do I let players use “reasonable” strategies in a repeated game but disallow crazy strategies which might permit tacit collusion?

This is Sekeris’ problem in the present paper. Consider two nations sharing a common pool of resources like fish. We know from Hotelling how to solve the optimal resource extraction problem if there is only one nation. With more than one nation, each party has an incentive to overfish today because they don’t take sufficient account of the fact that their fishing today lowers the amount of fish left for the opponent tomorrow, but the folk theorem tells us that we can still sustain cooperation if we interact frequently enough. Indeed, Ostrom won the Nobel a few years ago for showing how such punishments operate in many real world situations. But, but! – why then do we see fisheries and other common pool resources overdepleted so often?
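As a refresher on the single-owner benchmark mentioned above, here is a minimal discrete-time sketch of the Hotelling solution for a nonrenewable stock (the linear demand curve, stock size, and discount factor are my own illustrative choices): along the optimal path the price rises at the rate of interest until it hits the choke price, and the initial price is pinned down by the requirement that cumulative extraction exhaust the stock.

```python
# Hotelling extraction: a single owner of stock S faces demand q = a - p
# each period and discounts at factor d. Along the optimum the price must
# rise at the rate of interest, p_{t+1} = p_t / d, until it reaches the
# choke price a. The initial price p0 is found by bisection so that total
# extraction equals the stock S.

def extraction_path(p0, a=10.0, d=0.95, T=200):
    prices, quantities = [], []
    p = p0
    for _ in range(T):
        if p >= a:
            break  # price has hit the choke price; extraction stops
        prices.append(p)
        quantities.append(a - p)
        p = p / d  # Hotelling rule: price grows at the interest rate
    return prices, quantities

def solve_p0(S, a=10.0, d=0.95):
    lo, hi = 0.0, a
    for _ in range(100):
        mid = (lo + hi) / 2
        total = sum(extraction_path(mid, a, d)[1])
        if total > S:
            lo = mid   # extracting too much: start at a higher price
        else:
            hi = mid
    return (lo + hi) / 2

p0 = solve_p0(S=30.0)
prices, qs = extraction_path(p0)
print(round(p0, 3), round(sum(qs), 3))  # cumulative extraction ~= stock of 30
```

With two nations sharing the pool, neither fully internalizes the shadow value of the remaining stock, so extraction runs faster than this path; the folk theorem is what (in principle) lets repeated interaction restore it.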

There are a few ways to get around the folk theorem. First, it may just be that players do not interact forever, even probabilistically; some firms may last longer than others, for instance. Second, it may be that firms cannot change their strategies frequently enough, so that deviations from the cooperative optimum are not punished quickly or harshly enough to deter them. Third, Mallesh Pai and coauthors show in a recent paper that with a large number of players and sufficient differential obfuscation of signals, it becomes too difficult to “catch cheaters” and hence the stage game equilibrium is retained. Sekeris proposes an alternative to all of these: allow players to take actions which change the form of the stage game in the future. In particular, he allows players to fight for control of a bigger share of the common pool if they wish. Fighting requires expending resources from the pool on arms, and the fight itself also diminishes the size of the pool by destroying resources.

As the remaining resource pool gets smaller, each player is willing to expend fewer resources arming themselves in a fight over that smaller pool. This means that if conflict does break out, fewer resources are destroyed in the resulting “low intensity” fight. Because fighting is less costly when the pool is small, as the pool is depleted through cooperative extraction the players will eventually fight over what remains. Since the fight leaves players with asymmetric access to the pool, the “smaller” player has fewer ways to harm the bigger one after the fight, and hence less ability to use threats of such harm to sustain folk-theorem cooperation before the fight. Therefore, the cooperative equilibrium partially unravels, and players do not fully cooperate even at the start of the game when the common pool is big.

That’s a nice methodological trick, but also quite reasonable in the context of common resource pool management. If you don’t overfish today, it must be because you fear I will punish you by overfishing myself tomorrow. But if you know I will enact such punishment, you can simply invade me tomorrow (perhaps metaphorically, via trade agreements or similar) before the punishment can be carried out. This possibility limits the type of credible threats that can be made off the equilibrium path.

Final working paper (RePEc IDEAS). Paper published in the Fall 2014 RAND Journal of Economics.

“Housing Market Spillovers: Evidence from the End of Rent Control in Cambridge, MA,” D. Autor, C. Palmer & P. Pathak (2014)

Why don’t people like renters? While looking for rental housing up here in Toronto (where, under any reasonable set of parameters, there looks to be a serious housing bubble at the moment), I have noticed that it is very rare for houses to be rented and also very rare for rental and owned homes to appear in the same neighborhood. Why might this be? Housing externalities are one answer: a single run-down house on a block greatly harms the value of surrounding houses. Social opprobrium among homeowners may be sufficient to induce them to internalize these externalities in a way that is not true of landlords. The very first “real” paper I helped with back at the Fed showed a huge impact of renovating run-down properties on neighborhood land values in Richmond, Virginia.

Given that housing externalities exist, we may worry about policies that distort the rent-buy decision. Rent control may not only limit landlords’ incentives to upgrade the quality of their own properties, but may also damage the value of neighboring properties. Autor, Palmer and Pathak investigate a quasi-experiment in Cambridge, MA (right next door to my birthplace of Boston; I used to hear Cambridge referred to as the PRC!). In 1994, Massachusetts held a referendum on banning rent control, which had been enforced very strongly in Cambridge. It passed 51-49.

The units previously under rent control, no surprise, saw a big spurt of investment and a large increase in their value. If a rent-controlled house was on a block with many other rent-controlled houses, however, its price rose even more. That is, there was a substantial indirect impact whereby upgrades to neighboring houses increase the value of my previously rent-controlled house. Among houses that were never rent controlled, those close to previously rent-controlled units rose in price much faster than otherwise-similar houses in the same area that didn’t have rent-controlled units on the same block. Overall, Autor et al estimate that rent decontrol raised the value of Cambridge property by $2 billion, and that over 80 percent of this increase was due to indirect effects (aka housing externalities). No wonder people are so worried about a rental unit popping up in their neighborhood!

Final version in June 2014 JPE (IDEAS version).

“Upstream Innovation and Product Variety in the U.S. Home PC Market,” A. Eizenberg (2014)

Who benefits from innovation? The trivial answer would be that everyone weakly benefits, but since innovation can change the incentives of firms to offer different varieties of a product, heterogeneous tastes among buyers may imply that some types of innovation make large groups of people worse off. Consider computers, a rapidly evolving technology. If Lenovo introduces a laptop with a faster processor, they may wish to discontinue production of a slower laptop, because offering both types flattens the demand curve for each, hence lowering the profit-maximizing markup that can be charged for the better machine. This effect, combined with a fixed cost of maintaining a product line, may push firms to offer too little variety in equilibrium.

As an empirical matter, however, things may well go the other direction. Spence’s famous product selection paper suggests that firms may produce too much variety, because they don’t take into account that part of the profit they earn from a new product is just cannibalization of other firms’ existing product lines. Is it possible to separate things out from data? Note that this question has two features that essentially require a structural setup: the variable of interest is “welfare,” a purely theoretical concept, and many of the relevant numbers, like product-line fixed costs, are unobservable to the econometrician and hence must be backed out from other data via theory.

There are some nice IO tricks to get this done. Using a near-universe of laptop sales in the early 2000s, Eizenberg estimates heterogeneous household demand using standard BLP-style methods. Supply is tougher. He assumes that firms draw a fixed-cost shock for each product line, then pick their product mix each quarter, then observe consumer demand, and finally play Nash-Bertrand differentiated-product pricing. The problem is that the pricing game often has multiple equilibria (e.g., with two symmetric firms, one may offer a high-end product and the other a low-end one, or vice versa). Since the pricing game equilibria are going to be used to back out fixed costs, we are in a bit of a bind. Rather than select equilibria using some ad hoc approach (how would you even do so in the symmetric case just mentioned?), Eizenberg cleverly just partially identifies fixed costs as backed out from any possible pricing game equilibrium, using bounds in the style of Pakes, Porter, Ho and Ishii. This means that welfare effects are also only partially identified.
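The revealed-preference logic behind such bounds can be sketched very simply. The toy function below is my own stylized illustration (the numbers and the single-product setup are hypothetical, not from the paper): because the econometrician does not know which pricing equilibrium generated the data, the fixed cost is only bounded by the best and worst incremental profits across candidate equilibria.

```python
# Revealed-preference bounds on a product-line fixed cost, in the spirit of
# the moment-inequality approach described above. All numbers hypothetical.

def fixed_cost_bounds(offered, incremental_profits):
    """Bound the fixed cost FC of one product line.

    `incremental_profits` lists the variable profit the product would earn
    in each candidate pricing equilibrium. If the product was offered, then
    in the (unknown) equilibrium actually played we must have
    incremental profit >= FC, so FC <= max over candidate equilibria.
    If it was not offered, FC >= the incremental profit in the equilibrium
    played, so FC >= min over candidate equilibria.
    """
    if offered:
        return (0.0, max(incremental_profits))
    return (min(incremental_profits), float("inf"))

# Candidate-equilibrium incremental profits for one laptop model ($m):
profits = [3.1, 4.7, 2.6]
print(fixed_cost_bounds(True, profits))   # (0.0, 4.7): offered, so FC below the max
print(fixed_cost_bounds(False, profits))  # (2.6, inf): not offered, FC above the min
```

With a unique (known) equilibrium the two cases would pin FC against a single profit number; multiplicity is exactly what widens these intervals, and it is why the welfare numbers downstream come out as intervals too.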

Throwing this model at the PC data shows that the mean consumer in the early 2000s wasn’t willing to pay any extra for a laptop, but there was a ton of heterogeneity in willingness to pay both for laptops and for faster speed on those laptops. Every year, the willingness to pay for a given computer fell $257 – technology was rapidly evolving and lots of substitute computers were constantly coming onto the market.

Eizenberg uses these estimates to investigate a particularly interesting counterfactual: what was the effect of the introduction of the lighter Pentium M mobile processor? As the Pentium M was introduced, older Pentium III-based laptops were, over time, no longer offered by the major notebook makers. The M raised predicted notebook sales by 5.8 to 23.8%, raised mean notebook price by $43 to $86, and lowered the Pentium III share of the notebook market from 16-23% down to 7.7%. Here’s what’s especially interesting, though: total consumer surplus is higher with the M available, but all of the extra consumer surplus accrues to the 20% least price-sensitive buyers (as should be intuitive, since only those with high willingness-to-pay are buying cutting-edge notebooks). What if a social planner had forced firms to keep offering the Pentium III models after the M was introduced? The net effect on consumer plus producer surplus may actually have been positive, and the benefits would have especially accrued to those at the bottom end of the market!

Now, as a policy matter, we are (of course) not going to force firms to offer money-losing legacy products. But this result is worth keeping in mind anyway: because firms are concerned about pricing pressure, they may not be offering a socially optimal variety of products, and this may limit the “trickle-down” benefits of high tech products.

2011 working paper (No IDEAS version). Final version in ReStud 2014 (gated).

Laboratory Life, B. Latour & S. Woolgar (1979)

Let’s do one more post on the economics of science; if you haven’t heard of Latour and the book that made him famous, all I can say is that it is 30% completely crazy (the author is a French philosopher, after all!), 70% incredibly insightful, and overall a must read for anyone trying to understand how science proceeds or how scientists are motivated.

Latour is best known for two ideas: that facts are socially constructed (and hence science really isn’t that different from other human pursuits) and that objects/ideas/networks have agency. He rose to prominence with Laboratory Life, which grew out of two years observing a lab: that of future Nobel winner Roger Guillemin at the Salk Institute in La Jolla.

What he notes is that science is really strange if you observe it proceeding without any priors. Basically, a big group of people use a bunch of animals and chemicals and technical devices to produce beakers of fluids and points on curves and colored tabs. Somehow, after a great amount of informal discussion, all of these outputs are synthesized into a written article a few pages long. Perhaps, many years later, modalities about what had been written will be dropped; “X is a valid test for Y” rather than “W and Z (1967) claim that X is a valid test for Y” or even “It has been conjectured that X may be a valid test for Y”. Often, the printed literature will later change its mind; “X was once considered a valid test for Y, but that result is no longer considered convincing.”

Surely no one denies that the last paragraph accurately describes how science proceeds. But recall the schoolboy description, in which there are facts in the world, and then scientists do some work and run some tests, after which a fact has been “discovered”. Whoa! Look at all that is left out! How did we decide what to test, or what particulars constitute distinct things? How did we synthesize all of the experimental data into a few pages of formal writeup? Through what process did statements begin to be taken for granted, losing their modalities? If scientists actually discover facts, then how can a “fact” be overturned in the future? Latour argues, and gives tons of anecdotal evidence from his time at Salk, that providing answers to those questions basically constitutes the majority of what scientists actually do. That is, it is not that the fact is out there in nature waiting to be discovered, but that the fact is constructed by scientists over time.

That statement can be misconstrued, of course. That something is constructed does not mean that it isn’t real; the English language is both real and it is uncontroversial to point out that it is socially constructed. Latour and Woolgar: “To say that [a particular hormone] is constructed is not to deny its solidity as a fact. Rather, it is to emphasize how, where and why it was created.” Or later, “We do not wish to say that facts do not exist nor that there is no such thing as reality. In this simple sense we are not relativist. Our point is that ‘out-there-ness’ is the consequence of scientific work rather than its cause.” Putting their idea another way, the exact same object or evidence can at one point be considered up for debate or perhaps just a statistical artefact, yet later is considered a “settled fact” and yet later still will occasionally revert again. That is, the “realness” of the scientific evidence is not a property of the evidence itself, which does not change, but a property of the social process by which science reifies that evidence into an object of significance.

Latour and Woolgar also have an interesting discussion of why scientists care about credit. The story of credit as a reward, or of credit-giving as a sort of gift exchange, is hard to square with certain facts about why people do or do not cite. Rather, credit can be seen as a sort of capital. If you are credited with a certain breakthrough, you can use that capital to get a better position, more equipment and lab space, and so on. Without further breakthroughs for which you are credited, you will eventually run out of such capital. This is an interesting way to think about why and when scientists care about who is credited with particular work.

Amazon link. This is a book without a nice summary article, I’m afraid, so you’ll have to stop by your library.

