Author Archives: afinetheorem

“The Contributions of the Economics of Information to Twentieth Century Economics,” J. Stiglitz (2000)

There have been three major methodological developments in economics since 1970. First, following the Lucas Critique, we are reluctant to accept policy advice which is not derived from the optimizing behavior of individuals and firms. Second, developments in game theory have made it possible to reformulate questions like “why do firms exist?”, “what will result from regulating a particular industry in a particular way?”, and “what can I infer about the state of the world from an offer to trade?”, among many others. Third, imperfect and asymmetric information was shown to be of first-order importance for analyzing economic problems.

Why is information so important? Prices, Hayek taught us, solve the problem of asymmetric information about scarcity. Knowing the price vector is a sufficient statistic for knowing everything about production processes in every firm, as far as generating efficient behavior is concerned. The simple existence of asymmetric information, then, is not obviously a problem for economic efficiency. And if asymmetric information about big things like scarcity across society does not obviously matter, then how could imperfect information about minor things matter? A shopper, for instance, may not know exactly the price of every car at every dealership. But “Natura non facit saltum”, Marshall once claimed: nature does not make leaps. Tiny deviations from the assumptions of general equilibrium do not have large consequences.

But Marshall was wrong: nature does make leaps when it comes to information. The search model of Peter Diamond, most famously, showed that arbitrarily small search costs lead to firms charging the monopoly price in equilibrium, hence a welfare loss completely out of proportion to the search costs. That is, information costs and asymmetries, even very small ones, can theoretically be very problematic for the Arrow-Debreu welfare properties.
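The knife-edge logic of Diamond's result can be seen in a toy calculation (all numbers hypothetical, and this is a deliberately stripped-down sketch of the intuition, not Diamond's model): with any positive search cost s, a price cut smaller than s attracts no searching consumers, so no small deviation from the monopoly price is ever profitable.

```python
# Toy illustration of the Diamond (1971) search paradox (hypothetical numbers).
# Consumers value the good at v and must pay search cost s to visit another
# firm. If all firms charge p, a deviant cutting its price by `cut` gains
# customers only if the saving exceeds the search cost (cut > s); otherwise
# it simply earns less on the customers it already has.

def deviation_profitable(p, cut, s, unit_cost=0.0):
    """Is cutting price by `cut` profitable for a single deviant firm?"""
    if cut > s:
        return True  # undercut is large enough to attract searchers
    # No new customers arrive, so profit per sale only falls:
    return (p - cut - unit_cost) > (p - unit_cost)  # never true for cut > 0

v = 10.0                       # consumer valuation = monopoly price
for s in (1.0, 0.1, 0.001):    # even vanishingly small search costs
    small_cuts = [s * 0.5, s * 0.9]
    assert not any(deviation_profitable(v, c, s) for c in small_cuts)
    print(f"s={s}: monopoly price {v} survives all price cuts below {s}")
```

Since this holds for any s > 0, the welfare loss does not shrink with the search cost, which is exactly the discontinuity the paragraph above describes.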

Even more interesting, we learned that prices are more powerful than we’d believed. They convey information about scarcity, yes, but also information about other people’s own information or effort. Consider, for instance, efficiency wages. A high wage is not merely a signal of scarcity for a particular type of labor, but is simultaneously an effort inducement mechanism. Given this dual role, it is perhaps not surprising that general equilibrium is no longer Pareto optimal, even if the planner is as constrained informationally as each agent.

How is this? Decentralized economies may, given information cost constraints, exert too much effort searching, or generate inefficient separating equilibria that unravel trades. The beautiful equity/efficiency separation of the Second Welfare Theorem does not hold in a world of imperfect information. A simple example of this point: it is often useful to allow agents suffering from moral hazard worries to “buy the firm”, mitigating the incentive problem, but limited liability means this may not happen unless those particular agents begin with a large endowment. That is, a different endowment, in which the agents suffering extreme moral hazard problems begin with more money and are able to “buy the firm”, leads to more efficient production (potentially in a Pareto sense) than an endowment where those workers must be provided with information rents in an economy-distorting manner.

It is a strange fact that many social scientists feel economics to some extent stopped progressing by the 1970s. All the important basic results were, in some sense, known. How untrue this is! Imagine labor without search models, trade without monopolistically competitive equilibria, IO or monetary policy without mechanism design, finance without formal models of price discovery and equilibrium noise trading: all would be impossible given the tools we had in 1970. The explanations that preceded modern game theoretic and information-laden explanations are quite extraordinary: Marshall observed that managers have interests different from owners, yet nonetheless are “well-behaved” in running firms in a way acceptable to the owner. His explanation was to credit British upbringing and morals! As Stiglitz notes, this is not an explanation we would accept today. Rather, firms have used a number of intriguing mechanisms to structure incentives in a way that limits agency problems, and we now possess the tools to analyze these mechanisms rigorously.

Final 2000 QJE (RePEc IDEAS)

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

How do the social returns to R&D differ from the private returns? We must believe there is a positive gap between the two given the widespread policies of subsidizing R&D investment. The problem is measuring the gap: theory gives us a number of reasons why firms may do more R&D than the social optimum. Most intuitively, a lot of R&D contains “business stealing” effects, where some of the profit you earn from your new computer chip comes from taking sales away from me, even if your chip is only slightly better than mine. Business stealing must be weighed against the fact that some of the benefits of the knowledge a firm creates are captured by other firms working on similar problems, and the fact that consumers get surplus from new inventions as well.

My read of the literature is that we don’t know much about how aggregate social returns to research differ from private returns. The very best work is at the industry level, such as Trajtenberg’s fantastic paper on CAT scans, where he formally writes down a discrete choice demand system for new innovations in that product and compares R&D costs to social benefits. The problem with industry-level studies is that, almost by definition, they are studying the social return to R&D in ex-post successful new industries. At an aggregate level, you might think, well, just include the industry stock of R&D in a standard firm production regression. This will control for within-industry spillovers, and we can make some assumption about the steepness of the demand curve to translate private returns given spillovers into returns inclusive of consumer surplus.

There are two problems with that method. First, what is an “industry” anyway? Bloom et al point out in the present paper that even though Apple and Intel do very similar research, as measured by the technology classes they patent in, they don’t actually compete in the product market. This means that we want to include “within-similar-technology-space stock of knowledge” in the firm production function regression, not “within-product-space stock of knowledge”. Second, and more seriously, if we care about social returns, we want to subtract out from the private return to R&D any increase in firm revenue that just comes from business stealing with slightly-improved versions of existing products.

Bloom et al do both in a very interesting way. First, they write down a model where firms get spillovers from research in similar technology classes, then compete with product market rivals; technology space and product market space are correlated but not perfectly so, as in the Apple/Intel example. They estimate spillovers in technology space using measures of closeness in terms of patent classes, and measure closeness in product space based on the SIC industries that firms jointly compete in. The model overidentifies the existence of spillovers: if technological spillovers exist, then you can find evidence conditional on the model in terms of firm market value, firm R&D totals, firm productivity and firm patent activity. No big surprises, given your intuition: technological spillovers to other firms can be seen in every estimated equation, and business stealing R&D, though small in magnitude, is a real phenomenon.

The really important estimate, though, is the level of aggregate social returns compared to private returns. The calculation is non-obvious, and shuttled to an online appendix, but essentially we want to know how increasing R&D by one dollar increases total output (the marginal social return) and how increasing R&D by one dollar increases firm revenue (marginal private return). The former may exceed the latter if the benefits of R&D spill over to other firms, but the latter may exceed the former if lots of R&D just leads to business stealing. Note that any benefits in terms of consumer surplus are omitted. Bloom et al find aggregate marginal private returns on the order of 20%, and social returns on the order of 60% (a gap referred to as “29.2%” instead of “39.2%” in the paper; come on, referees, this is a pretty important thing to not notice!). If it weren’t for business stealing, the gap between social and private returns would be ten percentage points higher. I confess a little bit of skepticism here; do we really believe that for the average R&D performing firm, the marginal private return on R&D is 20%? Nonetheless, the estimate that social returns exceed private returns is important. Even more important is the insight that the gap between social and private returns depends on the size of the technology spillover. In Bloom et al’s data, large firms tend to do work in technology spaces with more spillovers, while small firms tend to work on fairly idiosyncratic R&D; to greatly simplify what is going on, large firms are doing more general R&D than the very product-specific R&D small firms do. This means that the gap between private and social return is larger for large firms, and hence the justification for subsidizing R&D might be highest for very large firms. Government policy in the U.S. used to implicitly recognize this intuition, shuttling R&D funds to the likes of Bell Labs.
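The headline arithmetic above can be laid out in a few lines. Only the roughly 20%/60% figures and the ten-point business-stealing adjustment come from the discussion of the paper; everything else here is just bookkeeping:

```python
# Back-of-the-envelope version of the Bloom et al. headline numbers.
# The 20%/60% returns and the ten-point business-stealing adjustment are the
# figures discussed above; this just makes the arithmetic explicit.

private_return = 0.20   # marginal private return to a dollar of R&D
social_return = 0.60    # marginal social return (consumer surplus omitted)

gap = social_return - private_return
print(f"social minus private gap: {gap:.0%}")          # roughly 40 points

business_stealing = 0.10  # gap would be ~10 points higher absent stealing
gap_without_stealing = gap + business_stealing
print(f"gap absent business stealing: {gap_without_stealing:.0%}")
```

The point of writing it out is that business stealing pushes the private return up relative to the social return, so it narrows the measured gap rather than widening it.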

All in all an important contribution, though this is by no means the last word on spillovers; I would love to see a paper asking why firms don’t do more R&D given the large private returns we see here (and in many other papers, for that matter). I am also curious how R&D spillovers compare to spillovers from other types of investments. For instance, an investment increasing demand for product X also increases demand for any complementary products, leads to increased revenue that is partially captured by suppliers with some degree of market power, etc. Is R&D really that special compared to other forms of investment? Not clear to me, especially if we are restricting to more applied, or more process-oriented, R&D. At the very least, I don’t know of any good evidence one way or the other.

Final version, Econometrica 2013 (RePEc IDEAS version); the paper essentially requires reading the Appendix in order to understand what is going on.

“Entrepreneurship: Productive, Unproductive and Destructive,” W. Baumol (1990)

William Baumol, who strikes me as one of the leading contenders for a Nobel in the near future, has written a surprising amount of interesting economic history. Many economic historians see innovation – the expansion of ideas and the diffusion of products containing those ideas, generally driven by entrepreneurs – as critical for growth. But we find it very difficult to see any reason why the “spirit of innovation” or the net amount of cleverness in society would vary over time. Indeed, great inventions, as undeveloped ideas, occur almost everywhere at almost all times. The steam engine of Heron of Alexandria, which was used for parlor tricks like opening temple doors and little else, is surely the most famous example of a great idea, undeveloped.

Why, then, do entrepreneurs develop ideas and cause products to diffuse widely at some times in history and not at others? Schumpeter gave five roles for an entrepreneur: introducing new products, new production methods, new markets, new supply sources or new firm and industry organizations. All of these are productive forms of entrepreneurship. Baumol points out that clever folks can also spend their time innovating new war implements, or new methods of rent seeking, or new methods of advancing in government. If incentives are such that those activities are where the very clever are able to prosper, both financially and socially, then it should be no surprise that “entrepreneurship” in this broad sense is unproductive or, worse, destructive.

History offers a great deal of support here. Despite quite a bit of productive entrepreneurship in the Middle East before the rise of Athens and Rome, the Greeks and Romans, especially the latter, are well-known for their lack of widespread diffusion of new productive innovations. Beyond the steam engine, the Romans also knew of the water wheel yet used it very little. There are countless other examples. Why? Let’s turn to Cicero: “Of all the sources of wealth, farming is the best, the most able, the most profitable, the most noble.” Earning a governorship and stripping assets was also seen as noble. What we now call productive work? Not so much. Even the freed slaves who worked as merchants had the goal of, after acquiring enough money, retiring to “domum pulchram, multum serit, multum fenerat”: a fine house, land under cultivation and short-term loans for voyages.

Baumol goes on to discuss China, where passing the imperial exam and moving into government was the easiest way to wealth, and the early middle ages of Europe, where seizing assets from neighboring towns was more profitable than expanding trade. The historical content of Baumol’s essay was greatly expanded in a book he edited alongside Joel Mokyr and David Landes called The Invention of Enterprise, which discusses the relative return to productive entrepreneurship versus other forms of entrepreneurship from Babylon up to post-war Japan.

The relative incentives for different types of “clever work” are relevant today as well. Consider Luigi Zingales’ new lecture, Does Finance Benefit Society? I can’t imagine anyone would consider Zingales hostile to the financial sector, but he nonetheless discusses in exhaustive detail the ways in which incentives push some workers in that sector toward rent-seeking and fraud rather than innovation which helps the consumer.

Final JPE copy (RePEc IDEAS). Murphy, Shleifer and Vishny have a paper, also from the JPE in 1990, on the topic of how clever people in many countries are incentivized toward rent-seeking; their work is more theoretical and empirical than historical. If you are interested in innovation and entrepreneurship, I uploaded the reading list for my PhD course on the topic here.

“Designing Efficient College and Tax Policies,” S. Findeisen & D. Sachs (2014)

It’s job market season, which is a great time of year for economists because we get to read lots of interesting papers. This one, by Dominik Sachs from Cologne and his coauthor Sebastian Findeisen, is particularly relevant given the recent Obama policy announcement about further subsidizing community college. The basic facts of marginal college students are fairly well-known: there is a pretty substantial wage bump for college grads (including ones who are not currently attending but who would attend if college was a little cheaper), many do not go to college even given this wage bump, there are probably externalities both in the economic and social realm from having a more educated population though these are quite hard to measure, borrowing constraints bind for some potential college students but don’t appear to be that important, and it is very hard to design policies which benefit only marginal college candidates without also subsidizing those who would go whether or not the subsidy existed.

The naive thought might be “why should we subsidize college in the absence of borrowing constraints? By revealed preference, people choose not to go to college even given the wage bump, which likely implies that for many people studying and spending time going to class gives negative utility. Given the wage bump, these people are apparently willing to pay a lot of money to avoid spending time in college. The social externalities of college probably exist, but in general equilibrium more college-educated workers might drive down the return to college for people who are currently going. Therefore, we ought not distort the market.”

However, Sachs and Findeisen point out that there is also a fiscal externality: higher wages equals higher tax revenue in the future, and only the government cares about that revenue. Even more, the government is risk-neutral, or at least less risk-averse than individuals, about that revenue; people might avoid going to college if, along with bumping up their expected future wages, college also introduces uncertainty into their future wage path. If a subsidy could be targeted largely to students on the margin rather than those currently attending college, and if those marginal students see a big wage bump, and if government revenue less transfers back to the taxpayer is high, then it may be worth it for the government to subsidize college even if there are no other social benefits!

The authors write a nice little structural model. People choose to go to college or not depending on their innate ability, their parents’ wealth, the cost of college, the wage bump they expect (and the variance thereof), and their personal taste or distaste for studying as opposed to working (“psychic costs”). All of those variables aside from personal taste and innate ability can be pulled out of U.S. longitudinal data, performance on the army qualifying test can proxy for innate ability, and given distributional assumptions, we can identify the last free parameter, personal taste, by assuming that people go to college only if their lifetime discounted utility from attendance, less psychic costs, exceeds the lifetime utility from working instead. A choice model of this type seems to match data from previous studies with quasirandom variation concerning the returns to college education.
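The attendance rule that identifies psychic costs can be sketched in a few lines (functional forms and numbers entirely hypothetical; the paper's actual model is far richer):

```python
# Minimal sketch of the attendance rule in a structural college-choice model
# (all numbers hypothetical). A person attends iff discounted lifetime
# utility from college, net of psychic cost, exceeds lifetime utility from
# working instead.

def attends(u_college, u_work, psychic_cost):
    return u_college - psychic_cost > u_work

# With ability, wealth, costs, and wage bumps measured from the data, the
# psychic-cost distribution is the free object: an observed choice reveals
# whether a person's psychic cost is above or below the threshold.
u_college, u_work = 120.0, 100.0   # hypothetical lifetime utilities
threshold = u_college - u_work
print(f"attends iff psychic cost < {threshold}")
assert attends(u_college, u_work, psychic_cost=15.0)
assert not attends(u_college, u_work, psychic_cost=25.0)
```

Given distributional assumptions on the psychic cost, observed attendance rates then pin down its parameters, which is the identification step described above.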

The net fiscal cost of a subsidy policy, then, is the cost of the subsidy times the number subsidized, minus the proportion of subsidized students who would not have gone to college but for the subsidy, times the discounted lifetime wage bump for those students, times government tax revenue as a percent of that wage bump. The authors find that a general college subsidy program nearly pays for itself: if you subsidize everyone there aren’t many marginal students, but even for those students the wage bump is substantial. Targeting low income students is even better. Though the low income students affected on the margin tend to be less academically gifted, and hence to earn a lower (absolute) increase in wages from going to college, subsidies targeted at low income students do not waste as much money subsidizing students who would go to college anyway (i.e., a large percentage of high income kids). Note that the subsidies are small enough in absolute terms that the distortion on parental labor supply, from working less in order to qualify for subsidies, is of no quantitative importance, a fact the authors show rigorously. Merit-based subsidies will attract better students who have more to gain from going to college, but they also largely affect people who would go to college anyway, hence offer less bang for the buck to government compared to need-based grants.
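The fiscal arithmetic can be made concrete with a numerical example (every number below is hypothetical, chosen only to show the mechanics, not taken from the paper):

```python
# Net fiscal cost of a college subsidy (all numbers hypothetical).
# gross cost      = subsidy paid to every recipient
# revenue offset  = extra tax revenue from students who attend ONLY because
#                   of the subsidy (the marginal students)

subsidy = 5_000          # per-student subsidy
n_subsidized = 100_000   # everyone who takes it up, marginal or not
share_marginal = 0.05    # fraction who attend only because of the subsidy
pv_wage_bump = 300_000   # discounted lifetime earnings gain, per marginal student
tax_share = 0.30         # government's share of that earnings gain

gross_cost = subsidy * n_subsidized
revenue_offset = share_marginal * n_subsidized * pv_wage_bump * tax_share
net_cost = gross_cost - revenue_offset
print(f"gross cost:     ${gross_cost:,.0f}")
print(f"revenue offset: ${revenue_offset:,.0f}")
print(f"net cost:       ${net_cost:,.0f}")
```

With these made-up numbers the offset recovers 90% of the outlay, which is the sense in which a subsidy can "nearly pay for itself" even though most recipients are inframarginal.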

The authors have a nice calibrated model in hand, so there are many more questions they ask beyond the direct partial equilibrium benefits of college attendance. For example, in general equilibrium, if we induce people to go to college, the college wage premium will fall. But note that wages for non-college-grads will rise in relative terms, so the net effect of the grants discussed in the previous paragraph on government revenue is essentially unchanged. Further, as Nate Hilger found using quasirandom variation in income due to layoffs, liquidity constraints do not appear to be terribly important for the college-going decision: it is increasing grants, not changing loan eligibility, that will do anything of any importance to college attendance.

November 2014 working paper (No IDEAS version). The authors have a handful of other very interesting papers in the New Dynamic Public Finance framework, which is blazing hot right now. As far as I understand the project of NDPF, essentially we can simplify the (technically all-but-impossible-to-solve) dynamic mechanism problem of designing optimal taxes and subsidies under risk aversion and savings behavior to an equivalent reduced form that essentially only depends on simple first order conditions and a handful of elasticities. Famously, it is not obvious that capital taxation should be zero.

“Competition, Imitation and Growth with Step-by-Step Innovation,” P. Aghion, C. Harris, P. Howitt, & J. Vickers (2001)

(One quick PSA before I get to today’s paper: if you happen, by chance, to be a graduate student in the social sciences in Toronto, you are more than welcome to attend my PhD seminar in innovation and entrepreneurship at the Rotman school which begins on Wednesday, the 7th. I’ve put together a really wild reading list, so hopefully we’ll get some very productive discussions out of the course. The only prerequisite is that you know some basic game theory, and my number one goal is forcing the economists to read sociology, the sociologists to write formal theory, and the whole lot to understand how many modern topics in innovation have historical antecedents. Think of it as a high-variance cross-disciplinary educational lottery ticket! If interested, email me for more details.)

Back to Aghion et al. Let’s kick off 2015 with one of the nicer pieces to come out of the ridiculously productive decade or so of theoretical work on growth put together by Philippe Aghion and his coauthors; I wish I could capture the famous alacrity of Aghion’s live presentation of his work, but I fear that’s impossible to do in writing! This paper is based around writing a useful theory to speak to two of the oldest questions in the economics of innovation: is more competition in product markets good or bad for R&D, and is there something strange about giving a firm IP (literally a grant of market power meant to spur innovation via excess rents) at the same time as we enforce antitrust (generally a restriction on market power meant to reduce excess rents)?

Aghion et al come to a few very surprising conclusions. First, the Schumpeterian idea that firms with market power do more R&D is misleading because it ignores the “escape the competition” effect, whereby firms have a high incentive to innovate when there is a large market that can be captured by doing so. Second, maximizing that “escape the competition” motive may involve making it not too easy to catch up to market technological leaders (by IP or other means). These two theoretical results imply that antitrust (making sure there are a lot of firms competing in a given market, spurring new innovation to take market share from rivals) and IP policy (ensuring that R&D actually needs to be performed in order to gain a lead) are in a sense complements! The fundamental theoretical driver is that the incentive to innovate depends not only on the rents of an innovation, but on the incremental rents of an innovation; if innovators include firms that are already active in an industry, policy that makes your current technological state less valuable (because you are in a more competitive market, say) or policy that makes jumping to a better technological state more valuable both increase the size of the incremental rent, and hence the incentive to perform R&D.
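The incremental-rent logic can be illustrated numerically (a deliberately stripped-down sketch under made-up profit functions, not the Aghion et al. model itself): let competition erode the profit of neck-and-neck duopolists while a clear technological leader is insulated by its cost advantage, and the innovation incentive for a neck-and-neck firm rises with competition.

```python
# Stripped-down sketch of the "escape the competition" effect (hypothetical
# profit functions, NOT the paper's model). Competition c in [0, 1] erodes
# the profit of neck-and-neck duopolists, but a clear technological leader
# is insulated by its cost advantage. The incentive to innovate for a
# neck-and-neck firm is the INCREMENTAL rent: leader profit minus
# neck-and-neck profit.

def neck_and_neck_profit(c):
    return 0.5 * (1 - c)   # rents competed away as goods grow substitutable

def leader_profit(c):
    return 0.8             # cost advantage shields the leader (simplified)

for c in (0.0, 0.5, 1.0):
    incentive = leader_profit(c) - neck_and_neck_profit(c)
    print(f"competition={c:.1f}: incremental rent = {incentive:.2f}")
# The incremental rent rises from 0.30 to 0.80 as competition increases:
# more competition, stronger incentive to innovate and escape it -- the
# effect the simple Schumpeterian story misses.
```

In the full model the leader's profit also eventually erodes at extreme competition, which is where the Schumpeterian "we need some rents" motive reasserts itself.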

Here are the key aspects of a simplified version of the model. An industry is a duopoly where consumers spend exactly 1 dollar per period. The duopolists produce partially substitutable goods, where the more similar the goods, the more “product market competition” there is. Each of the duopolists produces its good at a firm-specific cost, and competes in Bertrand fashion with its duopoly rival. At the minimal amount of product market competition, each firm earns constant profit regardless of its cost or its rival’s cost. Firms can invest in R&D, which gives some flow probability of lowering their unit cost. Technological laggards sometimes catch up to the unit cost of leaders with exogenous probability; lower IP protection (or more prevalent spillovers) means this probability is higher. We’ll restrict attention to the stochastic distribution of technological leads and lags that forms a steady state when there are infinitely many such duopolistic industries.

In a model with these features, you always want at least a little competition, essentially for Arrow (1962) reasons: the size of the market is small when market power is large because total unit sales are low, hence the benefit of reducing unit costs is low, hence no one will bother to do any innovation in the limit. More competition can also be good because it increases the probability that two firms are at similar technological levels, in which case each wants to double down on research intensity to gain a lead. At very high levels of competition, the old Schumpeterian story might bind again: goods are so substitutable that R&D to increase rents is pointless since almost all rents are competed away, especially if IP is weak so that rival firms catch up to your unit cost quickly no matter how much R&D you do. What of the optimal level of IP? It’s always best to ensure IP is not too strong, or that spillovers are not too weak, because the benefit of increased R&D effort when firms are at similar technological levels following the spillover exceeds the lost incentive to gain a lead in the first place when IP is not perfectly strong. When markets are really competitive, however, the Schumpeterian insight that some rents need to exist militates in favor of somewhat stronger IP than in less competitive product markets.

Final working paper (RePEc IDEAS) which was published in 2001 in the Review of Economic Studies. This paper is the more detailed one theoretically, but if all of the insight sounds familiar, you may already know the hugely influential follow-up paper by Aghion, Bloom, Blundell, Griffith and Howitt, “Competition and Innovation: An Inverted U Relationship”, published in the QJE in 2005. That paper gives some empirical evidence for the idea that innovation is maximized at intermediate values of product market competition; the Schumpeterian “we need some rents” motive and the “firms innovate to escape competition” motive both play a role. I am actually not a huge fan of that paper – as an empirical matter, much cost-reducing innovation in many industries will never show up in patent statistics (principally for reasons that Eric von Hippel made clear in The Sources of Innovation, which is freely downloadable at that link!), so I am unconvinced by its patent-based measure of innovation. But this is a discussion for another day! One more related paper we have previously discussed is Goettler and Gordon’s 2012 structural work on processor chip innovation at AMD and Intel, which has a very similar within-industry motivation.

“Forced Coexistence and Economic Development: Evidence from Native American Reservations,” C. Dippel (2014)

I promised one more paper from Christian Dippel, and it is another quite interesting one. There is lots of evidence, folk and otherwise, that combining different ethnic or linguistic groups artificially, as in much of the ex-colonial world, leads to bad economic and governance outcomes. But that’s weird, right? After all, ethnic boundaries are themselves artificial, and there are tons of examples – Italy and France being the most famous – of linguistic diversity quickly fading away once a state is developed. Economic theory (e.g., a couple recent papers by Joyee Deb) suggests an alternative explanation: groups that have traditionally not worked with each other need time to coordinate on all of the Pareto-improving norms you want in a society. That is, it’s not some kind of intractable ethnic hate, but merely a lack of trust that is the problem.

Dippel uses the history of American Indian reservations to examine the issue. It turns out that reservations occasionally included different subtribal bands even though they almost always were made up of members of a single tribe with a shared language and ethnic identity. For example, “the notion of tribe in Apachean cultures is very weakly developed. Essentially it was only a recognition that one owed a modicum of hospitality to those of the same speech, dress, and customs.” Ethnographers have conveniently constructed measures of how integrated governance was in each tribe prior to the era of reservations; some tribes had very centralized governance, whereas others were like the Apache. In a straight OLS regression with the natural covariates, incomes are substantially lower on reservations made up of multiple bands that had no pre-reservation history of centralized governance.

Why? First, let’s deal with identification (more on what that means in a second). You might naturally think that, hey, tribes with centralized governance in the 1800s were probably quite socioeconomically advanced already: think Cherokee. So are we just picking up that high SES in the 1800s leads to high incomes today? Well, in regions with lots of mining potential, bands tended to be grouped onto one reservation more frequently, which suggests that resource prevalence on ancestral homelands outside of the modern reservation boundaries can instrument for the propensity for bands to be placed together. Instrumented estimates of the effect of “forced coexistence” are just as strong as the OLS estimate. Further, including tribe fixed effects for cases where single tribes have a number of reservations, a surprisingly common outcome, also generates similar estimates of the effect of forced coexistence.
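The generic logic of instrumenting, as used here, can be demonstrated on simulated data (the simulation below has nothing to do with Dippel's actual dataset; the variable names are only loose analogies): when the treatment is correlated with an unobserved confounder, OLS is biased, while an instrument that shifts the treatment but is independent of the confounder recovers the true effect.

```python
# Generic OLS-vs-IV illustration on simulated data (nothing to do with
# Dippel's actual dataset). x is endogenous: it is correlated with the
# unobserved error u, so OLS is biased. An instrument z shifts x but is
# independent of u, so beta_IV = cov(z, y) / cov(z, x) recovers the truth.
import random

random.seed(0)
n = 50_000
true_beta = -1.0                              # e.g. treatment lowers income
z = [random.gauss(0, 1) for _ in range(n)]    # instrument (like mining potential)
u = [random.gauss(0, 1) for _ in range(n)]    # unobserved confounder
x = [0.7 * zi + 0.5 * ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [true_beta * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)
iv = cov(z, y) / cov(z, x)
print(f"OLS estimate: {ols:.2f} (pulled toward zero by the confounder)")
print(f"IV estimate:  {iv:.2f} (close to the true {true_beta})")
```

Of course, as the next paragraph stresses, the IV estimate is only a local average treatment effect: it is identified off the units whose treatment status the instrument actually moves.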

I am very impressed with how clear Dippel is about what exactly is being identified with each of these techniques. A lot of modern applied econometrics is about “identification”, and generally only identifies a local average treatment effect, or LATE. But we need to be clear about LATE – much more important than “what is your identification strategy” is an answer to “what are you identifying anyway?” Since LATE identifies causal effects that are local conditional on covariates, and the proper interpretation of that term tends to be really non-obvious to the reader, it should go without saying that authors using IVs and similar techniques ought to be very precise about what exactly they are claiming to identify. Lots of quasi-random variation generates that variation along a local margin that is of little economic importance!

Even better than the estimates is an investigation of the mechanism. If you look by decade, you only really see the effect of forced coexistence begin in the 1990s. But why? After all, the “forced coexistence” is longstanding, right? Think of Nunn’s famous long-run effect of slavery paper, though: the negative effects of slavery are mediated during the colonial era, but are very important once local government has real power and historically-based factionalism has some way to bind on outcomes. It turns out that until the 1980s, Indian reservations had very little local power and were largely run as government offices. Legal changes mean that local power over the economy, including the courts in commercial disputes, is now quite strong, and anecdotal evidence suggests lots of factionalism which is often based on longstanding intertribal divisions. Dippel also shows that newspaper mentions of conflict and corruption at the reservation level are correlated with forced coexistence.

How should we interpret these results? Since moving to Canada, I’ve quickly learned that Canadians generally do not subscribe to the melting pot theory; largely because of the “forced coexistence” of francophone and anglophone populations – including two completely separate legal traditions! – more recent immigrants are given great latitude to maintain their pre-immigration culture. This heterogeneous culture means that there are a lot of actively implemented norms and policies to help reduce cultural division on issues that matter to the success of the country. You might think of the problems on reservations and in Nunn’s post-slavery states as a problem of too little effort to deal with factionalism rather than the existence of the factionalism itself.

Final working paper, forthcoming in Econometrica. No RePEc IDEAS version. Related to post-colonial divisions, I also very much enjoyed Mobilizing the Masses for Genocide by Thorsten Rogall, a job market candidate from IIES. When civilians slaughter other civilians, is it merely a “reflection of ancient ethnic hatred” or is it actively guided by authority? In Rwanda, Rogall finds that almost all of the killing was caused directly or indirectly by the 50,000-strong centralized armed groups who fanned out across villages. In villages that were easier to reach (because the roads were not terribly washed out that year), more armed militiamen were able to arrive, and the more of them that arrived, the more deaths resulted. This in-person provocation appears much more important than the radio propaganda which Yanagizawa-Drott discusses in his recent QJE; one implication is that post-WW2 restrictions on free speech in Europe related to Nazism may be completely misdiagnosing the problem. Three things I especially liked about Rogall’s paper: the choice of identification strategy is guided by a precise policy question which can be answered along the local margin identified (could a foreign force stopping these centralized actors a la Romeo Dallaire have prevented the genocide?); a theoretical model allows much more in-depth interpretation of certain coefficients (for instance, he can show that most villages do not appear to have been made up of active resisters); and he discusses external cases like the Lithuanian killings of Jews during World War II, where a similar mechanism appears to have been at play. I’ll have many more posts on cool job market papers coming shortly!

“The Rents from Sugar and Coercive Institutions: Removing the Sugar Coating,” C. Dippel, A. Greif & D. Trefler (2014)

Today, I’ve got two posts about some new work by Christian Dippel, an economic historian at UCLA Anderson who is doing some very interesting theoretically-informed history; no surprise to see Greif and Trefler as coauthors on this paper, as they are both prominent proponents of this analytical style.

The authors consider the following puzzle: sugar prices absolutely collapse during the mid and late 1800s, largely because of the rise of beet sugar. And yet, wages in the sugar-dominant British colonies do not appear to have fallen. This is odd, since all of our main theories of trade suggest that when an export price falls, the prices of factors used to produce that export also fall (this is less obvious than just marginal product falling, but still true).

The economics seem straightforward enough, so what explains the empirical result? Well, the period in question is right after the end of slavery in the British Empire. There were lots of ways in which the politically powerful could use legal or extralegal means to keep wages from rising to marginal product. Suresh Naidu, a favorite of this blog, has a number of papers on labor coercion everywhere from the UK in the era of Master and Servant Law, to the US South post-reconstruction, to the Middle East today; actually, I understand he is writing a book on the subject which, if there is any justice, has a good shot at being the next Pikettyesque mainstream hit. Dippel et al quote a British writer in the 1850s on the Caribbean colonies: “we have had a mass of colonial legislation, all dictated by the most short-sighted but intense and disgraceful selfishness, endeavouring to restrict free labour by interfering with wages, by unjust taxation, by unjust restrictions, by oppressive and unequal laws respecting contracts, by the denial of security of [land] tenure, and by impeding the sale of land.” In particular, wages rose rapidly right after slavery ended in 1838, but those gains were clawed back by the end of the 1840s due to “tenancy-at-will laws” (which let employers seize some types of property if workers left), trespass and land use laws to restrict freeholding on abandoned estates and Crown land, and emigration restrictions.

What does labor coercion have to do with wages staying high as sugar prices collapse? The authors write a nice general equilibrium model. Englishmen choose whether to move to the colonies (in which case they get some decent land) or to stay in England at the outside wage. Workers in the Caribbean can either take a wage working sugar which depends on bargaining power, or go work marginal freehold land. Labor coercion rules limit the ability of those workers to work some land, so the outside option of leaving the sugar plantation is worse the more coercive institutions are. Governments maximize a weighted combination of Englishmen’s and local wages, choosing the coerciveness of institutions; the weight on Englishmen’s wages is higher the more important sugar exports and their enormous rents are to the local economy. In partial equilibrium, then, if the price of sugar falls exogenously, the wages of workers on sugar plantations fall (as their marginal product goes down), the number of locals willing to work sugar falls, and hence the number of Englishmen willing to stay falls (as their profit goes down). With fewer plantations, sugar rents become less important, labor coercion falls, and more marginal land opens up for freeholders, which causes even more workers to leave sugar plantations and improves wages for those workers. However, if sugar is very important, the government places a lot of weight on planter income in the social welfare function, and hence responds to a fall in sugar prices by increasing labor coercion, lowering the outside option of workers and keeping them on the sugar plantations, where they earn lower wages than before for the usual economic reasons.
That is, if sugar is really important, coercive institutions are retained, the economic structure is largely unchanged in response to a fall in world sugar prices, and hence wages fall. If sugar is only of marginal importance, a fall in sugar prices leads the politically powerful to leave, weakening the planter class, causing coercive labor institutions to decline, and allowing workers to reallocate until wages approach marginal product. Since the marginal product of options other than sugar may exceed the wage paid to sugar workers, this reallocation caused by the decline in sugar prices can actually cause wages in the colony to increase.
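A deliberately stylized sketch (my own parameterization, not the authors' full model) captures the wage logic: coercion scales down the freehold outside option, so whether average wages collapse with the sugar price depends on whether coercion is retained.

```python
import numpy as np

# Toy parameterization (mine, not the authors'): workers have heterogeneous
# freehold productivity m ~ U(0,1); coercion c in [0,1] degrades the outside
# option to (1-c)*m; sugar work pays a bargained share b of the price p.

def equilibrium(p, c, b=0.5, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    m = rng.uniform(0, 1, n)              # freehold productivity draws
    outside = (1 - c) * m                 # coercion-degraded outside option
    sugar_wage = b * p
    in_sugar = sugar_wage >= outside      # workers take the better option
    return np.where(in_sugar, sugar_wage, outside).mean()

w_hi = equilibrium(p=1.2, c=0.8)    # high price, coercion in place
w_keep = equilibrium(p=0.6, c=0.8)  # price collapse, sugar dominant: coercion kept
w_drop = equilibrium(p=0.6, c=0.0)  # price collapse, sugar marginal: coercion gone

print(f"wage, high price + coercion:        {w_hi:.3f}")    # 0.600
print(f"wage after collapse, coercion kept: {w_keep:.3f}")  # 0.300: usual trade logic
print(f"wage after collapse, coercion gone: {w_drop:.3f}")  # ~0.55: reallocation helps
```

The numbers are arbitrary; the point is only the ordering of the three wages, which mirrors the comparative statics in the paragraph above.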

The British, being British, kept very detailed records of things like incarceration rates, wages, crop exports, and the like, and the authors find a good deal of empirical evidence for the mechanism just described. To assuage worries about the endogeneity of planter power, they even get a subject expert to construct a measure of geographic suitability for sugar in each of 14 British Caribbean colonies, and proxy for planter power with the suitability of marginal land for sugar production. Interesting work all around.

What should we take from this? That legal and extralegal means can be used to keep factor rents from approaching their perfect competition outcome: well, that is something essentially every classical economist from Smith to Marx has described. The interesting work here is the endogeneity of factor coercion. There is still some debate about how much we actually know about whether these endogenous institutions (or, even more so, the persistence of institutions) have first-order economic effects; see a recent series of posts by Dietz Vollrath for a skeptical view. I find this paper by Dippel et al, as well as recent work by Naidu and Hornbeck, to be among the cleanest examples of how exogenous shocks affect institutions, and how those institutions then affect economic outcomes of great importance.

December 2014 working paper (no RePEc IDEAS version)

A Note on Coauthors on the 2014-2015 Job Market

It is high stress time in the world of economics as we reach the heart of our very centralized job market. If you happen to be on a hiring committee this year, or are just interested in great new work by a group of young economists, let me hype five folks who are either my coauthors or else people I’ve spent an incredible amount of time discussing research with during grad school.

First, my coauthors. Jorge Lemus is an applied theorist and IO economist who works primarily on innovation-related topics. We have worked together for a few years on a paper about the economic theory of research lines (new draft coming this week!); I know firsthand how capable Jorge is in writing and solving interesting models. In addition to our work together, Jorge has two papers with Emil Temnyalov on patent trolls. The first paper investigates why firms might sell patents to “privateers” who go on to sue the original patentee’s rivals: because the privateer cannot be countersued, there is no “IP truce”, hence the patent is effectively stronger. This potential welfare benefit is contrasted with the fact that the privateer both lowers industry profits and reduces the value of developing a defensive patent portfolio to countersue with in the first place. The second paper studies more traditional patent trolls in their entry-deterrence versus monetization-of-ideas role. Unlike many theorists, though, he can also do interesting empirical work, such as his paper on pricing dynamics with Fernando Luco at Texas A&M.

My other coauthor on the market this year is Yasin Ozcan. We have two papers together on open access mandates. Yasin is my go-to guy when it comes to computationally intense empirical work. For our work together, we needed to merge hundreds of gigabytes of raw text patent applications with an enormous sample of academic medical research. He wrote code that efficiently scraped and matched everything with only a couple weeks of runtime on the server; I think the project would have taken me infinite time on my own. Yasin’s job market paper uses a ridiculous dataset matching M&A activity among all firms, including many which are not public, with the patent database. Essentially, when innovative firms acquire other innovative firms, who are they acquiring? Yasin shows that firms who do high quality research, measured in a variety of ways, tend to acquire firms that also do high quality research: there is assortative matching in the open innovation model. There are some interesting implications here for the boundary of the firm in entrepreneurial firms, as well as many more interesting questions to explore using this data.

Aside from my coauthors, there are three other economists who are, by dint of office placement many years ago, guys I frequently discuss new research with. Weifeng Zhong, a political economist, studies why it is sometimes autocracies that are the most business friendly (think of Dubai, China, or even England after Henry VIII). Essentially, nondemocratic states can perform lump-sum seizures (think China and rural land), which are relatively non-distortionary, and can use that revenue to pay politicians not to distort the economy in other ways. Democratic states prohibit such seizures, hence the politicians have little ability to earn money by stealing from the government coffers, hence the politicians do not need to be compensated heavily and therefore capital taxation can be low. In the middle, the politicians need to be paid off, but they can only be paid off using distortionary capital taxation. This summary, of course, is much less detailed than the model in the actual paper, which you ought to read in detail!
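The inverse-U in distortionary taxation can be illustrated in a few lines of arithmetic; the functional forms and numbers here are entirely my own, chosen only to mimic the summary above, and are not from Weifeng's paper.

```python
import numpy as np

# Toy illustration (my numbers): regimes range from pure autocracy (d=0) to
# full democracy (d=1). The payoff politicians must receive falls with
# democratization, while the ability to fund it through non-distortionary
# lump-sum seizures falls even faster.
d = np.linspace(0, 1, 101)
payoff_owed = 1 - d                         # democracies need to pay politicians less
seizures = np.maximum(0, 1 - 2 * d)         # only autocracies can seize lump-sum
capital_tax = np.maximum(0, payoff_owed - seizures)  # distortionary residual funding

print(f"distortion peaks at d = {d[capital_tax.argmax()]:.2f}")  # intermediate regimes
```

Both endpoints raise zero distortionary revenue (the autocrat seizes, the democracy owes little), so the burden of distortionary taxation is heaviest in the middle.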

Luciano Pomatto, a “high theorist”, doesn’t really need mentioning here: if you are in the market for a theorist, you must already know him. In addition to already having published in the AER and the Annals of Stats, even notes which Luciano wrote in his spare time are being cited in handbook chapters and annual reviews. His work covers everything from the epistemics of Bayesian games to the link between testability and Blackwell merging to the link between Hume and Popper’s scientific methods when scientists can act strategically to the nature of social welfare functions when inequality itself is a factor to a really cool paper linking cooperative and noncooperative games by applying forward induction reasoning to the concept of pairwise stability under incomplete information. The running joke in the office was that equilibrium refinements are a bit too applied of a topic to be discussed, but what’s amazing is that even though all of these papers are right at the technical cutting edge, the implications are essentially all understandable by the least theoretically-inclined economists out there: no math just for math’s sake here.

Finally, Emil Temnyalov, a coauthor with Jorge on the patent troll papers, has written a very nice paper on price discrimination with frequent flyer programs. Frequent flyer programs are revenue streams for airlines; they earn huge amounts of money “selling miles” to consumers via credit card promotions and the like. Might there then be a reason for the programs aside from loyalty? Emil shows that frequent flyer programs are useful as a tool for dynamic price discrimination when a good is perishable and demand is stochastic. Rare is it at this point to find a really nice application of dynamic mechanism theory. I confess that I am a frequent flyer nerd and have discussed arcane details of these programs with Emil many, many times.

All of these guys are also great company for the apocryphal beer (I can confirm this via direct experience!). In addition to the officemates and coauthors, let me mention a few other friends who I think do interesting work. It’s worth looking once more at Ludovico Zaraga and Anthony Wray, two historians who have spent many days in dusty archives (the mark of a real historian!) and who have benefited from attending Joel Mokyr University for the last few years; Mikhail Safranov, a theorist who can solve essentially any problem you throw his way; Chris Lau, my old basketball teammate who knows more about the economics of for-profit education than essentially anyone (and who has looked at these questions “properly”, by which I mean in the context of a structural choice model!); Ofer Cohen, who has developed a very interesting behavioral model of “mental accounting” with nice explanatory power for developing country household behavior; Shruti Sinha, our econometrics star who has a great technical paper on nonparametric identification in matching models; Andrew Butters, a very competent IO and energy economist who has a really important paper about how stochastic demand can muck up previous understanding of persistent productivity differences across firms; Bridget Hoffman, fresh back from running a study in India of who exactly is harmed or helped when nonmonetary rationing is used to distribute development aid; Esteban Petruzello, a well-trained IO economist who uses some cutting-edge demand system techniques to investigate the effectiveness of anti-smoking campaigns; and Juan David Prada and Matteo Li Bergolis, two macroeconomists and really nice guys who unfortunately work in an area where my background is simply too limited to offer any useful comments. Hopefully your university sees something interesting here, as I’m sure it will!

“Minimal Model Explanations,” R.W. Batterman & C.C. Rice (2014)

I unfortunately was overseas and wasn’t able to attend the recent Stanford conference on Causality in the Social Sciences; a friend organized the event and was able to put together a really incredible set of speakers: Nancy Cartwright, Chuck Manski, Joshua Angrist, Garth Saloner and many others. Coincidentally, a recent issue of the journal Philosophy of Science had an interesting article quite relevant to economists interested in methodology: how is it that we learn anything about the world when we use a model that is based on false assumptions?

You might think of there being five classes which make up nearly every paper published in the best economics journals. First are pure theoretical exercises, or “tool building”, such as investigations of the properties of equilibria or the development of a new econometric technique. Second are abstract models which are meant to speak to an applied problem. Third are empirical papers whose primary quantities of interest are the parameters of an economic model (broadly, “structural papers”, although this isn’t quite the historic use of the term). Fourth are empirical papers whose primary quantities of interest are causal treatment effects (broadly, “reduced form papers”, although again this is not the historic meaning of that term). Fifth are descriptive work or historical summary. Lab and field experiments, and old-fashioned correlation analysis, all fit into that framework fairly naturally as well. It is the second and third classes which seem very strange to many non-economists. We write a model which is deliberately abstract and which is based on counterfactual assumptions about human or firm behavior, but nonetheless we feel that these types of models are “useful” or “explanatory” in some sense. Why?

Let’s say that in the actual world, conditions A imply outcome B via implication C (perhaps causal, perhaps as part of a simultaneous equilibrium, or whatever). The old Friedman 1953 idea is that a good model predicts B well across all questions with which we are concerned, and the unreality of the assumptions (or implicitly of the logical process C) are unimportant. Earlier literature in the philosophy of science has suggested that “minimal models” explain because A’, a subset of A, are sufficient to drive B via C; that is, the abstraction merely strips away any assumptions that are not what the philosopher Weisberg calls “explanatorily privileged causal factors.” Pincock, another philosopher, suggests that models track causes, yes, but also isolate factors and connect phenomena via mathematical similarity. That is, the model focuses on causes A’, subset of A, and on implications C’, subset of C, which are of special interest because they help us see how the particular situation we are analyzing is similar to ones we have analyzed before.

Batterman and Rice argue that these reasons are not why minimal models “work”. For instance, if we are to say that a model explains because it abstracts only to the relevant causal factors, the question is how we know what those factors are in advance of examining them. Consider Fisher’s sex ratio model: why do we so frequently see 1:1 sex ratios in nature? He argues that there is a fitness advantage for those whose offspring tend toward the less common sex, since they find it easier to procreate. In the model, parents choose the sex of their offspring, reproduction is asexual (it does not involve matching), no genetic recombination occurs, there are no random changes to genes, etc: many of the assumptions are completely contrary to reality. Why, then, do we think the model explains? It explains because there is a story about why the omitted factors are irrelevant to the behavior being explained. That is, in the model assumptions D generate E via causal explanation C, and there is a story about why D->E via C and A->B via C operate in similar ways. Instead of simply assuming that certain factors are “explanatorily privileged”, we show that the model’s factors affect outcomes in similar ways to how more complicated real world objects operate.
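Fisher's logic is easy to see in a deliberately minimal simulation (my own sketch, which inherits the model's counterfactual assumptions: heritable sex-ratio strategies, no genetics, nonoverlapping generations). Strategies skewed toward the rarer sex earn more grandchildren, pushing the population toward 1:1.

```python
import numpy as np

# Each strategy q is a probability of producing a son; a son's reproductive
# success is proportional to females per male, since every offspring has
# exactly one mother and one father.
q = np.linspace(0.05, 0.95, 19)      # grid of heritable strategies
w = (1 - q) / (1 - q).sum()          # start skewed toward daughter-producers

for _ in range(200):
    s = w @ q                        # population share of sons
    per_male = (1 - s) / s           # matings per male: females / males
    fitness = q * per_male + (1 - q) # expected grandchildren per strategy
    w = w * fitness
    w /= w.sum()                     # replicator update

print(f"equilibrium share of sons: {w @ q:.3f}")  # converges to 0.500
```

Whichever sex is rarer earns higher fitness for its producers, so any deviation from 1:1 is self-correcting, exactly the "story about why the omitted factors are irrelevant" that Batterman and Rice emphasize.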

Interesting, but I feel that this still isn’t what’s going on in economics. Itzhak Gilboa, the theorist, in a review of Mary Morgan’s delightful book The World in the Model, writes that “being an economic theorist, I have been conditioned to prefer elegance over accuracy, insight over detail.” I take that to mean that what economic theorists care about are explanatory factors or implications C’, subset of C. That is, the deduction is the theory. Think of Arrow’s possibility theorem. There is nothing “testable” about it; certainly the theory does not make any claim about real world outcomes. It merely shows the impossibility of preference aggregation satisfying certain axioms, full stop. How is this “useful”? Well, the usefulness of this type of abstract model depends entirely on the user. Some readers may find such insight trivial, or uninteresting, or whatever, whereas others may find such an exploration of theoretical space helps clarify their thinking about some real world phenomenon. The whole question of “Why do minimal models explain/work/predict?” is less interesting to me than the question “Why do minimal models prove useful for a given reader?”

The closest philosophical position to this idea is some form of Peirce-style pragmatism – he actually uses a minimal model himself in exactly this way in his Note on the Economy of the Theory of Research! I also find it useful to think about the usefulness of abstract models via Economic Models as Analogies, an idea pushed by Gilboa and three other well-known theorists. Essentially, a model is a case fully examined. Examining a number of cases in the theoretical world, and thinking formally through those cases, can prove useful when critiquing new policy ideas or historical explanations about the world. The theory is not a rule – and how could it be given the abstractness of the model – but an element in your mental toolkit. In physics, for example, if your engineer proposes spending money building a machine that implies perpetual motion, you have models of the physical world in your toolkit which, while not being about exactly that machine, are useful when analyzing how such a machine would or would not work. Likewise, if Russia wants to think about how it should respond to a “sudden stop” in investment and a currency outflow, the logical consequences of any real world policy are so complex that it is useful to have thought through the equilibrium implications of policies within the context of toy models, even if such models are only qualitatively useful or only useful in certain cases. When students complain, “but the assumptions are so unrealistic” or “but the model can’t predict anything”, you ought to respond that the model can predict perfectly within the context of the model, and it is your job as the student, as the reader, to consider how understanding the mechanisms in the model helps you think more clearly about related problems in the real world.

Final version in Philosophy of Science, which is gated, I’m afraid; I couldn’t find an ungated draft. Of related interest in the philosophy journals recently is Kevin Davey’s Can Good Science Be Logically Inconsistent? in Synthese. Note that economists use logically inconsistent reasoning all the time, in that we use a model with assumption A in context B, and a model with assumption Not-A in context C. If “accepting a model” means thinking of the model as “justified belief”, then Davey provides very good reasons to think that science cannot be logically inconsistent. If, however, “accepting a model” means “finding it useful as a case” or “finding the deduction in the model of inherent interest”, then of course logically inconsistent models can still prove useful. So here’s to inconsistent economics!

“What Do Small Businesses Do?,” E. Hurst & B. Pugsley (2011)

There are a huge number of policies devoted toward increasing the number of small businesses. The assumption, it seems, is that small businesses are generating more spillovers than large businesses, in terms of innovation, increases in the labor match rate, or indirect welfare benefits from creative destruction. Indeed, politicians like to think of these “Joe the Plumber” types as heroic job creators, although I’m not sure what that could possibly mean since the long run level of unemployment is constant and unrelated to the amount of entrepreneurial churn in whatever economic model or empirical data you wish to investigate.

These policies raise the question: are new firms actually quick-growing, innovative concerns, or are they mainly small restaurants, doctor’s offices and convenience stores? The question is important since it is tough to see why the tax code should privilege, say, an independent convenience store over a new corporate-run branch – if anything, the independent is less innovative and less likely to grow in the future. Erik Hurst and Ben Pugsley do a nice job of generating stylized facts on these issues using a handful of recent surveys of firm outcomes and the stated goals of the owners of new firms.

The evidence is pretty overwhelming that most new firms are not the heroic, job-creating innovator. Among firms with fewer than 20 employees, most are concentrated in a very small number of industries like construction, retail, restaurants, etc, and this concentration is much more evident than among larger firms. Most small firms never hire more than a couple employees, and this is true even among firms that survive five or ten years. Among new firms, only 2.7% file for a patent within four years, and only 6-8% develop any proprietary product or technique at all.

It is not only in outcomes, but in expectations as well where it seems small businesses are not rapidly-growing innovative firms. At their origin, 75% of small business owners report no desire to grow their business, nonpecuniary reasons (such as “to be my own boss”) are the most common reason given to start a business, and only 10% plan to develop any new product or process. That is, most small businesses are like the corner doctor’s office or small plumbing shop. Starting a business for nonpecuniary reasons is also correlated with not wanting to grow, not wanting to innovate, and not actually doing so. They are small and non-innovative because they don’t want to be big, not because they fail at trying to become big. It’s also worth mentioning that hardly any small business owners in the U.S. sample report starting a business because they couldn’t find a job; the opposite is true in developing countries.

These facts make it really hard to justify a lot of policy. For instance, consider subsidies that only accrue to businesses below a certain size. This essentially raises the de facto marginal tax rate on growing firms (since the subsidy disappears once the firm grows above a certain size), even though rapidly growing small businesses are exactly the type we presumably are trying to subsidize. If liquidity constraints or other factors limiting firm entry were important, then the subsidies might still be justified, but it seems from Hurst and Pugsley’s survey that all these policies will do is increase entry among business owners who want to be their own boss and who never plan to hire or innovate in any economically important way. A lot more work here, especially on the structural/theoretical side, is needed to develop better entrepreneurial policies (I have a few thoughts myself, so watch this space).
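The implicit tax created by a size-capped subsidy is simple arithmetic; the numbers below are hypothetical, not from Hurst and Pugsley.

```python
# Hypothetical numbers: a $5,000 subsidy available only to firms with at most
# 19 employees, and a 19-employee firm weighing a 20th hire whose marginal
# product exceeds the wage by $3,000 a year.

subsidy = 5_000            # forfeited entirely once headcount reaches 20
surplus_per_hire = 3_000   # annual value of the marginal hire net of wages

gain_from_hiring = surplus_per_hire - subsidy
print(f"net gain from the 20th hire: {gain_from_hiring:+,} dollars")  # negative: stay small

# The notch acts like a tax of more than 100% on the marginal hire's surplus.
implicit_tax_rate = subsidy / surplus_per_hire
print(f"implicit tax as a share of the hire's surplus: {implicit_tax_rate:.0%}")
```

A profitable expansion is forgone purely because of the notch, which is the de facto marginal tax rate the paragraph above describes.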

Final Working Paper (RePEc IDEAS) which was eventually published in the Brookings series. Also see Haltiwanger et al’s paper showing that it’s not small firms but young firms which are engines of growth. I posted on a similar topic a few weeks ago, which may be of interest.

