
Statistics for Strategic Scientists – A Clark for Isaiah Andrews

Today’s 2021 Clark Medal goes to the Harvard econometrician Isaiah Andrews, and no surprise. Few young econometricians have produced such a volume of work so quickly. And while Andrews has a number of papers on traditional econometric topics – how to do high-powered inference on non-linear models, for instance – I want to focus here on his work on what you might call “strategic statistics”.

To understand what we mean by that term, we need to first detour a bit and understand what econometrics is anyway. The great Joseph Schumpeter, in a beautiful short introduction to the new Econometric Society in 1933, argues that economics is not only the most mathematical of the social or moral sciences, but of all sciences. How can that be? Concepts in physics like mass or velocity are surely quantitative, but they must be measured before we can put a number on them. Concepts in economics, by contrast, are fundamentally quantitative: our basic building blocks are prices and quantities. From these numerical concepts comes the natural desire to investigate the relationships between them: estimates of demand curves go back at least to Gregory King and Charles D’Avenant in the 17th century! The point is not simply that economics is amenable to theoretical investigation. Rather, from King forward through von Thünen, Cournot, Walras, Fisher and many more, economics is a science where numerical data from the practice of markets is combined with theory.

Econometrics, then, is not simply statistics, a problem of computing standard errors and developing new estimators. Rather, as historians of thought like Mary Morgan and Roy Epstein have pointed out, econometrics is a distinctive subfield of statistics because of its focus on identification in models where different variables are simultaneously determined. Consider estimating how a change in the price of iron will affect sales. You observe a number of points of data from prior years: at 20 dollars per ton, 40 tons were sold; at 30 dollars per ton, 45 tons were sold. Perhaps you have similar data from different countries with disconnected markets. Even in the 1800s, the tool of linear regression with controls was known. Run that regression on the numbers above and you find that quantity rises with price: the demand curve slopes up! The naïve analyst goes to Mr. Carnegie and suggests, on the basis of past data, that if he increases the price of iron, he will sell even more!

The problem with running this regression, though, should be clear if one starts with theory. The price of iron depends on the conjunction of supply and demand, on Marshall’s famous “scissors”. Our observational data cannot tell us whether the price-quantity pairs we observe changed because demand shifted or because supply did. This conflation is common in public reasoning: we observe that house prices are rising very quickly at the same time as many new condos are being built in the neighborhood, and conclude that the latter is causing the former. In fact, both the price increase and the new construction can occur together if demand for living in the neighborhood increases and the marginal cost of construction is increasing. Supply and demand is not the only system of simultaneous stochastic equations in economics, of course: anything where strategic behavior determines an equilibrium has the same structure.

This causal identification problem goes back at least to what Trygve Haavelmo pointed out in 1943. The past relationship between prices and quantities sold is not informative as to what will happen if I choose to raise prices. Likewise, though rising prices and new construction correlate, if we choose to increase construction in an area, prices will fall. Though there is an empirical link between rising prices and a strong economy, we cannot generate a strong economy in the long run just by inflating the currency. Econometrics, then, is largely concerned with the particular statistical problem of identifying certain parameters that explain what will happen if we change one part of the system through policy or when we place people with different known preferences, costs, and so on in the same strategic situation.

How we can do that is an oft-told story, but roughly we can identify a parameter in a simultaneously determined model with statistical assumptions or with structural assumptions. In the context of supply and demand, if we randomly vary the price a certain good is sold at across a bunch of markets, that experiment identifies the price elasticity of demand, holding demand constant (but it tells us nothing about what happens to price and quantity if consumer demand changes!). If we use “demand” or “supply” shifters – random factors that affect price only via their effect on consumer demand or firm costs – these “instrumental variables” allow us to separate the supply and demand curves in past observational data. If we assume more structure, such as that there is a set of firms who price according to Cournot, then we can back out firm costs and look at counterfactuals like “what if a merger reduced the number of firms in this market by one”. The important thing to realize is that no matter where an empirical technique lies on the spectrum from purely statistical to heavily theory-driven, there are underlying assumptions being made by the econometrician to identify the parameters of interest. The exclusion restriction in an IV – that the shifter in question only affects price via the supply or demand side – is as much an untestable assumption as the argument that firms are profit-maximizing Cournot players.
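
To see the instrumental-variables logic in miniature, here is a small simulation sketch (all functional forms and parameter values are my own illustrative choices, not anything from Andrews’ work): a cost shifter that enters only the supply curve traces out the demand curve, while a naive regression of quantity on price conflates the two.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear supply and demand system.
a, b = 100.0, 2.0          # demand: q = a - b*p + u_d  (true demand slope is -b)
c, e, g = 10.0, 1.0, 5.0   # supply: q = c + e*p + g*z + u_s

z = rng.normal(size=n)              # supply shifter (e.g., an input cost), excluded from demand
u_d = rng.normal(scale=5, size=n)   # demand shocks
u_s = rng.normal(scale=5, size=n)   # supply shocks

# Market equilibrium: set demand equal to supply and solve for price and quantity.
p = (a - c - g * z + u_d - u_s) / (b + e)
q = a - b * p + u_d

# Naive OLS slope of quantity on price mixes shifts of both curves.
ols_slope = np.cov(p, q)[0, 1] / np.var(p)

# IV (Wald) estimator using the supply shifter recovers the demand slope.
iv_slope = np.cov(z, q)[0, 1] / np.cov(z, p)[0, 1]

print(f"OLS slope: {ols_slope:+.2f}  (conflates supply and demand)")
print(f"IV slope:  {iv_slope:+.2f}  (close to the true demand slope {-b:+.2f})")
```

Note that the exclusion restriction is baked into the simulation: z appears in the supply equation only. Nothing in the generated data would let you test that choice, which is exactly the point about untestable assumptions.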

This brings us back to Isaiah Andrews. How do scientists communicate their results to the public, particularly when different impossible-to-avoid assumptions give different results? How can we ensure the powerful statistical tools we use for internal validity – meaning causally-relevant insight in the particular setting from which the data is drawn – do not mislead about external validity, the potential for applying those estimates when participants have scope for self-selection or researchers select convenient non-representative times or locations for their study? When our estimation is driven by the assumptions of a model, what does it mean when we say our model “fits the data” or “explains key variation in the data”? These questions are interesting essentially because of the degrees of freedom the researcher holds in moving from a collection of observations to a “result”. Differences of opinion in economics are largely not about the precision of measurement, à la high energy physics, but about the particular assumptions used by the analyst to move from data to estimated parameters of interest. Taking this seriously is what I mean above by “strategic statistics”: the fact that identification in economics requires choices by the analyst means we need to take the implications of those choices seriously. Andrews’ work has touched on each of the questions above in highly creative ways. I should also note that, by the standards of high-rigor econometrics, his papers tend to be quite readable and also quite concise.

Let’s begin with scientific communication. As we are all aware from the disastrous Covid-related public science of the past year (see Zeynep Tufekci’s writing for countless examples), there is often a tension between reporting results truthfully and the decisions taken on the basis of those results. Andrews and Shapiro model this as a Wald-style game where scientists collect data and provide an estimate of some parameter, then a decision is made following that report. The estimate is of course imprecise: science involves uncertainty. The critical idea is that the “communications model” – where scientists report an estimate and different agents take actions based on that report – differs from the “decision model” where the scientist selects the actions (or, alternatively, the government chooses a common policy for all end-users on the basis of scientist recommendations). Optimal communication depends heavily on which setting you are in. Imagine that a costly drug is weakly known to improve health, but the exact benefit is unknown. When they can choose, users take the drug if the benefit exceeds their personal cost of taking it. In an RCT, because of sampling error, the estimated benefit will sometimes come out negative even though the drug is beneficial. In a communications model, readers adjust for sampling error, so you just report truthfully: there is still useful information in that “negative” estimate, since the more negative the point estimate, the more likely the true effect is close to zero. No reason to hide that from readers! In a “decision model”, reporting the negative estimate would essentially force a tax on the drug just because of sampling error, even though you know this is harmful, so optimally you censor the reporting and just give “no effect” in your scientific communications. There is a really interesting link between decision theory and econometrics going back to Wald’s classic paper. The tension between open communication of results to users with different preferences, and recommended decisions to those same users, is well worth further investigation.
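
A toy Bayesian calculation – much simpler than the Andrews-Shapiro model, and entirely my own illustration – shows why a negative point estimate is still informative in the communications setting: a reader who knows the standard error shrinks the estimate toward the prior, and more negative estimates simply push the posterior for the benefit closer to zero.

```python
# Toy normal-normal shrinkage: a reader updates beliefs about a drug's benefit
# after seeing a noisy RCT estimate. All numbers are hypothetical.
prior_mean, prior_sd = 1.0, 1.0   # benefit weakly believed to be positive
se = 1.5                          # standard error of the reported estimate

def posterior_mean(b_hat):
    """Precision-weighted average of the prior mean and the reported estimate."""
    w = prior_sd**2 / (prior_sd**2 + se**2)
    return w * b_hat + (1 - w) * prior_mean

for b_hat in [2.0, 0.5, 0.0, -0.5, -2.0]:
    print(f"reported estimate {b_hat:+.1f} -> reader's posterior mean {posterior_mean(b_hat):+.2f}")

# Even the negative estimates move the posterior toward zero in an orderly way,
# which matters for users whose personal cost of taking the drug is small.
# Censoring them as "no effect" throws that information away.
```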

How to communicate results also hinges on internal versus external validity. A study done in Tanzania on schooling may find that paying parents to send kids to school increases attendance by 16%. This study may be perfectly randomized within the region. What would the effect be of paying parents in Norway? If the only differences across families are observables within the support of the data in the experiment, we can simply reweight results. This seems implausible, though – there are many unobservable differences between Oslo and Dodoma. In principle, though, if all those unobservables were known, we would again just have a reweighting problem. Emily Oster and Andrews show that bounds on the externally valid effect of a policy can be constructed if you are willing to make an assumption about how informative unobservables are, relative to observables, about the covariance between selection into treatment and treatment effects (the idea here is not far off from the well-known Oster bounds for omitted variable bias). For instance, in the Bloom et al. work-from-home-in-China paper, call center workers who choose to work from home see a nontrivial increase in productivity. Perhaps they select into working from home because they know they can do so efficiently, however. Using the Oster-Andrews bound, to get a negative effect of work-from-home for this call center, unobservable differences across workers would have to be 14.7 times more informative about treatment effect heterogeneity than observables.
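
The observables-only version of this logic is just reweighting, which a few lines make concrete (the subgroups, effects, and population shares below are all made up for illustration; the Oster-Andrews bounds are about what happens when unobservables also drive effect heterogeneity):

```python
import numpy as np
import pandas as pd

# Hypothetical subgroup-level treatment effects from the trial population.
trial = pd.DataFrame({
    "subgroup":    ["low income", "middle income", "high income"],
    "effect":      [0.22, 0.15, 0.05],   # estimated effect on attendance
    "trial_share": [0.60, 0.30, 0.10],   # subgroup shares in the trial
})

# Subgroup shares in the target population we want to extrapolate to.
target_share = {"low income": 0.10, "middle income": 0.30, "high income": 0.60}

ate_trial = (trial["effect"] * trial["trial_share"]).sum()
ate_target = (trial["effect"] * trial["subgroup"].map(target_share)).sum()

print(f"ATE in trial population:       {ate_trial:.3f}")
print(f"Reweighted ATE in target pop.: {ate_target:.3f}")
# Reweighting is only valid if these subgroups capture everything that drives
# effect heterogeneity; the bounds in the text quantify how far off you can be
# when unobserved differences matter too.
```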

In addition to unobservables making our estimates hard to apply outside very specific contexts, structural assumptions can also “drive” results. Structural models often use a complex set of assumptions to identify a model, where “identify” means, in the traditional sense, that distinct values of the parameters of interest imply distinct distributions of the observable data. But which assumptions are critical? What changes if we modify one of them? This is a very hard question to answer: as every structural economist knows, we often don’t know how to “close” the model so that it can be estimated if we change the assumptions. Many authors loosely say that “x is identified by y” when the estimated x is very sensitive to changes in y, where y might be an a priori assumption or a particular type of data. In that sense, asking “what is critical to the estimate in this structural model” is asking “why should I trust the author that y in fact identifies x”? In a paper in the JBES, Andrews and coauthors sum up this problem in a guide to practical sensitivity analysis in structural models: “A reader who accepted the full list of assumptions could walk away having learned a great deal. A reader who questioned even one of the assumptions might learn very little, as they would find it hard or impossible to predict how the conclusions might change under alternative assumptions.” Seems like a problem! However, Andrews has shown, in a 2017 QJE with Gentzkow and Shapiro, that formal sensitivity analysis of structural estimates is in fact possible.

The starting point is that some statistical techniques are transparent: if you regress wages on education, we all understand that omitting skill biases this relationship upward, and that if we know the covariance of skill and education, we have some idea of the magnitude of the bias. Andrews’ insight is to extend that transparency to any moment-based estimate. If you have some guess about how an assumption affects particular moments of the data, then a particular sensitivity matrix lets you approximate how changes in those moments affect the parameters we care about. Consider this example. In a well-known paper, DellaVigna et al. find that door-to-door donations to charity are often just based on social pressure. That is, we give a few bucks to get this person off our doorstep without being a jerk, not because we care about the charity. The model uses variation in whether you knew the solicitor was coming to your doorstep, alongside an assumption that, basically, social pressure drives small donations with a different distribution from altruistic/warm glow donations. In particular, the estimate of social pressure turns out to be quite sensitive to donations of exactly ten dollars. Using the easy-to-compute matrix in Andrews et al., you can easily answer, as a reader, questions like “how does the estimate of social pressure change if 10% of households just default to giving ten bucks because it is a single bill, regardless of social pressure vs. warm glow?” I think there will be a much bigger role for ex-post dashboard/webapp type analyses by readers in the future: why should a paper restrict attention to the particular estimates and robustness checks the authors choose? Just as open data is now often required, I wouldn’t be surprised if “open analysis” in the style of this paper becomes common as well.
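
For a GMM or minimum-distance estimator, the matrix in question can be computed in a few lines. The sketch below works through a deliberately toy problem – fitting a mean and variance to three raw moments of a normal distribution, my own example rather than anything from DellaVigna et al. or the Andrews, Gentzkow and Shapiro paper itself – but the sensitivity formula Λ = −(G′WG)⁻¹G′W is the object their paper proposes reporting.

```python
import numpy as np

def model_moments(theta):
    """First three raw moments implied by a normal model with mean mu, variance s2.
    (Shown for context; only its Jacobian enters the sensitivity formula.)"""
    mu, s2 = theta
    return np.array([mu, mu**2 + s2, mu**3 + 3 * mu * s2])

def jacobian(theta):
    """Derivatives of those moments with respect to (mu, s2)."""
    mu, s2 = theta
    return np.array([
        [1.0,                0.0],
        [2 * mu,             1.0],
        [3 * mu**2 + 3 * s2, 3 * mu],
    ])

theta_hat = np.array([1.0, 2.0])   # pretend these are the structural estimates
G = jacobian(theta_hat)            # 3 x 2 Jacobian of moments in parameters
W = np.eye(3)                      # GMM weight matrix

# Sensitivity of the estimates to each moment: Lambda = -(G'WG)^{-1} G'W
Lam = -np.linalg.solve(G.T @ W @ G, G.T @ W)

print("Sensitivity matrix (rows: parameters, columns: moments):")
print(np.round(Lam, 3))
# Entry [i, j] approximates how estimate i moves if moment j is misspecified by
# one unit -- the same way a reader could ask how the social-pressure estimate
# would move if the mass of exactly-$10 donations were shifted.
```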

A few remaining bagatelles: Andrews’ work is much broader than what has been discussed here, of course. First, in a very nice applied theory paper in the AER, Andrews and Dan Barron show how a firm like Toyota can motivate its suppliers to work hard even when output is not directly contractible. Essentially, recent high performers become “favored suppliers” who are chosen whenever the firm believes their productivity in the current period is likely quite high. Payoffs to the firm under this rule are strictly higher than just randomly choosing some supplier that is expected to be productive today, due to the need to dynamically provide incentives to avoid moral hazard. Second, in work with his dissertation advisor Anna Mikusheva, Andrews has used results from differential geometry to perform high-powered inference when the link between structural parameters and the outcome of interest is highly non-linear. Third, in work with Max Kasy, Andrews shows a much more powerful way to identify the effect of publication bias than simple comparisons of the distribution of p-values around “significance” cutoffs. Fourth, this is actually the second major prize for econometrics this year, as Silvana Tenreyro won the “European Clark”, the Yrjö Jahnsson Award, alongside Ricardo Reis. Tenreyro is well-known for the PPML estimator in her “log of gravity” paper with Santos Silva. One wonders who will be the next Nobel winner in pure econometrics, however: a prize has not gone to that subfield since Engle and Granger in 2003. I could see it going two ways: a more “traditional” prize to someone like Manski, Hausman, or Phillips, or a “modern causal inference” prize to any number of contributors to that influential branch. Finally, I realize I somehow neglected to cover the great Melissa Dell’s Clark prize last year – to be rectified soon!


The Simple Economics of Social Distancing and the Coronavirus

“Social distancing” – reducing the number of daily close contacts individuals have – is being encouraged by policymakers and epidemiologists. Why it works, and why now rather than for other diseases, is often left unstated. Economists have two important contributions here. First, game theoretic models of behavior are great for thinking through where government mandates are needed and where they aren’t. Second, economists are used to thinking through tradeoffs, such as the relative cost and benefit of shutting down schools versus the economic consequences of doing so. The most straightforward epidemiological model of infection – the SIR model dating back to the 1920s – is actually quite commonly used in economic models of innovation or information diffusion, so it is one we are often quite familiar with. Let’s walk through the simple economics of epidemic policy.

We’ll start with three assumptions. First, an infected person will infect B other people before recovering if we make no social changes and no one is immune. Second, people who have recovered from coronavirus do not get sick again (which appears to be roughly true). Third, coronavirus patients tend to be infectious before they show up in a hospital. We will relax these assumptions shortly. Finally, we will let d represent the amount of social distancing. If d=1, we are all just living our normal lives. If d=0, we are completely isolated in bubbles and no infections transmit. The cost of distancing at level d is c(d), where distancing grows ever more costly the more of it you do – for the mathematically inclined, c(1)=0, and c is decreasing and convex in d.

In the classic SIR model, people are either susceptible, infected, or recovered (or “removed” depending on the author). Let S, I, and R be the fraction of the population in each bubble at any given time t. Only group S can be infected by someone new. When an infected person encounters someone, they pass the disease along with probability dB, where d is distancing level and B is the infection rate. If an infected person interacts with another infected or recovered person, they do not get sick, or at least not more sick.

Define one unit of time as the period someone is infectious. We then can define how the proportion of people in each group change over time as

dS/dt=-dBSI
dI/dt=dBSI-I
dR/dt=I

The second equation, for instance, says the proportion of the population that is infected grows with the infection rate given distancing, dB, times the number of possible interactions between infected and susceptible people, SI, minus the number of people infected in the current period, I (remember, we define one unit of time as the period in which you are sick after being infected, so today’s sick are tomorrow’s recovered). If dI/dt>0, then the number of infected people is growing. When the infection is very young, almost everyone is in group S (hence S is close to 1) and no distancing is happening (so d=1), so the epidemic spreads if (dBS-1)I>0, and therefore when B>1. Intuitively, if 1 sick person infects on average more than 1 person, the epidemic grows. To stop the epidemic, we need to slow that transmission so that dI/dt<0. The B in this model is the "R0" you may see in the press, incidentally. With coronavirus, B is something like 2.

How can we end the epidemic, then? Two ways. First, the epidemic dies out because of “herd immunity”. This means that the number of people in bin R, those already infected, gets high enough that infected people interact with too few susceptible people to keep the disease going. With B=2 and no distancing (d=1), we would need dI/dt=(dBS-1)I=2S-1<0, or S<.5. In that case, the number of people eventually infected is half of society before the epidemic stops growing, then smaller numbers continue to get infected until the disease peters out. Claims that coronavirus will infect “70%” of society are based on this math – it will not happen because people would pursue serious distancing policies well before we reached anywhere near this point.

The alternative is distancing. I use distancing to mean any policy that reduces infectiousness – quarantine, avoiding large groups, washing your hands, etc. The math is simple. Again, let B=2. To stop coronavirus before large numbers (say, more than 10% of society) are infected requires (dBS-1)I=(2dS-1)I<0. For S roughly equal to 1, we therefore need d<1/2. That is, we need to cut the average number of infections a sick person passes on at least in half. Frequent handwashing might reduce infection by 20% or so, though with huge error bars. That alone is not enough: to stop the epidemic, we need fairly costly interventions like cancelling large events and work-from-home policies. Note that traditional influenza maybe has B=1.2, so small behavior changes like staying home when sick and less indoor interaction in the summer may be enough to stop epidemic spread. This is not true of coronavirus, as far as we know.
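
A few lines of numerical integration make the threshold visible. This is just the SIR system above stepped forward in discrete time; B=2 and the initial infection share are illustrative numbers, not estimates.

```python
# Discrete-time approximation of the SIR dynamics in the text, with distancing d.
def run_sir(B=2.0, d=1.0, I0=1e-4, T=200, dt=0.1):
    S, I, R = 1.0 - I0, I0, 0.0
    peak_I = I
    for _ in range(int(T / dt)):
        new_infections = d * B * S * I * dt   # dS/dt = -dBSI
        recoveries = I * dt                   # dR/dt = I
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
        peak_I = max(peak_I, I)
    return peak_I, R   # peak infected share, share ever infected

for d in [1.0, 0.6, 0.45]:
    peak, ever = run_sir(d=d)
    print(f"d = {d:.2f}: peak infected {peak:.1%}, ever infected {ever:.1%}")

# With B = 2, any d below 1/2 keeps dI/dt negative from the start, so the
# epidemic never takes off; d = 0.6 merely slows and shrinks it.
```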

Ok, let’s turn to the economics. We have two questions here: what will individuals do in an epidemic, and what should society compel them to do? The first question involves looking for the d(s,t,z) chosen in a symmetric subgame perfect equilibrium, where s is your state (sick or not), z is society’s state (what fraction of people are sick), and t is time. The externality here is clear: Individuals care about not being sick themselves, but less about how their behavior affects the spread of disease to others. That is, epidemic prevention is a classic negative externality problem! There are two ways to solve these problems, as Weitzman taught us: Prices or Quantities. Prices means taxing behavior that spreads disease. In the coronavirus context, it might be something like “you can go to the ballgame, conditional on paying a tax of $x which is enough to limit attendance”. Quantities means limiting that behavior directly. As you might imagine, taxation is quite difficult to implement in this context, hence quantity limits (you can’t have events with more than N people) are more common.

It turns out that solving for the equilibrium in the SIR epidemic game is not easy and generally has to be done numerically. But we can say some things. Let m be the marginal cost of distancing in the limit, normalized to the cost of being infected (m=c'(d)/C for d=0, where C is the cost of being infected). If distancing is not very efficient (m is low) or if transmission is hard (B is not much more than 1), then in equilibrium no one will distance themselves, and the epidemic will spread. People will also not distance themselves until the epidemic has already spread quite a bit – the cost of social distancing needs to be paid even though the benefit in terms of not being infected is quite low, and you can always “hide from other people” later on.

Where, then, is mandated social distancing useful? If individuals do not account for the externality of their social distancing, they will avoid some contact to prevent getting sick right away, but not enough to prevent the epidemic from continuing to spread. If the epidemic is super dangerous (a very high cost of being sick à la Ebola, or B very high), in equilibrium individuals will distance without being forced to. If the cost of being sick is low relative to the economic and social disruption of distancing, it is better even from a social planner’s point of view to just risk getting sick. We don’t attempt to prevent the common cold with anything more extreme than covering our mouths.

However, if B is not too high, and the cost of being sick is high relative to the cost of distancing but not so high that people distance on their own, it can be optimal for the government to impose social distancing. In the case of coronavirus, we need d<1/2. Since some people, like doctors, cannot reduce their contacts, the rest of us need to cut the number of close contacts we have every day by even more than half. That is a bigger reduction than the difference between a workday and a Sunday in the average person’s number of interactions!

This model can be extended in a few useful ways. Three are particularly relevant: 1) what if we know who is sick before they are contagious, 2) what if people have different costs of being sick, and 3) what if the network of contacts is more complex than “any given person is likely to run into any other”.

If we can identify who is sick before they are contagious, then quarantine can reduce d. Imagine a disease where everyone who gets sick turns blue, and they can only infect you two days later. Surely we can all see the very low cost method of preventing an epidemic – lock up the blue people! Ebola and leprosy are not far off from this case, and SARS also had the nice property that people are quite sick well before they are infectious. It seems coronavirus is quite infectious even when you only feel mildly ill, so pure testing plus quarantine is unlikely to move d – the distancing parameter – enough to sufficiently reduce the number of infections caused by each sick person. This is especially true once the number of infected is too large to trace all of their contacts and test them before they become infectious themselves.

If people have different costs of being sick, the case for government mandates is stronger. Let young people be only mildly sick, and old people much more so, even though each group is equally contagious. In equilibrium, the young will take only minor distancing precautions, and the old major ones. Since the cost of distancing is convex, it is neither efficient nor an equilibrium for the old to pursue extreme distancing while the young do relatively little. This convexity should increase the set of parameters for which government-mandated distancing is needed. As far as I am aware, there is not a good published model explicitly showing this in an SIR differential game, however (economists trapped at home this weekend – let’s work this out and get it on arXiv!).

Finally, the case of more “realistic” networks is interesting. In the real world, social contacts tend to have a “small world” property – we are tightly connected with a group of people who all know each other, not randomly connected to strangers. High clustering reduces the rate of early diffusion (see, e.g., this review) and makes quarantine more effective. For instance, if a wife is infected, the husband can be quarantined, as he is much more likely to be infected than some random person in society. “Brokerage”-type contacts which connect two highly clustered groups are also important to separate, since they are the only way that disease spreads from one group to another. This is the justification for travel restrictions during epidemics – however, once most clusters have infected people, the travel restrictions are no longer important.

“Resetting the Urban Network,” G. Michaels & F. Rauch (2017)

Cities have two important properties: they are enormously consequential for people’s economic prosperity, and they are very sticky. That stickiness is twofold: cities do not change their shape rapidly in response to changing economic or technological opportunities (consider, e.g., Hornbeck and Keniston on the positive effects of the Great Fire of Boston), and people are hesitant to leave their existing non-economic social network (Deryugina et al. show that Katrina victims, a third of whom never return to New Orleans, are materially better off as soon as three years after the hurricane, earning more and living in less expensive cities; Shoag and Carollo find that Japanese-Americans randomly placed in internment camps in poor areas during World War II see lower incomes and worse educational outcomes for their children even many years later).

A lot of recent work in urban economics suggests that the stickiness of cities is getting worse, locking in path-dependent effects with even more vigor. A tour-de-force by Ganong and Shoag documents that income convergence across cities in the US has slowed since the 1970s, that this has happened only in cities with restrictive zoning rules, and that the primary mechanism is that as land use restrictions make housing prices highly responsive to local income, working class folks no longer move from poor to rich cities because the cost of housing makes such a move undesirable. Indeed, they suggest a substantial part of growing income inequality, in line with work by Matt Rognlie and others, is due to the fact that owners of land have used political means to capitalize productivity gains into their existing, tax-advantaged asset.

Now, one part of urban stickiness over time may simply be reflecting that certain locations are very productive, that they have a large and valuable installed base of tangible and intangible assets that make their city run well, and hence we shouldn’t be surprised to see cities retain their prominence and nature over time. So today, let’s discuss a new paper by Michaels and Rauch which uses a fantastic historical case to investigate this debate: the rise and fall of the Roman Empire.

The Romans famously conquered Gaul – today’s France – under Caesar, and Britain in stages up through Hadrian (and yes, Mary Beard’s SPQR is worthwhile summer reading; the fact that she and Nassim Taleb do not get along makes it even more self-recommending!). Roman cities popped up across these regions, until the 5th century invasions wiped out Roman control. In Britain, for all practical purposes the entire economic network faded away: cities hollowed out, trade came to a stop, and imports from outside Britain and Roman coins are nearly nonexistent in the archaeological record for the next century and a half. In France, the network was not so cleanly broken, with Christian bishoprics rising in many of the old Roman towns.

Here is the amazing fact: today, 16 of France’s 20 largest cities are located on or near a Roman town, while only 2 of Britain’s 20 largest are. This difference existed even back in the Middle Ages. So who cares? Well, Britain’s cities in the middle ages are two and a half times more likely to have coastal access than France’s cities, so that in 1700, when sea trade was hugely important, 56% of urban French lived in towns with sea access while 87% of urban Brits did. This is even though, in both countries, cities with sea access grew faster and huge sums of money were put into building artificial canals. Even at a very local level, the France/Britain distinction holds: when Roman cities were within 25km of the ocean or a navigable river, they tended not to move in France, while in Britain they tended to reappear nearer to the water. The fundamental factor for the shift in both places was that developments in shipbuilding in the early middle ages made the sea much more suitable for trade and military transport than the famous Roman Roads which previously played that role.

Now the question, of course, is what drove the path dependence: why didn’t the French simply move to better locations? We know, as in Ganong and Shoag’s paper above, that in the absence of legal restrictions, people move toward more productive places. Indeed, there is a lot of hostility to the idea of path dependence more generally. Consider, for example, the case of the typewriter, which “famously” has its QWERTY layout because of an idiosyncrasy in the very early days of the typewriter. QWERTY is said to be much less efficient than alternative key layouts like Dvorak. Liebowitz and Margolis put this myth to bed: not only is QWERTY fairly efficient (you can think much faster than you can type for any reasonable key layout), but typewriting companies spent huge amounts of money on training schools and other mechanisms to get secretaries to switch toward the companies’ preferred keyboards. That is, while it can be true that what happened in the past matters, it is also true that there are many ways to coordinate people to shift to a more efficient path if a sufficiently large productivity improvement exists.

With cities, coordinating on the new productive location is harder. In France, Michaels and Rauch suggest that bishops and the church began playing the role of a provider of public goods, and that the continued provision of public goods in certain formerly-Roman cities led them to grow faster than they otherwise would have. Indeed, Roman cities in France with no bishop show a very similar pattern to Roman cities in Britain: general decline. That sunk costs and non-economic institutional persistence can lead to multiple steady states in urban geography, some of which are strictly worse, has been suggested in smaller scale studies (e.g., Redding et al RESTAT 2011 on Germany’s shift from Berlin to Frankfurt, or the historical work of Engerman and Sokoloff).

I loved this case study, and appreciate the deep dive into history that collecting data on urban locations over this period required. But the implications of this literature broadly are very worrying. Much of the developed world has, over the past forty years, pursued development policies that are very favorable to existing landowners. This has led to stickiness which makes path dependence more important, and reallocation toward more productive uses less likely, both because cities cannot shift their geographic nature and because people can’t move to cities that become more productive. We ought not artificially wind up like Dijon and Chartres in the middle ages, locking our population into locations better suited for the economy of the distant past.

2016 working paper (RePEc IDEAS). Article is forthcoming in Economic Journal. With incredible timing, Michaels and Rauch, alongside two other coauthors, have another working paper called Flooded Cities. Essentially, looking across the globe, there are frequent very damaging floods, occurring every 20 years or so in low-lying areas of cities. And yet, as long as those areas are long settled, people and economic activity simply return to those areas after a flood. Note this is true even in countries without US-style flood insurance programs. The implication is that the stickiness of urban networks, amenities, and so on tends to be very strong, and if anything encouraged by development agencies and governments, yet this stickiness means that we wind up with many urban neighborhoods, and many cities, located in places that are quite dangerous for their residents without any countervailing economic benefit. You will see their paper in action over the next few years: despite some neighborhoods flooding three times in three years, one can bet with confidence that population and economic activity will remain on the floodplains of Houston’s bayou. (And in the meanwhile, ignoring our worries about future economic efficiency, I wish only the best for a safe and quick recovery to friends and colleagues down in Houston!)

A John Bates Clark Prize for Economic History!

A great announcement last week, as Dave Donaldson, an economic historian and trade economist, has won the 2017 John Bates Clark medal! This is an absolutely fantastic prize: it is hard to think of any young economist whose work is as serious as Donaldson’s. What I mean by that is that in nearly all of Donaldson’s papers, there is a very specific and important question, a deep collection of data, and a rigorous application of theory to help identify the precise parameters we are most concerned with. It is the modern economic method at its absolute best, and frankly is a style of research available to very few researchers, as the specific combination of theory knowledge and empirical agility required to employ this technique is very rare.

A canonical example of Donaldson’s method is his most famous paper, written back when he was a graduate student: “The Railroads of the Raj”. The World Bank today spends more on infrastructure than on health, education, and social services combined. Understanding the link between infrastructure and economic outcomes is not easy, and indeed has been at the center of economic debates since Fogel’s famous accounting exercise on the railroad. Further, it is not obvious either theoretically or empirically that infrastructure is good for a region. In the Indian context, no less a sage than Mahatma Gandhi, that proponent of traditional village life, felt the British railroads, rather than helping village welfare, “promote[d] evil”, and we have many trade models where falling trade costs plus increasing returns to scale can decrease output and increase income volatility.

Donaldson looks at the setting of British India, where 67,000 kilometers of rail were built, largely for military purposes. India during the British Raj is particularly compelling as a setting due to its heterogeneous nature. Certain seaports – think modern Calcutta – were built up by the British as entrepôts. Many internal regions nominally controlled by the British were left to rot via, at best, benign neglect. Other internal regions were quasi-independent, with wildly varying standards of governance. The most important point, though, is that much of the interior was desperately poor and in a de facto state of autarky: without proper roads or rail until the late 1800s, goods were transported over rough dirt paths, leading to tiny local “marketing regions” similar to what Skinner found in his great studies of China. British India is also useful since data on goods shipped, local weather conditions, and agricultural prices were rigorously collected by the colonial authorities. Nearly all of that local economic data sits in dusty tomes in regional offices across the modern subcontinent, but it is at least in principle available.

Let’s think about how most competent empirical microeconomists would go about investigating the effects of the British rail system. It would be a lot of grunt work, but many economists would spend the time collecting data from those dusty old colonial offices. They would then worry that railroads are endogenous to economic opportunity, so they would hunt for reasonable instruments or placebos, such as railroads that were planned yet never built, or railroad segments that skipped certain areas because of temporary random events. They would make some assumptions about how to map agricultural output into welfare, probably just restricting the dependent variable in their regressions to some aggregate measure of agricultural output normalized by price. All that would be left to do is run some regressions and claim that the arrival of the railroad on average raised agricultural income by X percent. And look, this wouldn’t be a bad paper. The setting is important, the data effort heroic, the causal factors plausibly exogenous: a paper of this form would have a good shot at a top journal.

When I say that Donaldson does “serious” work, what I mean is that he didn’t stop with those regressions. Not even close! Consider what we really want to know. It’s not “What is the average effect of a new railroad on incomes?” but rather, “How much did the railroad reduce shipping costs, in each region?”, “Why did railroads increase local incomes?”, “Are there alternative cheaper policies that could have generated the same income benefit?” and so on. That is, there are precise questions, often involving counterfactuals, which we would like to answer, and these questions and counterfactuals necessarily involve some sort of model mapping the observed data into hypotheticals.

Donaldson leverages both reduced-form, well-identified evidence and the broader model we suggested was necessary, and does so in a paper which is beautifully organized. First, he writes down an Eaton-Kortum style model of trade (Happy 200th Birthday to the theory of comparative advantage!) where districts get productivity draws across goods and then trade subject to shipping costs. Consider this intuition: if a new rail line connects Gujarat to Bihar, then the existence of this line will change Gujarat’s trade patterns with every other state, causing those other states to change their own trade patterns, causing a whole sequence of shifts in relative prices that depend on initial differences in trade patterns, the relative size of states, and so on. What Donaldson notes is that if you care about welfare in Gujarat, all of those changes only affect Gujaratis if they affect what Gujaratis end up consuming, or equivalently if they affect the real income Gujaratis earn from their production. Intuitively, if pre-railroad Gujarat’s local consumption was 90% locally produced, and after the railroad it was 60% locally produced, then declining trade costs allowed the magic of comparative advantage to generate additional specialization and hence additional Ricardian gains. This is what is sometimes called a sufficient statistics approach: the model implies that the entire effect of declining trade costs on welfare can be summarized by knowing agricultural productivity for each crop in each area, the local consumption share which is imported, and a few elasticity parameters. Note that the sufficient statistic is a result, not an assumption: the Eaton-Kortum model permits taste for variety, for instance, so we are not assuming away any of that. Now of course the model can be wrong, but that’s something we can actually investigate directly.
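
For the record, here is the generic Eaton-Kortum statement of that sufficient statistic (this is the standard one-sector formula from the trade literature, not Donaldson’s exact multi-crop specification; A_o below is shorthand for local productivity and θ is the trade elasticity):

```latex
\[
  \frac{w_o}{P_o} \;\propto\; A_o\,\pi_{oo}^{-1/\theta}
  \qquad\Longrightarrow\qquad
  d\ln\!\left(\frac{w_o}{P_o}\right) \;=\; d\ln A_o \;-\; \frac{1}{\theta}\, d\ln \pi_{oo},
\]
```

where w_o/P_o is real income in district o and π_oo is the “trade share”, the fraction of o’s consumption that o produces itself. A railroad can raise Gujarat’s real income only by lowering π_oo; that restriction is exactly what gets tested in the regressions described next.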

So here’s what we’ll do: first, simply regress real agricultural production in a region on time and region dummies plus a dummy for whether rail has arrived in that region. This regression suggests a rail line increases incomes by 16%, whereas placebo regressions for rail lines that were proposed but canceled show no increase at all. 16% is no joke, as real incomes in India over the period rose only 22% in total! All well and good. But what drives that 16%? Is it really Ricardian trade? To answer that question, we need to estimate the parameters in that sufficient statistics approach to the trade model – in particular, we need the relative agricultural productivity of each crop in each region, the elasticity of trade flows to trade costs (and hence the trade costs themselves), and the share of local consumption which is locally produced (the “trade share”). We then note that in the model, real income in a region is entirely determined by an appropriately weighted combination of local agricultural productivity and the weighted trade share; hence if you regress real income minus the weighted local agricultural productivity shock on a dummy for the arrival of a railroad and the trade share, you should find a zero coefficient on the rail dummy if, in fact, the Ricardian model is capturing why railroads affect local incomes. And even more importantly, if we find that zero, then we understand that efficient infrastructure benefits a region through the sufficient statistic of the trade share, and we can compare the cost-benefit ratio of the railroad to other hypothetical infrastructure projects on the basis of a few well-known elasticities.
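
A stylized version of that two-step logic, on synthetic data with made-up magnitudes (Donaldson’s actual estimation imposes the model’s coefficients rather than estimating them freely, so treat this purely as a reader’s sketch of the idea):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a synthetic district-by-year panel in which railroads raise real income
# ONLY by lowering the own-consumption ("trade") share, as the model predicts.
rng = np.random.default_rng(1)
districts, years = range(100), range(20)
df = pd.DataFrame([(d, t) for d in districts for t in years], columns=["district", "year"])
opening_year = rng.integers(5, 15, len(districts))        # hypothetical rail arrival dates
df["rail"] = (df["year"] >= opening_year[df["district"]]).astype(int)
df["log_productivity"] = rng.normal(0, 0.10, len(df))      # weighted crop productivity shock
df["log_trade_share"] = -0.8 * df["rail"] + rng.normal(0, 0.20, len(df))
theta = 5.0                                                # trade elasticity (illustrative)
df["log_real_income"] = (df["log_productivity"]
                         - (1 / theta) * df["log_trade_share"]
                         + rng.normal(0, 0.05, len(df)))

# Step 1: reduced form -- the railroad "effect" on real income.
reduced = smf.ols("log_real_income ~ rail + C(district) + C(year)", data=df).fit()
# Step 2: add the sufficient statistics; the rail coefficient should collapse to zero.
structural = smf.ols(
    "log_real_income ~ rail + log_productivity + log_trade_share + C(district) + C(year)",
    data=df).fit()

print(f"rail coefficient, reduced form:               {reduced.params['rail']:.3f}")     # about 0.16
print(f"rail coefficient, with sufficient statistics: {structural.params['rail']:.3f}")  # about 0
```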

So that’s the basic plot. All that remains is to estimate the model parameters, a nontrivial task. First, to get trade costs, one could simply use published freight rates for boats, overland travel, and rail, but this wouldn’t be terribly compelling: bandits, spoilage, and all the rest of Samuelson’s famous “iceberg” costs, like linguistic differences, raise trade costs as well. Donaldson instead looks at the differences between origin and destination prices for goods produced in only one place – particular types of salt – before and after the arrival of a railroad. He then uses a combination of graph theory and statistical inference to estimate the decline in trade costs between all region pairs. Given massive heterogeneity in trade costs by distance – crossing the Western Ghats is very different from shipping a boat down the Ganges! – this technique is far superior to simply assuming trade costs are linear in distance for rail, road, or boat.
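
The graph-theory piece amounts to a lowest-cost-route calculation over a multi-modal transport network. A minimal sketch follows; the nodes, links, and per-kilometer cost numbers are placeholders I made up, whereas Donaldson backs the relative mode costs out of the salt price gaps.

```python
import networkx as nx

# Hypothetical per-kilometer freight costs by mode (placeholders, not estimates).
cost_per_km = {"rail": 1.0, "river": 2.0, "road": 7.0}

# Made-up network: (origin, destination, mode, distance in km).
links = [
    ("A", "B", "road", 100),
    ("A", "C", "river", 250),
    ("C", "B", "rail", 150),
    ("B", "D", "rail", 300),
    ("A", "D", "road", 280),
]

G = nx.Graph()
for o, d, mode, km in links:
    cost = cost_per_km[mode] * km
    # Keep only the cheapest link if several modes connect the same pair of districts.
    if not G.has_edge(o, d) or G[o][d]["cost"] > cost:
        G.add_edge(o, d, cost=cost, mode=mode)

# Effective trade cost between two districts = cost of the cheapest route through the network.
print(nx.shortest_path(G, "A", "D", weight="cost"))         # ['A', 'C', 'B', 'D']
print(nx.shortest_path_length(G, "A", "D", weight="cost"))  # 950.0
```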

Second, he checks whether lowered trade costs actually increased trade volume, and with what elasticity, using local rainfall as a proxy for local productivity shocks. The use of rainfall data is wild: for each district, he gathers rainfall deviations over the sowing-to-harvest window of each crop individually. This identifies the agricultural productivity distribution parameters by region, and therefore, in the Eaton-Kortum type model, lets us calculate the elasticity of trade volume to trade costs. Salt shipments plus crop-by-region specific rain shocks give us all of the model parameters which aren’t otherwise available in the British data. Throwing these parameters into the model regression, we do in fact find that once agricultural productivity shocks and the weighted trade share are accounted for, the effect of railroads on local incomes is not much different from zero. The model works, and note that real income changes based on the timing of the railroad were at no point used to estimate any of the model parameters! That is, if you told me that Bihar had positive rain shocks which increased output of its crops by 10% in the last ten years, and that the share of local production which is eaten locally went from 60 to 80%, I could tell you with quite high confidence the change in local real incomes without even needing to know when the railroad arrived – this is the sense in which those parameters are a “sufficient statistic” for the full general equilibrium trade effects induced by the railroad.

Now this doesn’t mean the model has no further use: indeed, that the model appears to work gives us confidence to take it more seriously when looking at counterfactuals like, what if Britain had spent money developing more effective seaports instead? Or building a railroad network to maximize local economic output rather than on the basis of military transit? Would a noncolonial government with half the resources, but whose incentives were aligned with improving the domestic economy, have been able to build a transport network that improved incomes more even given their limited resources? These are first order questions about economic history which Donaldson can in principle answer, but which are fundamentally unavailable to economists who do not push theory and data as far as he was willing to push them.

The Railroads of the Raj paper is canonical, but far from Donaldson’s only great work. He applies a similar Eaton-Kortum approach to investigate how rail affected the variability of incomes in India, and hence the death rate. Up to 35 million people perished in famines in India in the second half of the 19th century, as the railroad was being built, and these famines largely ended afterwards (1943 being an exception). Theory is ambiguous about whether openness increases or decreases the variance of your welfare. On the one hand, in an open economy, the price of potatoes is determined by the world market, and hence the price you pay for potatoes won’t swing wildly up and down depending on the rain in a given year in your region. On the other hand, if you grow potatoes and there is a bad harvest, the price of potatoes won’t go up, and hence your real income can be very low during a drought. Empirically, the lower variance of market prices after the railroad arrives turns out to matter more for real consumption, and hence for mortality, than the loss of the higher prices you would have gotten for your own farm goods during a drought. And as in the Railroads of the Raj paper, sufficient statistics from a trade model can fully explain the changes in mortality: the railroad decreased the effect of bad weather on mortality entirely through Ricardian trade.

Leaving India, Donaldson and Richard Hornbeck took up Fogel’s intuition that the importance of the railroad to the US depends on comparing trade that is worthwhile when the railroad exists with trade that is worthwhile when only alternatives like better canals or roads exist. That is, if it costs $9 to ship a wagonful of corn by canal, and $8 to do the same by rail, then even if all corn is shipped by rail once the railroad is built, we oughtn’t ascribe all of that trade to the rail. Fogel assumed relationships between land prices and the value of the transportation network. Hornbeck and Donaldson instead estimate that relationship, again deriving a sufficient statistic for the value of market access. The intuition is that adding a rail link from St. Louis to Kansas City will also affect the relative prices, and hence agricultural production, in every other region of the country, and these spatial spillovers can be quite important. Adding the rail line to Kansas City affects market access costs in Kansas City as well as relative prices, but clever application of theory can still permit a Fogel-style estimate of the value of rail to be made.

Moving beyond railroads, Donaldson’s trade work has also been seminal. With Costinot and Komunjer, he showed how to rigorously estimate the empirical importance of Ricardian trade for overall gains from trade. Spoiler: it isn’t that important, even if you adjust for how trade affects market power, a result seen in a lot of modern empirical trade research which suggests that aspects like variety differences are more important than Ricardian productivity differences for gains from international trade. There are some benefits to Ricardian trade across countries being relatively unimportant: Costinot, Donaldson and Smith show that changes to what crops are grown in each region can massively limit the welfare harms of climate change, whereas allowing trade patterns to change barely matters. The intuition is that there is enough heterogeneity in what can be grown in each country when climate changes to make international trade relatively unimportant for mitigating these climate shifts. Donaldson has also rigorously studied in a paper with Atkin the importance of internal rather than international trade costs, and has shown in a paper with Costinot that economic integration has been nearly as important as productivity improvements in increasing the value created by American agriculture over the past century.

Donaldson’s CV is a testament to how difficult this style of work is. He spent eight years at LSE before getting his PhD, and published only one paper in a peer-reviewed journal in the 13 years following the start of his graduate work. “Railroads of the Raj” has been forthcoming at the AER for literally half a decade, despite the fact that this work is the core of what got Donaldson a junior position at MIT and a tenured position at Stanford. Is it any wonder that so few young economists want to pursue a style of research that is so challenging and so difficult to publish? Let us hope that Donaldson’s award encourages more of us to fully exploit both the incredible data we all now have access to and the beautiful body of theory that induces deep insights from that data.

A Note on the Trump Immigration Policy

This site is seven years old, during which time I have not written a single post which is not explicitly about economics research. The posts have collectively reached well over a half million readers in this time, and I have been incredibly encouraged to see how many folks, even outside of academia, are interested in how economics, and economic theory in particular, can help explain the social world.

I hope you’ll permit me to take one post where I break the “economic research only” rule. The executive order issued yesterday banning entry into the United States for citizens of seven nations is an abomination, and directly contrary to both the words of Lazarus’ poem on the Statue of Liberty and the 1965 immigration reform which banned discrimination on the basis of national origin. It is an absolute disgrace, particularly to me as an American who, like the majority of my countrymen, see the immigrant experience as the greatest source of pride the country has to offer. Every academic, including myself, has friends and colleagues and coauthors from the countries included on this ban.

I understand that there are citizens of the affected countries worried about how their studies will be able to continue given these immigration restrictions. While my hope is that the courts will overturn this un-American executive order, I want our friends from these countries to know that there are currently plans in the works to assist you. If you are an economics or strategy student affected by this order, or have students in those fields who may need temporary academic accommodation elsewhere, please email me at kevin.bryan@rotman.utoronto.ca . This is of particular importance for students from the affected countries who are unable to return to the United States from present foreign travel. I can’t make any promises, but I have been in contact with a number of universities who may be able to help. If you are a PhD program director who may be able to help, I’d ask you to also contact me and I can keep you informed as to how things are progressing and how you can assist.

There is a troubling, nativist, anti-liberal (in the sense of Hume and Smith and Mill) streak in the world at the moment. The progress of knowledge depends on an open, free, and international system of cooperation. We in academia must stand up for this system, and for our friends who are being shut out of it.

“Bonus Culture: Competitive Pay, Screening and Multitasking,” R. Benabou & J. Tirole (2014)

Empirically, bonus pay as a component of overall remuneration has become more common over time, especially in highly competitive industries which involve high levels of human capital; think of something like the management of Fortune 500 firms, where managers now have their salaries determined globally rather than locally. This doesn’t strike most economists as a bad thing at first glance: as long as we are measuring productivity correctly, workers who are compensated based on their actual output will both exert the right amount of effort and have the incentive to improve their human capital.

In an intriguing new theoretical paper, however, Benabou and Tirole point out that many jobs involve multitasking, where workers can take hard-to-measure actions for intrinsic reasons (e.g., I put effort into teaching because I intrinsically care, not because academic promotion really hinges on being a good teacher) or take easy-to-measure actions for which there might be some kind of bonus pay. Many jobs also involve screening: I don’t know who is high quality and who is low quality, and although I would optimally pay people a bonus exactly equal to their cost of effort, I am unable to do so since I don’t know what that cost is. Multitasking and worker screening interact among competitive firms in a really interesting way, since how other firms incentivize their workers affects how workers will respond to my contract offers. Benabou and Tirole show that this interaction means that more competition in a sector, especially when there is a big gap between the quality of different workers, can actually harm social welfare even in the absence of any other sort of externality.

Here is the intuition. For multitasking reasons, when the different things workers can do are substitutes, I don’t want to give big bonus payments for the observable output, since if I do, the worker will put too little effort into the intrinsically valuable task: if you pay a trader big bonuses for financial returns, she will not put as much effort into ensuring all the laws and regulations are followed. If there are other finance firms, though, they will make it known that, hey, we pay huge bonuses for high returns. As a result, workers will sort, with all of the high quality traders moving to the high bonus firm, leaving only the low quality traders at the firm with low bonuses. Bonuses are used not only to motivate workers, but also to differentially attract high quality workers when quality is otherwise tough to observe. There is a tradeoff, then: you can either have only low productivity workers but get the balance between hard-to-measure tasks and easy-to-measure tasks right, or you can retain some high quality workers with large bonuses that make those workers exert too little effort on hard-to-measure tasks. When the latter is more profitable, all firms inefficiently begin offering large, effort-distorting bonuses, something they wouldn’t do if they didn’t have to compete for workers.

How can we fix things? One easy method is with a bonus cap: if the bonus is capped at the monopsony optimal bonus, then no one can try to screen high quality workers away from other firms with a higher bonus. This isn’t as good as it sounds, however, because there are other ways to screen high quality workers (such as offering lower clawbacks if things go wrong) which introduce even worse distortions, hence bonus caps may simply cause less efficient methods to perform the same screening and same overincentivization of the easy-to-measure output.

When the individual rationality or incentive compatibility constraints in a mechanism design problem are determined in equilibrium by the mechanisms chosen by other firms, we sometimes call this a “competing mechanisms” problem. It seems to me that there are quite a number of open questions concerning how to make these sorts of problems tractable; a talented young theorist looking for a fun summer project might find it profitable to investigate this as-yet small literature.

Beyond the theoretical result on screening plus multitasking, Benabou and Tirole also show that their results hold for forms of market competition more general than the polar cases of perfect competition and monopsony. They do this through a generalized version of the Hotelling line, which appears to have some nice analytic properties, at least compared to the usual search-theoretic models one might otherwise use to study imperfect labor market competition.

Final copy (RePEc IDEAS version), forthcoming in the JPE.

“Entrepreneurship: Productive, Unproductive and Destructive,” W. Baumol (1990)

William Baumol, who strikes me as one of the leading contenders for a Nobel in the near future, has written a surprising amount of interesting economic history. Many economic historians see innovation – the expansion of ideas and the diffusion of products containing those ideas, generally driven by entrepreneurs – as critical for growth. But it is very difficult to see any reason why the “spirit of innovation” or the net amount of cleverness in society should vary over time. Indeed, great inventions, as undeveloped ideas, occur almost everywhere at almost all times. The steam engine of Heron of Alexandria, which was used for parlor tricks like opening temple doors and little else, is surely the most famous example of a great idea left undeveloped.

Why, then, do entrepreneurs develop ideas and cause products to diffuse widely at some times in history and not at others? Schumpeter gave five roles for an entrepreneur: introducing new products, new production methods, new markets, new supply sources or new firm and industry organizations. All of these are productive forms of entrepreneurship. Baumol points out that clever folks can also spend their time innovating new war implements, or new methods of rent seeking, or new methods of advancing in government. If incentives are such that those activities are where the very clever are able to prosper, both financially and socially, then it should be no surprise that “entrepreneurship” in this broad sense is unproductive or, worse, destructive.

History offers a great deal of support here. Despite quite a bit of productive entrepreneurship in the Middle East before the rise of Athens and Rome, the Greeks and Romans, especially the latter, are well-known for their lack of widespread diffusion of new productive innovations. Beyond the steam engine, the Romans also knew of the water wheel yet used it very little. There are countless other examples. Why? Let’s turn to Cicero: “Of all the sources of wealth, farming is the best, the most able, the most profitable, the most noble.” Earning a governorship and stripping assets was also seen as noble. What we now call productive work? Not so much. Even the freed slaves who worked as merchants had the goal of, after acquiring enough money, retiring to “domum pulchram, multum serit, multum fenerat”: a fine house, land under cultivation and short-term loans for voyages.

Baumol goes on to discuss China, where passing the imperial exam and moving into government was the easiest way to wealth, and the early middle ages of Europe, where seizing assets from neighboring towns was more profitable than expanding trade. The historical content of Baumol’s essay was greatly expanded in a book he edited alongside Joel Mokyr and David Landes called The Invention of Enterprise, which discusses the relative return to productive entrepreneurship versus other forms of entrepreneurship from Babylon up to post-war Japan.

The relative incentives for different types of “clever work” are relevant today as well. Consider Luigi Zingales’ new lecture, Does Finance Benefit Society? I can’t imagine anyone would consider Zingales hostile to the financial sector, but he nonetheless discusses in exhaustive detail the ways in which incentives push some workers in that sector toward rent-seeking and fraud rather than innovation which helps the consumer.

Final JPE copy (RePEc IDEAS). Murphy, Shleifer and Vishny have a paper, also from the JPE in 1990, on how clever people in many countries are incentivized toward rent-seeking; their work is more theoretical and empirical than historical. If you are interested in innovation and entrepreneurship, I uploaded the reading list for my PhD course on the topic here.

Personal Note: Moving to Toronto

Before discussing a lovely application of High Micro Theory to a long-standing debate in macro in a post coming right behind this one, a personal note: starting this summer, I am joining the Strategy group at the University of Toronto Rotman School of Management as an Assistant Professor. I am, of course, very excited about the opportunity, and am glad that Rotman was willing to give me a shot even though I have a fairly unusual set of interests. Some friends asked recently if I have any job market advice, and I told them that I basically just spent five years reading interesting papers, trying to develop a strong toolkit, and using that knowledge base to attack questions I am curious about as precisely as I could, with essentially no concern about how the market might view this. Even if you want to be strategic, though, this type of idiosyncrasy might not be a bad strategy.

Consider the following model: any school evaluates you according to v+e(s), where v is a common signal of your quality and e(s) is a school-specific taste shock. You get an offer when v+e(s) is high enough at some school s, so what matters to you is the best draw of v+e(s) across schools, a first-order statistic. This means that increasing v (by being smarter, or harder-working, or in a hotter field) and increasing the variance of e (by, e.g., working on very specific topics even if they are not “hot”, or by developing an unusual set of talents) both raise the best evaluation you can expect, and hence your chances of landing a job you will be happy with; the toy simulation at the end of this post makes the point concrete. And, at least in my case, increasing v provides disutility whereas increasing the variance of e can be quite enjoyable! If you do not want to play such a high-variance strategy, though, my friend James Bailey (heading from Temple’s PhD program to work at Creighton) has posted some more sober yet still excellent job market advice. I should also note that writing a research-oriented blog seemed to be weakly beneficial as far as interviews were concerned: in perhaps a third of my interviews, someone mentioned this site, and I didn’t receive any negative feedback. Moving from personal anecdote to data in only the loosest sense of the word, Jonathan Dingel of Trade Diversion also seems to have had a great deal of success. Given this, I would suggest that there isn’t much need to worry that writing publicly about economics, especially if restricted to technical content, will torpedo a future job search.
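Here is the promised toy simulation, with entirely made-up numbers; it is only meant to illustrate the first-order-statistic logic, not to model any actual market:

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_draws = 50, 20_000  # made-up market size and number of simulation draws

def expected_best_evaluation(v, sd_e):
    """Average of max over schools of v + e(s), with e(s) iid normal noise."""
    shocks = rng.normal(0.0, sd_e, size=(n_draws, n_schools))
    return (v + shocks).max(axis=1).mean()

# Raising the common signal v shifts the expected best evaluation up one-for-one...
print(expected_best_evaluation(v=1.0, sd_e=1.0))  # roughly 1.0 + 2.25
print(expected_best_evaluation(v=1.5, sd_e=1.0))  # roughly 1.5 + 2.25
# ...and raising the dispersion of the school-specific shock also raises it,
# even though the average school's evaluation of you is unchanged.
print(expected_best_evaluation(v=1.0, sd_e=1.5))  # roughly 1.0 + 3.4
```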

“Compulsory Licensing – Evidence from the Trading with the Enemy Act,” P. Moser & A. Voena (2011)

Petra Moser is unquestionably doing the most interesting data-driven work on invention and growth of any economist working today; indeed, if only she applied her great data more directly to puzzles in theory, I think she would become a good bet for a Clark medal in a few years. The present paper, with Alessandra Voena, a star on last year’s junior job market, is forthcoming in the AER, and deservedly so.

The problem at hand is compulsory licensing. This is a big deal in the Doha Round of WTO negotiations, since many poor and middle-income countries (think Thailand and Brazil) force drugmakers to license some particularly important drugs to local manufacturers. This helps lower the cost of AIDS antiretrovirals, but probably also has some negative effect on the incentive to develop newer drugs for diseases prevalent in the third world. But the tradeoff is not this simple! Because the drugs are licensed to local firms, who then produce them, there is some technology transfer and presumably some learning-by-doing. Does compulsory licensing help infant industries grow in the recipient country? And by how much?

The historical experiment is the Trading with the Enemy Act. During WWI, the US government seized a great deal of property owned by German firms, including their patents, and then licensed those patents at low cost to US firms. Germany was well ahead of the US technologically in organic chemistry, and Moser and Voena use this fact to study the impact of compulsory licenses for a variety of chemical dyes. They find that in (very narrowly defined) technological areas where patents were licensed, the future propensity of US firms to patent roughly doubled. No such increase was seen among non-American firms, which did not have access to the cheap licenses. The impact on future patenting appeared only a few years after WWI, consistent with a learning-by-doing story. Relevant for the Doha Round debate, German firms quickly resumed work on new chemistry inventions after the war, which you might interpret as consistent with a one-time seizure of IP having no long-term impact on invention, so long as it is truly an exceptional circumstance.
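To make the comparison concrete, here is a schematic difference-in-differences calculation with entirely invented numbers (the actual paper works with patent counts in narrowly defined technology subclasses and a much richer set of controls):

```python
import pandas as pd

# Entirely invented average annual US patent counts, by whether a chemical
# subclass received a compulsorily licensed German patent ("treated") and by
# period (pre vs. post WWI). The real data are far richer than this.
df = pd.DataFrame({
    "treated": [1, 1, 0, 0],
    "post":    [0, 1, 0, 1],
    "patents": [2.0, 4.1, 2.0, 2.2],
})

def cell_mean(treated, post):
    sel = (df["treated"] == treated) & (df["post"] == post)
    return df.loc[sel, "patents"].mean()

# Difference-in-differences: the post-war change in treated subclasses,
# net of the post-war change in untreated subclasses.
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(did)  # 1.9 extra patents per year on a baseline of 2.0: roughly a doubling
```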

Why the delay if you have a patent explaining what to do? It turns out that patents – and this is true even today – are often woefully insufficient to replicate an original invention. DuPont’s first attempt at (German-invented) indigo dye turned out green instead of blue! BASF’s Haber-Bosch process patent didn’t include certain tricky details regarding the chemical nature of the appropriate catalyst; it took 10 years for US firms to figure out the secret.

http://ssrn.com/abstract=1313867 (July 2011 working paper. Moser and Voena also, according to Moser’s website, have a forthcoming working paper on the impact of TRIPS licensing on the US pharma industry which I certainly want to check out. As far as I know, that paper hasn’t begun to circulate.)

“Converting Pirates without Cannibalizing Users,” B. Danaher, S. Dhanasobhon, M. Smith & R. Telang (2010)

“You can’t compete with free,” right? Somehow, iTunes and other online distributors manage to sell a large number of TV episodes a la carte, even though free pirated copies of these shows are widely available on BitTorrent. Are there just two different types of consumers with different moral preferences? Or might many consumers become pirates if incentivized to do so? How much does piracy eat into sales?

Danaher et al have a great natural experiment. In 2007, NBC played hardball with Apple over the iTunes pricing of individual episodes. From December 2007 to September 2008, NBC and affiliated shows were not available on iTunes, the dominant legal site for TV downloads. The authors scraped daily reports on torrent traffic for a huge number of TV episodes, as well as daily Amazon sales data for the box sets of these shows.

The results are insightful. Piracy of NBC shows jumped 11.4% after the shows were removed from iTunes. Even this may understate the effect, since it is measured above and beyond the increase in piracy of non-NBC classic shows over the same period: if NBC’s actions led some users to try piracy, and those users also began to pirate ABC shows, the effect of removing the legal channel for NBC shows on total online piracy may be bigger still. To put that number in perspective, the increase in NBC downloads per week was approximately twice the total number of downloads of these shows via iTunes when they were available. The impact on DVD box set sales was close to zero, perhaps suggesting that “digital consumers” are in this instance quite separate in their demand from buyers of DVDs.

What might lead to this result? One explanation is that piracy involves a fixed cost, such as learning BitTorrent or “getting over one’s moral qualms.” Once that cost is paid, all additional content is free, hence demand will be higher than demand for $2 episodes on iTunes. This is further supported by the fact that NBC piracy did not fall back to its November 2007 level after legal NBC shows returned to iTunes in 2008.
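Here is one crude way to write that story down (my own back-of-envelope, not the authors’ model):

```latex
% Back-of-envelope piracy decision; my notation, not the model in the paper.
% A viewer wants N episodes, each worth w to her; iTunes charges p per episode,
% while piracy requires a one-time fixed cost F (learning BitTorrent, moral
% qualms) and nothing per episode thereafter.
\[
\text{pirate} \quad\iff\quad N w - F \;>\; N (w - p) \quad\iff\quad F < N p
\]
% Heavy viewers (large N) are the first to cross the threshold, and once F is
% sunk it never needs to be paid again, consistent with NBC piracy staying
% elevated even after the shows returned to iTunes.
```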

Two takeaways here. For piracy researchers, modeling consumers as “non-pirates” and “pirates” who do not respond to incentives across this divide is probably not accurate. For firms, when facing competition with free bootleg copies, the costs of mistakes in pricing strategy can be severe indeed!

http://www.heinz.cmu.edu/~rtelang/ms_nbc.pdf (Final WP version – published in Management Science 2010)

[Hat tip to the commenter who pointed me to this article.]
