“How do Patents Affect Follow-On Innovation: Evidence from the Human Genome,” B. Sampat & H. Williams (2014)

This paper, by Heidi Williams (who surely you know already) and Bhaven Sampat (who is perhaps best known for his almost-sociological work on the Bayh-Dole Act with Mowery), made quite a stir at the NBER last week. Heidi’s job market paper a few years ago, on the effect of openness in the Human Genome Project as compared to Celera, is often cited as an “anti-patent” paper. Essentially, she found that portions of the human genome sequenced by the HGP, which placed its sequences in the public domain, were much more likely to be studied by scientists and used in tests than portions sequenced by Celera, which initially required fairly burdensome contractual steps to be followed. This result was very much in line with research done by Fiona Murray, Jeff Furman, Scott Stern and others which also found that minor differences in openness or accessibility can have substantial impacts on follow-on use (I have a paper with Yasin Ozcan showing a similar result). Since the cumulative nature of research is thought to be critical, and since patents are a common method of “restricting openness”, you might imagine that Heidi and the rest of these economists were arguing that patents were harmful for innovation.

That may in fact be the case, but note something strange: essentially none of the earlier papers on open science are specifically about patents; rather, they are about openness. Indeed, on the theory side, Suzanne Scotchmer has a pair of very well-known papers arguing that patents effectively incentivize cumulative innovation if there are no transaction costs to licensing, no spillovers from sequential research, and no incentive for early researchers to limit licenses in order to protect their existing business (consider the case of Armstrong and FM radio), and if potential follow-on innovators can be identified before they sink costs. That is a lot of conditions, but it’s not hard to imagine industries where inventions are clearly demarcated, where holders of basic patents are better off licensing than sitting on the patent (perhaps because potential licensees are not also competitors), and where patentholders are better off not bothering academics who technically infringe on their patent.

What industry might have such characteristics? Sampat and Williams look at gene patents. Incredibly, about 30 percent of human genes have sequences that are claimed under a patent in the United States. Are “patented genes” still used by scientists and developers of medical diagnostics after the patent grant, or is the patent enough of a burden to openness to restrict such use? What is interesting about this case is that the patentholder generally wants people to build on their patent. If academics find some interesting genotype-phenotype links based on their sequence, or if another firm develops a disease test based on the sequence, there are more rents for the patentholder to garner. In surveys, it seems that most academics simply ignore patents of this type, and most gene patentholders don’t interfere in research. Anecdotally, licenses between the sequence patentholder and follow-on innovators are frequent.

In general, however, it is really hard to know whether patents have any effect on anything; there is very little variation over time and space in patent strength. Sampat and Williams take advantage of two quasi-experiments. First, they compare applied-for-but-rejected gene patents to applied-for-but-granted patents. At least for gene patents, there is very little difference in terms of measurables before the patent office decision across the two classes. Clearly this is not true for patents as a whole – rejected patents are almost surely of worse quality – but gene patents tend to come from scientifically competent firms rather than backyard hobbyists, and tend to have fairly straightforward claims. Why are any rejected, then? The authors’ second trick is to look directly at patent examiner “leniency”. It turns out that some examiners have rejection rates much higher than others, despite roughly random assignment of patents within a technology class. Much of the difference in rejection probability is driven by the random assignment of examiners, which justifies the first rejected-vs-granted technique, and also suggests an instrumental variable with which to further investigate the data.
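
To see the examiner-leniency logic in miniature, here is a toy simulation. It uses entirely made-up data and is a simplification of the authors’ actual design (which, for instance, constructs leave-one-out examiner grant rates): because applications are roughly randomly assigned to examiners, an examiner’s propensity to grant is correlated with whether a given application is granted but not with the application’s unobserved quality, so it can serve as an instrument.

```python
# Toy examiner-leniency IV with simulated data. The true causal effect of a
# grant on follow-on research is set to zero, echoing the paper's finding;
# naive OLS is biased upward by unobserved quality, the IV is not.
import numpy as np

rng = np.random.default_rng(0)
n_apps, n_examiners = 20000, 200

examiner = rng.integers(0, n_examiners, n_apps)        # random assignment
leniency = rng.uniform(0.2, 0.8, n_examiners)          # examiner-specific grant propensity
quality = rng.normal(0, 1, n_apps)                     # unobserved application quality

# Grant depends on the assigned examiner's leniency and on quality
grant = (rng.uniform(0, 1, n_apps) < leniency[examiner] + 0.1 * quality).astype(float)

true_effect = 0.0                                      # assumed: grants do not deter follow-on work
followon = true_effect * grant + 0.5 * quality + rng.normal(0, 1, n_apps)

# Naive comparison of granted vs. rejected applications picks up quality
ols = followon[grant == 1].mean() - followon[grant == 0].mean()

# Wald/IV estimate, instrumenting the grant with examiner leniency
z = leniency[examiner]
iv = np.cov(z, followon)[0, 1] / np.cov(z, grant)[0, 1]

print(f"naive OLS estimate:   {ols:.3f}")              # biased away from zero
print(f"leniency IV estimate: {iv:.3f}")               # close to the true zero effect
```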

With either technique, patent status generates essentially no difference in the use of genes by scientific researchers and diagnostic test developers. Don’t interpret this result as overturning Heidi’s earlier genome paper, though! There is now a ton of evidence that minor impediments to openness are harmful to cumulative innovation. What Sampat and Williams tell us is that we need to be careful in how we think about “openness”. Patents can be open if the patentholder has no incentive to restrict further use, if downstream innovators are easy to locate, and if there is no uncertainty about the validity or scope of a patent. Indeed, in these cases the patentholder will want to make it as easy as possible for follow-on innovators to build on their patent. On the other hand, patentholders are legally allowed to put all sorts of anti-openness burdens on the use of their patented invention by anyone, including purely academic researchers. In many industries, such restrictions are in the interest of the patentholder, and hence patents serve to limit openness; this is especially true where private sector product development generates spillovers. Theory as in Scotchmer-Green has proven quite correct in this regard.

One final comment: all of these types of quasi-experimental methods are always a bit weak when it comes to the extensive margin. It may very well be that individual patents do not restrict follow-on work on that patent when licenses can be granted, but at the same time the IP system as a whole can limit work in an entire technological area. Think of something like sampling in music. Because all music labels have large teams of lawyers who want every sample to be “cleared”, hip-hop musicians stopped using sampled beats to the extent they did in the 1980s. If you investigated whether a particular sample was less likely to be used conditional on its copyright status, you very well might find no effect, as the legal burden of chatting with the lawyers and figuring out who owns what may be enough of a limit to openness that musicians give up samples altogether. Likewise, in the complete absence of gene patents, you might imagine that firms would change their behavior toward research based on sequenced genes since the entire area is more open; this is true even if the particular gene sequence they want to investigate was unpatented in the first place, since having to spend time investigating the legal status of a sequence is a burden in and of itself.

July 2014 Working Paper (No IDEAS version). Joshua Gans has also posted a very interesting interpretation of this paper in terms of Coasean contractability.

“Agricultural Productivity and Structural Change: Evidence from Brazil,” P. Bustos et al (2014)

It’s been a while – a month of exploration in the hinterlands of the former Soviet Union, a move up to Canada, and a visit down to the NBER Summer Institute really put a cramp in my posting schedule. That said, I have a ridiculously long backlog of posts to get up, so they will be coming rapidly over the next few weeks. I saw today’s paper presented a couple days ago at the Summer Institute. (An aside: it’s a bit strange that there isn’t really any media at SI – the paper selection process results in a much better set of presentations than at the AEA or the Econometric Society, which simply have too long of a lag from the application date to the conference, and too many half-baked papers.)

Bustos and her coauthors ask, when can improvements in agricultural productivity help industrialization? An old literature assumed that any such improvement would help: the newly rich agricultural workers would demand more manufactured goods, and since manufactured and agricultural products are complements, rising agricultural productivity would shift workers into the factories. Kiminori Matsuyama wrote a model (JET 1992) showing the problem here: roughly, if in a small open economy productivity goes up in a good you have a Ricardian comparative advantage in, then you want to produce even more of that good. A green revolution which doubles agricultural productivity in, say, Mali, while keeping manufacturing productivity the same, will allow Mali to earn twice as much selling its agricultural output overseas. Workers will then pour into the agricultural sector until the marginal product of labor is re-equated in both sectors.

Now, if you think that industrialization has a bunch of positive macrodevelopment spillovers (via endogenous growth, population control or whatever), then this is worrying. Indeed, it vaguely suggests that making villages more productive, an outright goal of a lot of RCT-style microdevelopment studies, may actually be counterproductive for the country as a whole! That said, there seems to be something strange going on empirically, because we do appear to see industrialization in countries after a Green Revolution. What could be going on? Let’s look back at the theory.

Implicitly, the increase in agricultural productivity in Matsuyama was “Hicks-neutral” – it increased the total productivity of the sector without affecting the relative marginal factor productivities. A lot of technological change, however, is factor-biased; to take two examples from Brazil, modern techniques that allow for double harvesting of corn each year increase the marginal productivity of land, whereas “Roundup Ready” GE soy, which requires less tilling and weeding, augments the productivity of farm labor. We saw above that Hicks-neutral technological change in agriculture increases labor in the farm sector: workers choosing where to work means that the world price of agriculture times the marginal product of labor in that sector must be equal to world price of manufacturing times the marginal product of labor in manufacturing. A Hicks-neutral improvement in agricultural productivity raises MPL in that sector no matter how much land or labor is currently being used, hence wage equality across sectors requires workers to leave the factory for the farm.

What of biased technological change? As before, the only thing we need to know is whether the technological change increases the marginal product of labor. Land-augmenting technical change, like double harvesting of corn, means a country can produce the same amount of output with the old amount of farm labor and less land. If one more worker shifts from the factory to the farm, she will be farming less marginal land than before the technological change, hence her marginal productivity of labor is higher than before the change, and hence workers will indeed leave the factory for the farm. Land-augmenting technological change always increases the amount of agricultural labor. What about farm-labor-augmenting technological change like GM soy? If land and labor are not very complementary (imagine, in the limit, that they are perfect substitutes in production), then trivially the marginal product of labor increases following the technological change, and hence the number of farm workers goes up. The situation is quite different if land and farm labor are strong complements. Where previously we had 1 effective worker per unit of land, following the labor-augmenting technology change it is as if we have, say, 2 effective workers per unit of land. Strong complementarity implies that, at that point, adding even more labor to the farms is pointless: the marginal productivity of labor is decreasing in the technological level of farm labor. Therefore, labor-augmenting technology with a strongly complementary agriculture production function shifts labor off the farm and into manufacturing.
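
A quick way to see this is to write farm output as a CES aggregate of land and labor and simply compute the marginal product of labor numerically under each type of technical change; the functional form and parameter values below are my own illustrative choices, not anything estimated in the paper.

```python
# MPL in agriculture under Hicks-neutral, land-augmenting and labor-augmenting
# technical change, with CES production
#   Y = A * [a*(B_T*T)^rho + (1-a)*(B_L*L)^rho]^(1/rho),
# where the elasticity of substitution between land T and labor L is 1/(1-rho).
def mpl(A=1.0, B_T=1.0, B_L=1.0, T=1.0, L=1.0, a=0.5, rho=-4.0, eps=1e-6):
    def f(labor):
        return A * (a * (B_T * T) ** rho + (1 - a) * (B_L * labor) ** rho) ** (1 / rho)
    return (f(L + eps) - f(L)) / eps          # numerical derivative with respect to labor

for rho, label in [(-4.0, "strong complements (sigma = 0.2):"),
                   (0.5, "substitutes (sigma = 2):")]:
    print(label,
          f"base MPL = {mpl(rho=rho):.2f},",
          f"Hicks-neutral = {mpl(A=2.0, rho=rho):.2f},",
          f"land-augmenting = {mpl(B_T=2.0, rho=rho):.2f},",
          f"labor-augmenting = {mpl(B_L=2.0, rho=rho):.2f}")

# Hicks-neutral and land-augmenting improvements raise the MPL in both cases,
# pulling workers onto farms; a labor-augmenting improvement raises the MPL when
# land and labor are substitutes but lowers it under strong complementarity,
# pushing workers into manufacturing -- the soy vs. maize contrast in the paper.
```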

That’s just a small bit of theory, but it really clears things up. And even better, the authors find empirical support for this idea: following the introduction to Brazil of labor-augmenting GM soy and land-augmenting double harvesting of maize, agricultural productivity rose everywhere, the agricultural employment share rose in areas that were particularly suitable for modern maize production, and the manufacturing employment share rose in areas that were particularly suitable for modern soy production.

August 2013 working paper. I think of this paper as a nice complement to the theory and empirics in Acemoglu’s Directed Technical Change and Walker Hanlon’s Civil War cotton paper. Those papers ask how changes in factor prices endogenously affect the development of different types of technology, whereas Bustos and coauthors ask how the exogenous development of different types of technology affects the use of various factors. I read the former as most applicable to structural change questions in countries at the technological frontier, and the latter as appropriate for similar questions in developing countries.

Debraj Ray on Piketty’s Capital

As mentioned by Sandeep Baliga over at Cheap Talk, Debraj Ray has a particularly interesting new essay on Piketty’s Capital in the 21st Century. If you are theoretically inclined, you will find Ray’s comments to be among the few reviews of Piketty that prove insightful.

I have little to add to Ray, but here are four comments about Piketty’s book:

1) The data collection effort on inequality by Piketty and coauthors is incredible and supremely interesting; not for nothing does Saez-Piketty 2003 have almost 2000 citations. Much of this data can be found in previous articles, of course, but it is useful to have it all in one place. Why it took so long for this data to become public, compared to things like GDP measures, is an interesting question which the sociologist Dan Hirschman is currently working on. Incidentally, the data quality complaints by the Financial Times seem to me of rather limited importance to the overall story.

2) The idea that Piketty is some sort of outsider, as many in the media want to make him out to be, is very strange. His first job was at literally the best mainstream economics department in the entire world, he won the prize given to the best young economist in Europe, he has published a paper in a Top 5 economics journal every other year since 1995, his most frequent coauthor is at another top mainstream department, and that coauthor himself won the prize for the best young economist in the US. It is also simply not true that economists only started caring about inequality after the 2008 financial crisis; rather, Autor and others were writing on inequality well before that date in response to clearer evidence that the “Great Compression” of the income distribution in the developed world during the middle of the 20th century had begun to reverse itself sometime in the 1970s. Even I coauthored a review of income inequality data in late 2006/early 2007!

3) As Ray points out quite clearly, the famous “r>g” of Piketty’s book is not an explanation for rising inequality. There are lots of standard growth models – indeed, all standard growth models that satisfy dynamic efficiency – where r>g holds with no impact on the income distribution. Ray gives the Harrod model: let output be produced solely by capital, and let the capital-output ratio be constant. Then Y=r*K, where r is the return to capital net of depreciation; equivalently, the capital-output ratio is K/Y=1/r. Now savings in excess of that necessary to replace depreciated assets is K(t+1)-K(t), or

Y(t+1)[K(t+1)/Y(t+1)] – Y(t)[K(t)/Y(t)]

Holding the capital-output ratio constant, savings equal [Y(t+1)-Y(t)]K/Y, so the savings rate (savings as a share of current output) is s=g[K/Y], where g is the growth rate of the economy. Finally, since K/Y=1/r in the Harrod model, we have that s=g/r, and hence r>g will hold in a Harrod model whenever the savings rate is less than 100% of current income. This model, however, has nothing to do with the distribution of income. Ray notes that the Phelps-Koopmans theorem implies that a similar r>g result will hold along any dynamically efficient growth path in much more general models.
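
Writing the same algebra out in one line (s here is savings as a share of current output, and K/Y is the constant capital-output ratio):

```latex
s \;\equiv\; \frac{K(t+1)-K(t)}{Y(t)}
\;=\; \frac{K}{Y}\cdot\frac{Y(t+1)-Y(t)}{Y(t)}
\;=\; \frac{K}{Y}\, g
\;=\; \frac{g}{r}
\qquad\Longrightarrow\qquad
r>g \iff s<1 .
```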

You may wonder, then, how we can have r>g and yet not have exploding income held by the capital-owning class. Two reasons: first, as Piketty has pointed out, r in these economic models (the return to capital, full stop) and r in the sense important to growing inequality, are not the same concept, since wars and taxes lower the r received by savers. Second, individuals presumably also dissave according to some maximization concept. Imagine an individual has $1 billion, the risk-free market return after taxes is 4%, and the economy-wide growth rate is 2%, with both numbers exogenously holding forever. It is of course true that this individual could increase their share of the economy’s wealth without bound. Even with the caveat that as the capital-owning class owns more and more, surely the portion of r due to time preference, and hence r itself, will decline, we still oughtn’t conclude that income inequality will become worse or that capital income will increase. If this representative rich individual simply consumes 1.92% of their wealth each year – equivalently, saves only about half of their 4% capital income – the ratio of income among the idle rich to national income will remain constant. What’s worse, if some of the savings is directed to human capital rather than physical capital, as is clearly true for the children of the rich in the US, the ratio of capital income to overall income will be even less likely to grow.
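
A few lines of arithmetic make the bookkeeping concrete; the 4% return, 2% growth rate and roughly 1.92% consumption rate are just the illustrative numbers from the paragraph above.

```python
# Wealth of an 'idle rich' dynasty relative to national income when r > g but
# the dynasty consumes a constant share of its (post-return) wealth each year.
r, g = 0.04, 0.02
consume = 1 - (1 + g) / (1 + r)          # ~ 0.0192: the consumption rate in the text
wealth, national_income = 1e9, 1e12      # illustrative starting values

for year in range(201):
    wealth *= (1 + r)                    # capital income accrues
    wealth *= (1 - consume)              # part of wealth is consumed
    national_income *= (1 + g)           # the economy grows at g
    if year % 50 == 0:
        print(f"year {year:3d}: capital income / national income = "
              f"{r * wealth / national_income:.8f}")
# The ratio never budges: r > g by itself says nothing about an exploding
# capital share once the rich consume even a modest slice of the return.
```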

These last couple of paragraphs are simply an extended argument that r>g is not a “Law” that says something about inequality, but rather a starting point for theoretical investigation. I am not sure why Piketty does not want to do this type of investigation himself, but the book would have been better had he done so.

4) What, then, does all this mean about the nature of inequality in the future? Ray suggests an additional law: that there is a long-run tendency for capital to replace labor. This is certainly true, particularly if human capital is counted as a form of “capital”. I disagree with Ray about the implication of this fact, however. He suggests that “to avoid the ever widening capital-labor inequality as we lurch towards an automated world, all its inhabitants must ultimately own shares of physical capital.” Consider the 19th century as a counterexample. There was enormous technical progress in agriculture. If you wanted a dynasty that would be rich in 2014, ought you have invested in agricultural land? Surely not. There has been enormous technical progress in RAM chips and hard drives in the last couple decades. Is the capital related to those industries where you ought to have invested? No. With rapid technical progress in a given sector, the share of total income generated by that sector tends to fall (see Baumol). Even when the share of total income is high, the social surplus of technical progress is shared among various groups according to the old Ricardian rule: rents accrue to the (relatively) fixed factor! Human capital which is complementary to automation, or goods which can maintain a partial monopoly in an industry complementary to those affected by automation, are much likelier sources of riches than owning a bunch of robots, since robots and the like are replicable and hence the rents accrued to their owners, regardless of the social import, will be small.

There is still a lot of work to be done concerning the drivers of long-run inequality, by economists and by those more concerned with political economy and sociology. Piketty’s data, no question, is wonderful. Ray is correct that the so-called Laws in Piketty’s book, and the predictions about the next few decades that they generate, are of less interest.

A Comment on Thomas Piketty, inclusive of appendix, is in pdf form, or a modified version in html can be read here.

On Gary Becker

Gary Becker, as you must surely know by now, has passed away. This is an incredible string of bad luck for the University of Chicago. With Coase and Fogel having passed recently, and Director, Stigler and Friedman dying a number of years ago, perhaps Lucas and Heckman are the only remaining giants from Chicago’s Golden Age.

Becker is of course known for using economic methods – by which I mean constrained rational choice – to expand economics beyond questions of pure wealth and prices to questions of interest to social science at large. But this contribution is too broad, and he was certainly not the only one pushing such an expansion; the Chicago Law School clearly was doing the same. For an economist, Becker’s principal contribution can be summarized very simply: individuals and households are producers as well as consumers, and rational decisions in production are as interesting to analyze as rational decisions in consumption. As firms must purchase capital to realize their productive potential, humans must purchase human capital to improve their own possible utilities. As firms take actions today which alter constraints tomorrow, so do humans. These may seem to be trite statements, but they are absolutely not: human capital, and dynamic optimization of fixed preferences, offer a radical framework for understanding everything from topics close to Becker’s heart, like educational differences across cultures or the nature of addiction, to the great questions of economics like how the world was able to break free from the dreadful Malthusian constraint.

Today, the fact that labor can augment itself with education is taken for granted, which is a huge shift in how economists think about production. Becker, in his Nobel Prize speech: “Human capital is so uncontroversial nowadays that it may be difficult to appreciate the hostility in the 1950s and 1960s toward the approach that went with the term. The very concept of human capital was alleged to be demeaning because it treated people as machines. To approach schooling as an investment rather than a cultural experience was considered unfeeling and extremely narrow. As a result, I hesitated a long time before deciding to call my book Human Capital, and hedged the risk by using a long subtitle. Only gradually did economists, let alone others, accept the concept of human capital as a valuable tool in the analysis of various economic and social issues.”

What do we gain by considering the problem of human capital investment within the household? A huge amount! By using human capital along with economic concepts like “equilibrium” and “private information about types”, we can answer questions like the following. Does racial discrimination wholly reflect differences in tastes? (No – because of statistical discrimination, underinvestment in human capital by groups that suffer discrimination can be self-fulfilling, and, as in Becker’s original discrimination work, different types of industrial organization magnify or ameliorate tastes for discrimination in different ways.) Is the difference between men and women in traditional labor roles a biological matter? (Not necessarily – with gains to specialization, even very small biological differences can generate very large behavioral differences.) What explains many of the strange features of labor markets, such as jobs with long tenure, firm boundaries, etc.? (Firm-specific human capital requires investment, and following that investment there can be scope for hold-up in a world without complete contracts.) The parenthetical explanations in this paragraph require completely different policy responses from previous, more naive explanations of the phenomena at play.

Personally, I find human capital most interesting in understanding the Malthusian world. Malthus conjectured the following: as productivity improves for some reason, excess food will appear. With excess food, people will have more children and population will grow, necessitating even more food. To generate more food, people will begin farming marginal land, until we wind up with precisely the living standards per capita that prevailed before the productivity improvement. We know, by looking out our windows, that the world in 2014 has broken free from Malthus’ dire calculus. But how? The critical factor must be that as productivity improves, population does not grow, or else grows more slowly than the continued endogenous increases in productivity. Why might that be? The quantity-quality tradeoff. A productivity improvement generates surplus, leading to demand for non-agricultural goods. Increased human capital generates more productivity in producing those goods. Parents have fewer kids but invest more heavily in their human capital so that they can work in the new sector. Such substitution is only partial, so in order to get wealthy, we need a big initial productivity improvement to generate demand for the goods in the new sector. And thus Malthus is defeated by knowledge.

Finally, a brief word on the origin of human capital. The idea that people take deliberate and costly actions to improve their productivity, and that formal study of this object may be useful, is modern: Mincer and Schultz in the 1950s, and then Becker with his 1962 article and famous 1964 book. That said, economists (to the chagrin of some other social scientists!) have treated humans as a type of capital for much longer. A fascinating 1966 JPE article [gated] traces this early history. Petty, Smith, Senior, Mill, von Thunen: they all thought an accounting of national wealth required accounting for the productive value of the people within the nation, and 19th century economists frequently mention that parents invest in their children. These early economists made such claims knowing they were controversial; Walras clarifies that in pure theory “it is proper to abstract completely from considerations of justice and practical expediency” and to regard human beings “exclusively from the point of view of value in exchange.” That is, don’t think we are imagining humans as being nothing other than machines for production; rather, human capital is just a useful concept when discussing topics like national wealth. Becker, unlike the caricature where he is the arch-neoliberal, was absolutely not the first to “dehumanize” people by rationalizing decisions like marriage or education in a cost-benefit framework; rather, he is great because he was the first to show how powerful an analytical concept such dehumanization could be!

“Competition in Persuasion,” M. Gentzkow & E. Kamenica (2012)

How’s this for fortuitous timing: I’d literally just gone through this paper by Gentzkow and Kamenica yesterday, and this morning it was announced that Gentzkow is the winner of the 2014 Clark Medal! More on the Clark in a bit, but first, let’s do some theory.

This paper is essentially the multiple sender version of the great Bayesian Persuasion paper by the same authors (discussed on this site a couple years ago). There is a group of experts who can (under commitment to only sending true signals) send costless signals about the realization of the state. Given the information received, the agent makes a decision, and each expert gets some utility depending on that decision. For example, the senders might be a prosecutor and a defense attorney who know the guilt of a suspect, and the agent a judge. The judge convicts if p(guilty)>=.5, the prosecutor wants to maximize convictions regardless of underlying guilt, and vice versa for the defense attorney. Here’s the question: if we have more experts, or less collusive experts, or experts with less aligned interests, is more information revealed?

A lot of our political philosophy is predicated on more competition in information revelation leading to more information actually being revealed, but this is actually a fairly subtle theoretical question! For one, John Stuart Mill and others of his persuasion would need some way of discussing how people competing to reveal information strategically interact, and to the extent that this strategic interaction is non-unique, they would need a way of “ordering” sets of potentially revealed information. We are lucky in 2014, thanks to our friends Nash and Topkis, to be able to nicely deal with each of those concerns.

The trick to solving this model (basically every proof in the paper comes down to algebra and some simple results from set theory; they are clever but not technically challenging) is the main result from the Bayesian Persuasion paper. Draw a graph with the agent’s posterior belief on the x-axis, and the utility (call this u) the sender gets from actions based on each posterior on the y-axis. Now draw the smallest concave function (call it V) that is everywhere at least as large as u. If V is strictly greater than u at the prior p, then a sender can improve her payoff by revealing information. Take the case of the judge and the prosecutor. If the judge has the prior that everyone brought before them is guilty with probability .6, then the prosecutor never reveals information about any suspect, and the judge always convicts (giving the prosecutor utility 1 rather than 0 from an acquittal). If, however, the judge’s prior is that everyone is guilty with .4, then the prosecutor can mix such that 80 percent of suspects are convicted by judiciously revealing information. How? Just take 2/3 of the innocent people, and all of the guilty people, and send signals that each of these people is guilty with p=.5, and give the judge information on the other 1/3 of innocent people that they are innocent with probability 1. This is Bayes plausible: the induced posteriors average back to the prior. The judge will convict all of the folks where p(guilty)=.5, meaning 80 percent of all suspects are convicted. If you draw the graph described above with u=1 when the judge convicts and u=0 otherwise, it is clear that V>u if and only if p<.5, hence information is only revealed in that case.
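
The arithmetic behind that example is easy to verify directly; the numbers below are just the ones from the paragraph above (prior of guilt 0.4, conviction at a posterior of at least 0.5, and the signal that pools every guilty suspect with two-thirds of the innocent ones).

```python
# Check the prosecutor's information structure in the judge example.
prior_guilty = 0.4

p_looks_guilty_given_guilty = 1.0          # all guilty suspects pooled into the signal
p_looks_guilty_given_innocent = 2.0 / 3.0  # plus two-thirds of the innocent ones

p_signal = (prior_guilty * p_looks_guilty_given_guilty
            + (1 - prior_guilty) * p_looks_guilty_given_innocent)
posterior = prior_guilty * p_looks_guilty_given_guilty / p_signal

print(f"P('looks guilty' signal)        = {p_signal:.2f}")    # 0.80
print(f"P(guilty | 'looks guilty')      = {posterior:.2f}")   # 0.50, so the judge convicts
print(f"share of all suspects convicted = {p_signal:.2f}")    # 0.80

# Bayes plausibility: the posteriors (0.5 and 0) average back to the prior 0.4.
assert abs(posterior * p_signal + 0.0 * (1 - p_signal) - prior_guilty) < 1e-12
```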

What about when there are multiple senders with different utilities u? It is somewhat intuitive: more information is always, almost by definition, informative for the agent (remember Blackwell!). If there is any sender who can improve their payoff by revealing information given what has been revealed thus far, then we are not in equilibrium, and some sender has the incentive to deviate by revealing more information. Therefore, adding more senders (weakly) increases the amount of information revealed and “shrinks” the set of beliefs that the agent might wind up holding (and, further, the authors show that any Bayes-plausible set of beliefs at which no sender can profitably reveal further information is an equilibrium outcome). We still have a number of technical details concerning multiplicity of equilibria to deal with, but the authors show that these results hold in a set-order sense as well. This theorem is actually great: to check equilibrium information revelation, I only need to check where V and u diverge sender by sender, without worrying about complex strategic interactions. Because of that simplicity, it ends up being very easy to show that removing collusion among senders, or increasing the number of senders, will improve information revelation in equilibrium.

September 2012 working paper (IDEAS version). A brief word on the Clark medal. Gentzkow is a fine choice, particularly for his Bayesian persuasion papers, which are already very influential. I have no doubt that 30 years from now, you will still see the 2011 paper on many PhD syllabi. That said, the Clark medal announcement is very strange. It focuses very heavily on his empirical work on newspapers and TV, and mentions his hugely influential theory as a small aside! This means that five of the last six Clark medal winners, everyone but Levin and his relational incentive contracts, have been cited primarily for MIT/QJE-style theory-light empirical microeconomics. Even though I personally am primarily an applied microeconomist, I still see this as a very odd trend: no prizes for Chernozhukov or Tamer in metrics, or Sannikov in theory, or Farhi and Werning in macro, or Melitz and Costinot in trade, or Donaldson and Nunn in history? I understand these papers are harder to explain to the media, but it is not a good thing when the second most prominent prize in our profession is essentially ignoring 90% of what economists actually do.

“Finite Additivity, Another Lottery Paradox, and Conditionalisation,” C. Howson (2014)

If you know the probability theorist Bruno de Finetti, you know him either for his work on exchangeable processes, or for his legendary defense of finite additivity. Finite additivity essentially replaces the Kolmogorov assumption of countable additivity of probabilities. If Pr(i) for i=1 to N is the probability of disjoint event i, then the probability of the union of all i is just the sum of the individual probabilities under either countable or finite additivity, but countable additivity requires that property to hold for countably infinite collections of disjoint events as well.

What is objectionable about countable additivity? There are three classic problems. First, countable additivity restricts me from some very reasonable subjective beliefs. For instance, I might imagine that a Devil is going to pick one of the integers, and that he is equally likely to pick any given number. That is, my prior is uniform over the integers. Countable additivity does not allow this: if the probability of any given number being picked is greater than zero, then the sum diverges, and if the probability any given number is picked is zero, then by countable additivity the probability of the grand set is also zero, violating the usual axiom that the grand set has probability 1. The second problem, loosely related to the first, is that I literally cannot assign probabilities to some objects, such as a nonmeasurable set.
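
To see the first objection concretely, suppose the Devil’s number were uniformly distributed over the integers, with each integer assigned the same probability c. Countable additivity then gives a contradiction, whichever value c takes:

```latex
1 \;=\; \Pr(\mathbb{Z}) \;=\; \sum_{n \in \mathbb{Z}} \Pr(\{n\}) \;=\; \sum_{n \in \mathbb{Z}} c \;=\;
\begin{cases}
0 & \text{if } c = 0,\\
\infty & \text{if } c > 0.
\end{cases}
```

A merely finitely additive measure escapes the problem: it can assign every singleton probability zero while still giving the grand set, and indeed every cofinite set, probability one.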

The third problem, though, is the really worrying one. To the extent that a theory of probability has epistemological meaning and is not simply a mathematical abstraction, we might want to require that it not contradict well-known philosophical premises. Imagine that every day, nature selects either 0 or 1. Let us observe 1 every day until the present (call this day N). Let H be the hypothesis that nature will select 1 every day from now until infinity. It is straightforward to show that countable additivity requires that, so long as H is assigned any positive prior probability, continued observation of 1 drives Pr(H) toward 1 as N grows large. But this is just saying that induction works! And if there is any great philosophical advance in the modern era, it is Hume’s (and Goodman’s, among others) demolition of the idea that induction is sensible. My own introduction to finite additivity comes from a friend’s work on consensus formation and belief updating in economics: we certainly don’t want to bake in ridiculous conclusions about beliefs that rely entirely on countable additivity, given how strongly that assumption militates for induction. Aumann was always very careful on this point.

It turns out that if you simply replace countable additivity with finite additivity, all of these problems (among others) go away. Howson, in a paper in the newest issue of Synthese, asks why, given that clear benefit, anyone still finds countable additivity justifiable. Surely there are lots of pretty theorems, from Radon-Nikodym on down, that require countable additivity, but if a theorem critically hinges on an unjustifiable assumption, then what exactly are we to infer about the justifiability of the theorem itself?

Two serious objections are tougher to deal with for de Finetti acolytes: coherence and conditionalization. Coherence, a principle closely associated with de Finetti himself, says that there should not be “fair bets” given your beliefs where you are guaranteed to lose money. It is sometimes claimed that a uniform prior over the naturals is not coherent: you are willing to take a bet that any given natural number will not be drawn, but the conjunction of such bets for all natural numbers means you will lose money with certainty. This isn’t too worrying, though; if we reject countable additivity, then why should we define coherence to apply to non-finite conjunctions of bets?

Conditionalization is more problematic. It means that given prior P(i), your posterior P(f) of event S after observing event E must be such that P(f)(S)=P(i)(S|E). This is just “Bayesian updating” off of a prior. Lester Dubins pointed out the following. Let A and B be two mutually exclusive hypotheses, such that P(A)=P(B)=.5. Let the random quantity X take positive integer values such that P(X=n|A)=0 (you have a uniform prior over the naturals conditional on A obtaining, which finite additivity allows), and P(X=n|B)=2^(-n). By the law of total probability, for all n, P(X=n)>0, and therefore by Bayes’ Theorem, P(B|X=n)=1 and P(A|X=n)=0, no matter which n obtains! Something is odd here. Before seeing the resolution of n, you would take a fair bet on A obtaining. But once n obtains (no matter which n!), you regard any bet on A as a guaranteed loser.
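
Writing out the Bayes step makes the oddity plain (this uses exactly the conditional probabilities just specified):

```latex
\Pr(A \mid X=n)
\;=\; \frac{\Pr(X=n \mid A)\,\Pr(A)}{\Pr(X=n \mid A)\,\Pr(A) + \Pr(X=n \mid B)\,\Pr(B)}
\;=\; \frac{0 \cdot \tfrac12}{0 \cdot \tfrac12 + 2^{-n}\cdot \tfrac12}
\;=\; 0 \qquad \text{for every } n,
```

so conditional on any realization of X whatsoever, A is ruled out, even though its unconditional probability was one half.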

Here is where Howson tries to save de Finetti with an unexpected tack. The problem in Dubins’ example is not finite additivity, but conditionalization – Bayesian updating from priors – itself! Here’s why. By a principle called “reflection”, if, under a suitable updating rule, your future probability of event A will be p with certainty, then your current probability of event A must also be p. By Dubins’ argument, then, P(A)=0 must hold before X realizes. But that means your prior must be 0, which means that whatever independent reasons you had for the prior being .5 must be rejected. If we are to give up one of Reflection, Finite Additivity, Conditionalization, Bayes’ Theorem or the Existence of Priors, Howson says we ought to give up conditionalization. Now, there are lots of good reasons why conditionalization is sensible within a utility framework, so at this point, I will simply point you toward the full paper and let you decide for yourself whether Howson’s conclusion is sensible. In any case, the problems with countable additivity should be better known by economists.

Final version in Synthese, March 2014 [gated]. Incidentally, de Finetti was very tightly linked to the early econometricians. His philosophy – that probability is a form of logic and hence non-ampliative (“That which is logical is exact, but tells us nothing”) – simply oozes out of Savage/Aumann/Selten methods of dealing with reasoning under uncertainty. Read, for example, what Keynes had to say about what a probability is, and you will see just how radical de Finetti really was.

“At Least Do No Harm: The Use of Scarce Data,” A. Sandroni (2014)

This paper by Alvaro Sandroni in the new issue of AEJ:Micro is only four pages long, and has only one theorem whose proof is completely straightforward. Nonetheless, you might find it surprising if you don’t know the literature on expert testing.

Here’s the problem. I have some belief p about which events (perhaps only one, perhaps many) will occur in the future, but this belief is relatively uninformed. You come up to me and say, hey, I actually *know* the distribution, and it is p*. How should I incentivize you to truthfully reveal your knowledge? This step is actually an old one: all we need is something called a proper scoring rule, the Brier Score being the most famous. If someone makes N predictions f(i) about the probability of binary events i occurring, then the Brier Score is the sum of the squared difference between each prediction and its outcome {0,1}, divided by N. So, for example, if there are three events, you say all three will independently happen with p=.5, and the actual outcomes are {0,1,0}, your score is 1/3*[(.5-1)^2+2*(.5-0)^2], or .25. The Brier Score being a proper scoring rule means that your expected score is lowest if you actually predict the true probability distribution. That being the case, all I need to do is pay you more the lower your Brier Score is, and if you are risk-neutral, you, being the expert, will truthfully reveal your knowledge. There are more complicated scoring rules that can handle general non-binary outcomes, of course. (If you don’t know what a scoring rule is, it might be worthwhile to convince yourself why a rule equal to the summed absolute value of deviations between prediction and outcome is not proper.)
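
Both claims in that paragraph – the .25 in the example, and the properness of the quadratic score versus the impropriety of summed absolute deviations – take only a few lines to check numerically (the true probability of 0.7 below is just an arbitrary illustration):

```python
import numpy as np

def brier(preds, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes."""
    preds, outcomes = np.asarray(preds, float), np.asarray(outcomes, float)
    return np.mean((preds - outcomes) ** 2)

print(brier([0.5, 0.5, 0.5], [0, 1, 0]))     # 0.25, as in the example above

# Properness: if the event truly occurs with probability 0.7, which announced
# probability q minimizes the expected penalty?
true_p = 0.7
grid = np.linspace(0, 1, 101)
expected_brier = true_p * (grid - 1) ** 2 + (1 - true_p) * grid ** 2
expected_abs = true_p * np.abs(grid - 1) + (1 - true_p) * np.abs(grid)

print(grid[np.argmin(expected_brier)])       # 0.7: announcing the truth is optimal
print(grid[np.argmin(expected_abs)])         # 1.0: absolute loss pushes you to a corner
```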

That’s all well and good, but a literature over the past decade or so called “expert testing” has dealt with the more general problem of knowing who is actually an expert at all. It turns out that it is incredibly challenging to screen experts from charlatans when it comes to probabilistic forecasts. The basic (too basic, I’m afraid) reason is that your screening rule can only condition on realizations, but the expert is expected to know a much more complicated object, the probability distributions of each event. Imagine you want to use the following rule, called calibration, to test weathermen: on days when rain was predicted with p=.4, it should actually rain on close to 40 percent of those days. A charlatan has no idea whether it will rain today or tomorrow, but after making a year of predictions, notices that most of his predictions are “too low”. When rain was predicted with .6, it rained 80 percent of the time, and when predicted with .7, it rained 72 percent of the time, etc. What should the charlatan do? Start predicting rain every day, to become “better calibrated”. As the number of days grows large, this trick gets the charlatan closer and closer to calibration.

But, you say, surely I can notice such an obviously tricky strategy. That implicitly means you want to use a more complicated test to screen the charlatans from the experts. And a famous result of Foster and Vohra (which apparently was very hard to publish because so many referees simply didn’t believe the proof!) says that any test which passes experts with high probability for any realization of nature as the number of predictions gets large can be passed by a suitably clever and strategic charlatan with high probability. And, indeed, the proof of this turns out to be a straightforward application of an abstract minimax theorem proven by Fan in the early 1950s.

Back, now, to the original problem of this post. If I know you are an expert, I can get your information with a payment that is maximized when a proper scoring rule is minimized. But what if, in addition to wanting info when it is good, I don’t want to be harmed when you are a charlatan? And further, what if only a single prediction is being made? The expert testing results mean that screening good from bad is going to be a challenge no matter how much data I have. If you are a charlatan and are always incentivized to report my prior, then I am not hurt. But if you actually know the true probabilities, I want to pay you according to a proper scoring rule. Try this payment scheme: if you predict my prior p, then you get a payment ε which does not depend on the realization of the data. If you predict anything else, you get a payment based on a proper scoring rule, scaled so that the expected payment to an informed expert who reports the truth is greater than ε. So the informed expert is incentivized to report truthfully (there is a straightforward modification of the above if the informed expert is not risk-neutral). How can we get the charlatan to always report p? If the charlatan has minmax preferences as in Gilboa-Schmeidler, then the payoff is ε if p is reported no matter how the data realizes. If, however, the probability distribution actually is p, and the charlatan ever reports anything other than p, then since payoffs are based on a proper scoring rule, in that “worst-case scenario” the charlatan’s expected payoff is less than ε, hence she will never report anything other than p due to the minmax preferences. I wouldn’t worry too much about the minmax assumption, since it makes quite a bit of sense as a utility function for a charlatan who must decide what to announce under a complete veil of ignorance about nature’s true distribution.
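
Here is a toy numerical version of that argument. It is not Sandroni’s construction: the quadratic score, the prior of 0.3, the flat payment ε=0.75 and the report grid are all my own choices, and the knife-edge ties (at a report of one half, and for an expert whose knowledge happens to coincide with the prior) are artifacts of this crude version that the paper’s actual scheme handles properly.

```python
import numpy as np

prior, eps = 0.3, 0.75                        # illustrative parameters, not the paper's
reports = np.round(np.linspace(0, 1, 21), 2)  # possible announced probabilities

def expected_payment(report, true_p):
    """Announcing the prior pays a flat eps; any other announcement pays one
    minus the expected quadratic (Brier) loss when the event occurs w.p. true_p."""
    if np.isclose(report, prior):
        return eps
    return 1 - (true_p * (report - 1) ** 2 + (1 - true_p) * report ** 2)

truths = np.linspace(0, 1, 201)

# 1) A minmax charlatan, who entertains every possible true probability, can
#    never do strictly better in the worst case than the flat eps from the prior.
for q in reports:
    if not np.isclose(q, prior):
        assert min(expected_payment(q, t) for t in truths) <= eps + 1e-12

# 2) An informed expert who knows the true probability t prefers announcing it:
#    the truthful expected payment is at least eps and beats every other report.
for t in truths:
    if np.isclose(t, prior):
        continue                              # knife-edge case ignored in this toy
    truthful = expected_payment(t, t)
    assert truthful >= eps - 1e-12
    assert truthful >= max(expected_payment(q, t) for q in reports) - 1e-9

print("toy check passed: the charlatan pools on the prior, the expert tells the truth")
```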

Final AEJ:Micro version, which is unfortunately behind a paywall (IDEAS page). I can’t find an ungated version of this article. It remains a mystery why the AEA is still gating articles in the AEJ journals. This is especially true of AEJ:Micro, a society-run journal whose main competitor, Theoretical Economics, is completely open access.

“Immigration and the Diffusion of Technology: The Huguenot Diaspora in Prussia,” E. Hornung (2014)

Is immigration good for natives of the recipient country? This is a tough question to answer, particularly once we think about the short versus long run. Large-scale immigration might have bad short-run effects simply because more L plus fixed K means lower average incomes in essentially any economic specification, but even given that fact, immigrants bring with them tacit knowledge of techniques, ideas, and plans which might be relatively uncommon in the recipient country. Indeed, world history is filled with wise leaders who imported foreigners, occasionally by force, in order to access their knowledge. As that knowledge spreads among the domestic population, productivity increases and immigrants are in the long run a net positive for native incomes.

How substantial can those long-run benefits be? History provides a nice experiment, described by Erik Hornung in a just-published paper. The Huguenots, French Protestants, were largely expelled from France after the Edict of Nantes was revoked by the Sun King, Louis XIV. The Huguenots were generally in the skilled trades, and their expulsion to the UK, the Netherlands and modern Germany (primarily) led to a great deal of tacit technology transfer. And, no surprise, in the late 17th century, there was very little knowledge transfer aside from face-to-face contact.

In particular, Frederick William, Grand Elector of Brandenburg, offered his estates as refuge for the fleeing Huguenots. Much of his land had been depopulated in the plagues that followed the Thirty Years’ War. The centralized textile production facilities sponsored by nobles and run by Huguenots soon after the Huguenots arrived tended to fail quickly – there simply wasn’t enough demand in a place as poor as Prussia. Nonetheless, a contemporary mentions 46 professions brought to Prussia by the Huguenots, as well as new techniques in silk production, dyeing fabrics and cotton printing. When the initial factories failed, the knowledge of the apprentices who had been hired, and the capital that had been purchased, remained. Technology transfer to natives became more common as later generations integrated more tightly with natives, moving out of Huguenot settlements and intermarrying.

What’s particularly interesting with this history is that the quantitative importance of such technology transfer can be measured. In 1802, incredibly, the Prussians had a census of manufactories, or factories producing stock for a wide region, including capital and worker input data. Also, all immigrants were required to register yearly, and include their profession, in 18th century censuses. Further, Huguenots did not simply move to places with existing textile industries where their skills were most needed; indeed, they tended to be placed by the Prussians in areas which had suffered large population losses following the Thirty Years’ War. These population losses were highly localized (and don’t worry, before using population loss as an IV, Hornung makes sure that population loss from plague is not simply tracing out existing transportation highways). Using the input data to estimate a Cobb-Douglas textile production function, Hornung finds that an additional percentage point of the population with Huguenot origins in 1700 is associated with a 1.5 percentage point increase in textile productivity in 1800. This result is robust in the IV regression using wartime population loss as an instrument for the share of Huguenot immigrants, as well as to many other robustness checks. 1.5% is huge given the slow rate of growth in this era.

An interesting historical case. It is not obvious to me how relevant this estimate is to modern immigration debates; clearly it must depend on the extent to which knowledge can be written down or communicated at distance. I would posit that the strong complementarity of factors of production (including VC funding, etc.) is much more important than tacit knowledge spread in modern agglomeration economies of scale, but that is surely a very difficult claim to investigate empirically using modern data.

2011 Working Paper (IDEAS version). Final paper published in the January 2014 AER.

“Wall Street and Silicon Valley: A Delicate Interaction,” G.-M. Angeletos, G. Lorenzoni & A. Pavan (2012)

The Keynesian Beauty Contest – is there any better example of an “old” concept in economics that, when read in its original form, is just screaming out for a modern analysis? You’ve got coordination problems, higher-order beliefs, signal extraction about underlying fundamentals, optimal policy response by a planner herself informationally constrained: all of these, of course, problems that have consumed micro theorists over the past few decades. The general problem with irrational exuberance, though, is that once we start to model things formally, it turns out to be very difficult to generate “irrational” actions by rational, forward-looking agents. Angeletos et al have a very nice model that can generate irrational-looking asset price movements even when all agents are perfectly rational, based on the idea of information frictions between the real and financial sectors.

Here is the basic plot. Entrepreneurs get an individual signal and a correlated signal about the “real” state of the economy (the correlation in error about fundamentals may be a reduced-form measure of previous herding, for instance). The entrepreneurs then make a costly investment. In the next period, some percentage of the entrepreneurs have to sell their asset on a competitive market. This may represent, say, idiosyncratic liquidity shocks, but really it is just in the model to abstract away from the finance sector learning about entrepreneur signals based on the extensive margin choice of whether to sell or not. The price paid for the asset depends on the financial sector’s beliefs about the real state of the economy, which come from a public noisy signal and the traders’ observations about how much investment was made by entrepreneurs. Note that the price traders pay is partially a function of trader beliefs about the state of the economy derived from the total investment made by entrepreneurs, and the total investment made is partially a function of the price at which entrepreneurs expect to be able to sell capital should a liquidity crisis hit a given firm. That is, higher order beliefs of both the traders and entrepreneurs about what the other aggregate class will do determine equilibrium investment and prices.

What does this imply? Capital investment is higher in the first stage if either the state of the world is believed to be good by entrepreneurs, or if the price paid in the following period for assets is expected to be high. Traders will pay a high price for an asset if the state of the world is believed to be good. These traders look at capital investment and essentially see another noisy signal about the state of the world. When an entrepreneur sees a correlated signal that is higher than his private signal, he increases investment due to a rational belief that the state of the world is better, but then increases it even more because of an endogenous strategic complementarity among the entrepreneurs, all of whom prefer higher investment by the class as a whole since that leads to more positive beliefs by traders and hence higher asset prices tomorrow. Of course, traders understand this effect, but a fixed point argument shows that even accounting for the aggregate strategic increase in investment when the correlated signal is high, aggregate capital can be read by traders precisely as a noisy signal of the actual state of the world. This means that when entrepreneurs invest partially on the basis of a signal correlated among their class (i.e., there are information spillovers), investment is based too heavily on noise. An overweighting of public signals in a type of coordination game is right along the lines of the lesson in Morris and Shin (2002). Note that the individual signals for entrepreneurs are necessary to keep the traders from being able to completely invert the information contained in capital production.
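
For intuition only, here is a crude linear-Gaussian toy of that signal-extraction step – emphatically not the authors’ model. Entrepreneurs average a private signal and a common signal of the fundamental θ; the weight on the common signal is a reduced-form stand-in for the equilibrium complementarity. Aggregate investment, which is what traders observe, averages away the private noise but not the common noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_firms = 2000, 500

theta = rng.normal(0, 1, n_draws)                       # the real state of the economy
common_noise = rng.normal(0, 1, n_draws)
common_signal = theta + common_noise                    # identical error across entrepreneurs
private_signals = theta[:, None] + rng.normal(0, 1, (n_draws, n_firms))

def aggregate_investment(weight_on_common):
    # Each entrepreneur invests a weighted average of her two signals; a larger
    # weight on the common signal stands in for stronger strategic complementarity.
    k = (1 - weight_on_common) * private_signals + weight_on_common * common_signal[:, None]
    return k.mean(axis=1)

for w in [0.5, 0.8]:
    K = aggregate_investment(w)
    print(f"weight on common signal {w:.1f}: "
          f"corr(aggregate investment, theta) = {np.corrcoef(K, theta)[0, 1]:.3f}, "
          f"sd of the noise traders mis-read = {np.std(K - theta):.3f}")

# The idiosyncratic noise washes out of aggregate investment; the correlated
# noise does not, and the more weight entrepreneurs put on the common signal,
# the more the 'signal' traders extract from investment is really just noise.
```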

What can a planner who doesn’t observe these signals do? Consider taxing investment as a function of asset prices, where high taxes appear when the market gets particularly frothy. This is good on the one hand: entrepreneurs build too much capital following a high correlated signal because other entrepreneurs will be doing the same and therefore traders will infer the state of the world is high and pay high prices for the asset. Taxing high asset prices lowers the incentive for entrepreneurs to shade capital production up when the correlated signal is good. But this tax will also lower the incentive to produce more capital when the actual state of the world, and not just the correlated signal, is good. The authors discuss how taxing capital and the financial sector separately can help alleviate that concern.

Proving all of this formally, it should be noted, is quite a challenge. And the formality is really a blessing, because we can see what is necessary and what is not if a beauty contest story is to explain excess aggregate volatility. First, we require some correlation in signals in the real sector to get the Morris-Shin effect operating. Second, we do not require the correlation to be on a signal about the real world; it could instead be correlation about a higher order belief held by the financial sector! The correlation merely allows entrepreneurs to figure something out about how much capital they as a class will produce, and hence about what traders in the next period will infer about the state of the world from that aggregate capital production. Instead of a signal that correlates entrepreneur beliefs about the state of the world, then, we could have a correlated signal about higher-order beliefs, say, how traders will interpret how entrepreneurs interpret how traders interpret capital production. The basic mechanism will remain: traders essentially read from aggregate actions of entrepreneurs a noisy signal about the true state of the world. And all this beauty contest logic holds in an otherwise perfectly standard Neokeynesian rational expectations model!

2012 working paper (IDEAS version). This paper used to go by the title “Beauty Contests and Irrational Exuberance”; I prefer the old name!

Personal Note: Moving to Toronto

Before discussing a lovely application of High Micro Theory to a long-standing debate in macro in a post coming right behind this one, a personal note: starting this summer, I am joining the Strategy group at the University of Toronto Rotman School of Management as an Assistant Professor. I am, of course, very excited about the opportunity, and am glad that Rotman was willing to give me a shot even though I have a fairly unusual set of interests. Some friends asked recently if I have any job market advice, and I told them that I basically just spent five years reading interesting papers, trying to develop a strong toolkit, and using that knowledge base to attack questions I am curious about as precisely as I could, with essentially no concern about how the market might view this. Even if you want to be strategic, though, this type of idiosyncrasy might not be a bad strategy.

Consider the following model: any school evaluates you according to v+e(s), where v is a common signal of your quality and e(s) is a school-specific taste shock. Your best offer comes from the school s at which v+e(s) is highest; you are, essentially, maximizing a first-order statistic. What this means is that increasing v (by being smarter, or harder-working, or in a hotter field) and increasing the variance of e (by, e.g., working on very specific topics even if they are not “hot”, or by developing an unusual set of talents) are equally effective in garnering a job you will be happy with. And, at least in my case, increasing v provides disutility whereas increasing the variance of e can be quite enjoyable! If you do not want to play such a high-variance strategy, though, my friend James Bailey (heading from Temple’s PhD program to work at Creighton) has posted some more sober yet still excellent job market advice. I should also note that writing a research-oriented blog seemed to be weakly beneficial as far as interviews were concerned; in perhaps a third of my interviews, someone mentioned this site, and I didn’t receive any negative feedback. Moving from personal anecdote to the minimal sense of the word data, Jonathan Dingel of Trade Diversion also seems to have had a great deal of success. Given this, I would suggest that there isn’t much need to worry that writing publicly about economics, especially if restricted to technical content, will torpedo a future job search.
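
A tiny Monte Carlo of that model (thirty schools and the particular numbers below are invented; only the comparative static matters):

```python
import numpy as np

rng = np.random.default_rng(2)
n_schools, n_sims = 30, 100_000

def expected_best_offer(v, sigma):
    """Expected value of max over schools s of v + e(s), with e(s) ~ N(0, sigma^2)."""
    shocks = rng.normal(0, sigma, (n_sims, n_schools))
    return (v + shocks).max(axis=1).mean()

print(expected_best_offer(v=0.0, sigma=1.0))   # baseline
print(expected_best_offer(v=0.5, sigma=1.0))   # raise the common quality signal v
print(expected_best_offer(v=0.0, sigma=1.5))   # raise the variance of the taste shock
# Both moves raise the expected first-order statistic: cranking up idiosyncratic
# variance substitutes for cranking up v.
```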
