Category Archives: Technology Transfer

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

How do the social returns to R&D differ from the private returns? We must believe there is a positive gap between the two given the widespread policies of subsidizing R&D investment. The problem is measuring the gap: theory gives us a number of reasons why firms may do more R&D than the social optimum. Most intuitively, a lot of R&D involves “business stealing” effects, where some of the profit you earn from your new computer chip comes from taking sales away from me, even if your chip is only slightly better than mine. Business stealing must be weighed against the fact that some of the benefits of the knowledge a firm creates are captured by other firms working on similar problems, and the fact that consumers get surplus from new inventions as well.

My read of the literature is that we don’t know much about how aggregate social returns to research differ from private returns. The very best work is at the industry level, such as Trajtenberg’s fantastic paper on CAT scans, where he formally writes down a discrete choice demand system for new innovations in that product and compares R&D costs to social benefits. The problem with industry-level studies is that, almost by definition, they are studying the social return to R&D in ex-post successful new industries. At an aggregate level, you might think, well, just include the industry stock of R&D in a standard firm production regression. This will control for within-industry spillovers, and we can make some assumption about the steepness of the demand curve to translate private returns given spillovers into returns inclusive of consumer surplus.

There are two problems with that method. First, what is an “industry” anyway? Bloom et al point out in the present paper that even though Apple and Intel do very similar research, as measured by the technology classes they patent in, they don’t actually compete in the product market. This means that we want to include “within-similar-technology-space stock of knowledge” in the firm production function regression, not “within-product-space stock of knowledge”. Second, and more seriously, if we care about social returns, we want to subtract out from the private return to R&D any increase in firm revenue that just comes from business stealing with slightly-improved versions of existing products.

Bloom et al do both in a very interesting way. First, they write down a model where firms get spillovers from research in similar technology classes, then compete with product market rivals; technology space and product market space are correlated but not perfectly so, as in the Apple/Intel example. They estimate spillovers in technology space using measures of closeness in terms of patent classes, and measure closeness in product space based on the SIC industries that firms jointly compete in. The model overidentifies the existence of spillovers: if technological spillovers exist, then you can find evidence conditional on the model in terms of firm market value, firm R&D totals, firm productivity and firm patent activity. No big surprises, given your intuition: technological spillovers to other firms can be seen in every estimated equation, and business stealing R&D, though small in magnitude, is a real phenomenon.

The really important estimate, though, is the level of aggregate social returns compared to private returns. The calculation is non-obvious, and shuttled to an online appendix, but essentially we want to know how increasing R&D by one dollar increases total output (the marginal social return) and how increasing R&D by one dollar increases firm revenue (the marginal private return); see the schematic sketch below. The former may exceed the latter if the benefits of R&D spill over to other firms, but the latter may exceed the former if lots of R&D just leads to business stealing. Note that any benefits in terms of consumer surplus are omitted. Bloom et al find aggregate marginal private returns on the order of 20%, and social returns on the order of 60% (a gap referred to as “29.2%” instead of “39.2%” in the paper; come on, referees, this is a pretty important thing to not notice!). If it weren’t for business stealing, the gap between social and private returns would be ten percentage points higher. I confess a little bit of skepticism here; do we really believe that for the average R&D-performing firm, the marginal private return on R&D is 20%? Nonetheless, the estimate that social returns exceed private returns is important. Even more important is the insight that the gap between social and private returns depends on the size of the technology spillover. In Bloom et al’s data, large firms tend to do work in technology spaces with more spillovers, while small firms tend to work on fairly idiosyncratic R&D; to greatly simplify what is going on, large firms are doing more general R&D than the very product-specific R&D small firms do. This means that the gap between private and social return is larger for large firms, and hence the justification for subsidizing R&D might be highest for very large firms. Government policy in the U.S. used to implicitly recognize this intuition, shuttling R&D funds to the likes of Bell Labs.
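
To fix notation, here is a schematic restatement of that calculation. This is only my shorthand for the intuition above, not the paper’s actual appendix derivation, and the symbols (G_i for firm i’s R&D stock, Y_j for firm j’s output or revenue) are mine:

```latex
% Schematic only: G_i is firm i's R&D stock, Y_j is firm j's output (revenue).
\[
  MSR_i = \frac{\partial \sum_j Y_j}{\partial G_i}, \qquad
  MPR_i = \frac{\partial Y_i}{\partial G_i},
\]
\[
  MSR_i - MPR_i = \sum_{j \neq i} \frac{\partial Y_j}{\partial G_i}
  = \underbrace{\text{spillovers to technological neighbors}}_{\text{raises the gap}}
  - \underbrace{\text{sales stolen from product-market rivals}}_{\text{shrinks the gap}}.
\]
```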

All in all an important contribution, though this is by no means the last word on spillovers; I would love to see a paper asking why firms don’t do more R&D given the large private returns we see here (and in many other papers, for that matter). I am also curious how R&D spillovers compare to spillovers from other types of investments. For instance, an investment increasing demand for product X also increases demand for any complementary products, leads to increased revenue that is partially captured by suppliers with some degree of market power, etc. Is R&D really that special compared to other forms of investment? Not clear to me, especially if we are restricting to more applied, or more process-oriented, R&D. At the very least, I don’t know of any good evidence one way or the other.

Final version, Econometrica 2013 (RePEc IDEAS version); the paper essentially requires reading the Appendix in order to understand what is going on.


“Why Did Universities Start Patenting?: Institution Building and the Road to the Bayh-Dole Act,” E. P. Berman (2008)

It goes without saying that the Bayh-Dole Act had huge ramifications for science in the United States. Passed in 1980, Bayh-Dole permitted (indeed, encouraged) universities to patent the output of federally-funded science. I think the empirical evidence is still not complete on whether this increase in university patenting has been good (more, perhaps, incentive to develop products based on university research), bad (patents generate static deadweight loss, and exclusive patent licenses limit future developers) or “worse than the alternative” (if the main benefit of Bayh-Dole is encouraging universities to promote their research to the private sector, we can achieve that goal without the deadweight loss of patents).

As a matter of theory, however, it’s hard for me to see how university patenting could be beneficial. The usual static tradeoff with patents is deadweight loss after the product is developed in exchange for the quasirents that incentivize fixed costs of research to be paid by the initial developer. With university research, you don’t even get that benefit, since the research is being done anyway. This means you have to believe the “increased incentive for someone to commercialize” under patents is enough to outweigh the static deadweight loss; it is not even clear that there is any increased incentive in the first place. Scientists seem to understand what is going on: witness the license manager of the enormously profitable Cohen-Boyer recombinant DNA patent, “[W]hether we licensed it or not, commercialisation of recombinant DNA was going forward. As I mentioned, a non-exclusive licensing program, at its heart, is really a tax … [b]ut it’s always nice to say technology transfer.” That is, it is clear why cash-strapped universities like Bayh-Dole regardless of the social benefit.

In today’s paper, Elizabeth Popp Berman, a sociologist, poses an interesting question. How did Bayh-Dole ever pass given the widespread antipathy toward “locking up the results of public research” in the decades before its passage? She makes two points of particular interest. First, it’s not obvious that there is any structural break in 1980 in university patenting, as university patents increased 250% in the 12 years before the Act and about 300% in the 12 years afterward. Second, this pattern holds because the development of institutions and interested groups necessary for the law to change was a fairly continuous process beginning perhaps as early as the creation of the Research Corporation in 1912. What this means for economists is that we should be much more careful about seeing changes in law as “exogenous” since law generally just formalized already-changing practice, and that our understanding of economic events driven by rational agents acting under constraints ought sometimes to focus more on the constraints and how they develop than on the rational action.

Here’s the history. Following World War II, the federal government became a far more important source of funding for university and private-sector science in the United States. Individual funding agencies differed in their patent policy; for instance, the Atomic Energy Commission essentially did not allow university scientists to patent the output of federally-funded research, whereas the Department of Defense permitted patents from its contractors. Patents were particularly contentious since over 90% of federal R&D in this period went to corporations rather than universities. Through the 1960s, the NIH began to fund more and more university science, and it hired a patent attorney in 1963, Norman Latker, who was very much in favor of private patent rights.

Latker received support for his position from two white papers published in 1968 that suggested the HEW (the parent of the NIH) was letting medical research languish because they wouldn’t grant exclusive licenses to pharma firms, who in turn argued that without the exclusive license they wouldn’t develop the research into a product. The politics of this report allowed Latker enough bureaucratic power to freely develop agreements with individual universities allowing them to retain patents in some cases. The rise of these agreements led many universities to hire patent officers, who would later organize into a formal lobbying group pushing for more ability to patent federally-funded research. Note essentially what is going on: individual actors or small groups take actions in each period which change the payoffs to future games (partly by incurring sunk costs) or by introducing additional constraints (reports that limit the political space for patent opponents, for example). The eventual passage of Bayh-Dole, and its effects, necessarily depend on that sort of institution building which is often left unmodeled in economic or political analysis. Of course, the full paper has much more detail about how this program came to be, and is worth reading in full.

Final version in Social Studies of Science (gated). I’m afraid I could not find an ungated copy.

“How do Patents Affect Follow-On Innovation: Evidence from the Human Genome,” B. Sampat & H. Williams (2014)

This paper, by Heidi Williams (who surely you know already) and Bhaven Sampat (who is perhaps best known for his almost-sociological work on the Bayh-Dole Act with Mowery), made quite a stir at the NBER last week. Heidi’s job market paper a few years ago, on the effect of openness in the Human Genome Project as compared to Celera, is often cited as an “anti-patent” paper. Essentially, she found that portions of the human genome sequenced by the HGP, which placed their sequences in the public domain, were much more likely to be studied by scientists and used in tests than portions sequenced by Celera, who initially required fairly burdensome contractual steps to be followed. This result was very much in line with research done by Fiona Murray, Jeff Furman, Scott Stern and others which also found that minor differences in openness or accessibility can have substantial impacts on follow-on use (I have a paper with Yasin Ozcan showing a similar result). Since the cumulative nature of research is thought to be critical, and since patents are a common method of “restricting openness”, you might imagine that Heidi and the rest of these economists were arguing that patents were harmful for innovation.

That may in fact be the case, but note something strange: essentially none of the earlier papers on open science are specifically about patents; rather, they are about openness. Indeed, on the theory side, Suzanne Scotchmer has a pair of very well-known papers arguing that patents effectively incentivize cumulative innovation if there are no transaction costs to licensing, no spillovers from sequential research, and no incentive for early researchers to limit licenses in order to protect their existing business (consider the case of Armstrong and FM radio), and if potential follow-on innovators can be identified before they sink costs. That is a lot of conditions, but it’s not hard to imagine industries where inventions are clearly demarcated, where holders of basic patents are better off licensing than sitting on the patent (perhaps because potential licensees are not also competitors), and where patentholders are better off not bothering academics who technically infringe on their patent.

What industry might have such characteristics? Sampat and Williams look at gene patents. Incredibly, about 30 percent of human genes have sequences that are claimed under a patent in the United States. Are “patented genes” still used by scientists and developers of medical diagnostics after the patent grant, or is the patent enough of a burden to openness to restrict such use? What is interesting about this case is that the patentholder generally wants people to build on their patent. If academics find some interesting genotype-phenotype links based on their sequence, or if another firm develops a disease test based on the sequence, there are more rents for the patentholder to garner. In surveys, it seems that most academics simply ignore patents of this type, and most gene patentholders don’t interfere in research. Anecdotally, licenses between the sequence patentholder and follow-on innovators are frequent.

In general, it is really hard to know whether patents have any effect on anything; there is very little variation over time and space in patent strength. Sampat and Williams, however, take advantage of two quasi-experiments. First, they compare applied-for-but-rejected gene patents to applied-for-but-granted patents. At least for gene patents, there is very little difference in terms of measurables before the patent office decision across the two classes. Clearly this is not true for patents as a whole – rejected patents are almost surely of worse quality – but gene patents tend to come from scientifically competent firms rather than backyard hobbyists, and tend to have fairly straightforward claims. Why are any rejected, then? The authors’ second trick is to look directly at patent examiner “leniency”. It turns out that some examiners have rejection rates much higher than others, despite roughly random assignment of patents within a technology class. Much of the difference in rejection probability is driven by the random assignment of examiners, which justifies the first rejected-vs-granted technique and also suggests an instrumental variable to further investigate the data.
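
To make the second strategy concrete, here is a minimal sketch of the examiner-leniency design. The file and column names are hypothetical, I strip out the fixed effects and controls the authors use, and only the point estimate is computed, so treat this as an illustration of the idea rather than their implementation:

```python
# Sketch of an examiner-leniency IV: instrument whether a gene patent
# application was granted with its examiner's leave-one-out grant rate,
# then run a just-identified 2SLS of follow-on use on grant status.
import numpy as np
import pandas as pd

def leave_one_out_leniency(df):
    """Each examiner's grant rate, excluding the focal application."""
    g = df.groupby("examiner_id")["granted"]
    total, count = g.transform("sum"), g.transform("count")
    return (total - df["granted"]) / (count - 1)   # NaN if examiner has one case

def iv_2sls(y, x, z):
    """Point estimate from two-stage least squares with one instrument."""
    Z = np.column_stack([np.ones(len(z)), z])
    # First stage: predicted grant probability from examiner leniency
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    Xh = np.column_stack([np.ones(len(x)), x_hat])
    # Second stage: follow-on outcome on instrumented grant status
    return np.linalg.lstsq(Xh, y, rcond=None)[0][1]

df = pd.read_csv("gene_patent_applications.csv")   # hypothetical dataset
df["leniency"] = leave_one_out_leniency(df)
df = df.dropna(subset=["leniency"])                # drop single-case examiners
beta = iv_2sls(df["followon_use"].to_numpy(float),
               df["granted"].to_numpy(float),
               df["leniency"].to_numpy(float))
print(f"IV estimate of the effect of a patent grant on follow-on use: {beta:.3f}")
```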

With either technique, patent status essentially generates no difference in the use of genes by scientific researchers and diagnostic test developers. Don’t interpret this result as overturning Heidi’s earlier genome paper, though! There is now a ton of evidence that minor impediments to openness are harmful to cumulative innovation. What Sampat and Williams tell us is that we need to be careful in how we think about “openness”. Patents can be open if the patentholder has no incentive to restrict further use, if downstream innovators are easy to locate, and if there is no uncertainty about the validity or scope of a patent. Indeed, in these cases the patentholder will want to make it as easy as possible for follow-on innovators to build on their patent. On the other hand, patentholders are legally allowed to put all sorts of anti-openness burdens on the use of their patented invention by anyone, including purely academic researchers. In many industries, such restrictions are in the interest of the patentholder, and hence patents serve to limit openness; this is especially true where private sector product development generates spillovers. Theory as in Scotchmer-Green has proven quite correct in this regard.

One final comment: all of these types of quasi-experimental methods are always a bit weak when it comes to the extensive margin. It may very well be that individual patents do not restrict follow-on work on that patent when licenses can be granted, but at the same time the IP system as a whole can limit work in an entire technological area. Think of something like sampling in music. Because all music labels have large teams of lawyers who want every sample to be “cleared”, hip-hop musicians stopped using sampled beats to the extent they did in the 1980s. If you investigated whether a particular sample was less likely to be used conditional on its copyright status, you very well might find no effect, as the legal burden of chatting with the lawyers and figuring out who owns what may be enough of a limit to openness that musicians give up samples altogether. Likewise, in the complete absence of gene patents, you might imagine that firms would change their behavior toward research based on sequenced genes since the entire area is more open; this is true even if the particular gene sequence they want to investigate was unpatented in the first place, since having to spend time investigating the legal status of a sequence is a burden in and of itself.

July 2014 Working Paper (No IDEAS version). Joshua Gans has also posted a very interesting interpretation of this paper in terms of Coasean contractability.

“Immigration and the Diffusion of Technology: The Huguenot Diaspora in Prussia,” E. Hornung (2014)

Is immigration good for natives of the recipient country? This is a tough question to answer, particularly once we think about the short versus long run. Large-scale immigration might have bad short-run effects simply because more L plus fixed K means lower average incomes in essentially any economic specification, but even given that fact, immigrants bring with them tacit knowledge of techniques, ideas, and plans which might be relatively uncommon in the recipient country. Indeed, world history is filled with wise leaders who imported foreigners, occasionally by force, in order to access their knowledge. As that knowledge spreads among the domestic population, productivity increases and immigrants are in the long-run a net positive for native incomes.

How substantial can those long-run benefits be? History provides a nice experiment, described by Erik Hornung in a just-published paper. The Huguenots, French Protestants, were largely expelled from France after the Edict of Nantes was revoked by the Sun King, Louis XIV. The Huguenots were generally in the skilled trades, and their expulsion to the UK, the Netherlands and modern-day Germany (primarily) led to a great deal of tacit technology transfer. And, no surprise, in the late 17th century, there was very little knowledge transfer aside from face-to-face contact.

In particular, Frederick William, the Great Elector of Brandenburg, offered his lands as a refuge for the fleeing Huguenots. Much of his territory had been depopulated in the plagues that followed the Thirty Years’ War. The centralized textile production facilities sponsored by nobles and run by Huguenots soon after their arrival tended to fail quickly – there simply wasn’t enough demand in a place as poor as Prussia. Nonetheless, a contemporary mentions 46 professions brought to Prussia by the Huguenots, as well as new techniques in silk production, fabric dyeing and cotton printing. When the initial factories failed, the knowledge embodied in the apprentices they had hired and the capital they had purchased remained. Technology transfer to natives became more common as later generations integrated more tightly, moving out of Huguenot settlements and intermarrying.

What’s particularly interesting with this history is that the quantitative importance of such technology transfer can be measured. In 1802, incredibly, the Prussians had a census of manufactories, or factories producing stock for a wide region, including capital and worker input data. Also, all immigrants were required to register yearly, and include their profession, in 18th century censuses. Further, Huguenots did not simply move to places with existing textile industries where their skills were most needed; indeed, they tended to be placed by the Prussians in areas which had suffered large population losses following the Thirty Years’ War. These population losses were highly localized (and don’t worry, before using population loss as an IV, Hornung makes sure that population loss from plague is not simply tracing out existing transportation highways). Using the input data to estimate a Cobb-Douglas textile production function, Hornung finds that an additional percentage point of the population with Huguenot origins in 1700 is associated with a 1.5 percent increase in textile productivity in 1800. This result survives the IV regression that uses wartime population losses as an instrument for the share of Huguenot immigrants, as well as many other robustness checks. 1.5% is huge given the slow rate of growth in this era.
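
As a rough illustration of what that headline regression looks like (the variable names and data file are my own invention, and the instrumental-variables step and the paper’s full set of controls are omitted), the specification is essentially a logged Cobb-Douglas with the 1700 Huguenot share added as a productivity shifter:

```python
# Stylized version of the headline estimate: log textile output in 1802 on
# Cobb-Douglas inputs plus the 1700 Huguenot population share. Hornung also
# instruments the Huguenot share with localized Thirty Years' War population
# losses; that step is left out of this sketch.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("prussian_manufactories_1802.csv")   # hypothetical file
X = sm.add_constant(np.column_stack([
    np.log(df["capital"]),          # manufactory capital input
    np.log(df["workers"]),          # manufactory labor input
    df["huguenot_share_1700"],      # share of town population with Huguenot origins
]))
fit = sm.OLS(np.log(df["output"]), X).fit()
print(fit.params)   # last coefficient: productivity effect of the Huguenot share
```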

An interesting historical case. It is not obvious to me how relevant this estimate is to modern immigration debates; clearly it must depend on the extent to which knowledge can be written down or communicated at a distance. I would posit that the strong complementarity of factors of production (including VC funding, etc.) is much more important than tacit knowledge spread in modern agglomeration economies of scale, but that is surely a very difficult claim to investigate empirically using modern data.

2011 Working Paper (IDEAS version). Final paper published in the January 2014 AER.

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

R&D decisions are not made in a vacuum: my firm both benefits from information about new technologies discovered by others, and is harmed when other firms create new products that steal from my firm’s existing product lines. Almost every workhorse model in innovation is concerned with these effects, but measuring them empirically, and understanding how they interact, is difficult. Bloom, Schankerman and van Reenen have a new paper with a simple but clever idea for understanding these two effects (and it will be no surprise to readers given how often I discuss their work that I think these three are doing some of the world’s best applied micro work these days).

First, note that firms may be in the same technology area but not in the same product area; Intel and Motorola work on similar technologies, but compete on very few products. In a simple model, firms first choose R&D, knowledge is produced, and then firms compete on the product market. The qualitative results of this model are as you might expect: firms in a technology space with many other firms will be more productive due to spillovers, and may or may not actually perform more R&D depending on the nature of diminishing returns in the knowledge production function. Product market rivalry is always bad for profits, does not affect productivity, and increases R&D only if research across firms is a strategic complement; this strategic complementarity could be something like a patent race model, where if firms I compete with are working hard trying to invent the Next Big Thing, then I am incentivized to do even more R&D so I can invent first.

On the empirical side, we need a measure of “product market similarity” and of “technological similarity”. Let there be M product classes and N patent classes, and construct, for each firm, vectors of its share of sales across product classes and its share of patenting across patent classes. There are many ways to measure the similarity of two such vectors, of course, including a well-known measure in the innovation literature due to Jaffe. Bloom et al, after my heart, note that we really ought to use measures with proper axiomatic microfoundations; though they do show the properties of a variety of similarity measures, they don’t actually show the existence (or impossibility) of their optimal measure of similarity. This sounds like a quick job for a good microtheorist.
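
For concreteness, here is a minimal sketch of the Jaffe-style closeness measure and the spillover pool it generates; the toy numbers are mine, and the same construction applied to shares of sales across SIC codes gives the product-market measure:

```python
# Jaffe-style technological closeness: each firm is a vector of patent shares
# across technology classes, closeness is the uncentered correlation of those
# vectors, and a firm's spillover pool is the closeness-weighted sum of the
# other firms' R&D stocks.
import numpy as np

def jaffe_closeness(shares):
    """shares: (n_firms, n_classes) matrix of patent-class shares by firm."""
    unit = shares / np.linalg.norm(shares, axis=1, keepdims=True)
    sim = unit @ unit.T                 # uncentered correlations, in [0, 1]
    np.fill_diagonal(sim, 0.0)          # a firm is not its own neighbor
    return sim

def spillover_pool(closeness, rd_stock):
    """Closeness-weighted sum of other firms' R&D stocks."""
    return closeness @ rd_stock

# Toy example: firms 1 and 2 patent in the same classes, firm 3 does not
shares = np.array([[0.6, 0.4, 0.0, 0.0],
                   [0.5, 0.5, 0.0, 0.0],
                   [0.0, 0.0, 0.3, 0.7]])
rd_stock = np.array([100.0, 50.0, 80.0])
print(spillover_pool(jaffe_closeness(shares), rd_stock))
# ~[49.0, 98.1, 0.0]: firm 3 sits in an empty technology neighborhood
```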

With similarity measures in hand, all that’s left to do is regress outcomes like R&D performed, productivity (measured using patents or out of a Cobb-Douglas equation) and market value (via the Griliches-style Tobin’s Q) on the technology-space and product-market spillover terms, along with all sorts of fixed effects. These guys know their econometrics, so I’m omitting many details here, but I should mention that they do use the idea from Wilson’s 2009 ReStat of basically random changes in state R&D tax laws as an IV for the cost of R&D; this is a great technique, and very well implemented by Wilson, but getting these state-level R&D costs is really challenging and I can easily imagine a future where the idea is abused by naive implementation.

The results are actually pretty interesting. Qualitatively, the empirical results look quite like the theory, and in particular, the impact of technological similarity looks really important; having lots of firms working on similar technologies but working in different industries is really good for your firm’s productivity and profits. Looking at a handful of high-tech sectors, Bloom et al estimate that the marginal social return on R&D is on the order of 40 percentage points higher than the marginal private return, implying (with some huge caveats) that R&D in the United States might be something like a third of what it ought to be. This estimate is actually quite similar to what researchers using other methods have estimated. Interestingly, since bigger firms tend to work in denser parts of the technology space, they tend to generate more spillovers, hence the common policy prescription of giving smaller firms higher R&D tax credits may be a mistake.

Three caveats. First, as far as I can tell, the model does not allow a role for absorptive capacity, where a firm’s ability to integrate outside knowledge is endogenous to its existing R&D stock. Second, the estimated marginal private rate of return on R&D is something like 20 percent for the average firm; many other papers have estimated very high private benefits from research, but I have a hard time interpreting these estimates. If there really are 20% rates of return lying around, why aren’t firms cranking up their research? At least anecdotally, you hear complaints from industries like pharma about low returns from R&D. Third, there are some suggestive comments near the end about how government subsidies might be used to increase R&D given these huge social returns. I would be really cautious here, since there is quite a bit of evidence that government-sponsored R&D generates a much lower private and social rate of return than other forms of R&D.

Final July 2013 Econometrica version (IDEAS version). Thumbs up to Nick Bloom for making the final version freely available on his website. The paper has an exhaustive appendix with technical details, as well as all of the data freely available for you to play with.

“Back to Basics: Basic Research Spillovers, Innovation Policy and Growth,” U. Akcigit, D. Hanley & N. Serrano-Velarde (2013)

Basic and applied research, you might imagine, differ in a particular manner: basic research has unexpected uses in a variety of future applied products (though it sometimes has immediate applications), while applied research is immediately exploitable but has fewer spillovers. An interesting empirical fact is that a substantial portion of firms report that they do basic research, though subject to a caveat I will mention at the end of this post. Further, you might imagine that basic and applied research are complements: success in basic research in a given area expands the size of the applied ideas pond which can be fished by firms looking for new applied inventions.

Akcigit, Hanley and Serrano-Velarde take these basic facts and, using some nice data from French firms, estimate a structural endogenous growth model with both basic and applied research. Firms hire scientists then put them to work on basic or applied research, where the basic research “increases the size of the pond” and occasionally is immediately useful in a product line. The government does “Ivory Tower” basic research which increases the size of the pond but which is never immediately applied. The authors give differential equations for this model along a balanced growth path, have the government perform research equal to .5% of GDP as in existing French data, and estimate the remaining structural parameters like innovation spillover rates, the mean “jump” in productivity from an innovation, etc.
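
To see the complementarity the model is built around, here is a toy simulation of the “size of the pond” mechanism. This is purely my own stylization with made-up parameters, not the paper’s balanced-growth-path system:

```python
# Toy "size of the pond" dynamics: basic research enlarges a common stock of
# exploitable ideas for every firm, and applied research converts that stock
# into firm-level productivity jumps, so the two are complements.
import numpy as np

rng = np.random.default_rng(0)
n_firms, T = 20, 50
spillover = 0.3                       # assumed strength of the basic-research spillover
pond = 1.0                            # common stock of exploitable basic knowledge
productivity = np.ones(n_firms)

for t in range(T):
    basic = rng.uniform(0.0, 0.2, n_firms)     # each firm's basic research effort
    applied = 1.0 - basic                      # the rest goes to applied research
    pond += spillover * basic.sum()            # basic work grows the pond for everyone
    # applied work yields productivity jumps whose size rises with the pond
    productivity *= 1.0 + 0.01 * applied * np.log(1.0 + pond)

print(f"pond after {T} periods: {pond:.1f}, mean productivity: {productivity.mean():.2f}")
```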

The pretty obvious benefit of structural models as compared to estimating simple treatment effects is counterfactual analysis, particularly welfare calculations. (And if I may make an aside, the argument that structural models are too assumption-heavy and hence non-credible is nonsense. If the mapping from existing data to the actual questions of interest is straightforward, then surely we can write a straightforward model generating that external validity. If the mapping from existing data to the actual question of interest is difficult, then it is even more important to formally state what mapping you have in mind before giving policy advice. Just estimating a treatment effect off some particular dataset and essentially ignoring the question of external validity because you don’t want to take a stand on how it might operate makes me wonder why I, the policymaker, should take your treatment effect seriously in the first place. It seems to me that many in the profession already take this stance – Deaton, Heckman, Whinston and Nevo, and many others have published papers on exactly this methodological point – and therefore a decade from now, you will find it just as tough to publish a paper that doesn’t take external validity seriously as it is to publish a paper with weak internal identification today.)

Back to the estimates: the parameters here suggest that the main distortion is not that firms perform too little R&D, but that they misallocate between basic and applied R&D; the basic R&D spills over to other firms by increasing the “size of the pond” for everybody, hence it is underperformed. This spillover, estimated from data, is of substantial quantitative importance. The problem, then, is that uniform subsidies like R&D tax credits will just increase total R&D without alleviating this misallocation. I think this is a really important result (and not only because I have a theory paper myself, coming at the question of innovation direction from the patent race literature rather than the endogenous growth literature, which generates essentially the same conclusion). What you really want to do to increase welfare is increase the amount of basic research performed. How to do this? Well, you could give heterogeneous subsidies to basic and applied research, but this would involve firms reporting correctly, which is a very difficult moral hazard problem. Alternatively, you could just do more research in academia, but if this is never immediately exploited, it is less useful than the basic research performed in industry which at least sometimes is used in products immediately (by assumption); shades of Aghion, Dewatripont and Stein (2008 RAND) here. Neither policy performs particularly well.

I have two small quibbles. First, basic research in the sense reported by national statistics following the Frascati manual is very different from basic research in the sense of “research that has spillovers”; there is a large literature on this problem, and it is particularly severe when it comes to service sector work and process innovation. Second, the authors suggest at one point that Bayh-Dole style university licensing of research is a beneficial policy: when academic basic research can now sometimes be immediately applied, we can easily target the optimal amount of basic research by increasing academic funding and allowing academics to license. But this prescription ignores the main complaint about Bayh-Dole, which is that academics begin, whether for personal or institutional reasons, to shift their work from high-spillover basic projects to low-spillover applied projects. That is, it is not obvious the moral hazard problem concerning targeting of subsidies is any easier at the academic level than at the private firm level. In any case, this paper is very interesting, and well worth a look.

September 2013 Working Paper (RePEc IDEAS version).

“Inventors, Patents and Inventing Activities in the English Brewing Industry, 1634-1850,” A. Nuvolari & J. Sumner (2013)

Policymakers often assume that patents are necessary for inventions to be produced or, if the politician is sophisticated, for a market in knowledge to develop. Economists are skeptical of such claims, for theoretical and empirical reasons. For example, Petra Moser has shown how few important inventions are ever patented, and Bessen and Maskin have a paper showing how the existence of patents can slow down innovation in certain technical industries. The literature more generally often mentions how heterogeneous appropriation strategies are across industries: some rely entirely on trade secrets, others on open-source sharing, and yet others on patent protection.

Nuvolari and Sumner look at the English brewing industry from the 17th to the 19th century. This industry was actually quite innovative, most famously through the (perhaps collective) invention of that delightful winter friend named English Porter. The two look in great detail through lists of patents prior to 1850, and note that, despite the importance of brewing and its technical complexity, beer-related patents make up less than one percent of all patents granted during that period. Further, they note that there are enormous differences in patenting behavior within the brewing industry. Nonetheless, even in the absence of patents, there still existed a market for ideas.

Delving deeper, the authors show that many patentees were seen more as charlatans than as serious inventors. The most important inventors tended either to keep their inventions secret within their firm or guild, to keep them partially secret, to publicize them completely in order to enhance the status of their brewery as “scientific”, or to publicize them completely in order to garner consulting or engineering contracts. The partial secrecy and status-enhancing publicity strategies are particularly interesting. Humphrey Jackson, an aspiring chemist, sold a book with many technical details left as blank spots; readers who paid to attend his lecture could fill in the details of his processes, though the existence of the lecture was predicated on sufficiently large numbers buying the book! James Bavestock, a brewer in Hampshire, brought his hydrometer to the attention of the prominent London brewer Henry Thrale; in exchange, Thrale could organize entry into the London market, or provide a job in Thrale’s brewery should the small Hampshire concern go under.

2012 Working Paper (IDEAS version). This article appeared in the new issue of Business History Review, which was particularly good; it also featured, among others, a review on markets for knowledge in 19th century America which will probably be the final publication of the late Kenneth Sokoloff, and a paper by the always interesting Zorina Khan on international technology markets in the 19th century. Many current issues, such as open source, patent trolls, etc. are completely rehashing similar questions during that period, so the articles are well worth a look even for the non-historian.

“Contractibility and the Design of Research Agreements,” J. Lerner & U. Malmendier (2010)

Outside research has (as we discussed yesterday) begun to regain prominence as a firm strategy. This is particularly so in biotech: the large drug firms generally do not do the basic research that leads to new products. Rather, they contract this out to independent research firms, then handle the development, licensing and marketing in-house. But such contracts are tough. Not only do I have trouble writing an enforceable contract that conditions on the effort exerted by the research firm, but the fact that research firms have other projects, and also like to do pure science for prestige reasons, means that they are likely to take my money and use it to fund projects which are not exactly those the drug company most prefers.

We are in luck: economic theory has a broad array of models of contracting under multitasking worries. Consider the following model of Lerner and Malmendier. The drug firm pays some amount to set up a contract. The research firm then does some research. The drug firm observes the effort of the researcher, who either worked on exactly what the drug company prefers, or on a related project which throws off various side inventions. After the research is performed, the research firm is paid. With perfect ability to contract on effort, this is an easy problem: pay the research firm only if they exert effort on the projects the drug company prefers. When the research project is “tell me whether this compound has this effect”, it might be possible to write such a contract. When the research project is “investigate the properties of this class of compounds and how they might relate to diseases of the heart”, surely no such contract is possible. In that case, the optimal contract may be just to let the research firm work on the broader project it prefers, because at least then the fact that the research firm gets spillovers means that the drug firm can pay the researcher less money. This is clearly second-best.

Can we do better? What about “termination contracts”? After effort is observed, but before development is complete, the drug firm can terminate the contract or not. Payments in the contract can certainly condition on termination. How about the following contract: the drug firm terminates if the research firm works on the broader research project, and it takes the patent rights to the side inventions. Here, if the research firm deviates and works on its own side projects, the drug company gets to keep the patents for those side projects, hence the research firm won’t do such work. And the drug firm further prefers the research firm to work on the assigned project; since termination means that development is not completed, the drug firm won’t just falsely claim that effort was low in order to terminate and seize the side project patents (indeed, on equilibrium path, there are few side patents to seize since the research firm is actually working on the correct project!). The authors show that the contract described here is always optimal if a conditional termination contract is used at all.

Empirically, what does this mean? If I write a research contract for more general research, I should expect more termination rights to be reserved. Further, the liquidity constraints of the research firms matter; if the drug firm could make the research firm pay it back after termination, it would do so, and we could again achieve the first best. So I should expect termination rights to show up particularly for undercapitalized research firms. Lerner and Malmendier create a database from contract data collected by a biotech consulting firm, and show that both of these predictions appear to be borne out. I read these results as in the style of Maskin and Tirole; even when I can’t fully specify all the states of the world in a contract, I can still do a good bit of conditioning.

2008 Working paper (IDEAS version). Final paper in AER 2010. Malmendier will certainly be a factor in the upcoming Clark medal discussion, as she turns 40 this year. Problematically, Nick Bloom (who, says his CV, did his PhD part time?!) also turns 40, and both absolutely deserve the prize. If I were a betting man, I would wager that the just-published-in-the-QJE Does Management Matter in the Third World paper will be the one that puts Bloom over the top, as it’s really the best development paper in many years. That said, I am utterly confused that Finkelstein won last year given that Malmendier and Bloom are both up for their last shot this year. Finkelstein is a great economist, no doubt, but she works in a very similar field to Malmendier, and Malmendier trumps her by any conceivable metric (citations, top cited papers, overall impact, etc.). I thought they switched the Clark Medal to an every-year affair just to avoid such a circumstance, such as when Athey, List and Melitz were all piled up in 2007.

I’m curious what a retrospective Clark Medal would look like, taking into account only research that was done as of the voting year, but allowing us to use our knowledge of the long-run impact of that research. Since 2001, Duflo 2010 and Acemoglu 2005 are locks. I think Rabin keeps his in 2001. Guido Imbens takes Levitt’s spot in 2003. List takes 2007, with Melitz and Athey just missing out (though both are supremely deserving!). Saez keeps 2009. Malmendier takes 2011. Bloom takes 2012. Raj Chetty takes 2013 – still young, but already an obvious lock to win. What’s interesting about this list is just how dominant young folks have been in micro (especially empirical and applied theory); these are essentially the best people working in that area, whereas macro and metrics are still by and large dominated by an older generation.

“Patent Alchemy: The Market for Technology in U.S. History,” N. Lamoreaux, K. Sokoloff & D. Sutthiphisall (2012)

It may appear that the world of innovation looks very different today than it used to. Large in-house R&D outfits – the Bell Labs of the past – are being replaced by small firms who sell the results of their research on to producers. Venture capital funding of research appears more and more important, both for providing capital to inventors and to linking the inventors up with potential buyers. Patent trolls hound the innocent, suing them for patent violations they weren’t even aware of. The speed with which patents are evaluated has slowed to a crawl, and the number of patents being granted continues to grow. Many patents are merely defensive, acquired solely to keep someone else from acquiring them.

Lamoreaux et al, building on earlier work by Lamoreaux and Sokoloff as well as Tom Nicholas’ interesting recent research, point out that none of the above is strange. The rise of in-house R&D is a phenomenon that doesn’t show up in great numbers in America until well into the twentieth century, only becoming dominant after the Second World War. Around the turn of the century, most innovation was done by small, independent inventors, or by small research firms like Edison’s outfit. A series of intermediaries, principally but not always patent lawyers, served both to file the proper paperwork and to link inventors with potential buyers; the authors provide a bunch of juicy historical stories, derived from lawyer diaries during this period, on exactly how such transactions took place. Railroads were frequently hounded by patent trolls who tried to catch them unaware, and traveling patent buyers crossed the Midwest and South suing farmers for using unlicensed barbed wire or milk buckets. Patents took an average of three years to be processed by the early 1900s, and the patenting rate was near an all-time high. Firms regularly bought patents just so their competitors wouldn’t have them.

This is all to say that, to the extent we are worried about certain aspects of the patent system today, looking to history may be a useful place to begin. “Submarine patents”, acquired by trolls and kept unused until a particularly juicy potential violator has started to earn large profits, don’t appear to have been too prominent at the turn of the century – given how lucrative this business appears, perhaps an investigation of why they only appear in the present would be worthwhile. The role of a patent as a saleable piece of knowledge, allowing non-producers to do useful research and then sell that research to a firm who finds it useful, surely has some role, as Arrow pointed out in his famous 1962 essay. When patents instead simply add transaction costs or result in thickets, discouraging activity by true innovators, something has gone awry. And when something goes wrong in the world, it is rarely the case that history can offer us no useful guidance.

2012 working paper (No IDEAS version). Prof. Sokoloff passed away from cancer at a young age in 2007, so this may become his final published paper – it incorporates a great number of ideas he worked on throughout his career, so that would be a fitting tribute.

“Recruiting for Ideas: How Firms Exploit the Prior Inventions of New Hires,” J. Singh & A. Agrawal (2011)

Firms poach engineers and researchers from each other all the time. One important reason to do so is to gain access to the individual’s knowledge. A strain of theory going back to Becker, however, suggests that if, after the poaching, the knowledge remains embodied solely in the new employee, it will be difficult for the firm to profit: surely the new employee will have an enormous amount of bargaining power over wages if she actually possesses unique and valuable information. (As part of my own current research project, I learned recently that Charles Martin Hall, co-inventor of the Hall-Heroult process for aluminum smelting, was able to gather a fortune of around $300 million after he brought his idea to the company that would become Alcoa.)

In a resource-based view of the firm, then, you may hope not only to access a new hire’s knowledge, but also to spread it to other employees at your firm. By doing this, you limit the wage bargaining power of the new hire, and hence can scrape off some rents. Singh and Agrawal break open the patent database to investigate this. First, they use name and industry data to try to match patentees who have an individual patent with one firm at time t, and then another patent at a separate firm some time later; such an employee has “moved”. We can’t simply check whether the receiving firm cites this new employee’s old patents more often, as there is an obvious endogeneity problem. First, firms may recruit good scientists more aggressively. Second, they may recruit more aggressively in technology fields where they are already planning to do work in the future. This suggests that matching plus diff-in-diff may work. Match every mover’s patent to another patent held by an inventor who never switches firms, attempting to find a second patent with very similar citation behavior, inventor age, inventor experience, technology class, etc. Using the matched sample, check how much the propensity to cite the mover’s patent changes compared to the propensity to cite the stayer’s patent. That is, let Joe move to General Electric. Joe had a patent while working at Intel. GE researchers were citing that Intel patent once per year before Joe moved. They were also citing a “matched” patent once per year. After the move, they cite the Intel patent 2 times per year, and the “matched” patent 1.1 times per year. The diff-in-diff then suggests that moving increases the propensity to cite the Intel patent at GE by (2-1)-(1.1-1)=0.9 citations per year, where the first difference helps account for the first type of endogeneity we discussed above, and the second difference for the second type.
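
The arithmetic in that toy example is just a two-by-two difference-in-differences; here is a trivial restatement in code, using the made-up citation rates from the paragraph above:

```python
# Matched diff-in-diff from the toy numbers above: the change in GE's citation
# rate to Joe's Intel patent, net of the change in its citation rate to the
# matched never-mover patent.
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = diff_in_diff(treated_pre=1.0, treated_post=2.0,
                      control_pre=1.0, control_post=1.1)
print(f"{effect:.1f} extra citations per year attributed to the move")  # 0.9
```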

What do we find? It is true that, after a move, the average patent held by a mover is cited more often at the receiving firm, especially in the first couple years after a move. Unfortunately, about half of new patents which cite the new employee’s old patent after she moves are made by the new employee herself, and another fifteen percent or so are made by previous patent collaborators of the poached employee. What’s worse, if you examine these citations by year, even five years after the move, citations to the pre-move patent are still highly likely to come from the poached employee. That is, to the extent that the poached employee had some special knowledge, the firm appears to have simply bought that knowledge embodied in the new employee, rather than gained access to useful techniques that quickly spread through the firm.

Three quick comments. First, applied econometrician friends: is there any reason these days to do diff-in-diff linearly rather than using the nonparametric “changes-in-changes” of Athey and Imbens 2006, which allows recovery of the entire distribution of effects of treatment on the treated? Second, we learn from this paper that the mean poached research employee doesn’t see her knowledge spread through the new firm, which immediately suggests the question of whether there are certain circumstances in which such knowledge spreads. Third, this same exercise could be done using all patents held by the moving employee’s old firm – I may be buying access to general techniques owned by the employee’s old firm rather than the specific knowledge represented in that employee’s own pre-move patents. I wonder if there’s any difference.

Final Management Science version (IDEAS version). Big thumbs up to Jasjit Singh for putting final published versions of his papers up on his site.
