Category Archives: Technology Transfer

“Immigration and the Diffusion of Technology: The Huguenot Diaspora in Prussia,” E. Hornung (2014)

Is immigration good for natives of the recipient country? This is a tough question to answer, particularly once we think about the short versus long run. Large-scale immigration might have bad short-run effects simply because more L plus fixed K means lower average incomes in essentially any economic specification, but even given that fact, immigrants bring with them tacit knowledge of techniques, ideas, and plans which might be relatively uncommon in the recipient country. Indeed, world history is filled with wise leaders who imported foreigners, occasionally by force, in order to access their knowledge. As that knowledge spreads among the domestic population, productivity increases and immigrants are, in the long run, a net positive for native incomes.

How substantial can those long-run benefits be? History provides a nice experiment, described by Erik Hornung in a just-published paper. The Huguenots, French protestants, were largely expelled from France after the Edict of Nantes was revoked by the Sun King, Louis XIV. The Huguenots were generally in the skilled trades, and their expulsion to the UK, the Netherlands and modern Germany (primarily) led to a great deal of tacit technology transfer. And, no surprise: in the late 17th century, there were few channels for knowledge transfer aside from face-to-face contact.

In particular, Frederick William, Grand Elector of Brandenburg, offered his estates as refuge for the fleeing Huguenots. Much of his land had been depopulated in the plagues that followed the Thirty Years’ War. The centralized textile production facilities sponsored by nobles and run by Huguenots soon after the Huguenots arrived tended to fail quickly – there simply wasn’t enough demand in a place as poor as Prussia. Nonetheless, a contemporary mentions 46 professions brought to Prussia by the Huguenots, as well as new techniques in silk production, dyeing fabrics and cotton printing. When the initial factories failed, the knowledge acquired by hired apprentices, along with the purchased capital, remained. Technology transfer to natives became more common as later generations integrated more tightly with natives, moving out of Huguenot settlements and intermarrying.

What’s particularly interesting with this history is that the quantitative importance of such technology transfer can be measured. In 1802, incredibly, the Prussians had a census of manufactories, or factories producing stock for a wide region, including capital and worker input data. Also, all immigrants were required to register yearly, and include their profession, in 18th century censuses. Further, Huguenots did not simply move to places with existing textile industries where their skills were most needed; indeed, they tended to be placed by the Prussians in areas which had suffered large population losses following the Thirty Years’ War. These population losses were highly localized (and don’t worry, before using population loss as an IV, Hornung makes sure that population loss from plague is not simply tracing out existing transportation highways). Using input data to estimate a Cobb-Douglas textile production function, Hornung finds that an additional percentage point of the population with Huguenot origins in 1700 is associated with a 1.5 percentage point increase in textile productivity in 1800. This result is robust in the IV regression using wartime population loss to proxy for the percentage of Huguenot immigrants, as well as many other robustness checks. 1.5% is huge given the slow rate of growth in this era.
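The shape of that headline regression can be illustrated with a toy log-linearized Cobb-Douglas estimation on synthetic data. Everything below (variables, coefficients, sample size) is invented for illustration; it is not Hornung's actual data, specification, or IV step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic firm-level inputs (illustrative only).
log_L = rng.normal(3.0, 0.5, n)          # log labor input
log_K = rng.normal(5.0, 0.7, n)          # log capital input
huguenot_share = rng.uniform(0, 20, n)   # % of 1700 population with Huguenot origins

# Log-linearized Cobb-Douglas: each extra share point raises
# productivity by ~1.5%, mimicking the paper's headline estimate.
log_Y = 0.6 * log_L + 0.3 * log_K + 0.015 * huguenot_share + rng.normal(0, 0.1, n)

# OLS via least squares on [log_L, log_K, share, constant].
X = np.column_stack([log_L, log_K, huguenot_share, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_Y, rcond=None)
print(coef[:3])  # recovers roughly (0.6, 0.3, 0.015)
```

The IV version would replace `huguenot_share` in the first stage with its projection on wartime population loss; only the OLS step is sketched here.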

An interesting historical case. It is not obvious to me how relevant this estimate is to modern immigration debates; clearly it must depend on the extent to which knowledge can be written down or communicated at distance. I would posit that the strong complementarity of factors of production (including VC funding, etc.) is much more important than tacit knowledge spread in modern agglomeration economies of scale, but that is surely a very difficult claim to investigate empirically using modern data.

2011 Working Paper (IDEAS version). Final paper published in the January 2014 AER.

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

R&D decisions are not made in a vacuum: my firm both benefits from information about new technologies discovered by others, and is harmed when other firms create new products that steal from my firm’s existing product lines. Almost every workhorse model in innovation is concerned with these effects, but measuring them empirically, and understanding how they interact, is difficult. Bloom, Schankerman and van Reenen have a new paper with a simple but clever idea for understanding these two effects (and it will be no surprise to readers given how often I discuss their work that I think these three are doing some of the world’s best applied micro work these days).

First, note that firms may be in the same technology area but not in the same product area; Intel and Motorola work on similar technologies, but compete on very few products. In a simple model, firms first choose R&D, knowledge is produced, and then firms compete on the product market. The qualitative results of this model are as you might expect: firms in a technology space with many other firms will be more productive due to spillovers, and may or may not actually perform more R&D depending on the nature of diminishing returns in the knowledge production function. Product market rivalry is always bad for profits, does not affect productivity, and increases R&D only if research across firms is a strategic complement; this strategic complementarity could be something like a patent race model, where if firms I compete with are working hard trying to invent the Next Big Thing, then I am incentivized to do even more R&D so I can invent first.

On the empirical side, we need a measure of “product market similarity” and “technological similarity”. Let there be M product classes and N patent classes, and construct vectors for each firm of their share of sales across product classes and share of R&D across patent classes. There are many measures of the similarity of a vector, of course, including a well-known measure in innovation from Jaffe. Bloom et al, after my heart, note that we really ought to use measures that have proper axiomatic microfoundations; though they do show the properties of a variety of measures of similarity, they don’t actually show the existence (or impossibility) of their optimal measure of similarity. This sounds like a quick job for a good microtheorist.
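As a concrete instance of such a measure, the Jaffe similarity is the uncentered correlation between two firms' share vectors. A minimal sketch follows; the Intel/Motorola share numbers are made up purely for illustration, not taken from the paper.

```python
import numpy as np

def jaffe_similarity(f_i, f_j):
    """Uncentered correlation (Jaffe-style) between two firms'
    shares of activity across classes: f_i.f_j / (|f_i| |f_j|)."""
    f_i, f_j = np.asarray(f_i, float), np.asarray(f_j, float)
    return f_i @ f_j / (np.linalg.norm(f_i) * np.linalg.norm(f_j))

# Hypothetical shares of R&D across three patent classes.
intel    = [0.7, 0.3, 0.0]
motorola = [0.6, 0.2, 0.2]
print(round(jaffe_similarity(intel, motorola), 3))  # prints 0.95
```

The same function applied to share-of-sales vectors across product classes gives the product market similarity; the point of the Intel/Motorola example in the text is that the two measures can diverge sharply for the same pair of firms.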

With similarity measures in hand, all that’s left to do is regress outcomes like R&D performed, productivity (measured using patents or out of a Cobb-Douglas equation) and market value (via the Griliches-style Tobin’s Q) on technological and product market similarity, along with all sorts of fixed effects. These guys know their econometrics, so I’m omitting many details here, but I should mention that they do use the idea from Wilson’s 2009 ReSTAT of basically random changes in state R&D tax laws as an IV for the cost of R&D; this is a great technique, and very well implemented by Wilson, but getting these state-level R&D costs is really challenging and I can easily imagine a future where the idea is abused by naive implementation.

The results are actually pretty interesting. Qualitatively, the empirical results look quite like the theory, and in particular, the impact of technological similarity looks really important; having lots of firms working on similar technologies but working in different industries is really good for your firm’s productivity and profits. Looking at a handful of high-tech sectors, Bloom et al estimate that the marginal social return on R&D is on the order of 40 percentage points higher than the marginal private return to R&D, implying (with some huge caveats) that R&D in the United States might be only about a third of its optimal level. This estimate is actually quite similar to what researchers using other methods have estimated. Interestingly, since bigger firms tend to work in more dense parts of the technology space, they tend to generate more spillovers, hence the common policy prescription of giving smaller firms higher R&D tax credits may be a mistake.

Three caveats. First, as far as I can tell, the model does not allow a role for absorptive capacity, where firms’ ability to integrate outside knowledge is endogenous to their existing R&D stock. Second, the estimated marginal private rate of return on R&D is something like 20 percent for the average firm; many other papers have estimated very high private benefits from research, but I have a hard time interpreting these estimates. If there really are 20% rates of return lying around, why aren’t firms cranking up their research? At least anecdotally, you hear complaints from industries like pharma about low returns from R&D. Third, there are some suggestive comments near the end about how government subsidies might be used to increase R&D given these huge social returns. I would be really cautious here, since there is quite a bit of evidence that government-sponsored R&D generates a much lower private and social rate of return than other forms of R&D.

Final July 2013 Econometrica version (IDEAS version). Thumbs up to Nick Bloom for making the final version freely available on his website. The paper has an exhaustive appendix with technical details, as well as all of the data freely available for you to play with.

“Back to Basics: Basic Research Spillovers, Innovation Policy and Growth,” U. Akcigit, D. Hanley & N. Serrano-Velarde (2013)

Basic and applied research, you might imagine, differ in a particular manner: basic research has unexpected uses in a variety of future applied products (though it sometimes has immediate applications), while applied research is immediately exploitable but has fewer spillovers. An interesting empirical fact is that a substantial portion of firms report that they do basic research, though subject to a caveat I will mention at the end of this post. Further, you might imagine that basic and applied research are complements: success in basic research in a given area expands the size of the applied ideas pond which can be fished by firms looking for new applied inventions.

Akcigit, Hanley and Serrano-Velarde take these basic facts and, using some nice data from French firms, estimate a structural endogenous growth model with both basic and applied research. Firms hire scientists then put them to work on basic or applied research, where the basic research “increases the size of the pond” and occasionally is immediately useful in a product line. The government does “Ivory Tower” basic research which increases the size of the pond but which is never immediately applied. The authors give differential equations for this model along a balanced growth path, have the government perform research equal to .5% of GDP as in existing French data, and estimate the remaining structural parameters like innovation spillover rates, the mean “jump” in productivity from an innovation, etc.

The pretty obvious benefit of structural models as compared to estimating simple treatment effects is counterfactual analysis, particularly welfare calculations. (And if I may make an aside, the argument that structural models are too assumption-heavy and hence non-credible is nonsense. If the mapping from existing data to the actual questions of interest is straightforward, then surely we can write a straightforward model generating that external validity. If the mapping from existing data to the actual question of interest is difficult, then it is even more important to formally state what mapping you have in mind before giving policy advice. Just estimating a treatment effect off some particular dataset and essentially ignoring the question of external validity because you don’t want to take a stand on how it might operate makes me wonder why I, the policymaker, should take your treatment effect seriously in the first place. It seems to me that many in the profession already take this stance – Deaton, Heckman, Whinston and Nevo, and many others have published papers on exactly this methodological point – and therefore a decade from now, you will find it just as tough to publish a paper that doesn’t take external validity seriously as it is to publish a paper with weak internal identification today.)

Back to the estimates: the parameters here suggest that the main distortion is not that firms perform too little R&D, but that they misallocate between basic and applied R&D; the basic R&D spills over to other firms by increasing the “size of the pond” for everybody, hence it is underperformed. This spillover, estimated from data, is of substantial quantitative importance. The problem, then, is that uniform subsidies like R&D tax credits will just increase total R&D without alleviating this misallocation. I think this is a really important result (and not only because I have a theory paper myself, coming at the question of innovation direction from the patent race literature rather than the endogenous growth literature, which generates essentially the same conclusion). What you really want to do to increase welfare is increase the amount of basic research performed. How to do this? Well, you could give heterogeneous subsidies to basic and applied research, but this would involve firms reporting correctly, which is a very difficult moral hazard problem. Alternatively, you could just do more research in academia, but if this is never immediately exploited, it is less useful than the basic research performed in industry which at least sometimes is used in products immediately (by assumption); shades of Aghion, Dewatripont and Stein (2008 RAND) here. Neither policy performs particularly well.

I have two small quibbles. First, basic research in the sense reported by national statistics following the Frascati manual is very different from basic research in the sense of “research that has spillovers”; there is a large literature on this problem, and it is particularly severe when it comes to service sector work and process innovation. Second, the authors suggest at one point that Bayh-Dole style university licensing of research is a beneficial policy: when academic basic research can now sometimes be immediately applied, we can easily target the optimal amount of basic research by increasing academic funding and allowing academics to license. But this prescription ignores the main complaint about Bayh-Dole, which is that academics begin, whether for personal or institutional reasons, to shift their work from high-spillover basic projects to low-spillover applied projects. That is, it is not obvious the moral hazard problem concerning targeting of subsidies is any easier at the academic level than at the private firm level. In any case, this paper is very interesting, and well worth a look.

September 2013 Working Paper (RePEc IDEAS version).

“Inventors, Patents and Inventing Activities in the English Brewing Industry, 1634-1850,” A. Nuvolari & J. Sumner (2013)

Policymakers often assume that patents are necessary for inventions to be produced or, if the politician is sophisticated, for a market in knowledge to develop. Economists are skeptical of such claims, for theoretical and empirical reasons. For example, Petra Moser has shown how few important inventions are ever patented, and Bessen and Maskin have a paper showing how the existence of patents can slow down innovation in certain technical industries. The literature more generally often mentions how heterogeneous appropriation strategies are across industries: some rely entirely on trade secrets, others on open-source sharing, and yet others on patent protection.

Nuvolari and Sumner look at the English brewing industry from the 17th to the 19th century. This industry was actually quite innovative, most famously through the (perhaps collective) invention of that delightful winter friend named English Porter. The two look in great detail through lists of patents prior to 1850, and note that, despite the importance of brewing and its technical complexity, beer-related patents make up less than one percent of all patents granted during that period. Further, they note that there are enormous differences in patenting behavior within the brewing industry. Nonetheless, even in the absence of patents, there still existed a market for ideas.

Delving deeper, the authors show that many patentees were seen more as charlatans than as serious inventors. The most important inventors tended to either keep their inventions secret within their firm or guild, keep the inventions partially secret, publicize completely in order to enhance the status of their brewery as “scientific”, or publicize completely in order to garner consulting or engineering contracts. The partial secrecy and status-enhancing publicity reasons are particularly interesting. Humphrey Jackson, an aspiring chemist, sold a book with many technical details left as blank spots; by paying to attend his lecture, the details of his processes could be filled in, though the existence of the lecture was predicated on sufficiently large numbers buying the book! James Bavestock, a brewer in Hampshire, brought his hydrometer to the attention of a prominent London brewer Henry Thrale; in exchange, Thrale could organize entry into the London market, or a job in Thrale’s brewery should the small Hampshire concern go under.

2012 Working Paper (IDEAS version). This article appeared in the new issue of Business History Review, which was particularly good; it also featured, among others, a review on markets for knowledge in 19th century America which will probably be the final publication of the late Kenneth Sokoloff, and a paper by the always interesting Zorina Khan on international technology markets in the 19th century. Many current issues, such as open source, patent trolls, etc. are completely rehashing similar questions during that period, so the articles are well worth a look even for the non-historian.

“Contractability and the Design of Research Agreements,” J. Lerner & U. Malmendier (2010)

Outside research has (as we discussed yesterday) begun to regain prominence as a firm strategy. This is particularly so in biotech: the large drug firms generally do not do the basic research that leads to new products. Rather, they contract this out to independent research firms, then handle the development, licensing and marketing in-house. But such contracts are tough. Not only do I have trouble writing an enforceable contract that conditions on the effort exerted by the research firm, but the fact that research firms have other projects, and also like to do pure science for prestige reasons, means that they are likely to take my money and use it to fund projects which are not those the drug company most prefers.

We are in luck: economic theory has a broad array of models of contracting under multitasking worries. Consider the following model of Lerner and Malmendier. The drug firm pays some amount to set up a contract. The research firm then does some research. The drug firm observes the effort of the researcher, who either worked on exactly what the drug company prefers, or on a related project which throws off various side inventions. After the research is performed, the research firm is paid. With perfect ability to contract on effort, this is an easy problem: pay the research firm only if they exert effort on the projects the drug company prefers. When the research project is “tell me whether this compound has this effect”, it might be possible to write such a contract. When the research project is “investigate the properties of this class of compounds and how they might relate to diseases of the heart”, surely no such contract is possible. In that case, the optimal contract may be just to let the research firm work on the broader project it prefers, because at least then the fact that the research firm gets spillovers means that the drug firm can pay the researcher less money. This is clearly second-best.

Can we do better? What about “termination contracts”? After effort is observed, but before development is complete, the drug firm can terminate the contract or not. Payments in the contract can certainly condition on termination. How about the following contract: the drug firm terminates if the research firm works on the broader research project, and it takes the patent rights to the side inventions. Here, if the research firm deviates and works on its own side projects, the drug company gets to keep the patents for those side projects, hence the research firm won’t do such work. And the drug firm further prefers the research firm to work on the assigned project; since termination means that development is not completed, the drug firm won’t just falsely claim that effort was low in order to terminate and seize the side project patents (indeed, on equilibrium path, there are few side patents to seize since the research firm is actually working on the correct project!). The authors show that the contract described here is always optimal if a conditional termination contract is used at all.

Empirically, what does this mean? If I write a research contract for more general research, I should expect more termination rights to be reserved. Further, the liquidity constraints of the research firms matter; if the drug firm could make the research firm pay it back after termination, it would do so, and we could again achieve the first best. So I should expect termination rights to show up particularly for undercapitalized research firms. Lerner and Malmendier create a database from contract data collected by a biotech consulting firm, and show that both of these predictions appear to be borne out. I read these results as in the style of Maskin and Tirole; even when I can’t fully specify all the states of the world in a contract, I can still do a good bit of conditioning.

2008 Working paper (IDEAS version). Final paper in AER 2010. Malmendier will certainly be a factor in the upcoming Clark medal discussion, as she turns 40 this year. Problematically, Nick Bloom (who, says his CV, did his PhD part time?!) also turns 40, and both absolutely deserve the prize. If I were a betting man, I would wager that the just-published-in-the-QJE Does Management Matter in the Third World paper will be the one that puts Bloom over the top, as it’s really the best development paper in many years. That said, I am utterly confused that Finkelstein won last year given that Malmendier and Bloom are both up for their last shot this year. Finkelstein is a great economist, no doubt, but she works in a very similar field to Malmendier, and Malmendier trumps her by any conceivable metric (citations, top cited papers, overall impact, etc.). I thought they switched the Clark Medal to an every-year affair just to avoid such a circumstance, such as when Athey, List and Melitz were all piled up in 2007.

I’m curious what a retrospective Clark Medal would look like, taking into account only research that was done as of the voting year, but allowing us to use our knowledge of the long-run impact of that research. Since 2001, Duflo 2010 and Acemoglu 2005 are locks. I think Rabin keeps his in 2001. Guido Imbens takes Levitt’s spot in 2003. List takes 2007, with Melitz and Athey just missing out (though both are supremely deserving!). Saez keeps 2009. Malmendier takes 2011. Bloom takes 2012. Raj Chetty takes 2013 – still young, but already an obvious lock to win. What’s interesting about this list is just how dominant young folks have been in micro (especially empirical and applied theory); these are essentially the best people working in that area, whereas macro and metrics are still by and large dominated by an older generation.

“Patent Alchemy: The Market for Technology in U.S. History,” N. Lamoreaux, K. Sokoloff & D. Sutthiphisall (2012)

It may appear that the world of innovation looks very different today than it used to. Large in-house R&D outfits – the Bell Labs of the past – are being replaced by small firms who sell the results of their research on to producers. Venture capital funding of research appears more and more important, both for providing capital to inventors and to linking the inventors up with potential buyers. Patent trolls hound the innocent, suing them for patent violations they weren’t even aware of. The speed with which patents are evaluated has slowed to a crawl, and the number of patents being granted continues to grow. Many patents are merely defensive, acquired solely to keep someone else from acquiring them.

Lamoreaux et al, building on earlier work by Lamoreaux and Sokoloff as well as Tom Nicholas’ interesting recent research, point out that none of the above is strange. The rise of in-house R&D is a phenomenon that doesn’t show up in great numbers in America until well into the twentieth century, only becoming dominant after the Second World War. Around the turn of the century, most innovation was done by small, independent inventors, or by small research firms like Edison’s outfit. A series of intermediaries, principally but not always patent lawyers, served both to file the proper paperwork and to link inventors with potential buyers; the authors provide a bunch of juicy historical stories, derived from lawyer diaries during this period, on exactly how such transactions took place. Railroads were frequently being hounded by patent trolls who tried to catch them unaware, and traveling patent buyers crossed the Midwest and South suing farmers for using unlicensed barbed wire or milk buckets. Patents took an average of three years to be processed by the early 1900s, and the patenting rate was near an all-time high. Firms regularly bought patents just so their competitors wouldn’t have them.

This is all to say that, to the extent we are worried about certain aspects of the patent system today, looking to history may be a useful place to begin. “Submarine patents”, acquired by trolls and kept unused until a particularly juicy potential violator has started to earn large profits, don’t appear to have been too prominent at the turn of the century – given how lucrative this business appears, perhaps an investigation of why they only appear in the present would be worthwhile. The role of a patent as a saleable piece of knowledge, allowing non-producers to do useful research and then sell that research to a firm who finds it useful, surely has some role, as Arrow pointed out in his famous 1962 essay. When patents instead simply add transaction costs or result in thickets, discouraging activity by true innovators, something has gone awry. And when something goes wrong in the world, it is rarely the case that history can offer us no useful guidance.

2012 working paper (No IDEAS version). Prof. Sokoloff passed away from cancer at a young age in 2007, so this may become his final published paper – it incorporates a great number of ideas he worked on throughout his career, so that would be a fitting tribute.

“Recruiting for Ideas: How Firms Exploit the Prior Inventions of New Hires,” J. Singh & A. Agrawal (2011)

Firms poach engineers and researchers from each other all the time. One important reason to do so is to gain access to the individual’s knowledge. A strain of theory going back to Becker, however, suggests that if, after the poaching, the knowledge remains embodied solely in the new employee, it will be difficult for the firm to profit: surely the new employee will have an enormous amount of bargaining power over wages if she actually possesses unique and valuable information. (As part of my own current research project, I learned recently that Charles Martin Hall, co-inventor of the Hall-Heroult process for aluminum smelting, was able to gather a fortune of around $300 million after he brought his idea to the company that would become Alcoa.)

In a resource-based view of the firm, then, you may hope not only to access a new employee’s knowledge, but also to spread it to other employees at your firm. By doing this, you limit the wage bargaining power of the new hire, and hence can scrape off some rents. Singh and Agrawal break open the patent database to investigate this. First, they use name and industry data to match patentees who have an individual patent with one firm at time t, and then another patent at a separate firm some time later; such an employee has “moved”. We can’t simply check whether the receiving firm cites this new employee’s old patents more often, as there is an obvious endogeneity problem. First, firms may recruit good scientists more aggressively. Second, they may recruit more aggressively in technology fields where they are already planning to do work in the future. This suggests that matching plus diff-in-diff may work. Match every mover’s patent to another patent held by an inventor who never switches firms, attempting to find a second patent with very similar citation behavior, inventor age, inventor experience, technology class, etc. Using this matched sample, check how the propensity to cite the mover’s patent changes compared to the propensity to cite the stayer’s patent. That is, let Joe move to General Electric. Joe had a patent while working at Intel. GE researchers were citing that Intel patent once per year before Joe moved. They were citing a “matched” patent once per year. After the move, they cite the Intel patent twice per year, and the “matched” patent 1.1 times per year. The diff-in-diff then suggests that moving increases the propensity to cite the Intel patent at GE by (2-1)-(1.1-1)=.9 citations per year, where the first difference helps account for the first type of endogeneity we discussed above, and the second difference for the second type of endogeneity.
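The arithmetic of that hypothetical Joe-moves-to-GE example can be written out directly (the citation rates below are the made-up numbers from the story above, not estimates from the paper):

```python
# Citations per year at GE to each patent, before and after Joe's move.
pre_mover, post_mover = 1.0, 2.0    # Joe's old Intel patent
pre_match, post_match = 1.0, 1.1    # the matched stayer's patent

# First difference nets out Joe's general quality; second difference
# nets out GE's growing interest in that technology field.
diff_in_diff = (post_mover - pre_mover) - (post_match - pre_match)
print(round(diff_in_diff, 1))  # 0.9 extra citations/yr attributable to the move
```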

What do we find? It is true that, after a move, the average patent held by a mover is cited more often at the receiving firm, especially in the first couple years after a move. Unfortunately, about half of new patents which cite the new employee’s old patent after she moves are made by the new employee herself, and another fifteen percent or so are made by previous patent collaborators of the poached employee. What’s worse, if you examine these citations by year, even five years after the move, citations to the pre-move patent are still highly likely to come from the poached employee. That is, to the extent that the poached employee had some special knowledge, the firm appears to have simply bought that knowledge embodied in the new employee, rather than gained access to useful techniques that quickly spread through the firm.

Three quick comments. First, applied econometrician friends: is there any reason these days to do diff-in-diff linearly rather than using the nonparametric “changes-in-changes” of Athey and Imbens 2006, which allows recovery of the entire distribution of effects of treatment on the treated? Second, we learn from this paper that the mean poached research employee doesn’t see her knowledge spread through the new firm, which immediately suggests the question of whether there are certain circumstances in which such knowledge spreads. Third, this same exercise could be done using all patents held by the moving employee’s old firm – I may be buying access to general techniques owned by the employee’s old firm rather than the specific knowledge represented in that employee’s own pre-move patents. I wonder if there’s any difference.

Final Management Science version (IDEAS version). Big thumbs up to Jasjit Singh for putting final published versions of his papers up on his site.

“Diffusing New Technology Without Dissipating Rents: Some Historical Case Studies of Knowledge Sharing,” J. Bessen & A. Nuvolari (2012)

The most fundamental fact in the economic history of the world is that, from the dawn of mankind until the middle of the 19th century in a small corner of Europe, the material living standards of the average human varied within a very small range: perhaps the wealthiest places, ever, were five times richer than regions on the edge of subsistence. The end of this Malthusian world is generally credited to changes following the Industrial Revolution. The Industrial Revolution is sometimes credited to changes in the nature of invention in England and Holland in the 1700s. If you believe those claims, then understanding what spurred invention from that point to the present is of singular importance.

A traditional story, going back to North and others, is that property rights were very important here. England had patents. England had well-enforced contracts for labor and capital. But, at least as far as patents are concerned, recent evidence suggests they couldn’t have been too critical. Moser showed that only 10% or so of important inventions in the mid-1800s were ever patented in the UK. Bob Allen, who we’ve met before on this site, has inspired a large literature on collective invention, or periods of open-source style sharing of information among industry leaders during critical phases of tinkering with new techniques.

Why would you share, though? Doesn't this simply dissipate your rents? If you publicize knowledge of a productive process for which you are earning some rent, imitators can just come in and replicate that technology, competing away your profit. And yet, and yet, this doesn't appear to happen in many historical circumstances. Bessen (he of Bessen and Maskin 2009, one of my favorite recent theoretical papers on innovation) and Nuvolari examine three nineteenth-century industries: American steel, Cornish steam engines and New England power weaving. They show that periods of open sharing of inventions, free transfer of technology to rivals, industry newsletters detailing new techniques, and so on can predominate for periods of a decade and longer. In all three cases, patents are unimportant in this initial stage, though (at least outside of Cornwall) quite frequently used later in the development of the industry. Further, many of the important cost-reducing microinventions in these industries came precisely during the period of collective invention.

The paper has no model, but very simply, here is what is going on. Consider a fast-growing industry where some factors important for entry are in fixed supply; for example, the engineer Alexander Holley personally helped design eight of the first nine American mills using Bessemer's technology. Assume all inventions are cost-reducing. Holding sales price and demand constant, cost reductions increase industry profit. Sharing your invention ensures that you will not be frozen out of sharing by others. Trying to rely only on your own inventions to gain a cost advantage is not as useful as in standard Bertrand competition, since the fixed factors needed for entry in a new industry mean you can't expand fast enough to meet market demand even if you had the cost advantage. There is little worry about free-riding, since the inventions are natural by-products of day-to-day problem solving rather than the result of concentrated effort: early product improvement is often an engineering problem, not a scientific one. Why would I assume the sales price is roughly constant? Imagine an industry where the new technology is replacing something already being produced by a competitive industry (like steel rails replacing iron rails). The early Bessemer-produced rails in America were exactly this story, initially being a tiny fraction of the rail market, so the market price was being determined by the older vintage of technology.
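Under those assumptions (price pinned by the older technology, a binding capacity constraint, reciprocal sharing), a toy calculation makes the sharing logic concrete; every number here is hypothetical, not from Bessen and Nuvolari:

```python
# Toy illustration: sharing cost reductions is rational when price is set
# by an older competitive technology and capacity is fixed. Hypothetical numbers.
price = 10.0        # pinned down by the older-vintage competitive industry
capacity = 100      # fixed entry factor: each mill can sell at most 100 units
my_cost = 8.0       # my unit cost before any invention

# If I invent a $1 cost reduction and keep it secret, I can't expand past
# my capacity to exploit the advantage:
secret_profit = (price - (my_cost - 1)) * capacity   # (10 - 7) * 100

# If I share it, and a rival reciprocates with their own $1 reduction:
shared_profit = (price - (my_cost - 2)) * capacity   # (10 - 6) * 100

print(secret_profit, shared_profit)  # 300.0 400.0 -- sharing dominates
```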

Open source invention is nothing unusual, nor is it something new. It has long coexisted with the type of invention for which patents may (only may!) be more suitable vectors for development. Policies that gunk up these periods of collective invention can be really damaging. I will discuss some new research in coming weeks about a common policy that appears to provide exactly this sort of gunk: the strict enforcement of non-compete agreements in certain states.

2012 Working Paper (IDEAS version)

“Should You Allow Your Agent to Be Your Competitor?,” M. Kräkel & D. Sliwka (2009)

Many industries – especially research-heavy fields like high tech and biotech – are rife with "non-compete agreements" (NCAs), under which you sign a contract when you're hired banning work for a competitor firm in a similar area for some amount of time after you quit. These are controversial. Indeed, California comes close to banning them altogether (more on this in a future post). This seems like a great deal for the employer: if your employee develops any industry-specific human capital, you ensure that she won't use that knowledge against you by working for a competitor. This immediately raises another question, then: why doesn't every employer use NCAs?

Theory comes to the rescue, in the form of an extension of Holmstrom’s (may he win his deserved Nobel!) career concerns. Think of your income as having three components: wages, bonuses and “implicit payments”. Wages are set salaries. Bonuses are payments conditional on some verifiable goal. Implicit payments are increases in your total expected lifetime wages and bonuses as a result of some action. For instance, a firm pays a young trainee very little, less than her outside option, but promises that the deal is worth it because the trainee position will develop human capital in such a way that future job offers will be at a high wage.

Kräkel and Sliwka show how this can lead firms to avoid NCAs. Consider a model where you work with a firm on an invention. With probability p, you invent it. With probability q, if you don't invent, the firm invents anyway. Exactly who came up with the invention is not contractible (a common problem with team effort!). The agent's effort is costly. After the invention is made, if the agent stays with the firm, a big surplus results. Alternatively, if the agent was responsible for the invention, she may receive an outside offer. The initial contract is a triple: a wage, a bonus conditional on the invention being made, and potentially a non-compete clause which precludes the agent from taking any future outside offer. Total surplus is assumed to be highest when the firm and the agent stay together.

Intuitively, the agent knows that if she works hard and is responsible for the invention, and there is an NCA in place, then after the invention is made the firm is going to claim that it was not the agent's doing. Hence the incentives for individual effort are fairly low-powered. If, however, there is no non-compete clause, the agent gets an outside offer if she is, in fact, the inventor. For the firm to keep the agent, then, it must give her an extra bonus. This outside offer implicitly incentivizes the agent to work harder than she would if only the bonus and wage were available. Further, it is less susceptible to free-riding on the agent's part, since the outside option only comes about if it was the agent, and not the firm, who made the invention; the bonus under an NCA has no such ability to distinguish, hence the incentive to free-ride is stronger. The model shows this intuition is correct. A non-compete clause will only be imposed when there is a very high probability that the agent can get an outside offer, and when the relative value to the firm of keeping the agent after an invention is small. Indeed, there is a large range of parameters where the firm pays no bonus and imposes no non-compete: the "implicit bonus" of having to match the agent's outside option is enough to encourage effort. A short extension shows that if the firm can use clawbacks, or pay workers set amounts not to compete, it always prefers those instruments to an NCA, since the incentives can be tuned even more finely.
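A minimal numeric sketch of this intuition, with purely illustrative functional forms (effort equals the probability the agent invents, quadratic effort cost, q=0.5) that are my assumptions rather than the paper's model:

```python
def agent_payoff(effort, bonus, premium):
    """Illustrative agent payoff; effort = probability the agent invents."""
    p = effort
    q = 0.5                           # prob. the firm invents anyway
    prob_invention = p + (1 - p) * q
    # Under an NCA, the bonus is paid whenever the invention exists,
    # regardless of who made it -- the agent can free-ride on q.
    # Without an NCA, the outside-offer premium arrives only if the
    # agent herself invented.
    return prob_invention * bonus + p * premium - effort ** 2

def best_effort(bonus, premium):
    """Agent's payoff-maximizing effort on a coarse grid."""
    grid = [e / 100 for e in range(101)]
    return max(grid, key=lambda e: agent_payoff(e, bonus, premium))

print(best_effort(bonus=0.4, premium=0.0))  # NCA-style bonus only -> 0.1
print(best_effort(bonus=0.0, premium=0.4))  # outside offer only  -> 0.2
```

Paying 0.4 as an outside-offer match rather than as an unconditional invention bonus doubles effort in this toy version, because the premium is conditioned on the agent's own success.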

This isn’t to say that noncompete agreements aren’t worrying from a social policy perspective; there are other reasons we should be concerned about them, as I’ll discuss sometime soon. But this result shows again the value of thinking through problems theoretically. In general, the answer to “why doesn’t X screw over Y?” turns out to be “because in equilibrium, it is not in X’s interest to do so”!

2006 Working Paper (IDEAS version). Final paper in 2009 Intl. Econ. Review.

“Profiting from Technological Innovation,” D. Teece (1986)

Teece’s 1986 article in Research Policy is surprisingly little known among economists given that it has been cited something like 10,000 times. I want to give an interpretation of the article similar to that of Sid Winter in his article written on the 20th anniversary of the original.

Schumpeter famously argued that "perfect" competition is, in fact, not so perfect: the lack of rents gives firms no incentive to spend on R&D, and since growth matters so much more for welfare than static inefficiency, we ought to be more forgiving of market power. Ken Arrow, in a well-known article from the 1962 NBER Invention volume, maintains that Schumpeter's logic is incomplete, and that with patent licensing, monopolies can make things worse. Consider a good with marginal cost 2 and demand such that Q=6-p. In the competitive market, price is 2, quantity is 4, and industry profits are zero. Under monopoly, price is 4, quantity is 2, and industry profits are 4. Now an innovator invents a technique that lowers the good's marginal cost to 1. In the competitive market, she can license this technique to all producers, accruing licensing profits of 1×4=4. In the monopoly market, a monopolist with marginal cost 1 would optimally sell 2.5 units at 3.5 each, earning 2.5×2.5=6.25. The invention therefore increases monopoly profit by 6.25-4=2.25, and the inventor can earn no more than 2.25 by licensing to the monopolist. It seems, then, that whether monopoly or perfect competition leads to more invention depends, at least in part, on the ability of inventors to license without being appropriated.
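The arithmetic is easy to verify directly; this snippet just reproduces the numbers in Arrow's example (demand Q=6-p, marginal cost falling from 2 to 1):

```python
# Arrow's competitive-vs-monopoly licensing example, demand Q = 6 - p.
def monopoly_profit(mc):
    # Monopolist maximizes (p - mc)(6 - p); first-order condition gives p = (6 + mc)/2.
    p = (6 + mc) / 2
    return (p - mc) * (6 - p)

# Competitive market with MC = 2: price 2, quantity 4. The inventor licenses
# the MC = 1 technique at a per-unit royalty of 1 (the cost saving), so firms'
# effective cost stays 2, output stays at 4, and the inventor collects 1 x 4.
competitive_license_value = 1 * (6 - 2)

# Monopoly: the invention raises profit from 4 to 6.25, so the monopolist
# will pay at most the difference for a license.
monopoly_license_value = monopoly_profit(1) - monopoly_profit(2)

print(competitive_license_value, monopoly_license_value)  # 4 2.25
```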

Teece takes that logic a step further. Since most inventions can be imitated, either directly or by inventing around the relevant patent, inventions will only pay off for the inventor if she controls the best complementary assets. Consider the cases of EMI's CAT scanner and Searle's Nutrasweet. The CAT scanner was both invented and commercialized by EMI, leading to a Nobel for one of EMI's engineers. Nonetheless, EMI would be out of the scanner business within a few years, while competitors made bundles of money from similar scanners. Nutrasweet, on the other hand, was enormously profitable for Searle. Why the difference?

The difference is access, through contracting or ownership, to complementary assets. EMI's imitators had much better manufacturing and distribution capabilities for medical technology than EMI itself. Searle, on the other hand, took deliberate steps to protect itself once its patent ran out: establishing a strong brand during the patent period, limiting outside manufacturing (since contract manufacturers are potential future competitors), and doing R&D on a product that is difficult to imitate without violating the patent; for one, alternative sweeteners would need to go through their own FDA approval, which takes years. Teece's article also provides a second reason why large firms spend more on R&D: not only will they have market power in the product market, but they are also more likely to own complementary assets.

Final Research Policy version (IDEAS version). A side note: this is the 300th article we've discussed on this site. I would love to see more focused research blogs. There are a few (e.g., the NEP-HIS blog with a weekly post on economic history), but that's it. I'd be glad to share my experience from this blog with anyone interested. For one, the potential audience for discussions of new research is huge: at least half of the readers of this site are non-academics, representing the curious, people working in the tech sector, undergraduate students, etc.

