Category Archives: Diffusion

“Recruiting for Ideas: How Firms Exploit the Prior Inventions of New Hires,” J. Singh & A. Agrawal (2011)

Firms poach engineers and researchers from each other all the time. One important reason to do so is to gain access to the individual's knowledge. A strain of theory going back to Becker, however, suggests that if, after the poaching, the knowledge remains embodied solely in the new employee, it will be difficult for the firm to profit: surely the new employee will have an enormous amount of bargaining power over wages if she actually possesses unique and valuable information. (As part of my own current research project, I learned recently that Charles Martin Hall, co-inventor of the Hall-Heroult process for aluminum smelting, was able to amass a fortune of around $300 million after he brought his idea to the company that would become Alcoa.)

In a resource-based view of the firm, then, you may hope not only to access a new employee's knowledge, but also to spread it to other employees at your firm. By doing this, you limit the wage bargaining power of the new hire, and hence can scrape off some rents. Singh and Agrawal break open the patent database to investigate this. First, use name and industry data to identify patentees who have a patent with one firm at time t and then another patent at a different firm some time later; such an employee has "moved". We can't simply check whether the receiving firm cites this new employee's old patents more often, as there is an obvious endogeneity problem. First, firms may recruit good scientists more aggressively. Second, they may recruit more aggressively in technology fields where they are already planning to do work in the future. This suggests that matching plus diff-in-diff may work. Match every patent to another patent held by an inventor who never switches firms, attempting to find a second patent with very similar citation behavior, inventor age, inventor experience, technology class, etc. Using the matched sample, check how much the propensity to cite the mover's patent changes compared to the propensity to cite the stayer's patent. That is, let Joe move to General Electric. Joe had a patent while working at Intel. GE researchers were citing that Intel patent once per year before Joe moved. They were citing a "matched" patent once per year. After the move, they cite the Intel patent 2 times per year, and the "matched" patent 1.1 times per year. The diff-in-diff then suggests that moving increases the propensity to cite the Intel patent at GE by (2-1)-(1.1-1)=.9 citations per year, where the first difference helps account for the first type of endogeneity we discussed above, and the second difference for the second type.
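To make the arithmetic concrete, here is a minimal sketch of that matched diff-in-diff using the hypothetical Joe/GE numbers above. The variable names are mine, and this only illustrates the logic, not the authors' actual estimation code.

```python
# Matched diff-in-diff, using the hypothetical Joe/GE numbers from the text.
# Citations per year at the receiving firm (GE):
mover_pre, mover_post = 1.0, 2.0      # Joe's old Intel patent, before/after his move
matched_pre, matched_post = 1.0, 1.1  # a matched patent by an inventor who never moved

# First difference: change in citations to the mover's own patent, which nets out
# the level effect of firms recruiting inventors whose work they already value.
d_mover = mover_post - mover_pre

# Second difference: change in citations to the matched stayer's patent, which nets
# out the firm ramping up work in that technology field regardless of the hire.
d_matched = matched_post - matched_pre

effect = d_mover - d_matched
print(round(effect, 2))  # 0.9 extra citations per year attributed to the move
```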

What do we find? It is true that, after a move, the average patent held by a mover is cited more often at the receiving firm, especially in the first couple years after a move. Unfortunately, about half of new patents which cite the new employee’s old patent after she moves are made by the new employee herself, and another fifteen percent or so are made by previous patent collaborators of the poached employee. What’s worse, if you examine these citations by year, even five years after the move, citations to the pre-move patent are still highly likely to come from the poached employee. That is, to the extent that the poached employee had some special knowledge, the firm appears to have simply bought that knowledge embodied in the new employee, rather than gained access to useful techniques that quickly spread through the firm.

Three quick comments. First, applied econometrician friends: is there any reason these days to do diff-in-diff linearly rather than using the nonparametric “changes-in-changes” of Athey and Imbens 2006, which allows recovery of the entire distribution of effects of treatment on the treated? Second, we learn from this paper that the mean poached research employee doesn’t see her knowledge spread through the new firm, which immediately suggests the question of whether there are certain circumstances in which such knowledge spreads. Third, this same exercise could be done using all patents held by the moving employee’s old firm – I may be buying access to general techniques owned by the employee’s old firm rather than the specific knowledge represented in that employee’s own pre-move patents. I wonder if there’s any difference.

Final Management Science version (IDEAS version). Big thumbs up to Jasjit Singh for putting final published versions of his papers up on his site.

“Diffusing New Technology Without Dissipating Rents: Some Historical Case Studies of Knowledge Sharing,” J. Bessen & A. Nuvolari (2012)

The most fundamental fact in the economic history of the world is that, from the dawn of mankind until the middle of the 19th century in a small corner of Europe, the material living standards of the average human varied within a very small range: perhaps the wealthiest places, ever, were five times richer than regions on the edge of subsistence. The end of this Malthusian world is generally credited to changes following the Industrial Revolution. The Industrial Revolution is sometimes credited to changes in the nature of invention in England and Holland in the 1700s. If you believe those claims, then understanding what spurred invention from that point to the present is of singular importance.

A traditional story, going back to North and others, is that property rights were very important here. England had patents. England had well-enforced contracts for labor and capital. But, at least as far as patents are concerned, recent evidence suggests they couldn't have been too critical. Moser showed that only 10% or so of important inventions in the mid-1800s were ever patented in the UK. Bob Allen, whom we've met before on this site, has inspired a large literature on collective invention, or periods of open-source style sharing of information among industry leaders during critical phases of tinkering with new techniques.

Why would you share, though? Doesn't this simply dissipate your rents? If you publicize knowledge of a productive process for which you are earning some rent, imitators can just come in and replicate that technology, competing away your profit. And yet, and yet, this doesn't appear to happen in many historical circumstances. Bessen (he of Bessen and Maskin 2009, one of my favorite recent theoretical papers on innovation) and Nuvolari examine three nineteenth century industries: American steel, Cornish steam engines and New England power weaving. They show that periods of open sharing of inventions, free transfer of technology to rivals, industry newsletters detailing new techniques, etc. can predominate for periods of a decade and longer. In all three cases, patents are unimportant in this initial stage, though (at least outside of Cornwall) quite frequently used later in the development of the industry. Further, many of the important cost-reducing microinventions in these industries came precisely during the period of collective invention.

The paper has no model, but very simply, here is what is going on. Consider a fast-growing industry where some factors important for entry are in fixed supply; for example, the engineer Alexander Holley personally helped design eight of the first nine American mills using Bessemer's technology. Assume all inventions are cost-reducing. Holding sales price and demand constant, cost reductions increase industry profit. Sharing your invention ensures that you will not be frozen out of sharing by others. Trying to rely only on your own inventions to gain a cost advantage is not as useful as in standard Bertrand, since the fixed factors for entry in a new industry mean you can't expand fast enough to meet market demand even if you had the cost advantage. There is little worry about free riding since the inventions are natural by-products of day-to-day problem solving rather than the result of concentrated effort: early product improvement is often an engineering problem, not a scientific one. Why would I assume sales price is roughly constant? Imagine an industry where the new technology is replacing something already being produced by a competitive industry (like steel rails replacing iron rails). The early Bessemer-produced rails in America were exactly this story, initially being a tiny fraction of the rail market, so the market price for rails was being determined by the older vintage of technology.
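As a toy illustration of that logic (my own numbers, not anything from the paper): with price pinned down by the old technology and each firm stuck at its capacity, a cost advantage you hoard is worth no more than one you share, while sharing keeps you inside the reciprocal flow of improvements.

```python
# Toy numbers only: price is set by the competitive old-technology industry,
# each new-technology firm is capacity constrained, and inventions cut unit cost.
price = 100.0     # market price, determined by the older vintage of technology
capacity = 50.0   # output each firm can manage while fixed entry factors are scarce
c_before, c_after = 80.0, 70.0  # unit cost without / with the cost-reducing invention

def profit(unit_cost):
    # Demand is large relative to new-industry capacity, so every firm sells its capacity.
    return (price - unit_cost) * capacity

# Hoarding the invention buys nothing extra: I cannot expand past my capacity,
# so my profit is the same whether or not rivals also have the lower cost.
print(profit(c_after))  # my profit with the invention: 1500.0 (shared or hoarded)

# Sharing, however, keeps me in the club, so rivals' future cost reductions flow
# back to me; each one is worth this much per period:
print(profit(c_after) - profit(c_before))  # 500.0
```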

Open source invention is nothing unusual, nor is it something new. It has long coexisted with the type of invention for which patents may (only may!) be more suitable vectors for development. Policies that gunk up these periods of collective invention can be really damaging. I will discuss some new research in coming weeks about a common policy that appears to provide exactly this sort of gunk: the strict enforcement of non-compete agreements in certain states.

2012 Working Paper (IDEAS version)

Learning and Liberty Ships, P. Thompson

(Note: This post refers to “How Much Did the Liberty Shipbuilders Learn? New Evidence for an Old Case Study” (2001) and “How Much Did the Liberty Shipbuilders Forget?” (2007), both by Peter Thompson.)

It's taken for granted now that organizations "learn" as their workers gain knowledge while producing and "forget" when not actively involved in some project. Identifying the importance of such learning-by-doing and organizational forgetting is quite a challenging empirical task. We would need a case where an easily measurable final product was produced over and over by different groups using the same capital and technology, with data fully recorded. And a 1945 article by a man named Searle found just such an example: the US Liberty ships. These standardized ships were produced by the thousand by a couple dozen shipyards during World War II. Searle showed clearly that organizations get better at making ships as they accumulate experience, and that the productivity gain from such learning-by-doing is enormous. His data was used in a more rigorous manner by researchers in the decades afterward, generally confirming the learning-by-doing and also showing that shipyards which stopped producing Liberty ships for a month or two very quickly saw their productivity plummet.

But rarely is the real world so clean. Peter Thompson, in this pair of papers (as well as a third published in the AER but discussed here), throws cold water on both the claim that organizations learn rapidly and the claim that they forget just as rapidly. The problem is twofold. First, capital at the shipyards was assumed to be roughly constant. In fact, it was not. Almost all of the Liberty shipyards took some time to gear up their equipment when they began construction. Peter dug up some basic information on capital at each yard from deep in the national archives. Indeed, the terminal capital stock at each yard was, on average, three times the initial capital. Including a measure of capital in the equation estimating learning-by-doing reduces the importance of learning-by-doing by half.
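Here is a minimal synthetic sketch of that omitted-variable point, assuming the standard log-log learning-curve specification. The data and coefficients below are invented; this is not Thompson's code or his estimates, only an illustration of why leaving out growing capital inflates the learning term.

```python
# If capital grows alongside cumulative experience, a learning curve that omits
# capital attributes the capital deepening to learning-by-doing.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # hypothetical ship-level observations at one yard
cum_output = np.arange(1, n + 1)         # cumulative ships completed
capital = 1.0 + 2.0 * cum_output / n     # capital roughly triples over the sample
true_learning, true_capital_elast = -0.15, -0.30

log_hours = (5.0 + true_learning * np.log(cum_output)
             + true_capital_elast * np.log(capital)
             + rng.normal(0, 0.05, n))   # log labor hours per ship

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b_no_k = ols(np.column_stack([ones, np.log(cum_output)]), log_hours)
b_with_k = ols(np.column_stack([ones, np.log(cum_output), np.log(capital)]), log_hours)

print("learning coefficient, capital omitted:", round(b_no_k[1], 3))    # noticeably more negative than -0.15
print("learning coefficient, capital included:", round(b_with_k[1], 3)) # approximately the true -0.15
```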

It gets worse. Fractures were found frequently, accounting for more than 60% of ships built at the sloppiest yard. Speed was encouraged by contract, and hence some of the "learning-by-doing" may simply have been learning how to get away with low quality welding and other tricks. Thompson adjusts the time it took to build each ship to account for an estimate of the repair time required on average for each yard at each point in time. Fixing this measurement error further reduces productivity growth due to learning-by-doing by six percent. The upshot? Organizational learning is real, but the magnitudes everyone knows from the Searle data are vastly overstated. This matters: Bob Lucas, in his well-known East Asian growth miracle paper, notes that worldwide innovation, human capital and physical capital are not enough to account for sustained 6-7% growth like we saw in places like Korea in the 70s and 80s. He suggests that learning-by-doing as firms move up the export-goods quality ladder might account for such rapid growth. But such a growth miracle requires quite rapid on-the-job productivity increases. (The Lucas paper is also great historical reading: he notes that rapid growth in Korea and other tigers – in 1991, as rich as Mexico and Yugoslavia, what a miracle! – will continue, except, perhaps, in the sad case of Hong Kong!)

Thompson also investigates organizational forgetting. Old estimates using Liberty ship data find worker productivity on Liberty ships falling a full 25% per month when the workers were not building Liberty ships. Perhaps this is because the shipyards' "institutional memory" was insufficient to transmit the tricks that had been learned, or because labor turnover meant good workers left in the interim period. The mystery of organizational forgetting in Liberty yards turns out to have a simpler explanation: measurement error. Yards would work on Liberty ships, then break for a few months to work on a special product or custom ship of some kind, then return to the Liberty. But actual production was not so discontinuous: some capital and labor shifted back to the Liberty ships only with a delay, in a way not noticed in earlier studies. This appears in the data as decreased productivity right after a return to Liberty production, followed by rapid "learning" to get back to the frontier. Any estimate of such a nonlinear quantity is bound to be vague, but Peter's specifications give organizational forgetting in Liberty ship production of 3-5% per month, and find little evidence that this is related to labor turnover. This estimate is similar to other recent estimates of forgetting on production lines, such as that found in Benkard's 2000 AER paper on the aircraft industry.
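For readers unfamiliar with how "forgetting" enters these models, here is a hedged sketch of the usual parameterization: an experience stock that depreciates at some monthly rate. The output numbers are invented and this is not Thompson's estimation code; it only shows how different a 25%-per-month decay looks from a 3-5% one after a production break.

```python
# Experience stock with monthly depreciation, the standard way organizational
# forgetting is parameterized (as in Benkard 2000). Output numbers are invented.

def experience_stock(monthly_output, delta):
    """Experience available at the start of each month, decaying by delta per month."""
    stock, path = 0.0, []
    for q in monthly_output:
        path.append(round(stock, 1))
        stock = (1 - delta) * stock + q  # knowledge decays, then this month's output adds to it
    return path

# A yard builds Liberty ships, pauses three months for custom work, then resumes.
ships_per_month = [3, 4, 5, 5, 0, 0, 0, 5, 5, 5]

print(experience_stock(ships_per_month, delta=0.25))  # old estimates: the stock collapses over the break
print(experience_stock(ships_per_month, delta=0.04))  # Thompson's 3-5% per month: only a gentle decline
```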

How Much did the Liberty Shipbuilders Learn? (final published version) (IDEAS page). Final version published in JPE 109.1, 2001.

How Much did the Liberty Shipbuilders Forget? (2005 working paper) (IDEAS page). Final paper in Management Science 53.6, 2007.

“The Future of Taxpayer-Funded Research,” Committee for Economic Development (2012)

It's one month after SOPA/PIPA. Congress is currently considering two bills. The Federal Research Public Access Act would require federal funders to insist on open-access publication of funded research papers after an embargo period. The NIH currently has such a policy, with a one year embargo. As of now, the FRPAA has essentially no chance of passing. On the other hand, the Fair Copyright in Research Works Act would reverse the current NIH policy and ban any other federal funders from setting similar access mandates. It has heavy Congressional support. How should you think of this as an economist? (A quick side note for economists: the world we live in, where working papers are universally available on authors' personal websites, is almost unheard of in other fields. Only about 20% of academic papers published last year were available online in ungated versions. The figure is roughly 100% in economics, high energy physics and a few other fields, and close to 0% otherwise.)

I did some consulting in the fall for a Kauffman-funded CED report released yesterday called The Future of Taxpayer-Funded Research. There is a simple necessary condition that any government policy concerning new goods should not violate: call it The First Law of Zero Marginal Cost Goods. The First Law says that if some policy increases consumption of something with zero marginal cost (an idea, an academic paper, a song, an e-book, etc.), then a minimum necessary condition for restricting that policy is that the variety of affected new goods must decrease. So if music piracy increases the number of songs consumed (and the number of songs illegally downloaded in any period of time is currently much higher than worldwide sales during that period), a minimum economic justification for a government crackdown on piracy is that the number of new songs created has decreased (in this case, it has not). Applying The First Law to open access mandates, a minimum economic justification for opposing such mandates is that either open access has no benefits, or that open access will make peer reviewed journals economically infeasible. To keep this post from becoming a mess of links, I leave out citations, but you can find all of the numbers below in the main report.

On the first point, open access has a ton of benefits even when most universities subscribe to nearly all the important journals. It "speeds up" the rate at which knowledge diffuses, which is important because science is cumulative. It helps solve access difficulties for private sector researchers and clinicians, who generally do not have subscriptions due to the cost; this website is proof that non-academics have an interest in reading academic work, as I regularly receive email from private sector workers or the simply curious. Most importantly, even the minor access frictions created by the current gated system, such as having to go to a publisher website and click "Accept terms & conditions" rather than just reading a pdf, matter. Look at the work by Fiona Murray and Scott Stern and Heidi Williams and others, much of which has been covered on this website: minor restrictions on access can cause major efficiency losses in a world where results are cumulative. Such effects will only become more important as we move into a world where computer programs search, synthesize and translate research results.

The second point, whether open access makes peer review infeasible, is more important. The answer is that open access appears to have no such effect. Over time, we have seen many funders and universities, from MIT to the Wellcome Trust, impose open access mandates on their researchers. This has, to my knowledge, not led to the shutdown of even a single prominent journal. Not one. Profits in science publishing remain really, really high, as you'd expect in an industry with a lot of market power due to lock-in. Cross-sectionally, there is a ton of heterogeneity in norms: every high energy physicist and mathematician puts their work on arXiv, and every economist posts working papers online, yet none of this has led to the demise of peer reviewed journals and their dissemination function in those fields. Even within fields, radically different policies have proven sustainable. The New England Journal of Medicine makes all articles freely accessible after 6 months. The PLoS journals are totally open access, charging only a publication fee of $1350 upon acceptance. Other journals keep their entire archive gated. All are financially sustainable models, though of course they differ in how much profit the journal can extract.

One more point, and it's an important one. Though the American Economic Association has not taken a position on these bills – as far as I know, the AEA does very little lobbying at all, keeping its membership fee low, for which I'm glad! – many other scholarly societies have taken a position. And I think many of their members would be surprised to learn that their own associations oppose public access, something that can safely be said to be supported by nearly all of those members. Here is a full list of responses to the recent White House RFI on public access mandates. The American Anthropological Association opposes public access. The American Sociological Association and the American Psychological Association both strongly oppose it. These groups all claim, first, that there is no access problem to begin with – simply untrue for the reasons above, all of which are expanded on in the CED paper – and, second, that open access is incompatible with social science publishing, where articles are long and even rejected articles regularly receive many comments from peer review. But we know from the cross section that this isn't true. Many learned societies publish open access journals, even in the social sciences, and many of them don't charge any publication fee at all. The two main societies in economics, thankfully, both publish OA journals: the AEA's Journal of Economic Perspectives, and the Econometric Society's TE and QE. And even non-OA economics journals essentially face an open access mandate with a 0-month embargo, since everyone puts their working papers online. Econ is not unique in the social sciences: the Royal Society's Philosophical Transactions, for instance, is open access. If you're a member of the APA, ASA or AAA, you ought to voice your displeasure!

http://www.ced.org/images/content/issues/innovation-technology/DCCReport_Final_2_9-12.pdf (Final published version of CED report – freely available online, of course!)

“Climbing Atop the Shoulders of Giants,” J. Furman & S. Stern (2011)

I have been asked a couple times what the difference is between innovation and invention. A critical distinction is that innovation involves not just the creation of new knowledge, but also its dissemination. And dissemination is what really matters when it comes to economic growth or pushing the frontier of science. Particularly given Ben Jones’s papers about science slowing because reaching the frontier takes longer for young researchers, you might wonder whether there are policies governments and R&D labs can enact in order to get their scientists up to speed more quickly.

There are many such policies, in principle. We can have editors rewrite scientific studies in order to make them easier to read (or set up weblogs devoted to summaries of new research, of course). We can have replication done in a more formal way in order to make previous results easier to trust. We can establish open data policies to make it easier to follow up on earlier articles. Are such policies worthwhile?

Furman and Stern examine a set of institutions called biological resource centers (BRCs). These centers, of which there are many, archive and certify biomaterial used in research studies. They forward samples to researchers who want to follow up on earlier work. Famously, Kary Mullis, a private sector researcher, used a strange organism that lived in the hot springs at Yellowstone, and that had been archived at a BRC a decade before, to develop the polymerase chain reaction, a technique allowing quick replication of genetic material. He won the Nobel for this work. When the organism was archived, no one suspected it would have any major practical use.

Assuming citations measure dissemination of knowledge in some sense, do studies using BRC-archived material get cited more than other work in similar journals on similar topics, and does the citation profile over time look different? The BRC articles get 220 percent more citations, though much of this is due to a selection effect: high quality articles are more likely to have their materials archived. Furman and Stern have a nice trick to control for selection, though. On three occasions in their sample, a private sector "special collection" was forwarded to a major public BRC – for example, one private sector lab shut down and forwarded many years of its internal archives. Papers using biomaterial from the special collection have an expected lifetime citation profile given the citations they received during, say, the nine years before the material entered a BRC. The marginal impact of being added to a BRC is 50 to 125 percent. The impact is biggest for articles originally published in lower ranked journals (a certification effect), and BRC accession leads to a roughly 100 percent increase in the number of unique labs and universities appearing in future citations. The idea behind the latter effect is that, with private archiving, follow-up studies generally come from friends and associates of the original author, whereas BRC material is open to everybody.
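Here is a small sketch of that selection-control logic as I read it: use the article's pre-accession citation record, scaled against a typical citation-age profile, to forecast what its post-accession citations would have been, and call the excess the marginal impact. All of the numbers below are invented for illustration, and this is not the authors' estimator.

```python
# Forecast post-accession citations from the pre-accession record and compare
# with actual citations. All numbers are hypothetical.

# Average citations per year, by article age, for comparable never-accessioned articles.
age_profile = [4, 6, 7, 6, 5, 4, 3, 3, 2, 2, 2, 1]

# One article from a private "special collection": nine years of citations before
# its biomaterial entered a public BRC, then three years after accession.
article_cites = [6, 9, 10, 9, 8, 6, 5, 4, 3,  # years 1-9, pre-accession
                 6, 6, 4]                     # years 10-12, post-accession

pre = 9
quality = sum(article_cites[:pre]) / sum(age_profile[:pre])  # how far above the average profile this article sits

expected_post = [quality * c for c in age_profile[pre:]]     # counterfactual: no accession
actual_post = article_cites[pre:]

impact = sum(actual_post) / sum(expected_post) - 1
print(f"marginal impact of BRC accession: {impact:.0%}")     # ~113% for these made-up numbers
```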

Finally, a nice back of the envelope calculation: are BRCs cost effective? We have estimates from other research of the average "cost" of a citation in biology studies, which is about 2400 dollars. Accession of materials to a BRC costs an average of 10000 dollars. Given the marginal impacts of BRC accession, funding BRCs is 3 to 10 times more cost effective than funding new research. More broadly, it might do the NIH and NSF well to shift some money from new research to dissemination strategies. And indeed they are doing just that – I hope to show you some results of this policy by next spring.
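To see where a 3-10x figure can come from, here is the back-of-the-envelope arithmetic in code form. The per-citation and accession costs are the ones quoted above; the baseline citation count for a typical accessioned article is my own made-up assumption.

```python
# Back-of-the-envelope cost effectiveness of BRC accession.
cost_per_citation = 2400.0   # average research spending per citation (figure from the text)
accession_cost = 10000.0     # average cost of accessioning materials to a BRC (figure from the text)
baseline_citations = 30.0    # hypothetical lifetime citations an article gets without accession

for impact in (0.50, 1.25):  # the estimated range for the marginal impact of accession
    extra_citations = impact * baseline_citations
    value_of_extra = extra_citations * cost_per_citation
    print(f"impact {impact:.0%}: value/cost ratio = {value_of_extra / accession_cost:.1f}x")
# Prints roughly 3.6x and 9.0x, in line with the 3-10x claim, under these assumptions.
```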

ftp://ftp.zew.de/pub/zew-docs/veranst_upload/1232/525_BRC%20FS%20(Jun-06-2010).pdf (June 2010 working paper – final version in AER 2011. Ironically, given the focus of this paper on quick dissemination of research, Furman and Stern took at least seven years to publish; there are 2004 working paper versions that have basically the same results as the final published version. Ellison showed in a paper a few years back that the slow pace of publishing in economics is due to referees asking for endless rounds of minor revisions. It's outrageous that our field puts up with 5-10 year lags in publishing.)
