Category Archives: Innovation

Tunzelmann and the Nature of Social Savings from Steam

Research Policy, the premier journal for innovation economists, recently produced a symposium on the work of Nick von Tunzelmann. Tunzelmann is best known for exploring the social value of the invention of steam power. Many historians had previously granted great importance to the steam engine as a driver of the Industrial Revolution. However, as with Fogel’s argument that the railroad was less important to the American economy than previously believed (though see Donaldson and Hornbeck’s amendment claiming that market access changes due to rail were very important), the role of steam in the Industrial Revolution may have been overstated.

This is surprising. To my mind, the four most important facts for economics to explain are why the world economy (in per capita terms) stagnated until the early 1800s, why cumulative per-capita growth began then in a corner of Northwest Europe, why growth at the frontier has continued to the present, and why growth at the frontier has been so consistent over this period. The consistency is really surprising, given that individual non-frontier country growth rates, and world GDP growth, have vacillated pretty wildly on a decade-by-decade basis.

Malthus' explanation still accounts for the first puzzle best. But there remain many competing explanations for how exactly the Malthusian trap was broken. The idea that a thrifty culture or expropriation of colonies was critical sees little support from economic historians; as McCloskey writes, "Thrifty self-discipline and violent expropriation have been too common in human history to explain a revolution utterly unprecedented in scale and unique to Europe around 1800." The problem, more generally, with explaining a large economic X on the basis of some invention/program/institution Y is that basically everything in the economic world is a complement. Human capital absent good institutions has little value, modern management techniques absent large markets are ineffective, etc. The problem is tougher when it comes to inventions. Most "inventions" that you know of have very little immediate commercial importance, and a fair ex-post reckoning of the critical parts of the eventual commercial product often leaves little role for the famous inventor.

What Tunzelmann and later writers in his tradition point out is that even though Watt's improvement to the steam engine was patented in 1769, steam produced less horsepower than water in the UK as late as 1830, and in the US as late as the Civil War. Indeed, even today, hydropower based on the age-old idea of the turbine is still an enormous factor in the siting of electricity-hungry industries. It wasn't until high-pressure designs like the Lancashire boiler arrived in the 1840s that textile mills really saw steam power as an economically viable source of energy. Most of the important inventions in the textile industry were designed originally for non-steam power sources.

The economic historian Nicholas Crafts supports Tunzelmann's original argument against the importance of steam using a modern growth accounting framework. Although the cost of steam power fell rapidly following Watt, and especially after the Corliss engine in the mid-19th century, steam was still a relatively small part of the economy until the mid-to-late 19th century. Therefore, even though productivity growth within steam was quick, only a tiny portion of overall TFP growth in the early Industrial Revolution can be explained by steam. Growth accounting exercises have a nice benefit over partial equilibrium social savings calculations: the problem that "everything is a complement" is taken care of, so long as you believe the Cobb-Douglas formulation.
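To see the logic of the accounting, here is a stylized Domar-style decomposition; this is a sketch of the idea, with notation of my own choosing, not Crafts' exact specification:

```latex
% Aggregate TFP growth as a share-weighted sum of sectoral TFP growth,
% where s_j is sector j's weight in aggregate output (or costs):
\frac{\dot{A}}{A} \;\approx\; \sum_j s_j \, \frac{\dot{A}_j}{A_j}
\qquad\Longrightarrow\qquad
\text{steam's contribution} \;\approx\; s_{\text{steam}} \cdot \frac{\dot{A}_{\text{steam}}}{A_{\text{steam}}}
```

With the steam share tiny before the mid-19th century, even very rapid productivity growth within steam moves aggregate TFP hardly at all.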

The December 2013 issue of Research Policy (all gated) is the symposium on Tunzelmann. For some reason, Tunzelmann's "Steam Power and British Industrialization Until 1860" is quite expensive used, but any decent library should have a copy.

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

R&D decisions are not made in a vacuum: my firm both benefits from information about new technologies discovered by others, and is harmed when other firms create new products that steal from my firm’s existing product lines. Almost every workhorse model in innovation is concerned with these effects, but measuring them empirically, and understanding how they interact, is difficult. Bloom, Schankerman and van Reenen have a new paper with a simple but clever idea for understanding these two effects (and it will be no surprise to readers given how often I discuss their work that I think these three are doing some of the world’s best applied micro work these days).

First, note that firms may be in the same technology area but not in the same product area; Intel and Motorola work on similar technologies, but compete on very few products. In a simple model, firms first choose R&D, knowledge is produced, and then firms compete on the product market. The qualitative results of this model are as you might expect: firms in a technology space with many other firms will be more productive due to spillovers, and may or may not actually perform more R&D depending on the nature of diminishing returns in the knowledge production function. Product market rivalry is always bad for profits, does not affect productivity, and increases R&D only if research across firms is a strategic complement; this strategic complementarity could be something like a patent race model, where if firms I compete with are working hard trying to invent the Next Big Thing, then I am incentivized to do even more R&D so I can invent first.

On the empirical side, we need a measure of "product market similarity" and "technological similarity". Let there be M product classes and N patent classes, and construct vectors for each firm of their share of sales across product classes and their share of R&D across patent classes. There are many measures of the similarity of two vectors, of course, including a well-known measure in innovation from Jaffe. Bloom et al, after my own heart, note that we really ought to use measures that have proper axiomatic microfoundations; though they do show the properties of a variety of similarity measures, they don't actually show the existence (or impossibility) of their optimal measure. This sounds like a quick job for a good microtheorist.
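To make the measure concrete, here is a minimal Python sketch of the Jaffe-style similarity, which is just the uncentered correlation (cosine) of two firms' technology-share vectors; the firm vectors and patent counts below are purely illustrative, and the paper's preferred measures include refinements not reproduced here:

```python
import numpy as np

def share_vector(counts):
    """Normalize raw counts (e.g., patents per technology class) into shares."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def jaffe_similarity(t_i, t_j):
    """Uncentered correlation (cosine) of two technology-share vectors,
    the classic Jaffe spillover measure; ranges from 0 to 1."""
    t_i, t_j = np.asarray(t_i, dtype=float), np.asarray(t_j, dtype=float)
    return float(t_i @ t_j / (np.linalg.norm(t_i) * np.linalg.norm(t_j)))

# Hypothetical patent counts across four technology classes
firm_a = share_vector([120, 60, 10, 0])
firm_b = share_vector([90, 80, 5, 15])
print(jaffe_similarity(firm_a, firm_b))  # close to 1: similar technology space
```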

With similarity measures in hand, all that's left to do is run regressions of outcomes like R&D performed, productivity (measured using patents or backed out of a Cobb-Douglas production function) and market value (via Griliches-style Tobin's Q) on technological and product market similarity, along with all sorts of fixed effects. These guys know their econometrics, so I'm omitting many details here, but I should mention that they do use the idea from Wilson's 2009 ReStat of essentially random changes in state R&D tax laws as an IV for the cost of R&D; this is a great technique, and very well implemented by Wilson, but getting these state-level R&D costs is really challenging and I can easily imagine a future where the idea is abused by naive implementation.

The results are actually pretty interesting. Qualitatively, the empirical results look quite like the theory, and in particular, the impact of technological similarity looks really important; having lots of firms working on similar technologies but operating in different industries is really good for your firm's productivity and profits. Looking at a handful of high-tech sectors, Bloom et al estimate that the marginal social return on R&D is on the order of 40 percentage points higher than the marginal private return to R&D, implying (with some huge caveats) that R&D in the United States might be only about a third of its optimal level. This estimate is actually quite similar to what researchers using other methods have estimated. Interestingly, since bigger firms tend to work in denser parts of the technology space, they tend to generate more spillovers, hence the common policy prescription of giving smaller firms higher R&D tax credits may be a mistake.

Three caveats. First, as far as I can tell, the model does not allow a role for absorptive capacity, where a firm's ability to integrate outside knowledge is endogenous to its existing R&D stock. Second, the estimated marginal private rate of return on R&D is something like 20 percent for the average firm; many other papers have estimated very high private benefits from research, but I have a hard time interpreting these estimates. If there really are 20% rates of return lying around, why aren't firms cranking up their research? At least anecdotally, you hear complaints from industries like pharma about low returns from R&D. Third, there are some suggestive comments near the end about how government subsidies might be used to increase R&D given these huge social returns. I would be really cautious here, since there is quite a bit of evidence that government-sponsored R&D generates a much lower private and social rate of return than other forms of R&D.

Final July 2013 Econometrica version (IDEAS version). Thumbs up to Nick Bloom for making the final version freely available on his website. The paper has an exhaustive appendix with technical details, as well as all of the data freely available for you to play with.

“Is Knowledge Trapped Inside the Ivory Tower?,” M. Bikard (2013)

Simultaneous discovery, as famously discussed by Merton, is really a fascinating idea. On the one hand, we have famous examples like Bell and Gray sending in patents for a telephone on exactly the same day. On the other hand, when you investigate supposed examples of simultaneous discovery more closely, it is rarely the case that the discoveries are that similar. The legendary Jacob Schmookler described – in a less-than-politically-correct way! – historians who see patterns of simultaneous discovery everywhere as similar to tourists who think "all Chinamen look alike." There is sufficient sociological evidence today that Schmookler largely seems correct: simultaneous discovery, like the "lucky" invention, is much less common than the man on the street believes (see, e.g., Simon Schaffer's article on the famous story of the dragon dream and the discovery of benzene's ring structure for a typical reconstruction of how "lucky" inventions actually happen).

Michaël Bikard thinks we are giving simultaneous discovery too little credit as a tool for investigating important topics in the economics of innovation. Even if simultaneous discovery is uncommon, it still exists. If there were an automated process to generate a large set of simultaneous inventions (on relatively more minor topics than the telephone), there are tons of interesting questions we could answer, since we would have compelling evidence of the same piece of knowledge existing in different places at the same time. For instance, how important are agglomeration economies? Does a biotech invention get developed further if it is invented on Route 128 in Massachusetts instead of in Lithuania?

Bikard has developed an automated process to do this (and that linked paper also provides a nice literature review concerning simultaneous discovery). Just scrape a huge number of articles and their citations, look for pairs of papers which were published at almost the same time and cited frequently afterward, and then restrict further to pairs whose "Jaccard index" implies that, when they are cited at all, they are frequently cited together. Applying this technique to the life sciences, he finds 578 examples of simultaneous discovery; when he chatted with a randomly selected sample of the researchers involved, most mentioned the simultaneous discovery without being asked, though at least one claimed his idea had been stolen! 578 is a ton: this is more than double the number that the historical analysis in Merton discovered, and, as noted, many of the Merton multiples are not really examples of simultaneous discovery at all.
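A minimal sketch of the co-citation filter, with hypothetical paper IDs and an illustrative cutoff; the Jaccard index here is simply the share of citing papers that the two candidate papers have in common:

```python
def jaccard(citers_a, citers_b):
    """Jaccard index of the sets of later papers citing A and B:
    |A intersect B| / |A union B|. Near 1 means the two are almost always cited together."""
    a, b = set(citers_a), set(citers_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical candidate "twin" papers and the IDs of papers citing each
citers_paper1 = {"p10", "p11", "p12", "p13", "p14"}
citers_paper2 = {"p11", "p12", "p13", "p14", "p15"}
if jaccard(citers_paper1, citers_paper2) > 0.6:  # threshold purely illustrative
    print("flag as a candidate simultaneous discovery")
```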

He then applies this dataset in a second paper, asking whether inventions in academia are used more often (because of the culture of openness) or whether private sector inventions are used more often in follow-up inventions (because the control rights can help even follow-up inventors extract rents). It turns out that private-sector inventors of the identical invention are three times more likely to patent, but even excluding the inventors themselves, the private sector inventions are cited 10-20% more frequently in future patents. The sample size of simultaneous academic-private discovery is small, so this evidence is only suggestive. You might imagine that the private sector inventors are more likely to be colocated near other private sector firms in the same area; we think that noncodified aspects of knowledge flow locally, so it wouldn’t be surprising that the private sector multiple was cited more often in future patents.

Heavy caveats are also needed on the sample. This result certainly doesn’t suggest that, overall, private sector workers are doing more “useful” work than Ivory Tower researchers, since restricting the sample to multiple discoveries limits the potential observations to areas where academia and the private sector are working on the same type of discovery. Certainly, academics and the private sector often work on different types of research, and openness is probably more important in more basic discoveries (where transaction or bargaining costs on follow-up uses are more distortionary). In any case, the method for identifying simultaneous discoveries is quite interesting indeed; if you are empirically minded, there are tons of interesting questions you could investigate with such a dataset.

September 2012 working paper (No IDEAS version). Forthcoming in Management Science.

“Back to Basics: Basic Research Spillovers, Innovation Policy and Growth,” U. Akcigit, D. Hanley & N. Serrano-Velarde (2013)

Basic and applied research, you might imagine, differ in a particular manner: basic research has unexpected uses in a variety of future applied products (though it sometimes has immediate applications), while applied research is immediately exploitable but has fewer spillovers. An interesting empirical fact is that a substantial portion of firms report that they do basic research, though subject to a caveat I will mention at the end of this post. Further, you might imagine that basic and applied research are complements: success in basic research in a given area expands the size of the applied ideas pond which can be fished by firms looking for new applied inventions.

Akcigit, Hanley and Serrano-Velarde take these basic facts and, using some nice data from French firms, estimate a structural endogenous growth model with both basic and applied research. Firms hire scientists and then put them to work on basic or applied research, where the basic research "increases the size of the pond" and occasionally is immediately useful in a product line. The government does "Ivory Tower" basic research which increases the size of the pond but which is never immediately applied. The authors give differential equations for this model along a balanced growth path, have the government perform research equal to 0.5% of GDP as in existing French data, and estimate the remaining structural parameters like innovation spillover rates, the mean "jump" in productivity from an innovation, etc.

The pretty obvious benefit of structural models as compared to estimating simple treatment effects is counterfactual analysis, particularly welfare calculations. (And if I may make an aside, the argument that structural models are too assumption-heavy and hence non-credible is nonsense. If the mapping from existing data to the actual questions of interest is straightforward, then surely we can write a straightforward model generating that external validity. If the mapping from existing data to the actual question of interest is difficult, then it is even more important to formally state what mapping you have in mind before giving policy advice. Just estimating a treatment effect off some particular dataset and essentially ignoring the question of external validity because you don't want to take a stand on how it might operate makes me wonder why I, the policymaker, should take your treatment effect seriously in the first place. It seems to me that many in the profession already take this stance – Deaton, Heckman, Whinston and Nevo, and many others have published papers on exactly this methodological point – and therefore a decade from now, you will find it just as tough to publish a paper that doesn't take external validity seriously as it is to publish a paper with weak internal identification today.)

Back to the estimates: the parameters here suggest that the main distortion is not that firms perform too little R&D, but that they misallocate between basic and applied R&D; the basic R&D spills over to other firms by increasing the “size of the pond” for everybody, hence it is underperformed. This spillover, estimated from data, is of substantial quantitative importance. The problem, then, is that uniform subsidies like R&D tax credits will just increase total R&D without alleviating this misallocation. I think this is a really important result (and not only because I have a theory paper myself, coming at the question of innovation direction from the patent race literature rather than the endogenous growth literature, which generates essentially the same conclusion). What you really want to do to increase welfare is increase the amount of basic research performed. How to do this? Well, you could give heterogeneous subsidies to basic and applied research, but this would involve firms reporting correctly, which is a very difficult moral hazard problem. Alternatively, you could just do more research in academia, but if this is never immediately exploited, it is less useful than the basic research performed in industry which at least sometimes is used in products immediately (by assumption); shades of Aghion, Dewatripont and Stein (2008 RAND) here. Neither policy performs particularly well.

I have two small quibbles. First, basic research in the sense reported by national statistics following the Frascati manual is very different from basic research in the sense of “research that has spillovers”; there is a large literature on this problem, and it is particularly severe when it comes to service sector work and process innovation. Second, the authors suggest at one point that Bayh-Dole style university licensing of research is a beneficial policy: when academic basic research can now sometimes be immediately applied, we can easily target the optimal amount of basic research by increasing academic funding and allowing academics to license. But this prescription ignores the main complaint about Bayh-Dole, which is that academics begin, whether for personal or institutional reasons, to shift their work from high-spillover basic projects to low-spillover applied projects. That is, it is not obvious the moral hazard problem concerning targeting of subsidies is any easier at the academic level than at the private firm level. In any case, this paper is very interesting, and well worth a look.

September 2013 Working Paper (RePEc IDEAS version).

“Patents and Cumulative Innovation: Causal Evidence from the Courts,” A. Galasso & M. Schankerman (2013)

Patents may encourage or hinder cumulative invention. On the one hand, a patentholder can use his patent to ensure that downstream innovators face limited competition and thus have enough rents to make it worthwhile to develop their product. On the other hand, holdup and other licensing difficulties have been shown in many theoretical models to make patents counterproductive. Galasso and Schankerman use patent invalidation trials to try to separate these effects, and the broad strokes of the theory appear to hold up: on average, patents do limit follow-up invention, but this limitation appears to result solely from patents held by large firms, cited by small firms, in technologically complex areas without concentrated market power.

The authors use a clever IV to generate this result. The patent trials they look at are decided by panels of three judges, selected at random. Looking at other cases the individual judges have tried, we can estimate each judge's proclivity to strike down a patent, and thus predict the probability that a given panel will strike down a given patent. That is, the proclivity of the judges to strike down patents is a nice IV for whether the patent is actually struck down. In the second stage of the IV, they investigate how this predicted probability of invalidation, along with covariates and the pre-trial citation path, affects post-trial citations. And the impact is large: on average, citations increase 50% following an invalidation (and indeed, the Poisson IV estimate mentioned in a footnote, which seems more justified econometrically to me, is even larger).
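Here is a stylized sketch of how a leniency instrument of this sort is built and used. The column names are hypothetical, the leave-one-out construction and the manual two-stage regression are simplifications rather than the authors' actual procedure, and the second-stage standard errors computed this way would not be valid in practice:

```python
import pandas as pd
import statsmodels.api as sm

def judge_leniency(cases: pd.DataFrame) -> pd.Series:
    """Leave-one-out invalidation rate: for each (case, judge) row, the judge's
    rate of striking down patents across all of his or her OTHER cases."""
    g = cases.groupby("judge")["invalidated"]
    total, n = g.transform("sum"), g.transform("count")
    return (total - cases["invalidated"]) / (n - 1)

def iv_citations(cases: pd.DataFrame):
    cases = cases.copy()
    cases["leniency"] = judge_leniency(cases)
    # Collapse to one row per case; the instrument is the mean leniency of the panel
    panel = cases.groupby("case").agg(invalidated=("invalidated", "first"),
                                      panel_leniency=("leniency", "mean"),
                                      pre_cites=("pre_cites", "first"),
                                      post_cites=("post_cites", "first"))
    # First stage: actual invalidation on the panel's predicted proclivity
    fs = sm.OLS(panel["invalidated"],
                sm.add_constant(panel[["panel_leniency", "pre_cites"]])).fit()
    panel["invalid_hat"] = fs.fittedvalues
    # Second stage: post-trial citations on predicted invalidation
    ss = sm.OLS(panel["post_cites"],
                sm.add_constant(panel[["invalid_hat", "pre_cites"]])).fit()
    return fs, ss
```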

There is, however, substantial heterogeneity. Estimating a marginal treatment effect (using a trick of Heckman and Vytlacil's) suggests that the biggest impact of invalidation is on patents whose unobservables make them less likely to be overturned. To investigate this heterogeneity further, the authors run their regressions again including measures of technology class concentration (what % of patents in a given subclass come from the top few patentees) and industry complexity (using the Levin survey). They also note how many patents the patentee involved in the trial received in the years around the trial, as well as the number of patents received by those citing the patentee. The harmful effect of patents on future citations appears limited to technology classes with relatively low concentration, complex technology classes, large firms holding the invalidated patent, and small firms doing the citing. These characteristics all match well with the types of technologies theory links to patent thickets, holdup potential or high licensing costs.

In the usual internal validity/external validity way, I don’t know how broadly these results generalize: even using the judges as an IV, we are still deriving treatment effects conditional on the patent being challenged in court and actually reaching a panel decision concerning invalidation; it seems reasonable to believe that the mere fact a patent is being challenged is evidence that licensing is problematic, and the mere fact that a settlement was not reached before trial even more so. The social welfare impact is also not clear to me: theory suggests that even when patents are socially optimal for cumulative invention, the primary patentholder will limit licensing to a small number of firms in order to protect their rents, hence using forward citations as a measure of cumulative invention allows no way to separate socially optimal from socially harmful limits. But this is at least some evidence that patents certainly don’t democratize invention, and that result fits squarely in with a growing literature on the dangers of even small restrictions on open science.

August 2013 working paper (No IDEAS version).

“Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842,” J. Bessen (2003)

This is a wonderful piece of theory-driven economic history. Everyone knows that machinery in the Industrial Revolution was "de-skilling", replacing craft workers with rote machine work. Bessen suggests, using data from mid-19th century mills in New England, that this may not be the case; capital is expensive and sloppy work can put it out of service, so you may want to train your workers even more heavily as you deepen capital. It turns out that it is true that literate Yankee girls were largely replaced by illiterate, generally Irish workers (my ancestors included!) at Lowell and Waltham, while simultaneously the amount of time spent training (off of piece-wages) increased, as did the number of looms run by each worker. How can we account for this?

Two traditional stories – that history is driven by the great inventor, or that the mill-owners were driven by philanthropy – are quickly demolished. The shift to more looms per worker was not the result of some new technology. Indeed, adoption of the more rigorous process spread slowly to Britain and southern New England. As for philanthropy, an economic model of human capital acquisition shows that the firms appear to have shifted toward unskilled workers for profit-based reasons.

Here's the basic idea. If I hire literate workers like the Yankee farm girls, I can better select high-quality workers, but these workers will generally return home to marry after a short tenure. If I hire illiterate workers, their initial productivity is lower but, having their families in the mill town, they are less likely to leave. Mill owners had a number of local methods to collude and earn rents, hence they had some margin to pay for training. Which type should I prefer? If there exist many trained illiterate workers in town already, I just hire them. If not, the higher the ratio of wage to cloth price, the more I am willing to invest in training; training takes time during which no cloth is made, but increases future productivity at any given wage.

Looking at the Massachusetts mill data, a structural regression suggests that almost all of the increase in labor productivity between 1834 and 1855 was the result of increasing effective worker experience, a measure of industry-specific human capital (and note that a result of this kind is impossible without some sort of structural model). Why didn’t firms move to illiterate workers with more training earlier? Initially, there was no workforce that was both skilled and stable. With cloth prices relatively high compared to wages, it was initially (as can be seen in Bessen’s pro forma calculation) much more profitable to use a labor system that tries to select high quality workers even though they leave quickly. Depressed demand in the late 1830s led cloth prices to fall, which narrowed the relative profitability of well-trained but stable illiterate workers as compared to the skilled but unstable farm girls. A few firms began hiring illiterate workers and training them (presumably selecting high quality illiterate workers based on modern-day unobservables). This slowly increased the supply of trained illiterate workers, making it more profitable to switch a given factory floor over to three or four looms per worker, rather than two. By the 1850s, there was a sufficiently large base of trained illiterate workers to make them more profitable than the farm girls. Some light counterfactual calculations suggest that pure profit incentive is enough to drive the entire shift.

What is interesting is that the shift to what was ex-post a far more productive system appears to hinge critically on social factors – changes in the nature of the local labor supply, changes in demand for downstream products, etc. – rather than on technological change embodied in new inventions or managerial techniques. An important lesson to keep in mind, as nothing in the above story had any Whiggish bias toward increasing productivity!

Final working paper (IDEAS version). Final paper published in the Journal of Economic History, 2003. I’m a big fan of Bessen’s work, so I’m sure I’ve mentioned before on this site the most fascinating part of his CV: he has no graduate degree of any kind, yet has a faculty position at a great law school and an incredible publication record in economics, notably his 2009 paper on socially inefficient patents with Eric Maskin. Pretty amazing!

“Does Knowledge Accumulation Increase the Returns to Collaboration?,” A. Agrawal, A. Goldfarb & F. Teodoridis (2012)

The size of academic research "teams" has been increasing, inexorably, in essentially every field over the past few decades. This may be because of bad incentives for researchers (as Stan Liebowitz has argued), or because more expensive capital is required for research as in particle physics, or because communication technology has decreased the cost of collaboration. A much more worrying explanation is, simply, that reaching the research frontier is getting harder. This argument is most closely associated with my adviser Ben Jones, who has noticed that while team size has increased, the average age at which star researchers do their best work has also increased, the number of co-inventors per invention has increased, and the number of researchers doing work across fields has decreased. If the knowledge frontier is becoming more expensive to reach, theory suggests a role for greater subsidization of early-career researchers, and a potential for development traps due to the complementary nature of specialized fields.

Agrawal et al use a clever device to investigate whether the frontier is indeed becoming more burdensome. Note that the fact that science advances does not mean, ipso facto, that reaching the frontier is harder: new capital like computers or Google Scholar may make it easier to investigate questions or get up to date in related fields, and certain developments completely subsume previous developments (think of, say, how a user of dynamic programming essentially does not need to bother learning the calculus of variations; the easier but more powerful technique makes the harder but less powerful technique unnecessary). Agrawal et al's trick is to look at publication trends in mathematics. During the Soviet era, mathematics within the Soviet Union was highly advanced, particularly in certain areas of functional analysis, but Soviet researchers had little ability to interact with non-Soviets and they generally published only in Russian. After the fall of the Soviet Union, there was a "shock" to the knowledge frontier in mathematics as these top Soviet researchers began interacting with other mathematicians. A paper by Borjas and Doran in the QJE last year showed that Soviet mathematics was great in some areas and pretty limited in others. This allows for a diff-in-diff strategy: look at the change in team size following 1990 in fields where Soviets were particularly strong versus fields where the Soviets were weak.

Dropping papers with a Russian-named coauthor and classifying papers by field using data from the AMS, the authors find that papers in Soviet-rich fields saw the average number of coauthors increase from 1.34 to 1.78, whereas teams in Soviet-weak fields grew only from 1.26 to 1.55. This difference appears quite robust, and is derived from hundreds of thousands of publications. To check that Soviet-rich fields actually had influence, they note that papers in Soviet-rich subfields cited Soviet-era publications at a greater rate after 1990 than papers in Soviet-poor subfields, and that the increase in coauthoring tended to be driven by papers with a young coauthor. The story here is, roughly, that Soviet émigrés tooled up young researchers in Soviet-rich fields, and those young coauthors then had a lot of complementary skills which might drive collaboration with other researchers.
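The identification strategy is a standard difference-in-differences. A back-of-envelope version using the reported means, plus a sketch of the regression one would run on paper-level data (the variable names are hypothetical and this is not the authors' exact specification):

```python
import statsmodels.formula.api as smf

# Back-of-envelope from the means reported above:
did = (1.78 - 1.34) - (1.55 - 1.26)
print(did)  # roughly 0.15 extra coauthors per paper in Soviet-rich subfields after 1990

# Regression version on paper-level data (df columns are hypothetical);
# in practice one would add subfield and year fixed effects and cluster the errors.
def did_regression(df):
    # The coefficient on soviet_rich:post1990 is the diff-in-diff estimate
    return smf.ols("n_authors ~ soviet_rich * post1990", data=df).fit()
```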

So it appears that the increasing burden of the knowledge frontier does drive some of the increase in team size. The relative importance of this factor, however, is tough to tease out without some sort of structural model. Getting around the burden of knowledge by making it easier to reach the frontier is also worthy of investigation – a coauthor and I have a pretty cool new paper (still too early to make public) on exactly this topic, showing an intervention that has a social payoff an order of magnitude higher than funding new research.

Oct 2012 working paper (no IDEAS version). As a sidenote, the completely bizarre “copyright notice” on the first page is about the most ridiculous thing I have seen on a working paper recently: besides the fact that authors hold the copyright automatically without such a notice, the paper itself is literally about the social benefits of free knowledge flows! I can only hope that the copyright notice is the result of some misguided university policy.

“A Penny for your Quotes: Patent Citations and the Value of Innovation,” M. Trajtenberg (1990)

This is one of those classic papers where the result is so well-known I’d never bothered to actually look through the paper itself. Manuel Trajtenberg, in the late 1980s, wrote a great book about Computed Tomography, or CAT scans. He gathered exhaustive data on sales by year to hospitals across the US, the products/attributes available at any time, and the prices paid. Using some older results from economic theory, a discrete choice model can be applied to infer willingness-to-pay for various types of CAT scanners over time, and from there to infer the total social surplus being generated at any time. Even better, Trajtenberg was able to calculate the lifetime discounted value of innovations occurring during any given period by looking at the eventual diffusion path of those technologies; that is, if a representative consumer is willing to pay Y in 1981 for CAT scanner C, and the CAT scanner diffused to 50 percent market share over the next five years, we can integrate the willingness to pay over the diffusion curve to get a rough estimate of the social surplus generated. CAT innovations during their heyday (roughly the 1970s, before MRI began to diffuse) generated about 17 billion dollars of surplus in 1982 dollars.
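The mechanics of that last step look roughly like the following back-of-envelope; the numbers, the discount rate, and the logistic diffusion curve are all illustrative rather than Trajtenberg's estimates:

```python
import numpy as np

def discounted_surplus(wtp_per_hospital, n_hospitals, years=10, r=0.05,
                       midpoint=3.0, speed=1.2):
    """Integrate per-period willingness to pay over a logistic diffusion curve,
    discounting back to the year the innovation is introduced."""
    total = 0.0
    for t in range(years):
        share = 1.0 / (1.0 + np.exp(-speed * (t - midpoint)))  # S-shaped adoption
        total += share * n_hospitals * wtp_per_hospital / (1 + r) ** t
    return total

print(discounted_surplus(wtp_per_hospital=200_000, n_hospitals=1_000))
```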

That alone is interesting, but Trajtenberg takes this fact one step further. There has long been a debate about whether patent citations tell you much about actual innovation. We know from a variety of sources that most important inventions are not patented, that many low-quality inventions of little social value are patented, and that patents are used in enormously different ways depending on market structure. Since Trajtenberg has an actual measure of social welfare created by newly-introduced products in each period, a measure of industry R&D in each period, and a count of patents issued in CT in each period (nearly 500 in total), he can check directly: is patenting activity actually correlated with socially beneficial innovation?

The answer, it turns out, is no. A count of patents, at any reasonable lag and with any restriction to "core" CT firms or otherwise, never has a correlation with the change in total social value of more than .13. On the other hand, patents lagged five months have a correlation of .933 with industry R&D. No surprise: R&D appears to buy patents at a pretty constant rate, but not to buy important breakthroughs. This doesn't, however, mean patent data is worthless to the analyst. Instead of looking at raw patent counts, we can look at citation-weighted patents. A patent that gets cited 10 times is surely more important than one which is issued and never heard from again. Weighting patents by citation count, the correlation between the number of weighted patents (lagged a few months to give products time to reach the market) and total social welfare created is in the area of .75! This result has been confirmed many, many times since Trajtenberg's paper. Harhoff et al (1999) found, using survey data, that each single citation to a highly-cited patent signals roughly an additional million US dollars of private value. Hall, Jaffe and Trajtenberg (2005) found, using Tobin's Q on stock market data and holding firm R&D and total patent counts constant, that an additional patent citation improves firm value by an average of 3%.
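In code, the comparison amounts to correlating period-level welfare changes with raw versus citation-weighted patent counts. A sketch with hypothetical column names; the 1 + citations weighting is one common convention, not necessarily Trajtenberg's exact weighting:

```python
import numpy as np
import pandas as pd

def patent_welfare_correlations(patents: pd.DataFrame, welfare: pd.DataFrame, lag=1):
    """patents: one row per patent with 'grant_period' and 'forward_cites';
    welfare: one row per period with 'period' and 'delta_w' (surplus created)."""
    by_period = patents.groupby("grant_period").agg(
        n_patents=("forward_cites", "size"),
        cite_weighted=("forward_cites", lambda c: (1 + c).sum()))
    # Lag the patent measures, then line them up with the welfare series
    merged = welfare.join(by_period.shift(lag), on="period").dropna()
    return (np.corrcoef(merged["n_patents"], merged["delta_w"])[0, 1],
            np.corrcoef(merged["cite_weighted"], merged["delta_w"])[0, 1])
```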

Final 1990 RAND copy (IDEAS page).

“Inventors, Patents and Inventing Activities in the English Brewing Industry, 1634-1850,” A. Nuvolari & J. Sumner (2013)

Policymakers often assume that patents are necessary for inventions to be produced or, if the politician is sophisticated, for a market in knowledge to develop. Economists are skeptical of such claims, for theoretical and empirical reasons. For example, Petra Moser has shown how few important inventions are ever patented, and Bessen and Maskin have a paper showing how the existence of patents can slow down innovation in certain technical industries. The literature more generally often notes how heterogeneous appropriation strategies are across industries: some rely entirely on trade secrets, others on open source sharing, and yet others on patent protection.

Nuvolari and Sumner look at the English brewing industry from the 17th to the 19th century. This industry was actually quite innovative, most famously through the (perhaps collective) invention of that delightful winter friend named English Porter. The two look in great detail through lists of patents prior to 1850, and note that, despite the importance of brewing and its technical complexity, beer-related patents make up less than one percent of all patents granted during that period. Further, they note that there are enormous differences in patenting behavior within the brewing industry. Nonetheless, even in the absence of patents, there still existed a market for ideas.

Delving deeper, the authors show that many patentees were seen more as charlatans than as serious inventors. The most important inventors tended to either keep their inventions secret within their firm or guild, keep them partially secret, publicize completely in order to enhance the status of their brewery as "scientific", or publicize completely in order to garner consulting or engineering contracts. The partial secrecy and status-enhancing publicity strategies are particularly interesting. Humphrey Jackson, an aspiring chemist, sold a book with many technical details left as blank spots; those who paid to attend his lecture could fill in the details of his processes, though the existence of the lecture was predicated on sufficiently many people buying the book! James Bavestock, a brewer in Hampshire, brought his hydrometer to the attention of the prominent London brewer Henry Thrale; in exchange, Thrale could organize entry into the London market, or provide a job in Thrale's brewery should the small Hampshire concern go under.

2012 Working Paper (IDEAS version). This article appeared in the new issue of Business History Review, which was particularly good; it also featured, among others, a review on markets for knowledge in 19th century America which will probably be the final publication of the late Kenneth Sokoloff, and a paper by the always interesting Zorina Khan on international technology markets in the 19th century. Many current issues, such as open source, patent trolls, etc. are completely rehashing similar questions during that period, so the articles are well worth a look even for the non-historian.

“Do Inventors Value Secrecy in Patenting? Evidence from the American Inventor’s Patenting Act of 1999,” S. Graham & D. Hegde (2013)

The patent system has many ridiculous properties for us economists to grouse about (Boldrin and Levine have a well-known book on the topic, but I think you'll find James Bessen's tome the best). The problem of disclosure, whereby patentholders are meant to hand over a description of how their design works in exchange for the limited monopoly that is a patent, is particularly strange. First, it is not obvious at all that disclosure matters as a method for spreading information – as Petra Moser and coauthors have shown, it is very easy to be obtuse enough in some types of patents that a well-trained outsider will find it impossible to construct the original device, and further, in many cases simply seeing the patented device is sufficient to understand how it works. Second, since patents are not disclosed immediately upon application, there is quite a bit of scope for the dreaded "submarine patent": I see someone infringing, but I don't accuse them of anything until they do a bunch of work building up the industry, and then at the peak I pop up and sue them for all they are worth.

Now, a 1999 law, the AIPA, had a provision meant to restrict submarines somewhat. Eighteen months after applying, your patent application is made public; note that it can take many years for the actual patent to be granted. There are exceptions here, but basically, you can only request that the application be kept secret for longer if you never want foreign patent protection; you can also request earlier disclosure of the application, though the authors do not examine that option specifically in the current draft. A number of thinkers (among them both Samuelson and Friedman!) were worried that this disclosure requirement would be harmful to small inventors, who often lack the legal ability to sue early infringers, particularly when the infringers are foreign firms.

We have data to settle the issue now (Stuart Graham, an academic, is the Chief Economist at the US Patent and Trademark Office; if you want evidence of how bad the economic reasoning behind our IP policy is, note that Graham is the first Chief Economist ever appointed at the PTO!). Graham and Hegde collect data, harmonized in an EU database, on all US patent applications since the policy went into place, and code them by whether the applicant also filed for a foreign patent, the technology class, and whether the applicant was a large firm, a small firm or solo inventor, or a foreign firm. Overall, less than 8% of applicants requested secrecy; of those who could have requested secrecy because they never file overseas, about 15% did so. Small inventors' preferences are fairly similar to those of large firms. The patents that small inventors chose to keep secret appear to be, if anything, less important inventions: they receive fewer onward citations, have fewer claims, and are granted in a shorter amount of time (earlier papers suggested that breakthrough inventions tend to involve a lot of fine-tuning of their claims, hence longer waits between application and grant). In none of the technology classes are more than a quarter of applications kept secret. Unmentioned in the paper is a more recent fact: the percentage of applicants requesting secrecy continues to fall every year.

Given all this evidence against the importance of secrecy, it is perhaps no surprise that there is currently a bill in Congress that would remove these disclosure requirements. What can you do?

2013 working paper (No IDEAS version). If you want some other cool recent empirical work fighting bogus ideas about innovation (bogus, yes, but which nonetheless carry great weight in policy discussions!), check out the great work by Paul Heald at the U of Illinois law school concerning the question of “necessary property”. An argument by the content industries about why we should retroactively extend copyright (where, for sure, a 2013 law cannot affect incentives to create in 1920) is that IP without an owner will either be overexploited (Mickey in porno films) or not updated (no quality audiobooks of classics, say). Heald shows that music which falls out of copyright is no more or less likely to appear in modern films, and that bestselling books in the public domain (those written 1913-1922) are much more likely to have high-quality audiobook versions for sale than bestselling books still under copyright (those written in the following decade). Heald’s results put the lie to the argument that “content needs an owner to be exploited optimally”, but you don’t even need his research to know this: even if content needed an owner for efficient exploitation, what reason do we have to think that the previous copyright holder is the most efficient one? Why not, say, rotate the rights to the early Mickey Mouse films randomly among preservationists and film firms? Indeed, why not auction off the retroactive extension? (But of course, you know why we don’t do these things: because the executives and congressmen who supported the CTEA care and understand not a whit about the welfare analysis, but quite a bit about lobbying from their “I will never take a lobbying job” hack of an ex-colleague.)
