Category Archives: STS

Laboratory Life, B. Latour & S. Woolgar (1979)

Let’s do one more post on the economics of science; if you haven’t heard of Latour and the book that made him famous, all I can say is that it is 30% completely crazy (the author is a French philosopher, after all!), 70% incredibly insightful, and overall a must-read for anyone trying to understand how science proceeds or how scientists are motivated.

Latour is best known for two ideas: that facts are socially constructed (and hence science really isn’t that different from other human pursuits) and that objects/ideas/networks have agency. He rose to prominence with Laboratory Life, based on two years spent observing a lab: that of future Nobel laureate Roger Guillemin at the Salk Institute in La Jolla.

What he notes is that science is really strange if you observe it proceeding without any priors. Basically, a big group of people use a bunch of animals and chemicals and technical devices to produce beakers of fluids and points on curves and colored tabs. Somehow, after a great amount of informal discussion, all of these outputs are synthesized into a written article a few pages long. Perhaps, many years later, modalities about what had been written will be dropped: “X is a valid test for Y” rather than “W and Z (1967) claim that X is a valid test for Y” or even “It has been conjectured that X may be a valid test for Y”. Often, the printed literature will later change its mind: “X was once considered a valid test for Y, but that result is no longer considered convincing.”

Surely no one denies that the last paragraph accurately describes how science proceeds. But recall the schoolboy description, in which there are facts in the world, and then scientists do some work and run some tests, after which a fact has been “discovered”. Whoa! Look at all that is left out! How did we decide what to test, or what particulars constitute distinct things? How did we synthesize all of the experimental data into a few pages of formal writeup? Through what process did statements begin to be taken for granted, losing their modalities? If scientists actually discover facts, then how can a “fact” be overturned in the future? Latour argues, and gives tons of anecdotal evidence from his time at Salk, that providing answers to those questions basically constitutes the majority of what scientists actually do. That is, it is not that the fact is out there in nature waiting to be discovered, but that the fact is constructed by scientists over time.

That statement can be misconstrued, of course. That something is constructed does not mean that it isn’t real; the English language is both real and uncontroversially socially constructed. Latour and Woolgar: “To say that [a particular hormone] is constructed is not to deny its solidity as a fact. Rather, it is to emphasize how, where and why it was created.” Or later, “We do not wish to say that facts do not exist nor that there is no such thing as reality. In this simple sense we are not relativist. Our point is that ‘out-there-ness’ is the consequence of scientific work rather than its cause.” Putting their idea another way, the exact same object or evidence can at one point be up for debate, or dismissed as a mere statistical artefact, later be treated as “settled fact”, and later still occasionally revert to being contested. That is, the “realness” of the scientific evidence is not a property of the evidence itself, which does not change, but a property of the social process by which science reifies that evidence into an object of significance.

Latour and Woolgar also have an interesting discussion of why scientists care about credit. The story of credit as a reward, or of credit-giving as a sort of gift exchange, is hard to square with certain facts about why people do or do not cite. Rather, credit can be seen as a sort of capital. If you are credited with a certain breakthrough, you can use that capital to get a better position, more equipment and lab space, etc. Without further breakthroughs for which you are credited, you will eventually run out of such capital. This is an interesting way to think about why and when scientists care about who is credited with particular work.

Amazon link. This is a book without a nice summary article, I’m afraid, so you’ll have to stop by your library.

“Why Did Universities Start Patenting?: Institution Building and the Road to the Bayh-Dole Act,” E. P. Berman (2008)

It goes without saying that the Bayh-Dole Act had huge ramifications for science in the United States. Passed in 1980, Bayh-Dole permitted (indeed, encouraged) universities to patent the output of federally-funded science. I think the empirical evidence is still not complete on whether this increase in university patenting has been good (perhaps more incentive to develop products based on university research), bad (patents generate static deadweight loss, and exclusive patent licenses limit future developers) or “worse than the alternative” (if the main benefit of Bayh-Dole is encouraging universities to promote their research to the private sector, we can achieve that goal without the deadweight loss of patents).

As a matter of theory, however, it’s hard for me to see how university patenting could be beneficial. The usual static tradeoff with patents is deadweight loss after the product is developed in exchange for the quasirents that incentivize the initial developer to pay the fixed costs of research. With university research, you don’t even get that benefit, since the research is being done anyway. This means you have to believe the “increased incentive for someone to commercialize” under patents is enough to outweigh the static deadweight loss; it is not even clear that there is any increased incentive in the first place. Scientists seem to understand what is going on: witness the license manager of the enormously profitable Cohen-Boyer recombinant DNA patent: “[W]hether we licensed it or not, commercialisation of recombinant DNA was going forward. As I mentioned, a non-exclusive licensing program, at its heart, is really a tax … [b]ut it’s always nice to say technology transfer.” That is, it is clear why cash-strapped universities like Bayh-Dole regardless of the social benefit.

In today’s paper, Elizabeth Popp Berman, a sociologist, poses an interesting question: how did Bayh-Dole ever pass, given the widespread antipathy toward “locking up the results of public research” in the decades before its passage? She makes two points of particular interest. First, it’s not obvious that there is any structural break in 1980 in university patenting, as university patents increased 250% in the 12 years before the Act and about 300% in the 12 years afterward. Second, this pattern holds because the development of the institutions and interested groups necessary for the law to change was a fairly continuous process, beginning perhaps as early as the creation of the Research Corporation in 1912. What this means for economists is that we should be much more careful about treating changes in law as “exogenous”, since law generally just formalizes already-changing practice, and that our understanding of economic events driven by rational agents acting under constraints ought sometimes focus more on the constraints and how they develop than on the rational action.

Here’s the history. Following World War II, the federal government became a far more important source of funding for university and private-sector science in the United States. Individual funding agencies differed in their patent policy; for instance, the Atomic Energy Commission essentially did not allow university scientists to patent the output of federally-funded research, whereas the Department of Defense permitted patents from their contractors. Patents were particularly contentious since over 90% of federal R&D in this period went to corporations rather than universities. Through the 1960s, the NIH began to fund more and more university science, and in 1963 they hired a patent attorney, Norman Latker, who was very much in favor of private patent rights.

Latker received support for his position from two white papers published in 1968 which suggested that HEW (the parent of the NIH) was letting medical research languish because it would not grant exclusive licenses to pharma firms, who in turn argued that without an exclusive license they would not develop the research into a product. The politics of these reports gave Latker enough bureaucratic power to freely develop agreements with individual universities allowing them to retain patents in some cases. The rise of these agreements led many universities to hire patent officers, who would later organize into a formal lobbying group pushing for more ability to patent federally-funded research. Note what is essentially going on: individual actors or small groups take actions in each period which change the payoffs of future games (partly by incurring sunk costs) or introduce additional constraints (reports that limit the political space for patent opponents, for example). The eventual passage of Bayh-Dole, and its effects, necessarily depend on that sort of institution building, which is often left unmodeled in economic or political analysis. Of course, the full paper has much more detail about how this program came to be, and is worth reading in full.

Final version in Social Studies of Science (gated). I’m afraid I could not find an ungated copy.

“Is Knowledge Trapped Inside the Ivory Tower?,” M. Bikard (2013)

Simultaneous discovery, as famously discussed by Merton, is really a fascinating idea. On the one hand, we have famous examples like Bell and Gray filing patents for a telephone on exactly the same day. On the other hand, when you investigate supposed examples of simultaneous discovery more closely, it is rarely the case that the discoveries are that similar. The legendary Jacob Schmookler described – in a less-than-politically-correct way! – historians who see patterns of simultaneous discovery everywhere as similar to tourists who think “all Chinamen look alike.” There is sufficient sociological evidence today that Schmookler largely seems correct: simultaneous discovery, like the “lucky” invention, is much less common than the man on the street believes (see, e.g., Simon Schaffer’s article on the famous story of the dragon dream and the discovery of benzene’s ring structure for a typical reconstruction of how “lucky” inventions actually happen).

Michaël Bikard thinks we are giving simultaneous discovery too little credit as a tool for investigating important topics in the economics of innovation. Even if simultaneous discovery is uncommon, it still exists. If there were an automated process to generate a large set of simultaneous inventions (on relatively more minor topics than the telephone), there would be tons of interesting questions we could answer, since we would have compelling evidence of the same piece of knowledge existing in different places at the same time. For instance, how important are agglomeration economies? Does a biotech invention get developed further if it is invented on Route 128 in Massachusetts instead of in Lithuania?

Bikard has developed an automated process to do this (and that linked paper also provides a nice literature review concerning simultaneous discovery). Scrape a huge number of articles and their citations, look for pairs of papers which were published at almost the same time and cited frequently in the future, and then restrict further to pairs whose “Jaccard index” implies that, when they are cited at all, they tend to be cited together. Applying this technique to the life sciences, he finds 578 examples of simultaneous discovery; chatting with a randomly selected sample of the researchers, most mentioned the simultaneous discovery without being asked, though at least one claimed his idea had been stolen! 578 is a ton: this is more than double the number that the historical analysis in Merton discovered, and as noted, many of the Merton multiples are not really examples of simultaneous discovery at all.
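
To make the filter concrete, here is a minimal sketch in Python of the kind of procedure Bikard describes. This is my own illustration, not his code: the data layout and all three thresholds are hypothetical placeholders.

```python
# Minimal sketch of twin detection, assuming each paper is a dict with an
# "id", a datetime.date "date", and a set "cited_by" of citing-paper ids.
from itertools import combinations

def jaccard(a, b):
    """Jaccard index of two citing sets: |A & B| / |A | B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def find_twins(papers, max_gap_days=90, min_cites=25, min_jaccard=0.5):
    """Return pairs of papers that look like simultaneous discoveries:
    published nearly together, well cited, and almost always co-cited."""
    twins = []
    for p, q in combinations(papers, 2):
        same_time = abs((p["date"] - q["date"]).days) <= max_gap_days
        well_cited = min(len(p["cited_by"]), len(q["cited_by"])) >= min_cites
        if same_time and well_cited and \
                jaccard(p["cited_by"], q["cited_by"]) >= min_jaccard:
            twins.append((p["id"], q["id"]))
    return twins
```

A high Jaccard index on the citing sets is what operationalizes “cited together whenever cited at all”; the date window and citation floor simply screen out coincidences among obscure papers.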

He then applies this dataset in a second paper, asking whether inventions in academia are used more often (because of the culture of openness) or whether private sector inventions are used more often in follow-up inventions (because the control rights can help even follow-up inventors extract rents). It turns out that private-sector inventors of the identical invention are three times more likely to patent, but even excluding the inventors themselves, the private sector inventions are cited 10-20% more frequently in future patents. The sample size of simultaneous academic-private discovery is small, so this evidence is only suggestive. You might imagine that the private sector inventors are more likely to be colocated near other private sector firms in the same area; we think that noncodified aspects of knowledge flow locally, so it wouldn’t be surprising that the private sector multiple was cited more often in future patents.

Heavy caveats are also needed on the sample. This result certainly doesn’t suggest that, overall, private sector workers are doing more “useful” work than Ivory Tower researchers, since restricting the sample to multiple discoveries limits the potential observations to areas where academia and the private sector are working on the same type of discovery. Certainly, academics and the private sector often work on different types of research, and openness is probably more important in more basic discoveries (where transaction or bargaining costs on follow-up uses are more distortionary). In any case, the method for identifying simultaneous discoveries is quite interesting indeed; if you are empirically minded, there are tons of interesting questions you could investigate with such a dataset.

September 2012 working paper (No IDEAS version). Forthcoming in Management Science.

“Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842,” J. Bessen (2003)

This is a wonderful piece of theory-driven economic history. Everyone knows that machinery in the Industrial Revolution was “de-skilling”, replacing craft workers with rote machine work. Bessen suggests, using data from mid-19th century mills in New England, that this may not be the case; capital is expensive and sloppy work can put it out of service, so you may want to train your workers even more heavily as you deepen capital. It is true that literate Yankee girls were largely replaced by illiterate, generally Irish workers (my ancestors included!) at Lowell and Waltham, but at the same time the amount of time spent training (off of piece-wages) increased, as did the number of looms run by each worker. How can we account for this?

Two traditional stories – that history is driven by the great inventor, or that the mill-owners were driven by philanthropy – are quickly demolished. The shift to more looms per worker was not the result of some new technology. Indeed, adoption of the more rigorous process spread slowly to Britain and southern New England. As for philanthropy, an economic model of human capital acquisition shows that the firms appear to have shifted toward unskilled workers for profit-based reasons.

Here’s the basic idea. If I hire literate workers like the Yankee farm girls, I can better select high-quality workers, but these workers will generally return home to marry after a short tenure. If I hire illiterate workers, their initial productivity is lower but, having their families in the mill town, they are less likely to leave. Mill owners had a number of local methods to collude and earn rents, hence they had some margin to pay for training. Which type should I prefer? If there exist many trained illiterate workers in town already, I just hire them. If not, the higher the ratio of the wage to the cloth price, the more I am willing to invest in training: training takes time during which no cloth is made, but it increases future productivity at any given wage.
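
To see which way that tradeoff cuts, here is a back-of-the-envelope calculation in the spirit of Bessen’s pro forma. The functional form and every number below are invented purely for illustration; the only point is the direction of the wage-to-cloth-price effect.

```python
# Toy training decision; all parameters are invented for illustration.
# Training idles a worker for train_months (wages still paid, cloth forgone),
# after which she tends three looms instead of two, so each trained worker
# saves the wages of the extra hands no longer needed.

def net_gain_from_training(cloth_price, wage, train_months=4, tenure=48,
                           yards_per_loom=400, looms_before=2, looms_after=3):
    # Cost: wages paid while she is idle, plus the cloth her looms would have made.
    cost = wage * train_months \
        + cloth_price * yards_per_loom * looms_before * train_months
    # Benefit: she now does the work of looms_after/looms_before untrained
    # workers, saving the wage difference every remaining month of her tenure.
    monthly_saving = wage * (looms_after / looms_before - 1)
    return monthly_saving * (tenure - train_months) - cost

for wage in (12.0, 20.0):
    gain = net_gain_from_training(cloth_price=0.10, wage=wage)
    print(f"wage ${wage:.0f}/month, cloth $0.10/yard: net gain {gain:+.0f}")
# -> training loses money at the low wage (-104) and pays at the high one (+40),
#    holding the cloth price fixed: a higher wage/price ratio favors training.
```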

Looking at the Massachusetts mill data, a structural regression suggests that almost all of the increase in labor productivity between 1834 and 1855 was the result of increasing effective worker experience, a measure of industry-specific human capital (and note that a result of this kind is impossible without some sort of structural model). Why didn’t firms move to illiterate workers with more training earlier? Initially, there was no workforce that was both skilled and stable. With cloth prices relatively high compared to wages, it was initially (as can be seen in Bessen’s pro forma calculation) much more profitable to use a labor system that tries to select high-quality workers even though they leave quickly. Depressed demand in the late 1830s led cloth prices to fall, which narrowed the profitability advantage of the skilled but unstable farm girls over the well-trained but stable illiterate workers. A few firms began hiring illiterate workers and training them (presumably selecting high-quality illiterate workers on characteristics unobservable in the surviving data). This slowly increased the supply of trained illiterate workers, making it more profitable to switch a given factory floor over to three or four looms per worker rather than two. By the 1850s, there was a sufficiently large base of trained illiterate workers to make them more profitable than the farm girls. Some light counterfactual calculations suggest that the pure profit incentive is enough to drive the entire shift.

What is interesting is that the shift to what was ex-post a far more productive system appears to hinge critically on social factors – changes in the nature of the local labor supply, changes in demand for downstream products, etc. – rather than on technological change embodied in new inventions or managerial techniques. An important lesson to keep in mind, as nothing in the above story had any Whiggish bias toward increasing productivity!

Final working paper (IDEAS version). Final paper published in the Journal of Economic History, 2003. I’m a big fan of Bessen’s work, so I’m sure I’ve mentioned before on this site the most fascinating part of his CV: he has no graduate degree of any kind, yet has a faculty position at a great law school and an incredible publication record in economics, notably his 2009 paper on socially inefficient patents with Eric Maskin. Pretty amazing!

“The Flexible Unity of Economics,” M. J. Reay (2012)

Michael Reay recently published this article on the economics profession in the esteemed American Journal of Sociology, and as he is a sociologist, I hope the econ navel-gazing can be excused. What Reay points out is that critical discourse about modern economics entails a paradox. On the one hand, economics is a unified, neoliberal-policy-endorsing monolith with great power; on the other hand, in practice economists often disagree with each other, and their memoirs are filled with sighs about how little their advice is valued by policymakers. In my field, innovation policy, there is a wonderful example of this impotence: the US Patent and Trademark Office did not hire a chief economist until – and this is almost impossible to believe – 2010. Lawyers with hugely different analytic techniques (I am being kind here) and policy suggestions ran, and continue to run, the show at every important world venue for patent and copyright policy.

How ought we explain this? Reay interviews a number of practicing economists in and out of academia. Nearly all agree on a core of techniques: mathematical formalism, a focus on incentives at the level of individuals, and a focus on unexpected “general equilibrium” effects. None of these core ideas really has anything to do with “markets” or their supremacy as a form of economic organization, of course; indeed, Reay points out that roughly the same core was used in the 1960s when economists as a whole were much more likely to support various forms of government intervention. Further, none of the core ideas suggest that economic efficiency need be prioritized over concerns like equity, as the technique of mathematical optimization says very little about what is to be optimized.

However, the choice of which questions to work on, and what evidence to accept, is guided by “subframes” that are often informed by local contexts. To analyze the power of economists, it is essential to focus on existing local power situations. Neoliberal economic policy enters certain Latin American countries hand-in-hand with political leaders already persuaded that government involvement in the economy must decrease, whereas it enters the US and Europe in a much more limited way due to countervailing institutional forces. That is, regardless of what modern economic theory suggests on a given topic, policymakers have their priors, and they will frame questions such that the advice their economic advisers give is limited in relation to those frames. Further, regardless of the particular institutional setup, the core ideas all economists share about what counts as evidence mean that the set of possible policy advice is not unbounded.

One idea Reay should have considered further, and which I think is a useful way for non-economists to understand what we do, is the question of why mathematical formalism is so central a part of the economics core vis-a-vis the other social sciences. I suggest that it is the economists’ historic interest in counterfactual policy that implies the mathematical formalism, rather than the other way around. A mere collection of data a la Gustav Schmoller can say nothing about counterfactuals; for this, theory is essential. Where theory is concerned, limiting the scope for gifted rhetoricians to win debates by de facto obfuscation requires theoretical statements to be made in a clear way, and the deductive consequences of those statements to be clear as well. Modern logic, roughly equivalent to the type of mathematics economists use in practice, does precisely that. I find the identification of “quantitative economics” with “numerical data” misleading, as it suggests that the data economists collect and use is the reason certain conclusions (say, neoliberal policy) follow. Rather, much of economics uses no quantitative data at all, and therefore it is the limits of mathematics as logic, rather than the limits of mathematics as counting, that must provide whatever implicit bias exists.

Final July 2012 AJS version (Note: only the Google Docs Preview allows the full article to be viewed, so I’ve linked to that. Sociologists, get on the open access train and put your articles on your personal websites! It’s 2012!)

“Innovation: The History of a Category,” B. Godin (2008)

What is innovation? What, indeed, is invention? I am confident that the average economist could not answer these questions. Is invention merely a novel process or idea? A novel process or idea for a given person? A new way of combining real resources like capital and labor? A new process which allows more of something to be created using a given amount of real resources? Does the new process need to be used, or embodied in technology, or is the idea enough?

None of these definitions seem satisfactory. A poem is a “new idea”, but we wouldn’t call it an invention. Novelty for a given person without technological embodiment, as a definition, doesn’t seem to distinguish between diffusion and simple learning. The idea of technology as a Solow residual means that merely using different mixtures of capital and labor to make the same product doesn’t qualify, and further the Solow residual includes things like Bowles-style adaptations to a more cooperative or trusting culture, which we generally don’t think of as innovation. Was Schumpeter correct that invention is a mere act of creativity “without importance to economic analysis”, or does the sequential nature of ideas mean that even non-embodied ideas are economically important?

In an interesting “genealogy of an idea”, Benoit Godin examines the history of how the terms invention and innovation were used in the Western World. The term invention goes back to Cicero, who listed the development of new argumentative concepts as one of the five tools of rhetoric. From the 15th to 19th centuries, invention was used occasionally to mean novel thoughts, but also novel recombinations (as in painting) or simple imitation (such as the patents given to importers in 18th century England).

It is really quite late in the game – well into the twentieth century – that something like “innovation is the invention, embodiment and diffusion of a commercial product” begins to be accepted as a definition. Part of this involves the shift from the individual inventor, the lone genius, to commercial firm R&D, as well as a recognition that simultaneous discovery and the ex-post construction of credit meant that the lone-genius inventor probably never existed. The terms discovery and invention began to separate. Science policy began to focus much more on the quantifiable: inventions as discoveries embodied in products or countable as patents. The word innovation took on the economic sense it carries today, rather than the artistic sense it previously possessed.

Even the economic definition that would eventually be adopted is not the only one that could have developed. Schumpeter is often recognized as the father of economic studies of technological change, but his definition of innovation includes many concepts no longer covered by that term. For Schumpeter, innovation was tightly linked to creative destruction, or the dynamic ability of economic change to remake the commercial sphere. The opening of new commercial markets, for example, was an important part of innovation, whereas pure science was not.

http://www.csiic.ca/PDF/IntellectualNo1.pdf (2008 Working Paper – this is still unpublished, as far as I can tell).

“The Credit Crisis as a Problem in the Sociology of Knowledge,” D. MacKenzie (2011)

(Hat tip to Dan Hirschman for pointing out MacKenzie’s article)

The financial crisis, it is quite clear by now, will be the worst worldwide economic catastrophe since the Great Depression. There are many explanations involving mistaken or misused economic theory, rapaciousness, political decisions, ignorance, and much more; two interesting examples here are Alp Simsek’s job market paper from a couple of years ago on the impact of overly optimistic potential buyers who need to get loans from sedate lenders (one takeaway for me was that financial problems can’t be driven by the ignorant masses, as they have no money), and Coval, Jurek and Stafford’s brilliant 2009 AER on catastrophe bonds (summary here), which points out how ridiculous it is to legally define risk in terms of default probability alone, since we have known for decades in theory that the value of an Arrow-Debreu security depends both on the payoffs in future states and on the relative prices in those states. A bond whose default occurs precisely in catastrophic states ought to trade at a much lower price – equivalently, offer a much higher yield – than the same bond whose default is negatively correlated with background risk.
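
To make the state-price logic explicit, consider a two-state illustration (the numbers are mine, purely for exposition): a good state with probability 0.95 where a dollar at the margin is worth m = 0.9, and a catastrophe with probability 0.05 where m = 3. A bond is priced by weighting payoffs by both probabilities and state prices:

```latex
\[
  P \;=\; \sum_{s} \pi(s)\, m(s)\, \mathbb{E}[x \mid s], \qquad
  \underbrace{P_A = 0.95 \cdot 0.9 \cdot 1 = 0.855}_{\text{defaults exactly in the catastrophe}}, \qquad
  \underbrace{P_B = 0.95 \cdot 0.9 \cdot 0.95 + 0.05 \cdot 3 \cdot 0.95 \approx 0.955}_{\text{same 5\% default risk, independent of the state}}.
\]
```

Bonds A and B have identical default probabilities, yet A is worth about ten percent less because its losses arrive precisely when a dollar is most valuable; rating both on default probability alone conflates them.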

But the catastrophe also involves a sociological component. Markets are made: they don’t arise from thin air. Certain markets don’t exist for reasons of repulsion, as Al Roth has mentioned in the context of organ sales. Other markets don’t exist because the value of the proposed good in that market is not clear. Removing uncertainty and clarifying the nature of a good is an important precondition, and one that economic sociologists, including Donald MacKenzie, have discussed at great length in their work. The evaluation of new products, perhaps not surprisingly, depends both on analogies to forms a firm has seen before and on the particular parts of the firm that handle the evaluation.

Consider the ABS CDO – a collateralized debt obligation whose underlying debt consists of securitized assets, most commonly mortgages. The ABS CDO market grew enormously in the 2000s, yet these products were not understood at nearly the level of traditional CDOs or ABSs, topics on which there are hundreds of research papers. ABS and CDO teams tended to be quite separate in investment banks and ratings agencies, with the CDO teams generally well trained in derivatives and the highly quantitative evaluation procedures of such products. For ABSs, particularly those built on US mortgages, the implicit government guarantee against default meant that prepayment risk was the most important factor when pricing such securities. CDO teams, used to working with corporate debt, treated the default correlation between the various firms in a given CDO as the most important metric.

MacKenzie gives exhaustive individual detail, but roughly, he does not blame greed or malfeasance for the massive default rates on even AAA-rated ABS CDOs. Rather, he describes how evaluation of ABS CDOs by ratings agencies used to dealing with either an ABS or a CDO, but not both, could lead to an utter misunderstanding of risk. While it is perfectly possible to “drill down” a complex derivative into its constituent parts, then subject the individual parts to a stress test against some macroeconomic hypothetical, this was rarely done, particularly by individual investors. MacKenzie also gives a brief story of why these assets, revealed in 2008 to be superbly high risk, were being held by the banks at all instead of sold off to hedge funds and pensions. Apparently, the assets held were generally ones with very low return and very low perceived risk which were created as a byproduct of the bundling that created the ABS CDOs. That is, an arbitrage was created when individual ABSs were bundled into an ABS CDO: the mezzanine and other tranches aside from the most senior AAA tranche were sold off, while the “basically risk-free” senior tranches were held by the bank, as they would have been difficult to sell directly. The evaluation of that risk, of course, was mistaken.
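
The correlation point can be demonstrated with a toy Monte Carlo. This is my own construction, not MacKenzie’s analysis or any agency’s model: two pools with identical per-loan default probabilities, differing only in how defaults are correlated, imply wildly different risks for a senior tranche.

```python
import random

def senior_hit_prob(corr, n_loans=100, pd=0.05, attach=0.20, trials=20_000):
    """Probability that pool losses exceed `attach`, impairing the senior
    tranche, in a one-factor toy model of default correlation."""
    hits = 0
    for _ in range(trials):
        # With probability corr a loan copies one common draw; otherwise it
        # defaults independently. Marginal default probability is pd either way.
        common = random.random() < pd
        defaults = sum(
            common if random.random() < corr else (random.random() < pd)
            for _ in range(n_loans)
        )
        hits += defaults / n_loans > attach
    return hits / trials

for corr in (0.0, 0.9):
    print(f"correlation {corr}: senior tranche impaired "
          f"with probability {senior_hit_prob(corr):.3f}")
# -> roughly 0.000 with independent defaults, roughly 0.050 when defaults
#    are highly correlated, despite identical loan-level default rates.
```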

This is a very interesting descriptive presentation of what happened in 2007 and 2008.

http://www.socialwork.ed.ac.uk/__data/assets/pdf_file/0019/36082/CrisisRevised.pdf (Final version from the May 2011 American Journal of Sociology)

“737-Cabriolet: The Limits of Knowledge and the Sociology of Inevitable Failure,” J. Downer (2011)

Things go wrong. Nuclear power plants melt down. Airplanes fall from the sky. Wars break out even when both parties mean only to bluff. Financial shocks propagate in unexpected ways. There are two traditional ways of thinking about these events. First, we might look for the cause and apportion blame for such an unusual event. Company X used cheap, low-quality glue. Bureaucrat Y was poorly trained and made an obviously-incorrect decision. In these cases, we learn from our mistakes, and the mistakes are often not simply problems of engineering, but sociological problems: Why did the social setup of a group fail to catch the mistake? The second type of accident, the “normal accident” described famously by Charles Perrow, offers no lessons and is uncatchable in hindsight because it is too regular. That is, if a system is suitably complex, and if minor effects all occur roughly simultaneously, then the one-in-a-billion combination of minor effects can cause a serious problem. Another way to put this is that even if disasters are one-in-a-billion events, a system which throws out billions of possible disasters of this type is likely to produce one. The most famous case here is Three Mile Island, where among the many failsafes which simultaneously went awry was an indicator light that happened, on the fateful day, to have been blocked by a Post-It note.
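
The arithmetic behind that last sentence is worth one line. If each of n independent opportunities for disaster has tiny probability p, then

```latex
\[
  \Pr[\text{at least one disaster}] \;=\; 1 - (1-p)^n \;\approx\; 1 - e^{-np},
\]
```

so with p = 10^-9 and n = 10^9 the chance of at least one catastrophe is about 1 - 1/e, or 63%: billion-to-one failures become near coin flips at scale.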

John Downer proposes a third category, the “epistemic accident,” which is perhaps well understood by engineers and scientists, but not by policymakers. An epistemic accident occurs when a problem arises from an error or a gap in our understanding of the world at the time we designed the system. Epistemic accidents are not normal accidents, since once they happen we can correct the underlying error, and since they do not depend on a rare concordance of events. But they also do not lend themselves to blame, since at the time they happen, the scientific knowledge necessary to prevent them was not yet known. This is a fundamentally constructivist way of viewing the world. Constructivism says, roughly, that there is no Platonic Ideal for science to reach. Experiments are theory-laden and models are necessarily abstract. This does not mean science is totally relative or pointless, but rather that it is limited, and we will always be, on occasion, surprised by how our models (and this is true in social science as well!) perform in the “real world”. Being cognizant of the limits of scientific knowledge is important for evaluating accidents: particularly innovative systems will be more prone to epistemic accidents, for one.

Downer’s example is the famous Aloha Airlines Flight 243 accident in 1988. On a routine flight from Hilo to Honolulu, a large section of the fuselage ripped right off of a 737, exposing a huge chunk of the passenger cabin while the plane was traveling at full speed. Luckily, the plane was not far from Maui and managed to land with only one death – passengers, while themselves strapped in, had to lean over and hold down a stewardess who was lying in the aisle in order to keep her from flying out of the plane. This was shocking, since the 737 was built with multiple failsafes to ensure that such a rupture could not happen; roughly, a rupture was believed possible only if a crack many feet long developed in the airplane skin, and such a crack would have been caught at a much smaller stage by regular maintenance.

It turns out that testing of the plane had missed two things. First, the combination of the glue being used with salt-heavy air made cracks more likely, and second, the way the rivets were lined up happened to make metal fatigue compound, as the minor cracks near each rivet connected with one another. And indeed, even in the minor world of massive airplane decompression, this was not the first “epistemic accident”. The reason airplane windows are oval and not square is to avoid almost exactly the same problem: some British-made Comets crashed in the 1950s, and the interaction of metal fatigue with their square windows was found to be the culprit.

What does this mean for economics? I think it means quite a bit for policy. Complicated systems will always have problems that are beyond the bounds of designers to understand, at least until the problem arises. New systems, rather than existing systems, will tend to see these problems, as we learn over time what is important to include in our models and tests, and what is not. That is, the “flash crash” looks a lot like a “normal accident”, whereas the financial crisis has many aspects that look like epistemic accidents. New and complicated systems, such as those introduced in the financial world, should be handled in a fundamentally conservative way by policymakers in order to deal with the uncertainty in our models. And it’s not just finance: we know, for instance, of many unforeseen methods of collusion that have stymied even well-designed auctions constructed by our best mechanism designers. This is not strange, or a failure, but rather part of science, and we ought be upfront about it.

Google Docs Link (The only ungated version I can find is the Google Docs Quick View above which happens to sneak around a gate. Sociologists, my friends, you’ve got to tell your publishers that it’s no longer acceptable in 2012 to not have ungated working papers! If you have JSTOR access, and in case the link above goes dead, the final version in the November 2011 AJS is here)

“Note on the Theory of the Economy of Research,” C. S. Peirce (1879)

Though this site is devoted generally to new research, the essay discussed in this post, I trust, will be new enough to the vast majority of readers. Charles Sanders Peirce is a titan of analytic philosophy, and there is certainly a case to be made that he is the greatest American philosopher of all time. He also has had a fairly well-known indirect influence on economics: Peirce was in some ways rediscovered by the great mathematician Alfred Tarski, who then taught Kenneth Arrow, and in doing so may have introduced Peirce’s relational algebra to the field of economics. (You may be thinking, relational algebra, what is that? But you certainly know what it is: take a set, apply a perhaps partial, often binary ordering with certain properties, then prove results. This surely describes every modern introduction to the theory of preferences, does it not?) But Peirce also has an essay more directly on economics that is fascinating to see in retrospect. This Peirce essay is reprinted in Phil Mirowski’s book “Science Bought and Sold” along with notes on the essay by James Wible which I shall also draw from.

Two final things. First, I note, if only to myself, the following quote from Peirce to be used in a future research paper of my own: “Economical science is particularly profitable to science; and that of all the branches of economy, the economy of research is the most profitable.” Second, check out where this essay was published: the annual report of the U.S. government Coast Survey of 1879! No wonder it has been overlooked. If you know anything of the biography of Peirce, though, there is not much surprising in this odd location. Peirce was supposedly such a nut that, despite obvious brilliance, he was repeatedly blackballed from academic appointments by future colleagues around the country!

Wible claims, and I also know of no earlier such work, that this Peirce essay is the earliest mathematical work on the theory of invention. And given the intellectual history, this seems almost certain to be so. The essay was written right at the cusp of the marginal revolution and mathematical political economy, Peirce is known to have been familiar with the few scraps of earlier mathematical economics like Cournot’s famous 1838 essay, and Peirce is the father of a philosophical school for which selecting the best line of research to examine in order to learn inductively was a pressing concern. If you’ve ever read economics articles from the middle of the 19th century, this one will shock you: in style, I think it is essentially publishable today. It looks like 21st century economics. There are marginal tradeoffs. There is social science done by mathematical manipulation of heavily abstracted concepts. There is even a Marshallian diagram! It’s phenomenal. Since this looks like modern economics, let’s discuss it like modern economics; what does Peirce’s theory say?

As he introduced it, “I considered this problem. Somebody furnished a fund to be expended upon research without restrictions. What sort of researches should it be expended upon?” Essentially, there are some scientific problems which we understand only vaguely; you may think of this purely qualitatively, or as meaning something is measured to within some confidence interval. There are diminishing returns to science, so that while decreasing error can be done at linear cost, the utility gained from such reduction is concave (the inverse is quadratic in Peirce’s formulation). There is a total fixed research budget. What should be worked on first? Note that this paper was first written in 1876: there is no stochastic learning or any such thing, as the mathematics to discuss bandits and related objects was not yet developed. Learning is purely deterministic here.

Solving that constrained maximization problem gives the now-familiar, but then-nonexistent, result that we should compare the ratio of marginal utility to marginal cost (MU/MC) across different projects. Peirce called this ratio the “economic urgency” of a given line of research. He notes that, given those functional form assumptions, new research fields where we know very little are particularly worthwhile investments: the gains from increasing our knowledge are exponential in our ignorance, whereas the cost is linear. As an example, an early chemist with simple vials was able to produce results with more social utility than a thousand chemists working in Peirce’s day with all sorts of modern equipment. Peirce also derives a result concerning sampling which is a bit opaque for modern readers, given that it is couched in terms of “accidental probable error” rather than confidence intervals; nonetheless, it is very Wald-esque in that it explicitly argues that the optimal sample size in an experiment depends crucially on the budget, the costs of sampling, and the utility of the inferences learned from that sampling. Such considerations are absolutely ignored in a lot of research design even today!
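
In modern notation (my reconstruction, not Peirce’s own symbols), his problem is a textbook constrained maximization: choose spending x_i on each line of research i, with concave utility U_i and constant marginal cost c_i, subject to the fixed fund B:

```latex
\[
  \max_{x_1,\dots,x_n} \; \sum_i U_i(x_i)
  \quad \text{s.t.} \quad \sum_i c_i x_i \le B
  \qquad \Longrightarrow \qquad
  \frac{U_i'(x_i)}{c_i} \;=\; \lambda \quad \text{for every funded line } i.
\]
```

At the optimum the “economic urgency” U_i'/c_i is equalized across active projects; a young field with a very high marginal utility of knowledge absorbs funds until its urgency falls to the common level λ.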

http://books.google.com/books?id=ux79s_IhpFYC (Both Peirce’s original essay and Wible’s commentary appear in “Science Bought and Sold,” edited by Mirowski and Sent. The Google Books Preview is generous enough here for you to read the entirety of both essays; I do not see any other ungated copies of either online.)

“What Does it Mean to Say that Economics is Performative?,” M. Callon (2007)

With the last three posts being heavy on mathematical-economic theory, let’s go 180 degrees and look at this recent essay – the introduction of a book, actually – by Michel Callon, one of the deans of actor-network theory (along with Bruno Latour, of course). I know what you’re thinking: a French sociologist of science who thinks objects have agency? You’re probably running away already! But stay; I promise it won’t be so bad. And as Callon mentions, sociologists of science and economic theory have a long connection: Robert K. Merton, legendary author of The Sociology of Science, is the father of Robert C. Merton, the Nobel-winning economist.

The concept here is performativity in economics. An essay in the JEL by William Baumol and a coauthor tried to examine whether economic theory had made any major contributions. Of the nine theories they studied (marginalism, Black-Scholes, etc.), only a couple could reasonably be said to have been invented and disseminated by academic economists. But performativity asks a different question. It suggests that, rather than theories being true or false, they are accepted or not accepted, and there are many roles to be played in this acceptance process by humans and non-humans alike. For example, the theory of Black-Scholes could be accepted in academia, but to be performed by a broader network, certain technologies were needed (frequent stock quotes), market participants needed to believe the theory, and regulators needed to be persuaded (that, for one, options are not just gambling); this process is reflexive, and the way the theory is performed feeds back into the construction of novel theories. A role exists for economists as scientists across this entire performance.

The above does not simply mean that beliefs matter, or that economic theories are “performed” as self-fulfilling prophecies. Callon again: “The notion of expression is a powerful vaccination against a reductionist interpretation of performativity; a reminder that performativity is not about creating but about making happen.” Not all potential self-fulfilling prophecies are equal: traders did in fact use Black-Scholes, but they never began to use sunspots to coordinate. Sometimes theories outside academia are performed in economics: witness financial chartism. It’s not about “truth” or “falsehood”: Callon’s school of sociology/anthropology is fundamentally agnostic.

There is an interesting link between the jargon of the actor-network theory literature and standard economics. I think you can see it in the following passage:

“In the paper world to which it belongs, marginalist analysis thrives. All it needs are some propositions on decreasing returns, the convexity of utility curves, and so forth. Transported into an electricity utility (for example Electricité de France), it needs the addition of time-of-day meters set up wherever people consume electricity and without which calculations are impossible; introduced into a private firm, it requires analytical accounting and a system of recording and cost assessment that prove to be hardly feasible. This does not mean that marginalist analysis has become false. As everyone knows, it is still true in (most) universities.”

Economists will see a quote like the above and think: surely there is something more to this theory of performance than information economics and technological constraints. But really there isn’t. Rather, we economists generally do not model why information is the way it is, or why certain agents get certain signals. A lot of this branch of sociology should be read as an investigation into how agents (including nonhumans, such as firms) get, or search for, information, particularly to the extent that such a search is reflexive to a new economic theory being proposed.

http://halshs.archives-ouvertes.fr/docs/00/09/15/96/PDF/WP_CSI_005.pdf (July 2006 working paper – final version published in MacKenzie et al. (Eds.), Do Economists Make Markets?, Princeton University Press)
