“737-Cabriolet: The Limits of Knowledge and the Sociology of Inevitable Failure,” J. Downer (2011)

Things go wrong. Nuclear power plants melt down. Airplanes fall from the sky. Wars break out even when both parties mean only to bluff. Financial shocks propagate in unexpected ways. There are two traditional ways of thinking about these events. First, we might look for the cause and apportion blame for such an unusual event: Company X used cheap, low-quality glue; Bureaucrat Y was poorly trained and made an obviously incorrect decision. In these cases, we learn from our mistakes, and the mistakes are often not simply problems of engineering but sociological problems: why did the social setup of the group fail to catch the error? The second type of accident, the “normal accident” described famously by Charles Perrow, offers no lessons and could not have been caught beforehand, because it stems not from any single mistake but from a freak conjunction of minor faults. That is, if a system is suitably complex, and if minor failures happen to occur roughly simultaneously, then a one-in-a-billion combination of them can cause a serious problem. Put another way, even if each disaster is a one-in-a-billion event, a system which throws off billions of opportunities for such combinations is likely to produce one eventually. The most famous case here is Three Mile Island, where among the many failsafes which simultaneously went awry was an indicator light that happened, on the fateful day, to have been obscured by a paper maintenance tag.
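
To make the “one-in-a-billion” arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the per-combination probability and the number of opportunities are made-up figures for illustration, not numbers from Perrow or Downer.

```python
# Back-of-the-envelope illustration (hypothetical numbers): a disaster that is
# one-in-a-billion per opportunity, in a system complex enough to generate
# a couple of billion such opportunities over its lifetime.
p_per_combination = 1e-9          # chance any given conjunction of minor faults is catastrophic
n_opportunities = 2_000_000_000   # how many such conjunctions the system throws off

# P(at least one disaster) = 1 - (1 - p)^n
p_at_least_one = 1 - (1 - p_per_combination) ** n_opportunities
print(f"P(at least one disaster) ≈ {p_at_least_one:.2f}")  # ~0.86
```

Rarity per combination is no comfort once the number of combinations is large enough, which is exactly the sense in which these accidents are “normal.”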

John Downer proposes a third category, the “epistemic accident,” which is perhaps well understood by engineers and scientists, but not by policymakers. An epistemic accident is one that occurs because of an error or a gap in our understanding of the world at the time the system was designed. Epistemic accidents are not “normal,” since once they happen we can prevent their recurrence, and since they do not depend on a rare concordance of events. But they also do not lend themselves to blame, since at the time they happen, the scientific knowledge necessary to prevent them did not yet exist. This is a fundamentally constructivist way of viewing the world. Constructivism says, roughly, that there is no Platonic Ideal for science to reach: experiments are theory-laden and models are necessarily abstractions. This does not mean science is totally relative or pointless, but rather that it is limited, and we will always, on occasion, be surprised by how our models (and this is true in social science as well!) perform in the “real world”. Being cognizant of the limits of scientific knowledge is important for evaluating accidents: for one thing, particularly innovative systems will be more prone to epistemic accidents.

Downer’s example is the famous Aloha Airlines Flight 243 accident in 1988. On a routine flight from Hilo to Honolulu, a large section of the upper fuselage ripped right off of a 737, opening a huge chunk of the passenger cabin to the sky while the plane was traveling at cruising speed. Luckily, the plane was not far from Maui and managed to land with only one death, a stewardess swept out of the cabin when the skin tore open; passengers, themselves strapped in, had to lean over and hold down another stewardess who was lying in the aisle in order to keep her from being blown out of the plane. This was shocking, since the 737 was built with multiple failsafes to ensure that such a rupture could not happen: roughly, it was believed that a rupture would occur only if a crack many feet long developed in the airplane’s skin, and such a crack would have been caught at a much smaller stage by regular maintenance.

It turns out that the testing of the plane had missed two things. First, the combination of the glue used to bond the fuselage joints with salt-heavy ocean air made cracks more likely; second, the way the rivets were lined up allowed metal fatigue to compound, as small cracks near neighboring rivets linked up with one another. And indeed, even in the narrow world of massive airplane decompression, this was not the first “epistemic accident”. The reason airplane windows are oval and not square is to avoid almost exactly the same problem: several British-made Comets crashed in the 1950s, and metal fatigue cracking at the corners of their square windows was found to be the culprit.

What does this mean for economics? I think it means quite a bit for policy. Complicated systems will always have problems that lie beyond their designers’ understanding, at least until a problem actually arises. New systems, more than mature ones, will tend to suffer these problems, as we are still learning what is important to include in our models and tests, and what is not. That is, the “flash crash” looks a lot like a “normal accident”, whereas the financial crisis has many aspects that look like epistemic accidents. New and complicated systems, such as those introduced in the financial world, should be handled in a fundamentally conservative way by policymakers in order to deal with the uncertainty in our models. And it’s not just finance: we know, for instance, of many unforeseen methods of collusion that have stymied even well-designed auctions constructed by our best mechanism designers. This is not strange, or a failure, but rather part of science, and we ought to be upfront about it.

Google Docs Link (The only ungated version I can find is the Google Docs Quick View above, which happens to sneak around a gate. Sociologists, my friends, you’ve got to tell your publishers that it’s no longer acceptable in 2012 not to have ungated working papers! If you have JSTOR access, and in case the link above goes dead, the final version in the November 2011 AJS is here)


2 thoughts on "“737-Cabriolet: The Limits of Knowledge and the Sociology of Inevitable Failure,” J. Downer (2011)"

  1. Matthew Squair says:

    See http://eprints.lse.ac.uk/36542/1/Disspaper61.pdf for a copy of an associated paper.

  2. Matthew Squair says:

    One of the points that John made is that even in a very mature technology (aircraft structural design with aluminium) there is still a significant component of epistemic uncertainty, and therefore room for discussion, debate and disagreement amongst experts.

    Yet despite this we engineers still all profess to believe in the myth of mechanical objectivity in what we do. There’s a followup presentation on Fukushima as a case study here, http://www.rvs.uni-bielefeld.de/Bieleschweig/eleventh/DownerB11Slides.pdf

