
How We Create and Destroy Growth: A Nobel for Romer and Nordhaus

Occasionally, the Nobel Committee gives a prize which is unexpected, surprising, yet deft in how it points out underappreciated research. This year, they did no such thing. Both William Nordhaus and Paul Romer have been running favorites for years in my Nobel betting pool with friends at the Federal Reserve. The surprise, if anything, is that the prize went to both men together: Nordhaus is best known for his environmental economics, and Romer for his theory of “endogenous” growth.

On reflection, the connection between their work is obvious. But it is the connection that makes clear how inaccurate many of today’s headlines – “an economic prize for climate change” – really are. Because it is not the climate that both winners build on, but rather a more fundamental economic question: economic growth. Why are some places and times rich and others poor? And what is the impact of these differences? Adam Smith’s “The Wealth of Nations” is formally titled “An Inquiry into the Nature and Causes of the Wealth of Nations”, so these are certainly not new questions in economics. Yet the Classical economists did not have the same conception of economic growth that we have; they largely lived in a world of cycles, of ebbs and flows, with income per capita facing the constraint of agricultural land. Schumpeter, who certainly cared about growth, notes that Smith’s discussion of the “different progress of opulence in different nations” is “dry and uninspired”, perhaps only a “starting point of a sort of economic sociology that was never written.”

As each generation became richer than the one before it – at least in a handful of Western countries and Japan – economists began to search more deeply for the reason. Marx saw capital accumulation as the driver. Schumpeter certainly saw innovation (though not invention, as he always made clear) as important, though he had no formal theory. It was two models that appeared during and soon after World War II – that of Harrod-Domar, and Solow-Swan-Tinbergen – which began to make real progress. In Harrod-Domar, economic output is a function of capital Y=f(K), nothing is produced without capital f(0)=0, the economy is constant returns to scale in capital df/dK=c, and the change in capital over time depends on what is saved from output minus what depreciates dK/dt=sY-zK, where z is the rate of depreciation. Put those assumptions together and you will see that the growth rate of output is (dY/dt)/Y = sc - z. Since c and z are fixed, the only way to grow is to crank up the savings rate, Soviet style. And no doubt, capital deepening has worked in many places.
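That bit of algebra is easy to check numerically. A minimal sketch of the Harrod-Domar dynamics, with purely illustrative parameter values of my own choosing (s=0.2, c=0.3, z=0.05, not anyone's calibration):

```python
# Harrod-Domar toy model: Y = c*K, dK/dt = s*Y - z*K.
# The growth rate of output settles at exactly s*c - z.

def simulate_harrod_domar(K0=100.0, c=0.3, s=0.2, z=0.05, years=50):
    """Path of output Y under discrete-time Harrod-Domar dynamics."""
    K, path = K0, []
    for _ in range(years):
        Y = c * K               # output is linear in capital
        path.append(Y)
        K += s * Y - z * K      # savings add to capital, depreciation subtracts
    return path

path = simulate_harrod_domar()
growth = path[-1] / path[-2] - 1
print(round(growth, 6))         # s*c - z = 0.2*0.3 - 0.05 = 0.01
```

Raise s and the growth rate rises one-for-one with sc, which is precisely the Soviet-style logic in the text.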

Solow-type models push further. They let the economy be a function of “technology” A(t), the capital stock K(t), and labor L(t), where output Y(t)=K^a*(A(t)L(t))^(1-a) – that is, that production is constant returns to scale in capital and labor. Solow assumes capital depends on savings and depreciation as in Harrod-Domar, that labor grows at a constant rate n, and that “technology” grows at constant rate g. Solving this model in per-effective-worker terms, with k=K/(AL) and y=k^a, gets you that capital evolves as dk/dt=sy-(n+z+g)k, and that along the balanced growth path output is exactly proportional to capital. You can therefore take the model to the data: we observe the amount of labor and capital, and Solow shows that there is not enough growth in those factors to explain U.S. growth. Instead, growth seems to be largely driven by change in A(t), what Abramovitz called “the measure of our ignorance” but which we often call “technology” or “total factor productivity”.
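The key implication, that the savings rate cannot buy long-run growth in Solow, is also easy to verify numerically. A sketch with illustrative parameters of my own choosing:

```python
# Solow toy model: Y = K^a * (A*L)^(1-a), capital accumulates as before,
# labor grows at n, technology at g. Whatever the savings rate s, the
# growth of output per worker converges to g.

def solow_per_worker_growth(s, a=0.3, n=0.01, g=0.02, z=0.05, years=500):
    """Growth rate of output per worker in the final simulated year."""
    K, A, L = 1.0, 1.0, 1.0
    growth, y_last = 0.0, None
    for _ in range(years):
        Y = K ** a * (A * L) ** (1 - a)
        y = Y / L                        # output per worker
        if y_last is not None:
            growth = y / y_last - 1
        y_last = y
        K += s * Y - z * K               # accumulation, as in Harrod-Domar
        A *= 1 + g                       # exogenous technological progress
        L *= 1 + n                       # population growth
    return growth

for s in (0.1, 0.2, 0.4):
    print(s, round(solow_per_worker_growth(s), 4))  # each converges to g = 0.02
```

Quadrupling the savings rate raises the *level* of the balanced path but leaves its slope untouched: all sustained per-capita growth comes from A(t).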

Well, who can see that fact, as well as the massive corporate R&D facilities of the post-war era throwing out inventions like the transistor, and not think: surely the factors that drive A(t) are endogenous, meaning “from within”, to the profit-maximizing choices of firms? If firms produce technology, what stops other firms from replicating these ideas, a classic positive externality which would lead the rate of technology in a free market to be too low? And who can see the low level of convergence of poor country incomes to rich, and not think: there must be some barrier to the spread of A(t) around the world, since otherwise the return to capital must be extraordinary in places with access to great technology, really cheap labor, and little existing capital to combine with it. And another question: if technology – productivity itself! – is endogenous, then ought we consider not just the positive externality that spills over to other firms, but also the negative externality of pollution, especially climate change, that new technologies both induce and help fix? Finally, if we know how to incentivize new technology, and how growth harms the environment, what is the best way to mitigate the great environmental problem of our day, climate change, without stopping the wondrous increase in living standards growth keeps providing? It is precisely for helping answer these questions that Romer and Nordhaus won the Nobel.

Romer and Endogenous Growth

Let us start with Paul Romer. You know you have knocked your Ph.D. thesis out of the park when the great economics journalist David Warsh writes an entire book hailing your work as solving the oldest puzzle in economics. The two early Romer papers, published in 1986 and 1990, have each been cited more than 25,000 times, which is an absolutely extraordinary number by the standards of economics.

Romer’s achievement was writing a model where inventors spend money to produce inventions with increasing returns to scale, other firms use those inventions to produce goods, and a competitive Arrow-Debreu equilibrium still exists. If we had such a model, we could investigate what policies a government might wish to pursue if it wanted to induce firms to produce growth-enhancing inventions.

Let’s be more specific. First, innovation is increasing returns to scale because ideas are nonrival. If I double the amount of labor and capital, holding technology fixed, I double output, but if I double technology, labor, and capital, I more than double output. That is, give one person a hammer, and they can build, say, one staircase a day. Give two people two hammers, and they can build two staircases by just performing exactly the same tasks. But give two people two hammers, and teach them a more efficient way to combine nail and wood, and they will be able to build more than two staircases. Second, if capital and labor are constant returns to scale and are paid their marginal product in a competitive equilibrium, then there is no output left to pay inventors anything for their ideas. That is, it is not tough to model in partial equilibrium the idea of nonrival ideas, and indeed the realization that a single invention improves productivity for all is also an old one: as Thomas Jefferson wrote in 1813, “[h]e who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” The difficulty is figuring out how to get these positive spillovers yet still have “prices” or some sort of rent for the invention. Otherwise, why would anyone pursue costly invention?
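The staircase example is just the algebra of the production function above, and a few lines make it concrete (the input values are arbitrary):

```python
# Nonrivalry as increasing returns: with Y = K^a * (A*L)^(1-a),
# doubling the rival inputs K and L exactly doubles output, but
# doubling the nonrival idea stock A as well more than doubles it.

def output(K, L, A, a=0.3):
    """Cobb-Douglas with labor-augmenting technology: Y = K^a (A L)^(1-a)."""
    return K ** a * (A * L) ** (1 - a)

Y = output(K=8.0, L=4.0, A=2.0)
assert abs(output(16.0, 8.0, 2.0) - 2 * Y) < 1e-9   # doubling K, L: exactly 2x
assert output(16.0, 8.0, 4.0) > 2 * Y               # doubling A too: more than 2x
print(output(16.0, 8.0, 4.0) / Y)                   # 2^(2 - a), about 3.25
```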

We also need to ensure that growth is not too fast. There is a stock of existing technology in the world. I use that technology to create new innovations which grow the economy. With more people over time and more innovations over time, you may expect the growth rate to be higher in bigger and more technologically advanced societies. It is, in part, as Michael Kremer points out in his One Million B.C. paper. Nonetheless, the rate of growth is not asymptotically increasing by any stretch (see, e.g., Ben Jones on this point). Indeed, growth is nearly constant, abstracting from the business cycle, in the United States, despite large growth in population and in the stock of existing technology.

Romer’s first attempt at endogenous growth was based on his thesis and published in the JPE in 1986. Here, he adds “learning by doing” to Solow: technology is a function of the capital stock A(t)=bK(t). As each firm uses capital, they generate learning which spills over to other firms. Even if population is constant, with appropriate assumptions on production functions and capital depreciation, capital, output, and technology grow over time. There is a problem here, however, and one that is common to any model based on learning-by-doing which partially spills over to other firms. As Dasgupta and Stiglitz point out, if there is learning-by-doing which only partially spills over, the industry is a natural monopoly. And even if it starts competitively, as I learn more than you, dynamically I can produce more efficiently, lower my prices, and take market share from you. A decentralized competitive equilibrium with endogenous technological growth is unsustainable!
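Before the Dasgupta-Stiglitz problem bites, it is worth seeing why the learning-by-doing trick generates growth at all. Substituting A(t)=bK(t) into the Solow production function makes output linear in capital, an "AK" economy that grows forever even with constant population. A toy simulation, with made-up parameters:

```python
# Romer (1986) learning-by-doing: A = b*K. Substituting into
# Y = K^a * (A*L)^(1-a) gives Y = K * (b*L)^(1-a), linear in K,
# so the economy grows at a constant rate with a fixed population.

def simulate_romer_1986(K0=1.0, b=0.5, L=4.0, a=0.3, s=0.2, z=0.05, years=100):
    """Output path when technology is a spillover from the capital stock."""
    K, path = K0, []
    for _ in range(years):
        A = b * K                        # learning-by-doing spillover
        Y = K ** a * (A * L) ** (1 - a)  # collapses to K * (b*L)^(1-a)
        path.append(Y)
        K += s * Y - z * K
    return path

path = simulate_romer_1986()
print(path[-1] / path[-2] - 1)  # constant rate s*(b*L)^(1-a) - z, despite fixed L
```

Note the contrast with Solow: here the savings rate does change long-run growth, which is exactly why policy questions like capital subsidies suddenly have bite.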

Back to the drawing board, then. We want firms to intentionally produce technology in a competitive market as they would other goods. We want technology to be nonrival. And we want technology production to lead to growth. Learning-by-doing allows technology to spill over, but would simply lead to a monopoly producer. Pure constant-returns-to-scale competitive production, where technology is just an input like capital produced with a “nonconvexity” – only the initial inventor pays the fixed cost of invention – means that there is no output left to pay for invention once other factors get their marginal product. A natural idea, well known to Arrow 1962 and others, emerges: we need some source of market power for inventors.

Romer’s insight is that inventions are nonrival, yes, but they are also partially excludable, via secrecy, patents, or other means. In his blockbuster 1990 JPE Endogenous Technological Change, he lets inventions be given an infinite patent, but also be partially substitutable by other inventions, constraining price (this is just a Spence-style monopolistic competition model). The more inventions there are, the more efficiently final goods can be made. Future researchers can use present technology as an input to their invention for free. Invention is thus partially excludable in the sense that my exact invention is “protected” from competition, but also spills over to other researchers by making it easier for them to invent other things. Inventions are therefore neither public nor private goods, and also not “club goods” (nonrival but excludable) since inventors cannot exclude future inventors from using their good idea to motivate more invention. Since there is free entry into invention, the infinite stream of monopoly rents from inventions is exactly equal to their opportunity cost.

From the perspective of final goods producers, there are just technologies I can license as inputs, which I then use in a constant returns to scale way to produce goods, as in Solow. Every factor is paid its marginal product, but inventions are sold for more than their marginal cost due to monopolistic excludability from secrecy or patents. The model is general equilibrium, and gives a ton of insight about policy: for instance, if you subsidize capital goods, do you get more or less growth? In Romer (1986), where all growth is learning-by-doing, cheaper capital means more learning means more growth. In Romer (1990), capital subsidies can be counterproductive!

There are some issues to be worked out: the Romer models still have “scale effects”, under which growth ought to accelerate as population and the stock of ideas grow, whereas growth has in fact been roughly constant in the modern world despite changes in population and the stock of technology (see Chad Jones’ 1995 and 1999 papers). The neo-Schumpeterian models of Aghion-Howitt and Grossman-Helpman add the important idea that new inventions don’t just add to the stock of knowledge, but also make old inventions less valuable. And really critically, the idea that institutions and not just economic fundamentals affect growth – meaning laws, culture, and so on – is a massive field of research at present. But it was Romer who first cracked the nut of how to model invention in general equilibrium, and I am unaware of any later model which solves this problem in a more satisfying way.

Nordhaus and the Economic Solution to Pollution

So we have, with Romer, a general equilibrium model for thinking about why people produce new technology. The connection with Nordhaus comes in a problem that is both caused by, and potentially solved by, growth. In 2018, even an ignoramus knows the terms “climate change” and “global warming”. This was not at all the case when William Nordhaus began thinking about how the economy and the environment interrelate in the early 1970s.

Growth was a fairly unobjectionable policy goal in 1960: indeed, a greater capability of making goods, and of making war, seemed a necessity for both the Free and Soviet worlds. But by the early 1970s, environmental concerns arose. The Club of Rome warned that we were going to run out of resources if we continued to use them so unsustainably: resources are of course finite, and there are therefore “limits to growth”. Beyond just running out of resources, growth could also be harmful because of negative externalities on the environment, particularly the newfangled idea of global warming an MIT report warned about in 1970.

Nordhaus treated those ideas both seriously and skeptically. In a 1974 AER P&P, he notes that technological progress or adequate factor substitution allows us to avoid “limits to growth”. To put it simply, whales are limited in supply, and hence whale oil is as well, yet we light many more rooms than we did in 1870 due to new technologies and substitutes for whale oil. Despite this skepticism, Nordhaus does show concern for the externalities of growth on global warming, giving a back-of-the-envelope calculation that along a projected Solow-type growth path, the amount of carbon in the atmosphere will reach a dangerous 487ppm by 2030, surprisingly close to our current estimates. In a contemporaneous essay with Tobin, and in a review of an environmentalist’s “system dynamics” predictions of future economic collapse, Nordhaus reaches a similar conclusion: substitutable factors mean that running out of resources is not a huge concern, but rather the exact opposite, that we will have access to and use too many polluting resources, should worry us. That is tremendous foresight for someone writing in 1974!

Before turning back to climate change, can we celebrate again the success of economics against the Club of Rome ridiculousness? There were widespread predictions, from very serious people, that growth would not just slow but reverse by the end of the 1980s due to “unsustainable” resource use. Instead, GDP per capita has nearly doubled since 1990, with the most critical change coming for the very poorest. There would have been no greater disaster for the twentieth century than had we attempted to slow the progress and diffusion of technology, in agriculture, manufacturing and services alike, in order to follow the nonsense economics being promulgated by prominent biologists and environmental scientists.

Now, being wrong once is no guarantee of being wrong again, and the environmentalists appear quite right about climate change. So it is again a feather in the cap of Nordhaus to both be skeptical of economic nonsense, and also sound the alarm about true environmental problems where economics has something to contribute. As Nordhaus writes, “to dismiss today’s ecological concerns out of hand would be reckless. Because boys have mistakenly cried ‘wolf’ in the past does not mean that the woods are safe.”

Just as we can refute Club of Rome worries with serious economics, so too can we study climate change. The economy affects the climate, and the climate affects the economy. What we need is an integrated model to assess how economic activity, including growth, affects CO2 production and therefore climate change, allowing us to back out the appropriate Pigouvian carbon tax. This is precisely what Nordhaus did with his two celebrated “Integrated Assessment Models”, which built on his earlier simplified models (e.g., 1975’s Can We Control Carbon Dioxide?). These models have Solow-type endogenous savings, and make precise the tradeoffs of lower economic growth against lower climate change, as well as making clear the critical importance of the social discount rate and the micro-estimates of the cost of adjustment to climate change.
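To get the flavor of an integrated assessment model without any of the real machinery, here is a drastically simplified loop in the spirit of DICE. Every parameter below is invented for illustration; none are Nordhaus' calibrated values:

```python
# A toy integrated assessment loop: output creates emissions, emissions
# raise the CO2 stock and hence temperature, and temperature imposes a
# convex damage factor on future output. All parameters are made up.

def run_economy(abatement=0.0, years=100, g=0.02, emit_per_gdp=0.5,
                temp_per_co2=0.001, damage_coef=0.002, abate_cost=0.01):
    """Final-period output after `years` of growth, emissions, and damages."""
    Y, co2, temp = 100.0, 0.0, 0.0
    for _ in range(years):
        damages = damage_coef * temp ** 2           # convex damages in warming
        cost = abate_cost * abatement ** 2          # convex cost of abatement
        Y *= (1 + g) * (1 - damages - cost)         # growth net of both drags
        co2 += emit_per_gdp * Y * (1 - abatement)   # emissions add to the stock
        temp = temp_per_co2 * co2                   # warming tracks the stock
    return Y

print(run_economy(abatement=0.0), run_economy(abatement=0.5))  # compare long-run output
```

Even this cartoon captures the structural point: abatement is a drag on output today purchased against climate damages tomorrow, and the optimal tradeoff depends entirely on the calibration of the damage and cost functions.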

The latter goes well beyond the science of climate change holding the world constant: the Netherlands, in a climate sense, should be underwater, but they use dikes to restrain the ocean. Likewise, the cost of adjusting to an increase in temperature is something to be estimated empirically. Nordhaus takes climate change very seriously, but he is much less concerned about the need for immediate action than the famous Stern report, which takes fairly extreme positions about the discount rate (1000 generations in the future are weighed the same as us, in Stern) and the costs of adjustment.

Consider the following “optimal path” for carbon from Nordhaus’ most recent run of the model, where the blue line is his optimum.

Note that he permits much more carbon than Stern or a policy which mandates temperatures stay below a 2.5 C rise forever. The reason is that the costs to growth in the short term are high: the world is still very poor in many places! There was a vitriolic debate following the Stern report about who was correct: whether the appropriate social discount rate is zero or something higher is a quasi-philosophical debate going back to Ramsey (1928). But you can see here how important the calibration is.
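The calibration point is easy to make concrete: the present value of a $1 trillion climate damage arriving a century from now swings enormously with the discount rate. The rates below are illustrative, not the exact Stern or Nordhaus figures:

```python
# Why the discount rate debate matters: the same distant damage is
# worth radically different amounts today at different discount rates.

def present_value(damage, rate, years):
    """Discounted value today of a damage arriving `years` from now."""
    return damage / (1 + rate) ** years

for rate in (0.001, 0.015, 0.045):
    pv = present_value(1.0, rate, 100)
    print(f"discount rate {rate:.1%}: PV of $1T in 100 years = ${pv:.3f}T")
```

A near-zero rate leaves the distant damage almost undiminished, while a rate of a few percent shrinks it by well over an order of magnitude, which is essentially the whole Stern-Nordhaus gap in one line of arithmetic.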

There are other minor points of disagreement between Nordhaus and Stern, and my sense is that there has been some, though not full, convergence in their beliefs about optimal policy. But there is no disagreement whatsoever between the economic and environmental community that the appropriate way to estimate the optimal response to climate change is via an explicit model incorporating some sort of endogeneity of economic reaction to climate policy. The power of the model is that we can be extremely clear about what points of disagreement remain, and we can examine the sensitivity of optimal policy to factors like climate “tipping points”.

There is one other issue: in Nordhaus’ IAMs, and in Stern, you limit climate change by imposing cap and trade or carbon taxes. But carbon harms cross borders. How do you stop free riding? Nordhaus, in a 2015 AER, shows theoretically that there is no way to generate optimal climate abatement without sanctions for non-participants, but that relatively small trade penalties work quite well. This is precisely what Emmanuel Macron is currently proposing!

Let’s wrap up by linking Nordhaus even more tightly back to Romer. It should be noted that Nordhaus was very interested in the idea of pure endogenous growth, as distinct from any environmental concerns, from the very start of his career. His thesis was on the topic (leading to a proto-endogenous growth paper in the AER P&P in 1969), and he wrote a skeptical piece in the QJE in 1973 about the then-leading theories of what factors induce certain types of innovation (objections which I think have been fixed by Acemoglu 2002). Like Romer, Nordhaus has long worried that inventors do not receive enough of the return to their invention, and that we measure innovation poorly – see his classic NBER chapter on inventions in lighting, and his attempt to estimate how much of society’s output goes to innovators.

The connection between the very frontier of endogenous growth models, and environmental IAMs, has not gone unnoticed by other scholars. Nordhaus’ IAMs tend to have limited incorporation of endogenous innovation in dirty or clean sectors. But a fantastic paper by Acemoglu, Aghion, Bursztyn, and Hemous combines endogenous technical change with Nordhaus-type climate modeling to suggest a middle ground between Stern and Nordhaus: use subsidies to get green energy close to the technological frontier, then use taxes once their distortion is relatively limited because a good green substitute exists. Indeed, since this paper first started floating around 8 or so years ago, massive subsidies to green energy sources like solar by many countries have indeed made the “cost” of stopping climate change much lower than if we’d relied solely on taxes, since now production of very low cost solar, and mass market electric cars, is in fact economically viable.

It may indeed be possible to solve climate change – what Stern called “the greatest market failure” man has ever seen – by changing the incentives for green innovation, rather than just by making economic growth more expensive by taxing carbon. Going beyond just solving the problem of climate change, to solving it in a way that minimizes economic harm, is a hell of an accomplishment, and more than worthy of the Nobel prizes Romer and Nordhaus won for showing us this path!

Some Further Reading

In my PhD class on innovation, the handout I give on the very first day introduces Romer’s work and why non-mathematical models of endogenous innovation mislead. Paul Romer himself has a nice essay on climate optimism, and the extent to which endogenous invention matters for how we stop global warming. On why anyone signs climate change abatement agreements, instead of just free riding, see the clever incomplete contracts insight of Battaglini and Harstad. Romer has also been greatly interested in the policy of “high-growth” places, pushing the idea of Charter Cities. Charter Cities involve Hong Kong-like exclaves of a developing country where the institutions and legal systems are farmed out to a more stable nation. Totally reasonable, but in fact quite controversial: a charter city proposal in Madagascar led to a coup, and I can easily imagine that the Charter City controversy delayed Romer’s well-deserved Nobel laurel. The New York Times points out that Nordhaus’ brother helped write the Clean Air Act of 1970. Finally, as is always true with the Nobel, the official scientific summary is lucid and deep in its exploration of the two winners’ work.


William Baumol: Truly Productive Entrepreneurship

It seems this weblog has become an obituary page rather than a simple research digest of late. I am not even done writing on the legacy of Ken Arrow (don’t worry – it will come!) when news arrives that yet another product of the World War 2 era in New York City, and of the CCNY system, has passed away: the great scholar of entrepreneurship and one of my absolute favorite economists, William Baumol.

But we oughtn’t draw the line on his research simply at entrepreneurship, though I will walk you through his best piece in the area, a staple of my own PhD syllabus, on “productive, unproductive, and destructive” entrepreneurship. Baumol was also a great scholar of the economics of the arts, performing and otherwise, which were the motivation for his famous cost disease argument. He was a very skilled micro theorist, a talented economic historian, and a deep reader of the history of economic thought, a nice example of which is his 2000 QJE on what we have learned since Marshall. In all of these areas, his papers are a pleasure to read, clear, with elegant turns of phrase and the casual yet erudite style of an American who’d read his PhD in London under Robbins and Viner. That he has passed without winning his Nobel Prize is a shame – how great would it have been had he shared a prize with Nate Rosenberg before it was too late for them both?

Baumol is often naively seen as a Schumpeter-esque defender of the capitalist economy and the heroic entrepreneur, and that is only half right. Personally, his politics were liberal, and as he argued in a recent interview, “I am well aware of all the very serious problems, such as inequality, unemployment, environmental damage, that beset capitalist societies. My thesis is that capitalism is a special mechanism that is uniquely effective in accomplishing one thing: creating innovations, applying those innovations and using them to stimulate growth.” That is, you can find in Baumol’s work many discussions of environmental externalities, of the role of government in funding research, and of the nature of optimal taxation. You can find many quotes where Baumol expresses interest in the policy goals of the left (though often solved with the mechanism of the market, and hence the right). Yet the core running through much of Baumol’s work is a rigorous defense, historically and theoretically grounded, of the importance of getting incentives correct for socially useful innovation.

Baumol differs from many other prominent economists of innovation because he is at his core a neoclassical theorist. He is not an Austrian like Kirzner or an evolutionary economist like Sid Winter. Baumol’s work stresses that entrepreneurs and the innovations they produce are fundamental to understanding the capitalist economy and its performance relative to other economic systems, but that the best way to understand the entrepreneur methodologically was to formalize her within the context of neoclassical equilibria, with innovation rather than price alone being “the weapon of choice” for rational, competitive firms. I’ve always thought of Baumol as being the lineal descendant of Schumpeter, the original great thinker on entrepreneurship and one who, nearing the end of his life and seeing the work of his student Samuelson, was convinced that his ideas should be translated into formal neoclassical theory.

A 1968 essay in the AER P&P laid out Baumol’s basic idea that economics without the entrepreneur is, in a line he would repeat often, like Hamlet without the Prince of Denmark. He clearly understood that we did not have a suitable theory for oligopoly and entry into new markets, or for the supply of entrepreneurs, but that any general economic theory needed to be able to explain why growth is different in different countries. Solow’s famous essay convinced much of the profession that the residual, interpreted then primarily as technological improvement, was the fundamental variable explaining growth, and Baumol, like many, believed those technological improvements came mainly from entrepreneurial activity.

But what precisely should the theory look like? Ironically, Baumol made his most productive step in a beautiful 1990 paper in the JPE which contains not a single formal theorem nor statistical estimate of any kind. Let’s define an entrepreneur as “persons who are ingenious or creative in finding ways to add to their wealth, power, or prestige”. These people may introduce new goods, or new methods of production, or new markets, as Schumpeter supposed in his own definition. But are these ingenious and creative types necessarily going to do something useful for social welfare? Of course not – the norms, institutions, and incentives in a given society may be such that the entrepreneurs perform socially unproductive tasks, such as hunting for new tax loopholes, or socially destructive tasks, such as channeling their energy into ever-escalating forms of warfare.

With the distinction between productive, unproductive, and destructive entrepreneurship in mind, we might imagine that the difference in technological progress across societies may have less to do with the innate drive of the society’s members, and more to do with the incentives for different types of entrepreneurship. Consider Rome, famously wealthy yet with very little in the way of useful technological diffusion: certainly the Romans appear less innovative than either the Greeks or Europe of the Middle Ages. How can a society both invent a primitive steam engine – via Hero of Alexandria – and yet see it used for nothing other than toys and religious ceremonies? The answer, Baumol notes, is that status in Roman society required one to get rich via land ownership, usury, or war; commerce was a task primarily for slaves and former slaves! And likewise in Song dynasty China, where imperial examinations were both the source of status and the ability to expropriate any useful inventions or businesses that happened to appear. In the European Middle Ages, incentives for the clever shifted from developing implements of war, to diffusing technology like the water-mill under the Cistercians, and then back to weapons. These examples were expanded to every society from Ancient Mesopotamia to the Dutch Republic to the modern United States by a series of economically-minded historians in a wonderful collection of essays called “The Invention of Enterprise”, which was edited by Baumol alongside Joel Mokyr and David Landes.

Now we are approaching a sort of economic theory of entrepreneurship – no need to rely on the whims of character, but instead focus on relative incentives. But we are still far from Baumol’s 1968 goal: incorporating the entrepreneur into neoclassical theory. The closest Baumol comes is in his work in the early 1980s on contestable markets, summarized in the 1981 AEA Presidential Address. The basic idea is this. Assume industries have scale economies, so oligopoly is their natural state. How worried should we be? Well, if there are no sunk costs and no entry barriers for entrants, and if entrants can siphon off customers quicker than incumbents can respond, then Baumol and his coauthors claimed that the market was contestable: the threat of entry is sufficient to keep the incumbent from exerting their market power. On the one hand, fine, we all agree with Baumol now that industry structure is endogenous to firm behavior, and the threat of entry clearly can restrain market power. But on the other hand, is this “ultra-free entry” model the most sensible way to incorporate entry and exit into a competitive model? Why, as Dixit argued, is it quicker to enter a market than to change price? Why, as Spence argued, does the unrealized threat of entry change equilibrium behavior if the threat is truly unrealized along the equilibrium path?

It seems that what Baumol was hoping this model would lead to was a generalized theory of perfect competition that permitted competition for the market rather than just in the market, since the competition for the market is naturally the domain of the entrepreneur. Contestable markets are too flawed to get us there. But the basic idea, that market structure is endogenous and game-theoretic rather than following the old-fashioned chain in which industry structure determines conduct and conduct determines performance, is clearly here to stay: antitrust is essentially applied game theory today. And once you have the idea of competition for the market, the natural theoretical model is one where firms compete to innovate in order to push out incumbents, incumbents innovate to keep away from potential entrants, and profits depend on the equilibrium time until the dominant firm shifts: I speak, of course, about the neo-Schumpeterian models of Aghion and Howitt. These models, still a very active area of research, are finally allowing us to rigorously investigate the endogenous rewards to innovation via a completely neoclassical model of market structure and pricing.

I am not sure why Baumol did not find these neo-Schumpeterian models to be the Holy Grail he’d been looking for; in his final book, he credits them for being “very powerful” but in the end holding different “central concerns”. He may have been mistaken in this interpretation. It proved quite interesting to give a careful second read of Baumol’s corpus on entrepreneurship, and I have to say it disappoints in part: the questions he asked were right, the theoretical acumen he possessed was up to the task, the understanding of history and qualitative intuition was second to none, but in the end, he appears to have been just as stymied by the idea of endogenous neoclassical entrepreneurship as the many other doyens of our field who took a crack at modeling this problem without, in the end, generating the model they’d hoped they could write.

Where Baumol has more success, and again it is unusual for a theorist that his most well-known contribution is largely qualitative, is in the idea of cost disease. The concept comes from Baumol’s work with William Bowen (see also this extension with a complete model) on the economic problems of the performing arts. It is a simple idea: imagine productivity in industry rises 4% per year, but “the output per man-hour of a violinist playing a Schubert quartet in a standard concert hall” remains fixed. In order to attract workers into music rather than industry, wages must rise in music at something like the rate they rise in industry. But then costs are increasing while productivity is not, and the arts look “inefficient”. The same, of course, is said for education, and health care, and other necessarily labor-intensive industries. Baumol’s point is that rising costs in unproductive sectors reflect necessary shifts in equilibrium wages rather than, say, growing wastefulness.
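The mechanics are easy to check numerically. A minimal sketch (my own illustration, assuming a stylized 4% industrial productivity growth rate and wages in both sectors tied to it):

```python
# Baumol cost disease: the unit labor cost of a stagnant sector rises at the
# economy-wide rate of wage growth, with zero waste anywhere in the system.

def relative_arts_cost(years, g=0.04):
    """Unit cost of the stagnant (arts) sector relative to industry after
    `years` years, when wages everywhere track industrial productivity."""
    wage = (1 + g) ** years                    # wages rise with industry
    industry_productivity = (1 + g) ** years   # output per hour in industry
    industry_unit_cost = wage / industry_productivity  # constant at 1.0
    arts_unit_cost = wage / 1.0                # violinist productivity fixed
    return arts_unit_cost / industry_unit_cost

print(round(relative_arts_cost(50), 1))  # after 50 years: about 7x costlier
```

No one in this little economy is wasteful; the string quartet simply cannot be played faster, so its relative cost must rise along with everyone else’s wages.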

How much can cost disease explain? The concept is so widely known by now that it is, in fact, used to excuse stagnant industries. Teaching, for example, requires some labor, but does anybody believe that it is impossible for R&D and complementary inventions (like the internet, for example) to produce massive productivity improvements? Is it not true that movie theaters now show opera live from the world’s great halls on a regular basis? Is it not true that my Google Home can, activated by voice, call up two seconds from now essentially any piece of recorded music I desire, for free? Distinguishing industries that are necessarily labor-intensive (and hence grow slowly) from those with rapid technological progress is a very difficult game, and one we ought to hesitate to play. But equally, we oughtn’t forget Baumol’s lesson: in some cases, in some industries, what appears to be fixable slack is in fact simply cost disease. We may ask, how was it that Ancient Greece, with its tiny population, put on so many plays, while today we hustle ourselves to small ballrooms in New York and London? Baumol’s answer, rigorously shown: cost disease. The “opportunity cost” of recruiting a big chorus was low, as those singers would otherwise have been idle or working unproductive fields gathering olives. The difference between Athens and our era is not simply that they were “more supportive of the arts”!

Baumol was incredibly prolific, so these suggestions for further reading are but a taste: An interview by Alan Krueger is well worth the read for anecdotes alone, like the fact that apparently one used to do one’s PhD oral defense “over whiskies and sodas at the Reform Club”. I also love his defense of theory, where if he is very lucky, his initial intuition “turn[s] out to be totally wrong. Because when I turn out to be totally wrong, that’s when the best ideas come out. Because if my intuition was right, it’s almost always going to be simple and straightforward. When my intuition turns out to be wrong, then there is something less obvious to explain.” Every theorist knows this: formalization has this nasty habit of refining our intuition and convincing us our initial thoughts actually contain logical fallacies or rely on special cases! Though known as an applied micro theorist, Baumol also wrote a canonical paper, with Bradford, on optimal taxation: essentially, if you need to raise $x in tax, how should you optimally deviate from marginal cost pricing? The history of thought is nicely diagrammed, and of course this 1970 paper was very quickly followed by the classic work of Diamond and Mirrlees. Baumol wrote extensively on environmental economics, drawing in many of his papers on the role nonconvexities in the social production possibilities frontier play when they are generated by externalities – a simple example of this effect, and the limitations it imposes on Pigouvian taxation, is in the link. More recently, Baumol wrote on international trade with Ralph Gomory (the legendary mathematician behind a critical theorem in integer programming, and later head of the Sloan Foundation); their main theorems are not terribly shocking to those used to thinking in terms of economies of scale, but the core example in the linked paper is again a great example of how nonconvexities can overturn a lot of our intuition, in this case concerning comparative advantage.
Finally, beyond his writing on the economics of the arts, Baumol proved that there is no area in which he personally had stagnant productivity: an art major in college, he was also a fantastic artist in his own right, picking up computer-generated art while in his 80s and teaching for many years a course on woodworking at Princeton!

“Pollution for Promotion,” R. Jia (2012)

Ruixue Jia is on the job market from IIES in Stockholm, and she has the good fortune to have a job market topic which is very much au courant. In China, government promotions often depend both on the inherent quality of the politician and on his connections to current leaders; indeed, a separate paper by Jia finds that promotion probability in China depends only on the interaction of economic growth and personal connections rather than on either factor by itself. Assume that a mayor can choose how much costly effort to exert. The mayor chooses how much dirty and clean technology – complements in production – to use, with the total amount of technology available an increasing function of the mayor’s effort. The mayor may personally dislike dirty technology. For any given bundle of technology, the observed economic output is higher the higher the mayor’s inherent quality (which he does not know). The central government, when deciding on promotions, only observes economic output.

Since mayors with good connections have a higher probability of being promoted for any level of output in their city, the marginal return to effort and the marginal return to dirty technology are increasing in the connectedness of the mayor. For any given distaste for pollution on the mayor’s part, a more connected mayor will mechanically want to substitute dirty technology for clean, since higher output is more valuable to him for career concerns while the marginal cost of his distaste for pollution has not changed. Further, by a Le Chatelier argument, higher marginal returns to output increase the optimal effort choice, which allows a higher budget to purchase technology, dirty tech included. To the extent that the government cares about limiting the (unobserved) use of dirty tech, this is “almost” the standard multitasking concern: the folly of rewarding A and hoping for B. Although in this case, empirically there is no evidence that the central government cares about promoting local politicians who are good for the environment!
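The comparative static is easy to see in a toy numerical version of this setup (my own construction with made-up functional forms and parameters, not Jia’s actual model):

```python
# Toy version of the promotion-incentives story: a mayor picks costly effort
# and a clean/dirty technology mix; connections raise the career value of
# output. My own illustrative construction, not Jia's actual model.
import numpy as np

def best_choice(theta, distaste=0.1):
    """Grid-search the mayor's optimum. theta is the marginal career value
    of output, higher for mayors with friends on the promotion committee."""
    best_u, best_e, best_d = -np.inf, 0.0, 0.0
    for e in np.arange(0.01, 3.0, 0.01):      # effort buys a tech budget e
        for d in np.arange(0.0, e, 0.01):     # d = dirty tech, rest is clean
            c = e - d
            output = np.sqrt(c * d)           # clean and dirty are complements
            u = theta * output - e ** 2 / 2 - distaste * d
            if u > best_u:
                best_u, best_e, best_d = u, e, d
    return best_e, best_d

e_lo, dirty_lo = best_choice(theta=1.0)   # unconnected mayor
e_hi, dirty_hi = best_choice(theta=2.0)   # connected mayor
print(dirty_hi > dirty_lo, e_hi > e_lo)   # more dirty tech AND more effort
```

Doubling the career value of output, as a friend on the promotion committee might, raises both total effort and the use of dirty technology in this sketch, which is exactly the substitution and Le Chatelier logic above.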

How much do local leaders increase pollution (and simultaneously speed up economic growth!) in exchange for a shot at a better job? The theory above gives us some help. We see that the same politician will substitute in dirty technology if, in some year, his old friends get on the committee that assigns promotions (the Politburo Standing Committee, or PSC, in China’s case). This allows us to see the effect of the Chinese incentive system on pollution even if we know nothing about the quality of each individual politician or whether highly-connected politicians get plum jobs in low pollution regions, since every effect we find is at the within-politician level. Using a diff-in-diff, Jia finds that in the year after a politician’s old friend makes the PSC, sulfur dioxide goes up 25%, a measure of river pollution goes up by a similar amount, industrial GDP rises by 15%, and non-industrial GDP does not change. So it appears that China’s governance institution does incentivize local leaders, although whether those incentives are good or bad for welfare depends on how you trade off pollution and growth in your utility function.

Good stuff. A quick aside: what I like about Jia’s work is that she attempts to do more than simply find a clever strategy for getting internal validity. Many other recent job market stars – Dave Donaldson and Melissa Dell, for instance – have been equally good when it comes to caring about more than just nice identification. But such care is rare indeed! It has been three decades since we, supposedly, “took the ‘con’ out of Econometrics”. And yet an unbearable number of papers are still floating around which quite nicely identify a relationship of interest in a particular dataset, then go on to give only the vaguest and most unsatisfying remarks concerning external validity. That’s a much worse con than bad identification! Identification, by definition, can only hold ceteris paribus. Even perfect identification of some marginal effect tells me absolutely nothing about the magnitude of that effect when I go to a different time, or a different country, or a more general scenario. The only way – the only way! – to generalize an internally valid result, and the only way to explain why that result is the way it is, is to use theory. A good paper puts the theoretical explanation and the specific empirical case examined in context with other empirical papers on the same general topic, rather than stopping after the identification is cleanly done. And a good empirical paper needs to explain, and needs to generalize, because we care about unemployment (not unemployment in border counties of New Jersey in the 1990s) and we care about the effect of military training on labor supply (not the effect of the Vietnam War on labor supply in the few years following), etc. If we really want the credibility revolution in empirical economics to continue, let’s spend less seminar and referee time worrying only about internal validity, and more time shutting down the BS that is often passed off as “explanation”.

November 2012 working paper. Jia also has an interesting paper about the legacy of China’s treaty ports, as well as a nice paper (a la Nunn and Qian) on the importance of the potato in world history (really! I may be a biased Dorchester-born Mick, but still, the potato has been fabulously important).

“Welfare Gains from Optimal Pollution Regulation,” J. M. Abito (2012)

Mechanism design isn’t just some theoretical curiosity or a trick for examining auctions. It has, in the hands of skilled practitioners like David Baron, Jean-Jacques Laffont, Jean Tirole and David Besanko (an advisor of mine!), had a huge impact on economic regulation. Consider regulating a natural monopoly that has private information about its costs. In the standard sorting problem, I am going to have to pay information rents to firms that have low costs, since otherwise they will claim to have high costs and thus get to charge higher prices. If funds are costly – and the standard estimate in US public finance is that the marginal dollar of taxation imposes 30 cents of deadweight loss on society – then those information rents are a welfare loss and not just a transfer. Hence I may be willing to sacrifice some allocative efficiency by, for example, randomizing over all firms who claim to be at least somewhat efficient rather than paying a large information rent to learn exactly who the efficient firm is. Laffont’s 1994 Econometric Society address covers this basic point quite nicely.
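A two-type toy example makes the tradeoff concrete (the numbers are entirely mine, using the 30-cent marginal cost of public funds mentioned above):

```python
# Two-type toy: a regulator wants a project worth V from a monopolist whose
# cost is private information, low (cL) or high (cH) with equal probability.
# Each dollar of public funds costs society 1.3 dollars. Numbers are my own.
V, cL, cH, lam, phi = 2.8, 1.0, 2.0, 0.3, 0.5   # phi = Pr(cost is low)

# Welfare = project value - production cost - deadweight loss on transfers.
# Full information: pay each type exactly its cost, serve both types.
full_info = V - (1 + lam) * (phi * cL + (1 - phi) * cH)

# Private info, serve both types: any scheme must leave the low-cost type
# the rent cH - cL, so transfers equal cH regardless of the realized type.
serve_both = V - (phi * cL + (1 - phi) * cH) - lam * cH

# Private info, exclude the high type: no rent to pay, but the high-cost
# firm's (socially valuable) project is sacrificed.
exclude_high = phi * (V - (1 + lam) * cL)

print(full_info, serve_both, exclude_high)
# Here exclude_high > serve_both: sacrificing allocative efficiency beats
# paying the information rent, though full information would serve both.
```

With these numbers the regulator prefers to shut out the high-cost type entirely rather than pay the low-cost type’s information rent, even though under full information both types would be worth contracting with.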

Mike Abito, on the job market here at Northwestern, notes that few real-world policies actually take account of this tradeoff. Consider a regulator who wants polluting firms to abate their pollution when economically feasible. If the distribution of abatement costs is widely dispersed, then low cost firms have a large incentive to claim high costs and therefore avoid paying for abatement. Especially in this case, it may be worthwhile to sacrifice some allocative efficiency in an optimal pollution abatement scheme, having low cost firms abate less than they would if the regulator knew every firm’s costs exactly. In order to design the optimal pollution regulation scheme, then, we need to know the distribution of marginal abatement costs, which is not something we can read directly off the data. Hence, to the world of theory, my friends! In particular, consider regulating SO2 among power plants. (And, briefly, why not just sell pollution permits? If you give away the permits to each plant, then the same informational issue arises, and you do not earn any tax revenue that could offset distortionary taxes elsewhere in the economy.)

Let a power plant, at some cost and effort level, produce some bundle of electricity and SO2. Observed costs alone are not enough to tell whether firms have inherently high costs, since firms may appear to have high costs when in fact they are simply exerting low effort. Abito notices that power plants are both rate regulated – meaning that they are natural monopolies whose rates are set by a government agency that estimates their costs – and regulated for pollution reasons. By writing down an auditing game, you see that in the periods when the firm is being watched for rate-setting purposes, it exerts low effort. It does exert effort in future periods, since any cost reduction comes to it as profits. Indeed, if you look at, for instance, heat generation during years when the plants are being watched, the amount of heat generated declines by roughly the same amount as effort is estimated to decline in the model, so the hypothesized equilibrium of the auditing game is not totally out of line with the data.

What this wedge between cost efficiency in years when the plant is being watched and in other years gives us is an estimate of the cost function, including disutility of effort, which generates some bundle of SO2 and electricity. In fact, it gives us just enough of an exclusion restriction to estimate the distribution of marginal abatement costs of SO2 using techniques from dynamic structural IO. Once we have estimated that distribution, we can solve for numerical estimates of the welfare gain from various abatement policies. Laffont long ago showed that the optimal pollution regulation under this private information, assuming we know the distribution of marginal abatement costs, involves a bundle of type-dependent emission taxes and type-dependent transfers which give the least efficient firm zero profits, but which also lead to less effort and less pollution abatement for more efficient firms than you would get with full information; again, this is just the tradeoff between information rents and allocative efficiency. Such a heterogeneous policy might be tough to implement in practice, however. Welfare gains from the optimal policy instead of a uniform emissions standard, given the estimated distribution of marginal abatement costs, are equal to about 10% of the entire variable cost of the average plant. A uniform emissions tax (rather than a standard which imposes a maximum amount of emissions) captures something like 60-70% of this improvement, and is easier to implement.

More generally, the gain to society from regulatory regimes that condition on the underlying properties of each firm really depends on properties like the distribution of marginal abatement costs, which can never be known atheoretically but which, with the use of proper structure, can actually be estimated. What is particularly cool here is that, unlike most earlier work, the underlying firm properties are estimated without assuming that the regulator is already optimizing, an assumption that is simply false in the case of pollution regulation. Good stuff.

November 2012 Working Paper (Not available on IDEAS). There are a number of interesting papers in environmental economics on the job market this year. Lint Barrage at Yale discusses how carbon taxes and other taxes should interact in optimal fiscal policy. In particular, since carbon in the atmosphere lowers the productive capacity of assets (like agricultural land) in the future, not taxing carbon is identical to taxing capital, producing the same distortion. When the economy already has distortionary taxation, the optimal rate of carbon taxation will need to be adjusted. Joseph Shapiro from MIT estimates the environmental damage from CO2 produced in international trade. It is two orders of magnitude smaller than the gains from that trade, and a small carbon tax on international shipping is optimal. In a separate paper, Shapiro and coauthors find that US mortality during heat waves declined massively over the twentieth century, that all of the decline appears to be linked to adoption of air conditioning, and hence that mitigation of some negative health impacts of climate change in poor countries will likely be handled by A/C. Since A/C uses electricity, non-carbon methods of generating that electricity are critical if we want to avoid making climate change worse while we mitigate these impacts.

“Buffalo Hunt: International Trade and the Virtual Extinction of the North American Bison,” M. S. Taylor (2011)

In less than 10 years near the end of the 19th century, the US buffalo population fell from as many as 15 million to fewer than 100. This near extinction is directly linked to much of the early environmental movement in America, as it was witnessed firsthand by Roosevelt and Muir, among others. Historians have long asked why the slaughter was so punctuated. Was it a result of the railroad easing access? Indian hunts made easier with the gun? A US government policy aimed at starving out troublesome tribes? In a new AER, Taylor presents quite interesting cliometric evidence that another source is primarily responsible: a technological development on the other side of the Atlantic.

Until 1870, buffalo were hunted mainly for meat or, in the case of the Northern Herd during winter, their heavy robes. The meat rotted relatively quickly and was difficult to transport – a fact you know if you’ve played the old computer game Oregon Trail. There was no simple way to tan the hides and create leather. But sometime around 1870 or ’71, German and English inventors discovered how to tan buffalo hides. This discovery made the shooting of buffalo much more lucrative. Ox leather was also roughly a substitute for buffalo leather, so even a massive slaughter of buffalo, which represented at the peak only a small percentage of world hides, would have had little impact on the price of hides. Following the invention, kills of buffalo and exports to England and Germany of buffalo hides – a series constructed using a nifty set of assumptions, actually – soared. The Southern Herd was totally depleted by 1879. In 1881, following the defeat of the Sioux, relatively safe hunting was again possible among the (smaller) Northern Herd, which itself was depleted within a couple years.

Evidence is provided that this increase in exports was related to the tanning technology, and was not simply the result of a US supply shock or a European demand shock. The relative share of hides in Europe that came from the US soared; this was not true of Canada and other countries, which did not have access to the tanning knowledge. Canada saw no large increase in exported hides during the buffalo slaughter. The raw number of exports, plus reasonable estimates of wastage, roughly matches the size of the slaughter.

The other stories about buffalo extinction are less compelling. No direct evidence has ever been found of a US policy to exterminate the buffalo, though there are certainly instances of individuals or groups from the Army killing wantonly. The railroad arrived in the region of the Southern Herd a few years before the slaughter began; the large increase in buffalo killed was only seen after the tanning technology was invented.

What does this tell us about today? It is another example of the potential harms when a country joins the international market. A lack of property rights plus demand for exports can be devastating to newly integrating countries. This suggests that development plans and the loosening of trade restrictions may, in some cases, indeed best be linked with environmental regulation – the buffalo slaughter, Taylor argues, would not have occurred as it did if the US had been in a state of technological autarky.

http://works.bepress.com/cgi/viewcontent.cgi?article=1000&context=taylor (2007 Working Paper. Final version in December 2011 AER. I see this paper listed as an R&R at AER as far back as 2008 – yet more evidence of the utterly ridiculous publication lags in economics. Are journal editors aware that when lags reach 4 or 5 years, to say nothing of longer ones, the journal itself is in danger of becoming useless? Or that editors in other fields, and some in economics like the J. Urban E., seem to have no problem running peer review and revision in well under a year?)

“Buy Coal!: A Case for Supply Side Environmental Policy,” B. Harstad (2012)

The vast majority of world nations are not currently participating in agreements to limit global warming. Many countries cut down their rainforests in a way harmful to global social welfare. Even worse, attempts to improve things by the countries that do care can be self-defeating because of the problem of “leakage”, or what we economists just call supply and demand. Imagine Sweden cuts emissions by lowering domestic demand for oil. That lowers world demand for oil, lowering world price, hence increasing quantity demanded elsewhere. Boycotts may work in a similar way: when consumers in Canada stop buying some rare wood, the price of that wood falls, increasing the quantity of wood consumed in other countries.
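The leakage arithmetic can be sketched with linear supply and demand (parameters are purely illustrative, my own):

```python
# Leakage via supply and demand: one country shifts its oil demand curve
# down, the world price falls, and quantity demanded elsewhere rises,
# undoing part of the cut. Linear curves with made-up parameters.

def equilibrium(home_shift=0.0, a_home=50.0, a_for=50.0, b=1.0, s=1.0):
    """Demand: (a_home - home_shift - b*p) + (a_for - b*p). Supply: s*p."""
    p = (a_home - home_shift + a_for) / (2 * b + s)   # market-clearing price
    q_foreign = a_for - b * p                          # foreign consumption
    q_world = s * p                                    # total quantity
    return p, q_foreign, q_world

p0, qf0, qw0 = equilibrium()
p1, qf1, qw1 = equilibrium(home_shift=10)  # Sweden's demand curve drops by 10
print(p1 < p0, qf1 > qf0, qw0 - qw1 < 10)  # price falls, foreigners consume
                                           # more, world cut is well under 10
```

Sweden’s cut is partly undone as everyone else slides down their demand curves toward the new, lower price; only a fraction of the demand reduction shows up as a fall in world consumption.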

What to do? Well, Coase tells us that externalities are in many cases not a worry when property rights are properly defined. Instead of trying to limit demand side consumption, why not limit supply? In particular, imagine that one set of countries (call them Scandinavia) imagines some harm from consumption of oil, and another set doesn’t care (let’s call them Tartary, after my favorite long-lost empire). Oil is costly to produce, and there is no entry, which isn’t a bad assumption for something like oil. Let there be a market for oil deposits – and you may have noticed from the number of Chinese currently laying pipe in Africa that such a market exists!

Let (q*,p*) be the quantity and price that clears the world market. Let h be the marginal harm to Scandinavia from global oil consumption. Let qopt be the socially optimal level of consumption from the perspective of Scandinavia, and popt the price that clears the market at that quantity. The Scandinavians just need to buy all the oil deposits whose cost of extraction is higher than popt minus h and lower than popt. Once they own the rights, they place an extraction tax on those deposits equal to the harm, h. With such a policy, no one exploits these marginal oil fields because of the tax, and no one exploits any more costly-to-extract fields because their cost of extraction is higher than the world oil price. There are many well-known mechanisms for buying the marginal oil fields at a cost lower than the harm inflicted on Scandinavia if the oil were exploited: the exact cost is particularly low if a single country owns all the world’s oil, since that country benefits from Scandinavia’s policy as the world oil price rises following Scandinavia’s purchase of the marginal fields. Note that this policy is also nice in that oil, after the policy, costs exactly the same in Tartary and Scandinavia, so there is no worry about firms moving to the country with lax environmental policies. Another benefit is that it avoids the time inconsistency of related dynamic problems, such as using subsidies for green technology until they are invented, then getting rid of the subsidies.
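Here is the scheme in a toy market (my own parameterization, not Harstad’s): deposit extraction costs uniform on [0, 100] with one unit of oil per unit of cost, world demand q = 100 − p, and harm h = 20 per unit consumed.

```python
# Harstad's deposit-purchase scheme in a toy market (my own parameters):
# deposits with extraction costs uniform on [0, 100], one unit of oil per
# unit of cost; world demand q = 100 - p; marginal harm h = 20 to
# Scandinavia per unit of oil consumed anywhere.

def clear_market(shelved=(0.0, 0.0)):
    """Equilibrium when deposits with costs in `shelved` are bought and left
    unexploited. Active supply at price p is p minus the shelved mass below
    p; solve supply = demand by bisection."""
    lo_c, hi_c = shelved
    lo, hi = 0.0, 100.0
    for _ in range(100):
        p = (lo + hi) / 2
        supply = p - max(0.0, min(p, hi_c) - lo_c)
        if supply > 100 - p:
            hi = p
        else:
            lo = p
    return p, 100 - p

h = 20.0
p_star, q_star = clear_market()          # laissez-faire: p = q = 50
q_opt = (100 - h) / 2                    # demand price = extraction cost + h
p_opt = 100 - q_opt                      # = 60 here
p_new, q_new = clear_market(shelved=(p_opt - h, p_opt))
print(round(p_new), round(q_new))        # 60 and 40: the Scandinavian optimum
```

Shelving exactly the deposits with costs between popt − h and popt moves the world from (p, q) = (50, 50) to (60, 40): consumption falls to the Scandinavian optimum with no leakage, because every unshelved deposit with cost above popt remains unprofitable to exploit.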

There are some policies like this currently in place: for example, Norway’s environmental agency buys the rights to forest tracts and keeps them unexploited. But note that you have to buy the right tract to avoid leakage: you want to buy the tract that is worth exploiting, but just barely. This is great for you as the environmentalist, though, since this will be the cheapest tract to buy given the limited profit to be made if it is cut down!

This paper should also suggest to you other ways to enact environmental policy when facing leakage: political recalcitrance doesn’t mean we are completely out of options. The problem is that you want to decrease quantity consumed in your country – whose policies you control – without causing quantity consumed to rise elsewhere as price falls. The Pigouvian solution is to make the marginal suppliers unprofitable, or make the marginal demanders lower their willingness to pay. One way to do this without tax policy is to introduce products that are substitutes for the polluting good: clean energy, for instance. Or introduce complements for products which are substitutes for the polluting product. There are many more!

http://www.kellogg.northwestern.edu/faculty/harstad/htm/deposits.pdf (January 2012 draft – forthcoming in the Journal of Political Economy)

“Cents and Sensibility: Economic Valuation and the Nature of ‘Nature’,” M. Fourcade (2011)

The US, certainly, is a “cost-benefit state,” as Cass Sunstein puts it, and much of the world has gone the same way. No major policy can be enacted without weighing the monetary costs and benefits. Even legal cases, in large part, involve such a reckoning. This isn’t to say that cost-benefit analysis is strictly good – see Marx on commodification, or more recently Gneezy and Rustichini’s famous paper on Israeli daycare centers. These basic criticisms aside, though, cost-benefit analysis does seem practical and technocratic and, indeed, democratic: we let the numbers rather than the agents of the state do the deciding when it comes to policy. Alas, things aren’t so simple, as Marion Fourcade points out in the present paper from a recent issue of the American Journal of Sociology.

What’s not so simple is, for non-market goods, how we ought to construct the costs and the benefits in the first place. Examples include the value of human life, compensation for injury, “pain and suffering”, and destruction of nature, among others. Fourcade considers a recent example: court settlements following oil spills on the coast of France in 1978 and Alaska in 1989, the latter, of course, being Exxon Valdez. Ignoring direct payments for immediate cleanup and restitution to fishermen, there was a huge difference in what the oil companies paid for their damage to “nature”. In particular, the American case involved a massive payment for damages to Prince William Sound incurred by non-locals, whereas the French spill involved no payments of any kind for destruction to nature directly.

The reasons why are sociohistorical, of course. In the US, land has long been held by the government (today, 13% of the US is federally owned), and the conception of a “public trust” or government interest in preserving waterways, forests, mines, etc. for the future use of citizens has a longstanding history. France, with many conflicting local land claims, centuries-long patterns of regional settlement, and the near-nonexistence of federal land, instead tends to focus on the national patrimony, a conception that sees nature through the lens of its local users. For instance, France did not establish a national park until the 1960s; if you’re a hiker, you can’t help but notice the difference between a French Grande Randonnée, with cabins and good food every night, and America’s long distance trails like the Pacific Crest, with weeks-long detours into wilderness. Conceptions of the role of money and morality also differ across cultures, in the obvious direction: the French are more skeptical than Americans of putting monetary value on things outside the traditional economic sphere.

These sociohistorical factors led to different ideas about how to present evidence of damage to courts. The French, organized generally at the level of a group of villages where the spill was most prominent, basically just counted up the amount of biomass destroyed, valued this at market price (without accounting for movement on the demand curve that such biomass might cause), and threw in the cost of restoration of the shore. The Americans (under guidance from Solow, among others), at the level of the federal government, attempted to construct a “contingent value” of the existence of a pristine Prince William Sound. All Americans, not just locals, were seen as relevant economic actors. Surveys found the median US household would pay $31 for the existence of such a bay, whether or not they might eventually use it in the future, which gives a contingent valuation of $2.8 billion for a clean shore. (We’ll ignore the many problems with contingent valuation here, only noting that two of my favorite economists, Peter Diamond and Jerry Hausman, have a lengthy 1994 JEP on the topic that, indeed, may be too nice; in any case, my distrust of learning about the economy from surveys is probably too extreme.) That figure is orders of magnitude more than the French settlement.
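The headline figure is just the median willingness to pay scaled up to the whole country (the household count below is my rough figure for 1989, not from the article):

```python
# Back-of-envelope check on the contingent valuation figure: $31 median
# willingness to pay per household, scaled to all US households (roughly
# 91 million in 1989; my approximate count, not from the article).
wtp_per_household = 31
us_households_1989 = 91e6
total = wtp_per_household * us_households_1989
print(f"${total / 1e9:.1f} billion")   # about the $2.8 billion cited
```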

Fourcade also briefly discusses performativity at the end of the article. If you don’t know the concept, performativity basically just makes the point that social scientists affect the social world by their research: game theory or whatever economic concept you like is not exogenous to society, but rather is “performed” over time. In the case of valuation of nature, the use of contingent valuation in Exxon Valdez led to a large settlement that was partially used to fund ecosystem research; that research has found interesting results about the links between animals in the food chain, justifying higher payments in future natural disasters than those used in the Valdez case.

One final point I’m curious about: would this be published in a top economics journal? The most famous papers about contingent valuation that I know – Hausman’s critique and the Kakadu Park paper – appeared in a book and in Oxford Economic Papers, neither of which are prominent for economists. I’d like to think an interesting article like the present one could find a home in a top 5 journal, or in J. Envir. Econ. at least – after adjustments to the style to better fit the norms in economics, of course – but I’m not sure what reviewers would make of the methodology. What do you think?

http://sociology.berkeley.edu/profiles/fourcade/pdf/AJSIII_Published.pdf (Final published version – big thumbs up to Fourcade for good Green Open Access practice! May she be emulated by her fellow sociologists!)
