Category Archives: Industrial Organization

“Buying Locally,” G. J. Mailath, A. Postlewaite & L. Samuelson (2015)

Arrangements where agents commit to buy only from selected vendors, even when there are more preferred products at better prices from other vendors, are common. Consider local currencies like “Ithaca Hours”, which can only be used at other participating stores and which are not generally convertible, or trading circles among co-ethnics even when trust or unobserved product quality is not important. The intuition people have for “buying locally” is to, in some sense, “keep the profits in the community”; that is, even if you don’t care at all about friendly local service or some other utility-enhancing aspect of the local store, you should still patronize it. The fruit vendor should buy from the local bookstore even when its selection is subpar, and the bookseller should in turn buy her fruit locally even when fruit is cheaper at the supermarket.

At first blush, this seems odd to an economist. Why would people voluntarily buy something they don’t prefer? What Mailath and his coauthors show is that, actually, the non-economist intuition is at least partially correct when individuals are both sellers and buyers. Here’s the idea. Let there be a local fruit vendor, a supermarket, a local bookstore and a chain bookstore. Since the two markets are not perfectly competitive, firms earn a positive rent with each sale. Assume that, tomorrow, the fruit vendor, the local book merchant, and each of the chain managers draw a random preference. Each food seller is equally likely to need a book sold by either the local or chain store, and likewise each book seller is equally likely to need a piece of fruit sold either by the local vendor or the supermarket; you might think of these preferences as reflecting prices, or geographical distance, or product variety, etc. In equilibrium, prices of each book and each fruit are set equally, and each vendor expects to accrue half the sales.

Now imagine that the local bookstore owner and fruit vendor commit in advance not to patronize the other stores, regardless of which preference is drawn tomorrow. Assume for now that they also commit not to raise prices because of this agreement (this assumption will not be important, it turns out). Now the local stores expect to make 3/4 of all sales, since they still get the purchases of the chain managers with probability .5. Since the markup does not change and there is a constant profit on each sale, profits improve. And here is the sustainability part: as long as the harm from buying the “wrong product” is not too large, the benefit for the vendor-as-producer of selling more products exceeds the harm to the vendor-as-consumer of buying a less-than-optimal product.
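
To put rough numbers on that tradeoff, here is a minimal sketch in Python; the markup and the mismatch loss are my own illustrative values, not anything from the paper:

```python
# Minimal arithmetic sketch of the buy-local tradeoff (markup and mismatch loss
# are my own illustrative numbers, not the paper's).
markup = 1.0   # profit per sale, assumed constant as in the text
loss = 0.3     # hypothetical utility loss from buying the less-preferred product

# Without the pact: each of a vendor's two potential customers (the other local
# vendor and the relevant chain manager) buys from it with probability 1/2.
sales_no_pact = 0.5 + 0.5
payoff_no_pact = markup * sales_no_pact           # no forced mismatches

# With the pact: the other local vendor buys locally for sure, the chain manager
# still with probability 1/2, so expected sales rise to 1.5 of 2 (i.e., 3/4).
sales_pact = 1.0 + 0.5
# As a consumer, the vendor is stuck with its less-preferred supplier half the time.
payoff_pact = markup * sales_pact - 0.5 * loss

print(f"expected sales: {sales_no_pact} vs {sales_pact}")
print(f"vendor payoff:  {payoff_no_pact:.2f} vs {payoff_pact:.2f}")
# The pact pays off exactly when 0.5 * markup > 0.5 * loss, i.e. when the harm from
# buying the "wrong" product is smaller than the profit on an extra sale.
```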

That tradeoff can be made explicit, but the implication is quite general: as the number of firms you can buy from grows large, the benefit to belonging to a buy local arrangement falls. The harm of having to buy from a local producer is big because it is very unlikely the local producer is your first choice, and the price firms set in equilibrium falls because competition is stronger, hence there is less to gain for the vendor-as-producer from belonging to the buy local agreement. You will only see “buy local” style arrangements, like Ithaca Hours, or social shaming, in communities where vendors-as-consumers already purchase most of what they want from vendors-as-producers in the same potential buy local group.

One thing that isn’t explicit in the paper, perhaps because it is too trivial despite its importance, is how buy local arrangements affect welfare. Two possibilities exist. First, if in-group and out-of-group sellers have the same production costs, then “buy local” arrangements simply replace the producer surplus of out-of-group sellers with deadweight loss and some, perhaps minor, surplus for in-group members. They are privately beneficial yet socially harmful. However, an intriguing possibility is that “buy local” arrangements may not harm social welfare at all, even if they are beneficial to in-group members. How is that? In-group members are pricing above marginal cost due to market power. A “buy local” agreement increases the quantity of sales they make. If the in-group member has lower costs than out-of-group members, the change in total surplus from shifting transactions to the in-group seller may be positive, even though there is some deadweight loss created when consumers do not buy their first choice good (in particular, this is true whenever the average willingness-to-pay differential for people who switch to the in-group seller once the buy local group is formed exceeds the average marginal cost differential between in-group and out-of-group sellers).

May 2015 working paper (RePEc IDEAS version)

“Bonus Culture: Competitive Pay, Screening and Multitasking,” R. Benabou & J. Tirole (2014)

Empirically, bonus pay as a component of overall remuneration has become more common over time, especially in highly competitive industries which involve high levels of human capital; think of something like management of Fortune 500 firms, where the managers now have their salary determined globally rather than locally. This doesn’t strike most economists as a bad thing at first glance: as long as we are measuring productivity correctly, workers who are compensated based on their actual output will both exert the right amount of effort and have the incentive to improve their human capital.

In an intriguing new theoretical paper, however, Benabou and Tirole point out that many jobs involve multitasking, where workers can take hard-to-measure actions for intrinsic reasons (e.g., I put effort into teaching because I intrinsically care, not because academic promotion really hinges on being a good teacher) or take easy-to-measure actions for which there might be some kind of bonus pay. Many jobs also involve screening: I don’t know who is high quality and who is low quality, and although I would optimally pay people a bonus exactly equal to their cost of effort, I am unable to do so since I don’t know what that cost is. Multitasking and worker screening interact among competitive firms in a really interesting way, since how other firms incentivize their workers affects how workers will respond to my contract offers. Benabou and Tirole show that this interaction means that more competition in a sector, especially when there is a big gap between the quality of different workers, can actually harm social welfare even in the absence of any other sort of externality.

Here is the intuition. For multitasking reasons, when different things workers can do are substitutes, I don’t want to give big bonus payments for the observable output, since if I do the worker will put in too little effort on the intrinsically valuable task: if you pay a trader big bonuses for financial returns, she will not put as much effort into ensuring all the laws and regulations are followed. If there are other finance firms, though, they will make it known that, hey, we pay huge bonuses for high returns. As a result, workers will sort, with all of the high quality traders moving to the high bonus firm, leaving only the low quality traders at the firm with low bonuses. Bonuses are used not only to motivate workers, but also to differentially attract high quality workers when quality is otherwise tough to observe. There is a tradeoff, then: you can either have only low productivity workers but get the balance between hard-to-measure tasks and easy-to-measure tasks right, or you can retain some high quality workers with large bonuses that make those workers exert too little effort on hard-to-measure tasks. When the latter is more profitable, all firms inefficiently begin offering large, effort-distorting bonuses, something they wouldn’t do if they didn’t have to compete for workers.
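
To see the multitasking substitution in isolation, here is a toy sketch with functional forms I made up (square-root returns to attention on each task); it is only meant to show the direction of the distortion, not the paper’s actual model:

```python
# Toy multitasking sketch (functional forms are mine, not the paper's): a worker
# splits one unit of attention between a bonus-paid, easy-to-measure task A and an
# intrinsically valued, hard-to-measure task B.
def effort_split(bonus, intrinsic=1.0):
    # Worker maximizes bonus*sqrt(eA) + intrinsic*sqrt(eB) subject to eA + eB = 1;
    # the first-order condition gives eA / eB = (bonus / intrinsic)^2.
    ratio = (bonus / intrinsic) ** 2
    eA = ratio / (1 + ratio)
    return eA, 1 - eA

for b in (0.5, 1.0, 2.0, 4.0):
    eA, eB = effort_split(b)
    print(f"bonus {b:3.1f}: measured-task effort {eA:.2f}, unmeasured-task effort {eB:.2f}")
# Bigger bonuses pull attention away from the hard-to-measure task, which is the
# distortion that competitive screening for talent ends up amplifying.
```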

How can we fix things? One easy method is with a bonus cap: if the bonus is capped at the monopsony optimal bonus, then no one can try to screen high quality workers away from other firms with a higher bonus. This isn’t as good as it sounds, however, because there are other ways to screen high quality workers (such as offering lower clawbacks if things go wrong) which introduce even worse distortions; bonus caps may simply push firms toward less efficient ways of performing the same screening, with the same overincentivization of the easy-to-measure output.

When the individual rationality or incentive compatibility constraints in a mechanism design problem are determined in equilibrium, based on the mechanisms chosen by other firms, we sometimes call this a “competing mechanism”. It seems to me that there are quite a number of open questions concerning how to make these sorts of problems tractable; a talented young theorist looking for a fun summer project might find it profitable to investigate this as-yet small literature.

Beyond the theoretical result on screening plus multitasking, Tirole and Benabou also show that their results hold for market competition more general than just perfect competition versus monopsony. They do this through a generalized version of the Hotelling line which appears to have some nice analytic properties, at least compared to the usual search-theoretic models which you might want to use when discussing imperfect labor market competition.

Final copy (RePEc IDEAS version), forthcoming in the JPE.

“The Power of Communication,” D. Rahman (2014)

(Before getting to Rahman’s paper, a quick note on today’s Clark Medal, which went to Roland Fryer, an economist at Harvard who is best known for his work on the economics of education. Fryer is without question a superstar, and is unusual in leaving academia temporarily while still quite young to work for the city of New York on improving its education policy. His work is a bit outside my interests, so I will leave more competent commentary to better informed writers.

The one caveat I have, however, is the same one I gave last year: the AEA is making a huge mistake in essentially changing this prize from “Best Economist Under 40” to “Best Applied Microeconomist Under 40”. Of the past seven winners, the only one who isn’t obviously an applied microeconomist is Levin, and yet even he describes himself as “an applied economist with interests in industrial organization, market design and the economics of technology.” It’s not that Saez, Duflo, Levin, Finkelstein, Chetty, Gentzkow and Fryer are doing bad work – their research is all of very high quality and by no means “cute-onomics” – but simply that the type of research they do is a very small subset of what economists work on. This style of work is particularly associated with the two Cambridge schools, and it’s no surprise that all of the past seven winners either did their PhD or postdoc in Cambridge. Where are the macroeconomists, when Europe is facing unemployment rates upwards of 30% in some regions? Where are the finance and monetary folks, when we just suffered the worst global recession since the 1930s? Where are the growth economists, when we have just seen 20 years of incredible economic growth in the third world? Where are the historians? Where are the theorists, microeconomic and econometric, on whose backs the applied work winning the prizes is built? Something needs to change.)

Enough bellyaching. Let’s take a look at Rahman’s clever paper, which might be thought of as “when mediators are bad for society”; I’ll give you another paper shortly about “when mediators are good”. Rahman’s question is simple: can firms maintain collusion without observing what other firms produce? You might think this would be tricky if the realized price only imperfectly reflects total production. Let the market price p be a function of total industry production q plus an epsilon term. Optimally, we would jointly produce the monopoly quantity and split the rents. However, the epsilon term means that simply observing the market price doesn’t tell my firm whether the other firm cheated and produced too much.

What can be done? Green and Porter (1984), along with Abreu, Pearce and Stacchetti two years later, answered that collusion can be sustained: just let the equilibrium involve a price war if the market price drops below a threshold. Sannikov and Skrzypacz provided an important corollary, however: if prices can be monitored continuously, then collusion unravels. Essentially, if actions to increase production can be taken continuously, the price wars required to prevent cheating must be so frequent that the joint profit from sometimes colluding and sometimes fighting price wars is worse than the joint profit from just playing static Cournot.

Rahman’s trick saves collusion even when, as is surely realistic, cheaters can act in continuous time. Here is how it works. Let there be a mediator – an industry organization or similar – who can talk privately to each firm. Colluding firms alternate who is producing at any given time, with the one producing firm selling the monopoly level of output. The firms who are not supposed to produce at time t obviously have an incentive to cheat and produce a little bit anyway. Once in a while, however, the mediator tells the firm who is meant to produce in time t to produce a very large amount. If the price turns out high, the mediator gives the firm that was meant to produce a very large amount less time in the future to act as the monopolist, whereas if the price turns out low, the mediator gives that firm more monopolist time in the future. The latter condition is required to incentivize the producing firm to actually ramp up production when told to do so. Either a capacity constraint, or a condition on the demand function, is required to keep the producing firm from increasing production too much.

Note that if a nonproducing firm cheats and produces during a period when it is meant to produce zero, and the mediator happens to have secretly asked the temporary monopolist to produce a large amount, the cheater just increases the probability that the other firm gets to act as the monopolist in the future while it gets to produce nothing. Even better, since the mediator only occasionally asks the producing firm to overproduce, and other firms don’t know when this time might be, the nonproducing firms are always wary of cheating. That is, the mediator’s ability to make private recommendations permits more scope for collusion than when firms’ only option is to punish based on continuously-changing public prices, because there are only rare yet unknown times when cheating could be detected. What’s worse for policymakers, the equilibrium here, which involves occasional overproduction, shows that such overproduction is being used to help maintain collusion, not to deviate from it; add overproduction to Green-Porter price wars as phenomena which look like collusion breaking down but are instead collusion being maintained.
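
Here is a tiny numeric illustration of that detection logic, with made-up demand and noise parameters; it is my own rendering of the qualitative mechanism just described, not Rahman’s formal model:

```python
# Tiny numeric illustration (my own rendering of the qualitative mechanism, not
# Rahman's model): during a secret test period the designated producer floods the
# market, and the mediator rewards it with extra future monopoly time when the
# realized price comes in low.
from scipy.stats import norm

A, sigma = 10.0, 1.0      # inverse demand intercept and noise std (hypothetical)
q_flood = 6.0             # output the mediator secretly requests from the producer
threshold = A - q_flood   # price cutoff below which the producer is rewarded

def prob_producer_rewarded(q_cheat):
    # price = A - (q_flood + q_cheat) + eps, with eps ~ N(0, sigma^2)
    mean_price = A - (q_flood + q_cheat)
    return norm.cdf(threshold, loc=mean_price, scale=sigma)

print(f"P(reward | rival does not cheat): {prob_producer_rewarded(0.0):.2f}")
print(f"P(reward | rival cheats by 1 unit): {prob_producer_rewarded(1.0):.2f}")
# A cheating non-producer pushes the price down and so makes it more likely that
# the rival is handed extra monopoly time, which is exactly the punishment that
# deters the deviation even though no one observes quantities directly.
```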

Final working paper (RePEc IDEAS). Final version published in AER 2014. If you don’t care about proof details, the paper is actually a very quick read. Perhaps no surprise, but the results in this paper are very much related to those in Rahman’s excellent “Who will Monitor the Monitor?” which was discussed on this site four years ago.

“Dynamic Commercialization Strategies for Disruptive Technologies: Evidence from the Speech Recognition Industry,” M. Marx, J. Gans & D. Hsu (2014)

Disruption. You can’t read a book about the tech industry without Clayton Christensen’s Innovator’s Dilemma coming up. Jobs loved it. Bezos loved it. Economists – well, they were a bit more confused. Here’s the story at its most elemental: in many industries, radical technologies are introduced. They perform very poorly initially, and so are ignored by the incumbent. These technologies rapidly improve, however, and the previously ignored entrants go on to dominate the industry. The lesson many tech industry folks take from this is that you ought to “disrupt yourself”. If there is a technology that can harm your most profitable business, then you should be the one to develop it; take Amazon’s “Lab126” Kindle skunkworks as an example.

There are a couple problems with this strategy, however (well, many problems actually, but I’ll save the rest for Jill Lepore’s harsh but lucid takedown of the disruption concept which recently made waves in the New Yorker). First, it simply isn’t true that all innovative industries are swept by “gales of creative destruction” – consider automobiles or pharma or oil, where the major players are essentially all quite old. Gans, Hsu and Scott Stern pointed out in a RAND article many years ago that if the market for ideas worked well, you would expect entrants with good ideas to just sell to incumbents, since the total surplus would be higher (less duplication of sales assets and the like) and since rents captured by the incumbent would be higher (less product market competition). That is, there’s no particular reason that highly innovative industries require constant churn of industry leaders.

The second problem concerns disrupting oneself or waiting to see which technologies will last. Imagine it is costly for the incumbent to investigate potentially disruptive technologies. For instance, selling mp3s in 2002 would have cannibalized existing CD sales at a retailer with a large existing CD business. Early on, the potentially disruptive technology isn’t “that good”, hence it is not in and of itself that profitable. Eventually, some of these potentially disruptive technologies will reveal themselves to actually be great improvements on the status quo. If that is the case, then, why not just let the entrant make these improvements/drive down costs/learn about market demand, and then buy them once they reveal that the potentially disruptive product is actually great? Presumably, even by this time, the incumbent still retains its initial advantage in logistics, sales, brand, etc. By waiting and buying instead of disrupting yourself, you can still earn those high profits on the CD business in 2002 even if mp3s had turned out to be a flash in the pan.

This is roughly the intuition in a new paper by Matt Marx – you may know his work on non-compete agreements – Gans and Hsu. Matt has also collected a great dataset from industry journals on every firm that ever operated in automated speech recognition. Using this data, the authors show that a policy by entrants of initial competition followed by licensing or acquisition is particularly common when the entrants come in with a “disruptive technology”. You should see these strategies, where the entrant proves the value of their technology and the incumbent waits to acquire, in industries where ideas are not terribly appropriable (why buy if you can steal?) and entry is not terribly expensive (in an area like biotech, clinical trials and the like are too expensive for very small firms). I would add that you also need complementary assets to be relatively hard to replicate; if they aren’t, the incumbent may well wind up being acquired rather than the entrant should the new technology prove successful!

Final July 2014 working paper (RePEc IDEAS). The paper is forthcoming in Management Science.

“Upstream Innovation and Product Variety in the U.S. Home PC Market,” A. Eizenberg (2014)

Who benefits from innovation? The trivial answer would be that everyone weakly benefits, but since innovation can change the incentives of firms to offer different varieties of a product, heterogeneous tastes among buyers may imply that some types of innovation make large groups of people worse off. Consider computers, a rapidly evolving technology. If Lenovo introduces a laptop with a faster processor, they may wish to discontinue production of a slower laptop, because offering both types flattens the demand curve for each, and hence lowers the profit-maximizing markup that can be charged for the better machine. This effect, combined with a fixed cost of maintaining a product line, may push firms to offer too little variety in equilibrium.

As an empirical matter, however, things may well go the other direction. Spence’s famous product selection paper suggests that firms may produce too much variety, because they don’t take into account that part of the profit they earn from a new product is just cannibalization of other firms’ existing product lines. Is it possible to separate things out from data? Note that this question has two features that essentially require a structural setup: the variable of interest is “welfare”, a completely theoretical concept, and lots of the relevant numbers like product line fixed costs are unobservable to the econometrician, hence they must be backed out from other data via theory.

There are some nice IO tricks to get this done. Using a near-universe of laptop sales in the early 2000s, Eizenberg estimates heterogeneous household demand using standard BLP-style methods. Supply is tougher. He assumes that firms draw a fixed-cost shock for each potential product line, then pick their product mix each quarter, then observe consumer demand, and finally play Nash-Bertrand differentiated product pricing. The problem is that the pricing game often has multiple equilibria (e.g., with two symmetric firms, one may offer a high-end product and the other a low-end one, or vice versa). Since the pricing game equilibria are going to be used to back out fixed costs, we are in a bit of a bind. Rather than select equilibria using some ad hoc approach (how would you even do so in the symmetric case just mentioned?), Eizenberg cleverly just partially identifies fixed costs as backed out from any possible pricing game equilibrium, using bounds in the style of Pakes, Porter, Ho and Ishii. This means that welfare effects are also only partially identified.
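
To see the bound logic in the simplest possible terms, here is a stylized sketch with hypothetical profit numbers; the actual paper derives these incremental variable profits from the estimated demand system and the set of candidate pricing equilibria, and stacks many such inequalities into its estimator:

```python
# Stylized sketch of the revealed-preference bound logic for a product-line fixed
# cost F (hypothetical numbers; the real exercise computes incremental variable
# profits from the estimated demand system and each candidate pricing equilibrium,
# then stacks many such inequalities).

# Incremental variable profit of product A, which WAS offered in the data, under
# two candidate pricing equilibria (hypothetical values).
profit_if_offered = [12.0, 15.5]
# Incremental variable profit of product B, which was NOT offered (hypothetical).
profit_if_not_offered = [4.0, 6.0]

# A was offered, so F_A must be no more than its incremental profit in whichever
# equilibrium was actually played; robust to that unknown, keep the loosest bound.
upper_A = max(profit_if_offered)
# B was not offered, so F_B must be at least its incremental profit; loosest bound.
lower_B = min(profit_if_not_offered)

print(f"F_A <= {upper_A}, F_B >= {lower_B}")
# Inequalities like these only partially identify fixed costs, and any welfare
# counterfactual built on them inherits the same bounds.
```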

Throwing this model at the PC data shows that the mean consumer in the early 2000s wasn’t willing to pay any extra for a laptop, but there was a ton of heterogeneity in willingness to pay both for laptops and for faster speed on those laptops. Every year, the willingness to pay for a given computer fell by $257 – technology was rapidly evolving and lots of substitute computers were constantly coming onto the market.

Eizenberg uses these estimates to investigate a particularly interesting counterfactual: what was the effect of the introduction of the lighter Pentium M mobile processor? As Pentium M was introduced, older Pentium III based laptops were, over time, no longer offered by the major notebook makers. The M raised predicted notebook sales by 5.8 to 23.8%, raised mean notebook price by $43 to $86, and lowered Pentium III share in the notebook market from 16-23% down to 7.7%. Here’s what’s especially interesting, though: total consumer surplus is higher with the M available, but all of the extra consumer surplus accrues to the 20% least price-sensitive buyers (as should be intuitive, since only those with high willingness-to-pay are buying cutting edge notebooks). What if a social planner had forced firms to keep offering the Pentium III models after the M was introduced? The net change in consumer plus producer surplus may actually have been positive, and the benefits would have especially accrued to those at the bottom end of the market!

Now, as a policy matter, we are (of course) not going to force firms to offer money-losing legacy products. But this result is worth keeping in mind anyway: because firms are concerned about pricing pressure, they may not be offering a socially optimal variety of products, and this may limit the “trickle-down” benefits of high tech products.

2011 working paper (No IDEAS version). Final version in ReStud 2014 (gated).

“Dynamic Constraints on the Distribution of Stochastic Choice: Drift Diffusion Implies Random Utility,” R. Webb (2013)

Neuroeconomics is a slightly odd field. It seems promising to “open up the black box” of choice using evidence from neuroscience, but despite this promise, I don’t see very many terribly interesting economic results. And perhaps this isn’t surprising – in general, economic models are deliberately abstract and do not hinge on the precise reason why decisions are made, so unsurprisingly neuro appears most successful in, e.g., selecting among behavioral models in specific circumstances.

Ryan Webb, a post-doc on the market this year, shows another really powerful use of neuroeconomic evidence: guiding our choices of the supposedly arbitrary parts of our models. Consider empirical models of random utility. Consumers make a discrete choice, such that the object chosen i is that which maximizes utility v(i). In the data, even the same consumer does not always make the same choice (I love my Chipotle burrito bowl, but I nonetheless will have a different lunch from time to time!). How, then, can we use the standard choice setup in empirical work? Add a random variable n(i) to the decision function, letting agents choose i which maximizes v(i)+n(i). As n will take different realizations, choice patterns can vary somewhat.

The question, though, is what distribution n(i) should take? Note that the probability i is chosen is just

P(v(i)+n(i)>=v(j)+n(j)) for all j

or

P(v(i)-v(j)>=n(j)-n(i)) for all j

If the n are distributed independent normal, then the difference n(j)-n(i) is normal. If the n are extreme value type I, the difference is logistic. Does either of those assumptions, or some alternative, make sense?

Webb shows that random utility is really just a reduced form of a well-established class of models in psychology called bounded accumulation models. Essentially, you receive a series of sensory inputs stochastically, the data adds up in your brain, and you make a decision according to some sort of stopping rule as the data accumulates in a drift diffusion. In a choice model, you might think for a bit, accumulating reasons to choose A or B, then stop at a fixed time T* and choose the object that, after the random drift, has the highest perceived “utility”. Alternatively, you might stop once the gap between the perceived utilities of different alternatives is high enough, or once one alternative has a sufficiently high perceived utility. It is fairly straightforward to show that this class of models all collapses to max v(i)+n(i), with differing implications for the distribution of n. Thus, neuroscience evidence about which types of bounded accumulation models appear most realistic can help choose among distributions of n for empirical random utility work.

How, exactly? Well, for any stopping rule, there is an implied distribution of stopping times T*. The reduced form errors n are then essentially the sample mean of random draws from a finite accumulation process, and hence if the rule implies relatively short stopping times, n will be fat-tailed rather than normal. Also, consider letting the difference in underlying utility v(i)-v(j) be large. Then the stopping time under the accumulation models is relatively short, and hence the variance in the distribution of reduced form errors (again, essentially the sample mean of random draws) is relatively large. Hence, errors are heteroskedastic in the underlying v(i)-v(j). Webb gives additional results relating to the skew and correlation of n. He further shows that assuming independent normality or independent extreme value type I for the error terms can lead to mistaken inference, using a recent AER paper that tries to infer risk aversion parameters from choices among monetary lotteries. Quite interesting, even for a neuroecon skeptic!
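
Here is a minimal simulation of one such stopping rule, my own construction rather than Webb’s code: evidence for each option accumulates with drift equal to its underlying utility, and the agent stops once the gap between the accumulators crosses a threshold.

```python
# Minimal simulation of a bounded-accumulation choice rule (my construction, not
# Webb's code): evidence for options A and B accumulates as noisy increments around
# the true utilities, and the agent stops once the gap between the two accumulators
# crosses a threshold, choosing the current leader.
import numpy as np

rng = np.random.default_rng(0)

def simulate(vA, vB, threshold=3.0, sigma=1.0, max_t=10_000, reps=20_000):
    stop_times, choices = [], []
    for _ in range(reps):
        xA = xB = 0.0
        for t in range(1, max_t + 1):
            xA += vA + sigma * rng.standard_normal()
            xB += vB + sigma * rng.standard_normal()
            if abs(xA - xB) >= threshold:
                break
        stop_times.append(t)
        choices.append(xA > xB)
    return np.mean(stop_times), np.mean(choices)

for gap in (0.1, 0.5, 1.0):
    mean_T, pA = simulate(vA=gap, vB=0.0)
    print(f"utility gap {gap:.1f}: mean stopping time {mean_T:4.1f}, P(choose A) {pA:.3f}")
# Larger utility gaps give shorter stopping times; per the argument above, the
# time-averaged noise then has a larger variance, so the implied random utility
# errors are heteroskedastic in v(i)-v(j) rather than iid normal or extreme value.
```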

2013 Working Paper (No IDEAS version).

“Identifying Technology Spillovers and Product Market Rivalry,” N. Bloom, M. Schankerman & J. Van Reenen (2013)

R&D decisions are not made in a vacuum: my firm both benefits from information about new technologies discovered by others, and is harmed when other firms create new products that steal from my firm’s existing product lines. Almost every workhorse model in innovation is concerned with these effects, but measuring them empirically, and understanding how they interact, is difficult. Bloom, Schankerman and van Reenen have a new paper with a simple but clever idea for understanding these two effects (and it will be no surprise to readers given how often I discuss their work that I think these three are doing some of the world’s best applied micro work these days).

First, note that firms may be in the same technology area but not in the same product area; Intel and Motorola work on similar technologies, but compete on very few products. In a simple model, firms first choose R&D, knowledge is produced, and then firms compete on the product market. The qualitative results of this model are as you might expect: firms in a technology space with many other firms will be more productive due to spillovers, and may or may not actually perform more R&D depending on the nature of diminishing returns in the knowledge production function. Product market rivalry is always bad for profits, does not affect productivity, and increases R&D only if research across firms is a strategic complement; this strategic complementarity could be something like a patent race model, where if firms I compete with are working hard trying to invent the Next Big Thing, then I am incentivized to do even more R&D so I can invent first.

On the empirical side, we need a measure of “product market similarity” and “technological similarity”. Let there be M product classes and N patent classes, and construct vectors for each firm of their share of sales across product classes and share of R&D across patent classes. There are many measures of the similarity of two vectors, of course, including a well-known measure in innovation from Jaffe. Bloom et al, after my heart, note that we really ought to use measures that have proper axiomatic microfoundations; though they do show the properties of a variety of measures of similarity, they don’t actually show the existence (or impossibility) of an optimal measure of similarity. This sounds like a quick job for a good microtheorist.
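
For concreteness, the Jaffe measure mentioned above is (as I understand it) just the uncentered correlation of two firms’ share vectors; here is a quick sketch with made-up shares:

```python
# Quick sketch of a Jaffe-style similarity between two firms' activity shares
# (made-up share vectors; the same construction applies to patent-class shares for
# technological similarity or product-class shares for product market similarity).
import numpy as np

def jaffe_similarity(s_i, s_j):
    """Uncentered correlation (cosine) of two share vectors."""
    s_i, s_j = np.asarray(s_i, dtype=float), np.asarray(s_j, dtype=float)
    return float(s_i @ s_j / (np.linalg.norm(s_i) * np.linalg.norm(s_j)))

# Hypothetical shares of R&D across four patent classes.
intel    = [0.6, 0.3, 0.1, 0.0]
motorola = [0.5, 0.4, 0.1, 0.0]
pharma   = [0.0, 0.0, 0.2, 0.8]

print(f"Intel vs Motorola: {jaffe_similarity(intel, motorola):.2f}")  # close to 1
print(f"Intel vs pharma:   {jaffe_similarity(intel, pharma):.2f}")    # close to 0
```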

With similarity measures in hand, all that’s left to do is regress outcomes like R&D performed, productivity (measured using patents or backed out of a Cobb-Douglas equation) and market value (via the Griliches-style Tobin’s Q) on technological and product market similarity, along with all sorts of fixed effects. These guys know their econometrics, so I’m omitting many details here, but I should mention that they do use the idea from Wilson’s 2009 ReStat of basically random changes in state R&D tax laws as an IV for the cost of R&D; this is a great technique, and very well implemented by Wilson, but getting these state-level R&D costs is really challenging and I can easily imagine a future where the idea is abused by naive implementation.

The results are actually pretty interesting. Qualitatively, the empirical results look quite like the theory, and in particular, the impact of technological similarity looks really important; having lots of firms working on similar technologies but working in different industries is really good for your firm’s productivity and profits. Looking at a handful of high-tech sectors, Bloom et al estimate that the marginal social return on R&D is on the order of 40 percentage points higher than the marginal private return to R&D, implying (with some huge caveats) that R&D in the United States might be only about a third of what it ought to be. This estimate is actually quite similar to what researchers using other methods have estimated. Interestingly, since bigger firms tend to work in more dense parts of the technology space, they tend to generate more spillovers, hence the common policy prescription of giving smaller firms higher R&D tax credits may be a mistake.

Three caveats. First, as far as I can tell, the model does not allow a role for absorptive capacity, where firms’ ability to integrate outside knowledge is endogenous to their existing R&D stock. Second, the estimated marginal private rate of return on R&D is something like 20 percent for the average firm; many other papers have estimated very high private benefits from research, but I have a hard time interpreting these estimates. If there really are 20% rates of return lying around, why aren’t firms cranking up their research? At least anecdotally, you hear complaints from industries like pharma about low returns from R&D. Third, there are some suggestive comments near the end about how government subsidies might be used to increase R&D given these huge social returns. I would be really cautious here, since there is quite a bit of evidence that government-sponsored R&D generates a much lower private and social rate of return than other forms of R&D.

Final July 2013 Econometrica version (IDEAS version). Thumbs up to Nick Bloom for making the final version freely available on his website. The paper has an exhaustive appendix with technical details, as well as all of the data freely available for you to play with.

“Back to Basics: Basic Research Spillovers, Innovation Policy and Growth,” U. Akcigit, D. Hanley & N. Serrano-Velarde (2013)

Basic and applied research, you might imagine, differ in a particular manner: basic research has unexpected uses in a variety of future applied products (though it sometimes has immediate applications), while applied research is immediately exploitable but has fewer spillovers. An interesting empirical fact is that a substantial portion of firms report that they do basic research, though subject to a caveat I will mention at the end of this post. Further, you might imagine that basic and applied research are complements: success in basic research in a given area expands the size of the applied ideas pond which can be fished by firms looking for new applied inventions.

Akcigit, Hanley and Serrano-Velarde take these basic facts and, using some nice data from French firms, estimate a structural endogenous growth model with both basic and applied research. Firms hire scientists then put them to work on basic or applied research, where the basic research “increases the size of the pond” and occasionally is immediately useful in a product line. The government does “Ivory Tower” basic research which increases the size of the pond but which is never immediately applied. The authors give differential equations for this model along a balanced growth path, have the government perform research equal to .5% of GDP as in existing French data, and estimate the remaining structural parameters like innovation spillover rates, the mean “jump” in productivity from an innovation, etc.

The pretty obvious benefit of structural models as compared to estimating simple treatment effects is counterfactual analysis, particularly welfare calculations. (And if I may make an aside, the argument that structural models are too assumption-heavy and hence non-credible is nonsense. If the mapping from existing data to the actual questions of interest is straightforward, then surely we can write a straightforward model generating that external validity. If the mapping from existing data to the actual question of interest is difficult, then it is even more important to formally state what mapping you have in mind before giving policy advice. Just estimating a treatment effect off some particular dataset and essentially ignoring the question of external validity because you don’t want to take a stand on how it might operate makes me wonder why I, the policymaker, should take your treatment effect seriously in the first place. It seems to me that many in the profession already take this stance – Deaton, Heckman, Whinston and Nevo, and many others have published papers on exactly this methodological point – and therefore a decade from now, you will find it equally as tough to publish a paper that doesn’t take external validity seriously as it is to publish a paper with weak internal identification today.)

Back to the estimates: the parameters here suggest that the main distortion is not that firms perform too little R&D, but that they misallocate between basic and applied R&D; the basic R&D spills over to other firms by increasing the “size of the pond” for everybody, hence it is underperformed. This spillover, estimated from data, is of substantial quantitative importance. The problem, then, is that uniform subsidies like R&D tax credits will just increase total R&D without alleviating this misallocation. I think this is a really important result (and not only because I have a theory paper myself, coming at the question of innovation direction from the patent race literature rather than the endogenous growth literature, which generates essentially the same conclusion). What you really want to do to increase welfare is increase the amount of basic research performed. How to do this? Well, you could give heterogeneous subsidies to basic and applied research, but this would involve firms reporting correctly, which is a very difficult moral hazard problem. Alternatively, you could just do more research in academia, but if this is never immediately exploited, it is less useful than the basic research performed in industry which at least sometimes is used in products immediately (by assumption); shades of Aghion, Dewatripont and Stein (2008 RAND) here. Neither policy performs particularly well.

I have two small quibbles. First, basic research in the sense reported by national statistics following the Frascati manual is very different from basic research in the sense of “research that has spillovers”; there is a large literature on this problem, and it is particularly severe when it comes to service sector work and process innovation. Second, the authors suggest at one point that Bayh-Dole style university licensing of research is a beneficial policy: when academic basic research can now sometimes be immediately applied, we can easily target the optimal amount of basic research by increasing academic funding and allowing academics to license. But this prescription ignores the main complaint about Bayh-Dole, which is that academics begin, whether for personal or institutional reasons, to shift their work from high-spillover basic projects to low-spillover applied projects. That is, it is not obvious the moral hazard problem concerning targeting of subsidies is any easier at the academic level than at the private firm level. In any case, this paper is very interesting, and well worth a look.

September 2013 Working Paper (RePEc IDEAS version).

“X-Efficiency,” M. Perelman (2011)

Do people still read Leibenstein’s fascinating 1966 article “Allocative Efficiency vs. X-Efficiency”? They certainly did at one time: Perelman notes that in the 1970s, this article was the third-most cited paper in all of the social sciences! Leibenstein essentially made two points. First, as Harberger had previously shown, distortions like monopoly simply as a matter of mathematics can’t have large welfare impacts. Take monopoly, for instance. The deadweight loss is simply the change in price times the change in quantity supplied times .5 times the percentage of the economy run by monopolist firms. Under reasonable-looking demand curves, those deadweight triangles are rarely going to be even ten percent of the total social welfare created in a given industry. If, say, twenty percent of the final goods economy is run by monopolists, then we only get a two percent change in welfare (and this can be extended to intermediate goods with little empirical change in the final result). Why, then, worry about monopoly?

The reason to worry is Leibenstein’s second point: firms in the same industry often have enormous differences in productivity, and there is tons of empirical evidence that firms do a better job of minimizing costs when under the selection pressures of competition (Schmitz’ 2005 JPE on iron ore producers provides a fantastic demonstration of this). Hence “X-inefficiency” (which Perelman notes is named after Tolstoy’s “X-factor” in the performance of armies in War and Peace), and not just allocative inefficiency, may be important. Draw a simple supply-demand graph and you will immediately see that big “X-inefficiency rectangles” can swamp little Harberger deadweight loss triangles in their welfare implications. So far, so good. These claims, however, turned out to be incredibly controversial.
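
To make the triangles-versus-rectangles comparison concrete, here is a back-of-the-envelope sketch with illustrative numbers of my own choosing:

```python
# Back-of-the-envelope comparison of a Harberger deadweight-loss triangle and an
# X-inefficiency rectangle (illustrative numbers of my own: linear demand Q = 1 - P,
# constant marginal cost, a 10% markup, and a 10% unit-cost disadvantage).
c = 0.5                      # efficient marginal cost
p = c * 1.10                 # price with a 10% markup over cost
q = 1 - p                    # quantity sold at the marked-up price
q_comp = 1 - c               # quantity if priced at marginal cost

# Harberger triangle: 0.5 * (price change) * (quantity change).
dwl_triangle = 0.5 * (p - c) * (q_comp - q)

# X-inefficiency rectangle: 10% excess unit cost paid on every unit actually sold.
x_rectangle = (0.10 * c) * q

surplus_at_cost = 0.5 * (1 - c) * q_comp   # total surplus if priced at marginal cost
print(f"Harberger triangle:       {dwl_triangle:.4f} "
      f"({dwl_triangle / surplus_at_cost:.1%} of surplus)")
print(f"X-inefficiency rectangle: {x_rectangle:.4f} "
      f"({x_rectangle / surplus_at_cost:.1%} of surplus)")
# The rectangle comes out more than an order of magnitude larger than the triangle,
# which is the sense in which cost differences can swamp allocative deadweight loss.
```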

The problem is that just claiming waste is really a broad attack on a fundamental premise of economics, profit maximization. Stigler, in his well-named “X-istence of X-efficiency” (gated pdf), argues that we need to be really careful here. Essentially, he is suggesting that information differences, principal-agent contracting problems, and many other factors can explain dispersion in costs, and that we ought to focus on those factors before blaming some nebulous concept called waste. And of course he’s correct. But this immediately suggests a shift from traditional price theory to a mechanism design based view of competition, where manager and worker incentives interact with market structure to produce outcomes. I would suggest that this project is still incomplete, that the firm is still too much of a black box in our basic models, and that this leads to a lot of misleading intuition.

For instance, most economists will agree that perfectly price discriminating monopolists have the same welfare impact as perfect competition. But this intuition is solely based on black box firms without any investigation of how those two market structures affect the incentive for managers to collect costly information about efficiency improvements, the optimal labor contracts under the two scenarios, and so on. “Laziness” of workers is an equilibrium outcome of worker contracts, management monitoring, and worker disutility of effort. Just calling that “waste” as Leibenstein does is not terribly effective analysis. It strikes me, though, that Leibenstein is correct when he implicitly suggests that selection in the marketplace is more primitive than profit maximization: I don’t need to know much about how manager and worker incentives work to understand that more competition means inefficient firms are more likely to go out of business. Even in perfect competition, we need to be careful about assuming that selection automatically selects away bad firms: it is not at all obvious that the efficient firms can expand efficiently to steal business from the less efficient, as Chad Syverson has rigorously discussed.

So I’m with Perelman. Yes, Leibenstein’s evidence for X-inefficiency was weak, and yes, he conflates many constraints with pure waste. But on the basic points – that minimized costs depend on the interaction of incentives with market structure instead of simply on technology, and that heterogeneity in measured firm productivity is critical to economic analysis – Leibenstein is far more convincing than his critics. And while Syverson, Bloom, Griffith, van Reenen and many others are opening up the firm empirically to investigate the issues Leibenstein raised, there is still great scope for us theorists to more carefully integrate price theory and mechanism problems.

Final article in JEP 2011 (RePEc IDEAS). As always, a big thumbs up to the JEP for making all of their articles ungated and free to read.

On Coase’s Two Famous Theorems

Sad news today that Ronald Coase has passed away; he was still working, often on the Chinese economy, at the incredible age of 102. Coase is best known to economists for two statements: that transaction costs explain many puzzles in the organization of society, and that pricing for durable goods presents a particular worry since even a monopolist selling a durable good needs to “compete” with its future and past selves. Both of these statements are horribly, horribly misunderstood, particularly the first.

Let’s talk first about transaction costs, as in “The Nature of the Firm” and “The Problem of Social Cost”, which are to my knowledge the most cited and the second most cited papers in economics. The Problem of Social Cost leads with its famous cattle versus crops example. A farmer wishes to grow crops, and a rancher wishes his cattle to roam where the crops grow. Should we make the rancher liable for damage to the crops (or restrain the rancher from letting his cattle roam at all!), or indeed ought we restrain the farmer from building a fence where the cattle wish to roam? Coase points out that in some sense both parties are causally responsible for the externality, that there is some socially efficient amount of cattle grazing and crop planting, and that if a bargain can be reached costlessly, then there is some set of side payments where the rancher and the farmer are both better off than having the crops eaten or the cattle fenced. Further, it doesn’t matter whether you give grazing rights to the cattle and force the farmer to pay for the “right” to fence and grow crops, or whether you give farming rights and force the rancher to pay for the right to roam his cattle.

This basic principle applies widely in law, where Coase had his largest impact. He cites a case where confectioner machines shake a doctor’s office, making it impossible for the doctor to perform certain examinations. The court restricts the ability of the confectioner to use the machine. But Coase points out that if the value of the machine to the confectioner exceeds the harm of shaking to the doctor, then there is scope for a mutually beneficial side payment whereby the machine is used (at some level) and one or the other is compensated. A very powerful idea indeed.

Powerful, but widely misunderstood. I deliberately did not mention property rights above. Coase is often misunderstood (and, to be fair, he does at points in the essay imply this misunderstanding) as saying that property rights are important, because once we have property rights, we have something that can “be priced” when bargaining. Hence property rights + externalities + no transaction costs should lead to no inefficiency if side payments can be made. Dan Usher famously argued that this is “either tautological, incoherent, or wrong”. Costless bargaining is efficient tautologically; if I assume people can agree on socially efficient bargains, then of course they will. The fact that side payments can be agreed upon is true even when there are no property rights at all. Coase says that “[i]t is necessary to know whether the damaging business is liable or not for damage since without the establishment of this initial delimitation of rights there can be no market transactions to transfer and recombine them.” Usher is correct: that statement is wrong. In the absence of property rights, a bargain establishes a contract between parties with novel rights that needn’t exist ex-ante.

But all is not lost for Coase. The real point of his paper begins with Section VI, not before, where he notes that the case without transaction costs isn’t the interesting one. The interesting case is when transaction costs make bargaining difficult. What you should take from Coase is that social efficiency can be enhanced by institutions (including the firm!) which allow socially efficient bargains to be reached by removing restrictive transaction costs, and particularly that the assignment of property rights to different parties can either help or hinder those institutions. One more thing to keep in mind about the Coase Theorem (which Samuelson famously argued was not a theorem at all…): Coase implicitly is referring to Pareto efficiency in his theorem, but since property rights are an endowment, we know from the Welfare Theorems that “benefits exceed costs” is not sufficient for maximizing social welfare.

Let’s now consider the Coase Conjecture: this conjecture comes, I believe, from a very short 1972 paper, Durability and Monopoly. The idea is simple and clever. Let a monopolist own all of the land in the US. If there were a competitive market in land, the price per unit would be P and all Q units would be sold. Surely a monopolist will sell a reduced quantity Q2 less than Q at price P2 greater than P? But once those are sold, we are in trouble, since the monopolist still has Q-Q2 units of land. Unless the monopolist can commit to never sell that additional land, we all realize he will try to sell it sometime later, at a new maximizing price P3 which is greater than P but less than P2. He then still has some land left over, which he will sell even cheaper in the next period. Hence, why should anyone buy in the first period, knowing the price will fall (and note that the seller who discounts the future has the incentive to make the length between periods of price cutting arbitrarily short)? The monopolist with a durable good is thus unable to make rents. Now, Coase essentially never uses mathematical theorems in his papers, and you game theorists surely can see that there are many auxiliary assumptions about beliefs and the like running in the background here.
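
Here is a minimal two-period version of that logic, assuming buyer valuations uniform on [0,1], zero production cost, and a common discount factor; this is my own construction, and the conjecture proper concerns the limit as the time between price revisions shrinks:

```python
# Two-period durable-good monopoly sketch (my construction, not Coase's): buyer
# valuations uniform on [0,1], zero cost, common discount factor delta. Buyers with
# valuation above a cutoff k buy in period 1; the seller cannot commit, so in
# period 2 it prices against the residual demand below k.
import numpy as np

delta = 0.8

def profit(p1):
    # Marginal first-period buyer k is indifferent: k - p1 = delta * (k - p2),
    # where p2 = k / 2 is the seller's best response to residual demand on [0, k].
    k = min(p1 / (1 - delta / 2), 1.0)
    p2 = k / 2
    period1 = (1 - k) * p1          # mass (1 - k) buys now at p1
    period2 = (k - p2) * p2         # mass (k - p2) buys later at p2
    return period1 + delta * period2, k, p2

grid = np.linspace(0.01, 0.99, 981)
best_p1 = max(grid, key=lambda p: profit(p)[0])
pi, k, p2 = profit(best_p1)

print(f"first-period price  {best_p1:.3f}")
print(f"second-period price {p2:.3f}   (the price falls over time)")
print(f"profit {pi:.3f} versus 0.25 if the seller could commit to a single price")
```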

Luckily, given the importance of this conjecture to pricing strategies, antitrust, auctions, etc., there has been a ton of work on the problem since 1972. Nancy Stokey (article gated) has a famous paper written here at MEDS showing that the conjecture only holds strictly when the seller is capable of selling in continuous time and the buyers are updating beliefs continuously, though approximate versions of the conjecture hold when periods are discrete. Gul, Sonnenschein and Wilson flesh out the model more completely, generally showing the conjecture to hold in well-defined stationary equilibrium across various assumptions about the demand curve. McAfee and Wiseman show in a recent ReStud that even the tiniest amount of “capacity cost”, or a fee that must be paid in any period for X amount of capacity (i.e., the need to hire sales agents for the land), destroys the Coase reasoning. The idea is that in the final few periods, when I am selling to very few people, even a small capacity cost is large relative to the size of the market, so I won’t pay it; backward inducting, then, agents in previous periods know it is not necessarily worthwhile to wait, and hence they buy earlier at the higher price. It goes without saying that there are many more papers in the formal literature.

(Some final notes: Coase’s Nobel lecture is well worth reading, as it summarizes the most important thread in his work: “there [are] costs of using the pricing mechanism.” It is these costs that explain why, though markets in general have such amazing features, even in capitalist countries there are large firms run internally as something resembling a command state. McCloskey has a nice brief article which generally blames Stigler for the misunderstanding of Coase’s work. Also, while gathering some PDFs for this article, I was shocked to see that Ithaka, who run JSTOR, is now filing DMCA takedowns with Google against people who host some of these legendary papers (like “Problem of Social Cost”) on their academic websites. What ridiculousness from a non-profit that claims its mission is to “help the academic community use digital technologies to preserve the scholarly record.”)
