If you know this paper, you know it as the “Apple-Cinnamon Cheerios” paper. The question is the following: how valuable are seemingly incremental introductions of new products, such as new cereal brands? And if they are valuable, is the CPI, by aggregating prices at too high a level, missing significant welfare gains from new goods?
Consider Apple-Cinnamon Cheerios. To calculate the welfare effects of ACC, we can follow Hicks and find the “virtual price”: the price at which demand for the good would be exactly zero. With that in hand, we can perform the usual price-index calculations as demand increases and price falls. That is, consider a product introduced in 1990. Give me the demand in 2000 at 2000 prices and incomes. Use Hausman (1981), or Hausman-Newey (1995) if you’re really good at differential equations, to find the expenditure function giving the minimum income necessary to reach 2000 utility at 2000 prices. With that expenditure function in hand, calculate the spending necessary to get 2000 utility at 1990 prices, where the price of the new good is its virtual price.
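To make the mechanics concrete, here is a minimal numerical sketch of the Hausman (1981) idea with a made-up one-good linear demand. The functional form and all parameters are illustrative assumptions, not anything from the paper:

```python
# Sketch of the Hausman (1981) construction, assuming a made-up linear
# Marshallian demand x(p, y) = a - b*p + c*y for a single good.
# The expenditure function solves the ODE e'(p) = x(p, e(p)); integrating
# from the observed price up to the virtual price gives the income needed
# to hold utility fixed while the good is priced out of the market.
from scipy.integrate import solve_ivp

a, b, c = 10.0, 2.0, 0.01        # illustrative demand parameters
p0, y0 = 2.0, 100.0              # observed price and income

def demand(p, y):
    return max(a - b * p + c * y, 0.0)

# Virtual price at current income (an approximation: strictly one should
# use compensated income along the integration path).
p_virtual = (a + c * y0) / b

# Integrate the expenditure function upward in price.
sol = solve_ivp(lambda p, e: [demand(p, e[0])], (p0, p_virtual), [y0],
                rtol=1e-8)
e_at_virtual = sol.y[0, -1]

# Compensating variation from the good's existence: extra income needed
# to reach today's utility if the good disappeared from the market.
cv = e_at_virtual - y0
print(f"virtual price ~ {p_virtual:.2f}, CV ~ {cv:.2f}")
```

With these toy numbers, the welfare gain is roughly the consumer-surplus triangle between the observed price and the virtual price, plus a small income-effect correction picked up by the ODE.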
That’s all well and good, but three big issues arise. First, calculating elasticities of substitution across hundreds of cereal brands, for instance, is asking a lot of the data, and some utility forms like Dixit-Stiglitz do not allow the arbitrary patterns of substitutability we might want. Second, we generally don’t see any price in the data near the virtual price, so we’re going to have to make some assumptions about the shape of the demand curve up there; Hausman uses the nice trick of also showing lower bounds for the virtual price assuming only that demand is convex. Third, pricing strategies under imperfect competition with multiproduct firms are very different from competitive pricing, since a new good can either cannibalize demand for the firm’s other products (say, regular Cheerios) or allow the firm to raise prices on other products if the new good makes those products’ demand curves steeper (say, Honey Nut Cheerios). Assuming competitive market pricing for products other than the new goods might overstate welfare gains for this reason. The section on imperfect competition is rather brief and not totally convincing, so I won’t discuss it further, but it’s definitely an important problem!
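The convexity bound in particular is easy to see with a toy calculation (the numbers are made up; only the logic is Hausman’s):

```python
# If demand is convex in price, it lies above its tangent at the observed
# point (p0, q0), so a linear extrapolation with local slope -b gives a
# lower bound on both the virtual price and consumer surplus.
p0, q0, b = 2.0, 7.0, 2.0        # made-up observed price, quantity, slope

p_virtual_lb = p0 + q0 / b       # linear demand hits zero here
cs_lb = 0.5 * q0 * (q0 / b)      # tangent triangle: CS lower bound

print(p_virtual_lb, cs_lb)
```

The true demand curve could stay positive well past the linear intercept, so the real virtual price and welfare gain can only be larger.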
Hausman, unsurprisingly given his background, breaks out some nice econometric wizardry to try to get around these problems. He essentially assumes Gorman-style multi-stage budgeting on the part of the consumer, so that substitution occurs within known classes of products – in the case of Apple-Cinnamon Cheerios, substitution only directly occurs among other “family” cereals, though of course family cereals may substitute for children’s cereals, and cereals as a whole are still allowed to substitute for other products. He assumes lowest-level demand takes Deaton and Muellbauer’s Almost Ideal Demand System form, which allows arbitrary cross-price elasticities.
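For reference, the AIDS expenditure-share equations take the familiar form (sketched from Deaton and Muellbauer (1980), not transcribed from Hausman’s paper):

```latex
% Share of brand i in segment expenditure x, with segment price index P:
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{x}{P}\right)
```

The unrestricted cross-price terms γ_ij are what deliver the arbitrary substitution patterns within a segment.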
Using 137 weeks of cash register data from 7 cities, he identifies the elasticities. The obvious problem here – indeed, the reason the term “econometrics” exists – is the simultaneity of supply and demand. That is, prices are endogenous. Hausman’s identification assumption is that though shocks to prices can occur over time within a city (say, ad campaigns) or across cities (say, differences in transport costs), there are no nationwide demand shocks at a given time. The argument is something like “cereal ad campaigns are generally not national.” In the response by Tim Bresnahan that follows in the pdf I link to below, Tim notes that if this assumption is wrong, the estimates in the paper are biased toward too-steep demand curves, and hence toward too big an effect. I heard through the grapevine that Bresnahan was not, to understate things, terribly enthused with this paper when it was first presented.
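A stylized version of the identifying idea, on simulated data: the price of the same brand in another city picks up common cost shocks but – if there are no national demand shocks – not the own-city demand shock, so it can serve as an instrument. Everything here (the one-good setup, the parameters) is an illustrative assumption, not Hausman’s actual specification:

```python
# Simulated two-city example of Hausman-style instruments, via manual 2SLS.
import numpy as np

rng = np.random.default_rng(0)
T, beta_true = 137, -2.0                 # weeks of data; true price coefficient

cost = rng.normal(0, 1, T)               # common cost (supply) shock
d1 = rng.normal(0, 1, T)                 # city-1 demand shock
d2 = rng.normal(0, 1, T)                 # city-2 demand shock

# Prices respond to costs and to the own-city demand shock (endogeneity).
p1 = 1.0 + 0.8 * cost + 0.5 * d1 + rng.normal(0, 0.1, T)
p2 = 1.0 + 0.8 * cost + 0.5 * d2 + rng.normal(0, 0.1, T)
q1 = 3.0 + beta_true * p1 + d1           # demand in city 1

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased toward zero because p1 is correlated with d1.
b_ols = ols(q1, np.column_stack([np.ones(T), p1]))[1]

# 2SLS: project p1 on the other-city price p2 (correlated with p1 only
# through the common cost shock), then regress q1 on the fitted values.
p1_hat = np.column_stack([np.ones(T), p2]) @ ols(p1, np.column_stack([np.ones(T), p2]))
b_iv = ols(q1, np.column_stack([np.ones(T), p1_hat]))[1]

print(f"OLS {b_ols:.2f} vs IV {b_iv:.2f} (true {beta_true})")
```

If a national demand shock were added to both d1 and d2, the instrument would be contaminated and the IV estimate would inherit exactly the bias Bresnahan worries about.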
In any case, if you buy the methodology, the introduction of Apple-Cinnamon Cheerios increased consumer welfare by $78 million per year. The virtual price was approximately double the sales price (note that this is huge: it means that for many consumers there was very little substitutability between Apple-Cinnamon Cheerios and pre-existing products). Since 25% of cereal demand when Hausman wrote was from new brands introduced in the prior ten years, if the virtual price was generally about twice the sales price, the price index for cereals should have gone down by roughly 25%. That is, for new cereal brands, the “price” fell by half, and for old brands there was no change.
Finally, a couple notes. First, Hausman has a great footnote on the common technique of estimating demands for individual attributes of a product independently and then summing them to find demand for a new product: “I realized the limitation of these models when I tried applying them to the choices among French champagnes. Somehow, the bubble content could never be made to come in significant.” Personally, I have no idea why people do probit-attribute estimation: what could possibly be the economic theory justification for doing so, particularly when linearity among attribute demands is assumed, as it usually is? Second, if you like this paper, you will definitely like Trajtenberg’s seminal 1990 book on CT scanners, as well as Aviv Nevo’s follow-up Econometrica on the cereal industry. Third, Greenwood and Kopecky have a nice new working paper on the value of the personal computer. They estimate parameters of the demand function through a calibration technique, which obviously is going to paper over the endogeneity of prices problem. That said, I’m working on a paper now where I’m calculating welfare effects from a product innovation, and the effects involve too many moving parts for economic theory to guide me when it comes to identification. What do you guys think of calibration exercises of the type done in Greenwood and Kopecky? I think my paper is a little easier because the supply side is more or less exogenous and because I’m doing a very-limited-assumption “increased/decreased consumer welfare” exercise in addition to a point estimate, so missing standard errors on the point estimate is probably not a big deal. That said, any general comments on calibration?
http://bpp.wharton.upenn.edu/ma….0Comment.pdf (Final NBER New Goods volume version, including Bresnahan’s comments. In dark parts of the internet, you can find some overly-personal back-and-forths by Hausman and Bresnahan that followed the publication of this volume.)
An excellent review of Hausman’s work. Greenwood and Kopecky solve the primal problem for the consumer; Hausman takes the dual approach. Greenwood and Kopecky make the economics of the new goods problem very transparent: how do you evaluate a person’s utility when you don’t have the new good? Hausman’s approach, which relies on the mapping from a demand curve to the expenditure function, admits utility functions that don’t have a closed-form solution. So, it may be more general. But for general demand curves it will not be easy to work with. That is, if you use a complicated demand equation then calculating the welfare gain for a product will not be easy, because you will have to solve a differential equation numerically. The simple linear demand equation used by Hausman may not be good, for reasons implicit in Greenwood and Kopecky.
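For what it’s worth, the linear case alluded to here does admit a closed form – this is the Hausman (1981) calculation, reproduced from memory, so treat it as a sketch. With Marshallian demand x(p, y) = a + bp + cy, the differential equation de/dp = x(p, e(p)) is linear in e and solves to

```latex
e(p) = K e^{c p} - \frac{a + b p}{c} - \frac{b}{c^{2}},
\qquad
K = \left( y_0 + \frac{a + b p_0}{c} + \frac{b}{c^{2}} \right) e^{-c p_0},
```

with K pinned down by the initial condition e(p_0) = y_0. Anything much richer than linear demand generally does require numerical integration.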
The calibration issue is a red herring. You could structurally estimate either a Greenwood/Kopecky model or a Hausman model. When the price of a new good falls dramatically after its introduction, it seems reasonable to assume that this is due to process innovation, or a shift in the supply curve. Therefore, the endogeneity of the price is probably not much of an issue for computers. How else could you explain a 15 to 25% annual price decline sustained over a 25-year period? If you think it is an issue, then you should estimate a model which allows for strategic interaction between firms, or builds in whatever else you have in mind. It’s unclear that these things are well controlled for in other people’s papers in the new goods literature.