I’m doing some work related to social learning, and a friend passed along the present paper by a recent job market candidate. It’s quite clever, and a great use of the wealth of data now available to the empirically-minded economist.
Here’s the question: there are tons of ways products, stores, and restaurants develop reputations. One of them is reviews. How important is that extra Michelin star, or higher Zagat rating, or better word of mouth? And how could we ever separate the effect of reputation from the underlying quality of the restaurant?
Luca scrapes restaurant review data from Yelp, which really began penetrating Seattle in 2005; Yelp data is great because it includes review dates, so you can go back in time and reconstruct, with some error due to deleted reviews, what the review profile used to look like. Luca also has, incredibly, 7 years of restaurant revenue data from the city of Seattle. Just put the two together and you can track how restaurant reviews are correlated with revenue.
But what of causality? Here’s the clever bit. He notes that Yelp aggregates reviews into a star rating by rounding the review average to the nearest half star: a restaurant with an average review of 3.24 gets 3 stars, while one with 3.25 gets 3.5 stars. Since no one actually reads all 200-odd reviews of a given restaurant, the star rating can be taken to represent reputation, while the actual review average represents underlying restaurant quality. It’s 2011, so this calls for some regression discontinuity (apparently, some grad students at Harvard call the empirical publication gatekeepers “the identification Taliban”; at least the present paper gets the internal validity right and doesn’t seem to have too many interpretive problems with external validity).
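The rounding rule is simple enough to sketch. Here is a minimal version, with the tie-breaking direction inferred from the 3.24/3.25 example above (I haven’t checked Yelp’s actual implementation; this is just an illustration of the discontinuity that drives the design):

```python
import math

def yelp_stars(avg):
    """Round an average review score to the nearest half star,
    with the boundary case rounding up: 3.24 -> 3.0, 3.25 -> 3.5.
    This is an illustrative guess at the rule, not Yelp's code."""
    return math.floor(avg * 2 + 0.5) / 2
```

Two restaurants with nearly identical review averages (3.24 vs. 3.25) thus display different star ratings, which is exactly the kind of arbitrary cutoff a regression discontinuity design exploits.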
Holding underlying quality constant, the discontinuous jump of a half star is worth a 4.5% increase in revenue in the relevant quarter. This is large, but not crazy: similar magnitudes have been found in recent work on moving from a “B” to an “A” sanitary score, and on changes in calorie consumption after calorie information was posted in New York City. The effect is close to zero for chain restaurants – one interpretation is that no one Yelps restaurants they are already familiar with. I would have liked to see some sort of demographic check here as well: is the “Yelp effect” stronger in neighborhoods with younger, more internet-savvy consumers, as you might expect? You may also wonder whether restaurant owners manipulate their ratings, given the large gains from a tiny jump in the star rating. A quick-and-dirty distributional check doesn’t turn up any evidence of manipulation, but that may change after this paper gets published!
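The logic of such a distributional check is worth spelling out: if owners were gaming their ratings, review averages would pile up just above the rounding cutoffs. A quick-and-dirty version might look like the following (a hypothetical helper, not the paper’s actual test; a McCrary-style density test is the formal analogue):

```python
def manipulation_check(averages, bandwidth=0.05):
    """Count restaurants whose review average falls just below vs.
    just above a half-star rounding threshold (x.25 or x.75).
    Absent manipulation the two counts should be similar; a pile-up
    just above the cutoffs would be a red flag."""
    thresholds = [s + 0.25 for s in (1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5)]
    below = above = 0
    for avg in averages:
        for t in thresholds:
            if t - bandwidth <= avg < t:
                below += 1
            elif t <= avg < t + bandwidth:
                above += 1
    return below, above
```

On made-up data, `manipulation_check([3.22, 3.24, 3.26])` returns `(2, 1)` – two averages sit just under the 3.25 cutoff and one just over it; with real data and no manipulation you would expect the counts to be roughly balanced.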
You may also be wondering why reputation matters at all: why don’t I just go to a good restaurant? The answer is social learning plus costs of experimentation. The paper I’m working on now follows this line of thought toward what I think is a rather surprising policy implication: more on this at a future date.
http://people.bu.edu/mluca/JMP.pdf (Working paper version – Luca was hired at HBS, so savvy use of a great dataset pays off!)