“Reviews, Reputation and Revenue: The Case of Yelp.com,” M. Luca (2010)

I’m doing some work related to social learning, and a friend passed along the present paper by a recent job market candidate. It’s quite clever, and a great use of the wealth of data now available to the empirically-minded economist.

Here’s the question: there are tons of ways products, stores and restaurants develop reputation. One of these ways is reviews. How important is that extra Michelin star, or higher Zagat rating, or better word of mouth? And how could we ever separate the effect of reputation from the underlying quality of the restaurant?

Luca scrapes restaurant review data from Yelp, which really began penetrating Seattle in 2005; Yelp data is great because it includes review dates, so you can go back in time and reconstruct, with some error due to deleted reviews, what the review profile used to look like. Luca also has, incredibly, 7 years of restaurant revenue data from the city of Seattle. Just put the two together and you can track how restaurant reviews are correlated with revenue.

But what of causality? Here’s the clever bit. He notes that Yelp aggregates reviews into a star rating, rounded to the nearest half star. So a restaurant with average review 3.24 gets 3 stars, and one with average 3.25 gets 3.5 stars. Since no one actually reads all 200-odd reviews of a given restaurant, the star rating can be said to represent reputation, while the actual review average is the underlying restaurant quality. It’s 2011, so this calls for some regression discontinuity (apparently, some grad students at Harvard call the empirical publication gatekeepers “the identification Taliban”; at least the present paper gets the internal validity right and doesn’t seem to have too many interpretive problems with external validity).
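The rounding rule is simple enough to sketch in a few lines. This is my reconstruction of the mechanism as described above, not Yelp’s actual code: average scores are rounded to the nearest half star, with halves rounding up, so restaurants on either side of a 0.01-wide gap in quality display star ratings half a star apart.

```python
import math

def displayed_stars(avg_rating):
    """Round an average review score to the nearest half star,
    with halves rounding up (so 3.25 -> 3.5 but 3.24 -> 3.0)."""
    return math.floor(avg_rating * 2 + 0.5) / 2

# Two restaurants of essentially identical quality straddle the cutoff:
print(displayed_stars(3.24))  # 3.0 stars displayed
print(displayed_stars(3.25))  # 3.5 stars displayed
```

Note that `math.floor(x * 2 + 0.5)` is used rather than Python’s built-in `round`, which rounds halves to even and would send 3.25 to 3.0.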

Holding underlying quality constant, the discontinuous jump of a half star is worth a 4.5% increase in revenue in the relevant quarter. This is large, but not crazy: effects of a similar magnitude have been found in recent work for moving from a “B” to an “A” sanitary score, or for changes in calorie consumption after calorie information was posted in New York City. The effect is close to zero for chain restaurants – one way this might be interpreted is that no one Yelps restaurants they are already familiar with. I would have liked to see some sort of demographic check here also: is the “Yelp effect” stronger in neighborhoods with younger, more internet-savvy consumers, as you might expect? Also, you may wonder whether there is manipulation by restaurant owners, given the large gains from a tiny jump in star rating. A quick and dirty distributional check doesn’t find any problem with manipulation, but that may change after this paper gets published!
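For readers unfamiliar with the design, here is a toy version of the regression discontinuity estimate on simulated data – emphatically not the paper’s actual specification or data, just a sketch of the idea: log revenue trends smoothly in the underlying review average, but jumps at the rounding cutoff, and a local linear fit on each side recovers the jump.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-generating process: running variable is the average
# review score, cutoff at 3.25 (where the displayed stars jump by half),
# and a true discontinuity of 0.045 (~4.5%) in log revenue.
n = 5000
cutoff, true_jump = 3.25, 0.045
score = rng.uniform(2.75, 3.75, n)
log_rev = (0.2 * (score - cutoff)            # smooth quality effect
           + true_jump * (score >= cutoff)   # the reputation jump
           + rng.normal(0, 0.05, n))         # noise

# Local linear regression on each side of the cutoff, bandwidth 0.25.
h = 0.25
left = (score >= cutoff - h) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + h)
fit_left = np.polyfit(score[left] - cutoff, log_rev[left], 1)
fit_right = np.polyfit(score[right] - cutoff, log_rev[right], 1)

# RD estimate: difference in the fitted intercepts at the cutoff.
rd_estimate = fit_right[1] - fit_left[1]
print(f"estimated jump: {rd_estimate:.3f}")
```

With enough observations near the cutoff, the estimated jump lands close to the true 0.045; the identifying assumption is exactly the one Luca exploits, namely that restaurants just below and just above the rounding threshold are otherwise comparable.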

You may also be wondering why reputation matters at all: why don’t I just go to a good restaurant? The answer is social learning plus costs of experimentation. The paper I’m working on now follows this line of thought toward what I think is a rather surprising policy implication: more on this at a future date.

http://people.bu.edu/mluca/JMP.pdf (Working paper version – Luca was hired at HBS, so savvy use of a great dataset pays off!)


2 thoughts on ““Reviews, Reputation and Revenue: The Case of Yelp.com,” M. Luca (2010)”

  1. Dalek says:

Nice review. I agree it is a fairly straightforward paper once you put 2 and 2 together as Luca did — but I think it’s hard to see what theoretical debate he’s settling. The framing is “unclear whether consumer reviews will significantly affect markets for experience goods”, but that seems like a stretch. I’m afraid this must go in the “interesting but not important” bin. Any guesses on where this might end up being published?

    • afinetheorem says:

      Actually, I’m not so down on the results. *Even if* there is no external validity at all, the question “How important are Yelp reviews for small businesses” is actually an important one in and of itself. I’ve usually got pretty good spies to find out where papers wind up under submission, but I don’t know for this paper. If I were reviewing it for something like AEJ: Applied, though, I would recommend acceptance subject to a few caveats. It’s a nice paper.

      That said, I agree totally that you can do tons more with the type of data he uses. I’ll post (probably next year) once I have a draft of the project I’m working on, but it will be up your alley: very similar data, a nice (pretty advanced, I think) theoretical model, with clear, externally valid, policy implications. You never know how a project will work out, but I’m excited about this one.

