Category Archives: Group Effects

“Collaborating,” A. Bonatti & J. Horner (2011)

(Apologies for the long delay since the last post. I’ve been in that tiniest of Southeast Asian backwaters, East Timor, talking to UN and NGO folks about how the new democracy is coming along. The old rule of thumb is that you need 25 years of free and fair elections before society consolidates a democracy, but we still have a lot to learn about how that process takes place. I have some theoretical ideas about how to avoid cozy/corrupt links between government ministers and the private sector in these unconsolidated democracies, and I wanted to get some anecdotes which might guide that theory. And in case you’re wondering: I would give pretty high odds that, for a variety of reasons, the Timorese economy is going absolutely nowhere fast. Now back to the usual new research summaries…)

Teamwork is essential, you’re told from kindergarten on. But teamwork presents a massive moral hazard problem: how do I make sure the other guy does his share? In the static setting, Alchian-Demsetz (1972) and a series of papers by Holmstrom (May He Win His Deserved Nobel) long ago discussed why people free ride when their effort is hidden, and what contracts can be written to avoid this problem. Bonatti and Horner make the problem dynamic and, with a few pretty standard tricks from optimal control, develop some truly counterintuitive results.

The problem is the following. N agents work on a project which is “good” with probability p. Agents exert costly effort continuously over time. Depending on the effort exerted at any given time, a breakthrough occurs with some probability if the project is good, but never occurs if the project is bad. Over time, given effort along the equilibrium path, agents become more and more pessimistic about the project being good if no breakthrough occurs. The future is discounted. Agents observe only their own effort choice (but have correct beliefs about the effort of others in equilibrium). This means that off the equilibrium path, beliefs are not common knowledge: if I deviate and work harder now, and no breakthrough occurs, then I am more pessimistic than others about the goodness of the project, since I know, and they don’t, that a higher level of effort was put in.
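The belief dynamics are easy to see numerically. Here is a minimal sketch assuming the standard exponential-bandit specification (conditional on a good project, a breakthrough arrives at hazard rate lambda times aggregate effort); the particular numbers are mine, not the paper’s:

```python
import numpy as np

def posterior(p0, lam, total_effort, t):
    """Belief that the project is good after t units of time with no
    breakthrough, holding aggregate effort constant. Conditional on a
    good project, no breakthrough by t has probability exp(-lam*u*t)."""
    survive = np.exp(-lam * total_effort * t)
    return p0 * survive / (p0 * survive + (1 - p0))

# On path, everyone updates using the same conjectured effort...
print(posterior(0.5, 1.0, 1.0, 2.0))  # ~0.12
# ...but a secret hard worker is strictly more pessimistic:
print(posterior(0.5, 1.0, 1.5, 2.0))  # ~0.05
```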

In this setting, not only do agents shirk (hoping the other agents will pick up the slack), but they also procrastinate. Imagine a two-period world: I can shift some effort to period 2, in the hope that the other agent’s period 1 effort will lead to a success. I don’t want to work extremely hard in period 1 if that effort is wasted because my teammate has already solved the problem. Note that this procrastination motive is absent when the team is of size 1: you need a coauthor to justify your slacking! Better monitoring here does not help, surprisingly. If I can see how much effort my teammate puts in each period, then what happens? If I decrease my period 1 effort, and this is observable by both agents, then my teammate will not be so pessimistic about the success of the project in period 2. Hence, she will work harder in period 2. Hence, each agent has an incentive to work less in period 1 vis-a-vis the hidden action case. (Of course, you may wonder why this is an equilibrium; that is, why doesn’t the teammate play grim trigger and punish me for shirking? It turns out there are a number of reasonable equilibria in the case with observable actions, some of which give higher welfare and some of which give lower welfare than under hidden action. The point is just that allowing observability doesn’t necessarily help things.)
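A toy two-period version makes the timing point concrete. The functional forms and numbers below are my own stylized choices, not the paper’s continuous-time model: period-2 effort is only expended if period 1 failed, which is exactly what makes back-loading attractive, and frontloaded effort falls as the teammate’s period-1 success probability rises.

```python
import itertools

def payoff(e1, e2, q1, delta=0.9):
    """My expected payoff: teammate solves in period 1 with prob q1, my
    period-t effort e_t succeeds with prob e_t, effort costs are quadratic,
    and period-2 cost is incurred only if period 1 produced no success."""
    p1 = q1 + (1 - q1) * e1  # prob the problem is solved by end of period 1
    return (p1 + (1 - p1) * delta * e2
            - 0.5 * e1 ** 2
            - (1 - p1) * delta * 0.5 * e2 ** 2)

grid = [x / 100 for x in range(101)]
for q1 in (0.0, 0.4):
    e1, e2 = max(itertools.product(grid, grid), key=lambda e: payoff(*e, q1))
    print(q1, e1, e2)  # period-1 effort falls from 0.55 to 0.33 as q1 rises
```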

So what have we learned? Three things in particular. First, work in teams gives extra incentive to procrastinate compared to solo work. Second, this means that setting binding deadlines can be welfare improving; the authors further show that the larger the team, the tighter the deadline necessary. Third, letting teams observe how hard the other is working is not necessarily optimal. Surely observability by a principal would be welfare-enhancing – the contract could be designed to look like dynamic Holmstrom – but observability between the agents is not necessarily so. Interesting stuff.

http://cowles.econ.yale.edu/P/cd/d16b/d1695.pdf (Final Cowles Foundation WP – paper published in April 2011 AER)

“Reviews, Reputation and Revenue: The Case of Yelp.com,” M. Luca (2010)

I’m doing some work related to social learning, and a friend passed along the present paper by a recent job market candidate. It’s quite clever, and a great use of the wealth of data now available to the empirically-minded economist.

Here’s the question: there are tons of ways products, stores and restaurants develop reputation. One of these ways is reviews. How important is that extra Michelin star, or higher Zagat rating, or better word of mouth? And how could we ever separate the effect of reputation from the underlying quality of the restaurant?

Luca scrapes restaurant review data from Yelp, which really began penetrating Seattle in 2005; Yelp data is great because it includes review dates, so you can go back in time and reconstruct, with some error due to deleted reviews, what the review profile used to look like. Luca also has, incredibly, 7 years of restaurant revenue data from the city of Seattle. Just put the two together and you can track how restaurant reviews are correlated with revenue.

But what of causality? Here’s the clever bit. He notes that Yelp aggregates reviews into a star rating. So a restaurant with average review 3.24 gets 3 stars, and one with 3.25 gets 3.5 stars. Since no one actually reads all 200-odd reviews of a given restaurant, the star rating can be said to represent reputation, while the actual review average is the underlying restaurant quality. It’s 2011, so this calls for some regression discontinuity (apparently, some grad students at Harvard call the empirical publication gatekeepers “the identification Taliban”; at least the present paper gets the internal validity right and doesn’t seem to have too many interpretive problems with external validity).
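In regression terms, the design looks roughly like the sketch below (file and column names are hypothetical, and the paper’s actual specification is richer, with restaurant fixed effects): the displayed rating jumps at the .25/.75 rounding cutoffs while the underlying review average moves smoothly, so the coefficient on stars is identified off those jumps.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("yelp_seattle_panel.csv")  # hypothetical restaurant-quarter panel
# Yelp rounds the raw average to the nearest half star (round half up,
# so 3.24 -> 3.0 but 3.25 -> 3.5):
df["stars"] = np.floor(df["avg_rating"] * 2 + 0.5) / 2

# Underlying quality enters smoothly; only the displayed rating jumps.
m = smf.ols("log_revenue ~ stars + avg_rating + C(quarter)", data=df).fit()
print(m.params["stars"])  # ~0.09 per star would match 4.5% per half star
```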

Holding underlying quality constant, the discontinuous jump of a half star is worth a 4.5% increase in revenue in the relevant quarter. This is large, but not crazy: similar gains have been found in recent work for moving from “B” to “A” in sanitary score, or for changes in consumption after calorie information was posted in New York City. The effect is close to zero for chain stores – one way to interpret this is that no one Yelps restaurants they are already familiar with. I would have liked to see some sort of demographic check here as well: is the “Yelp effect” stronger in neighborhoods with younger, more internet-savvy consumers, as you might expect? Also, you may wonder whether restaurant owners manipulate their ratings, given the large gains from a tiny jump in star rating. A quick and dirty distributional check doesn’t find any evidence of manipulation, but that may change after this paper gets published!

You may also be wondering why reputation matters at all: why don’t I just go to a good restaurant? The answer is social learning plus costs of experimentation. The paper I’m working on now follows this line of thought toward what I think is a rather surprising policy implication: more on this at a future date.

http://people.bu.edu/mluca/JMP.pdf (Working paper version – Luca was hired at HBS, so savvy use of a great dataset pays off!)

“Who Will Monitor the Monitor?,” D. Rahman (2010)

In any organization, individuals can shirk by taking advantage of the fact that their actions are private; only a stochastic signal of effort can be observed, for instance. Because of this, firms and governments hire monitors to watch, imperfectly, what workers are doing, and to punish the workers if it is believed that they are taking actions contrary to what the bosses desire. Even if the monitor observes signals that are not available to the bosses, as long as observation is free, the monitor has no incentive to lie. But what if monitoring is costly? How can we ensure the monitor has the right incentives to do his job? That is, who shall monitor the monitor? The answer, clearly, isn’t a third level of monitors, since this just pushes the problem back one more level.

In a very interesting new paper, David Rahman extends the group incentives of Holmstrom (who should share the next Nobel with Milgrom; it’s nuts that neither has won yet!). The idea of group incentives is simple, and it works when the monitor’s statements are verifiable. Say it costs 1 to monitor and the agent’s disutility from work is also 1. The principal doesn’t mind an equilibrium of (monitor, work), but better would be the equilibrium (don’t monitor, work), since then I don’t need to pay a monitor to watch my workers. The worker will just shirk if no one watches him, though. Group penalties fix this. Tell the monitor to check only one percent of the time. If he reports (verifiably) that the worker shirked, nobody gets paid. If he reports (verifiably) that the worker worked, the monitor gets $1.02 and the worker gets $100. By increasing the payment to the worker for “good news”, the firm can get arbitrarily close to the payoffs from the “never monitor, work” equilibrium.
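The worker’s incentive constraint in that example is a two-line check, using the numbers above:

```python
# Group penalties with verifiable reports: the monitor checks with
# probability q; the worker is paid W unless a verifiable "shirked"
# report arrives; effort costs 1.
q, W = 0.01, 100.0

work = W - 1.0        # never reported as shirking, minus effort cost
shirk = (1 - q) * W   # caught (and unpaid) only when actually checked
print(work >= shirk)  # True: 99 >= 99, so working is (weakly) optimal;
                      # a slightly higher W makes the constraint strict.
```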

That’s all well and good, but what about when the monitor’s reports are not verifiable? In that case, the monitor would never actually check but would just report that the worker worked, and the worker would always shirk. We can use the same idea as in Holmstrom, though, and sometimes ask the worker to shirk. Keep the group penalties, but pay only when the monitor’s report matches the action the worker was recommended to take – that is, pay for “monitor/shirk” when shirking was recommended and “monitor/work” when work was recommended. For the same reason as in the example above, the frequency of monitoring and of recommended shirking can both be made arbitrarily small, with the contract still incentive compatible (assuming risk neutrality, of course).
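A similarly minimal check of the monitor’s side in the unverifiable case (numbers mine, for illustration): if the worker is secretly told to shirk with probability s and the monitor is paid M only when his report matches the recommendation, actually checking beats blind guessing whenever s*M exceeds the monitoring cost.

```python
# Unverifiable reports: the principal secretly recommends "shirk" with
# probability s and pays the monitor M only if his report matches the
# recommendation. Checking costs 1.
s, M = 0.05, 25.0

check = M - 1.0       # a genuine check always matches the recommendation
guess = (1 - s) * M   # best blind report is the likelier "work"
print(check >= guess) # True: 24 >= 23.75, i.e. s * M >= checking cost,
                      # and s can shrink as M grows, keeping IC intact.
```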

More generally, a nice use of the Minimax theorem shows that we check for deviations from the bosses’ recommended actions for the monitor and the agent one by one – that is, we needn’t check for all deviations simultaneously. So-called “detectable” deviations are shut down by contracts like the one in the example above. Undetectable deviations by the monitor still fulfill the monitoring role – by virtue of being undetectable, the agent won’t notice the deviation either – but it turns out that finiteness of the action space is enough to save us from an infinite regress of profitable undetectable deviations, and therefore a strategy like the one in the example above does allow for “almost” optimal costly and unverifiable monitoring.

Two quick notes: First, collusion, as Rahman notes, can clearly take place in this model (each agent just tells the other when he is told to monitor or to shirk), so it really speaks only to situations where we don’t expect such collusion. Second, this model is quite nice because it clarifies, again, that monitoring power needn’t be vested in a principal. That is, the monitor here collects no residual profits or anything of that sort – he is merely a “security guard”. Separating the monitoring role of agents in a firm from the management role is particularly important when we talk about more complex organizational forms, and I think it’s clear that the question of how to do so is far from being completely answered.

http://www.econ.umn.edu/~dmr/monitor.pdf (WP – currently R&R at AER and presumably will wind up there…)

“Group Size and Incentives to Contribute: A Natural Experiment at Chinese Wikipedia,” Xiaoquan Zhang and Feng Zhu (2009)

Why do people give? Is giving a purely altruistic act, or is some utility received when those we give to receive utility as a result of our actions? A particularly salient question is whether so-called social effects, or group size effects, can be explained by such a “warm glow” motive. That is, does an individual propensity to give or contribute to a public good depend on the number of people who will be helped by, or will consume, that public good?

Zhang and Zhu consider an interesting natural experiment. Beginning in late 2005, Wikipedia was blocked in mainland China for over a year. Because all changes to Wikipedia pages are saved, if we knew who was posting from China and who was posting from other Chinese-speaking locations (say, Taiwan or Singapore), we could investigate the effect of a massively decreased readership on the willingness to contribute.

The authors identify non-mainland users by checking who uses traditional Chinese script, common in Taiwan and Hong Kong but not in mainland China or Singapore, and by checking who posted both before and after the block, since presumably mainland users would not be able to post after the block went into effect. After controlling for how long the user had been posting on Wikipedia (since posting is most frequent soon after a user’s first contribution), they identify a decrease in the propensity to post of more than 40%.
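As a sketch, the comparison amounts to something like the following (file and variable names are hypothetical; the paper’s actual specification carries richer controls):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel of non-mainland contributors' weekly edit counts.
panel = pd.read_csv("zh_wiki_contributors.csv")

# Regress (log) edits on a post-block dummy, controlling for tenure,
# since editing activity decays after a user's first contribution.
m = smf.ols("log_edits ~ post_block + tenure_weeks + I(tenure_weeks**2)",
            data=panel).fit()
print(m.params["post_block"])  # a drop consistent with the 40%+ finding
```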

Robustness checks turn up a handful of other interesting results. Removing politically sensitive topics does not change the result (so the decrease isn’t simply reflecting fewer back-and-forth battles over whether Taiwan should be listed as part of China). Those who post most on “Talk” pages, where potential revisions of Wiki pages can be discussed, were most likely to decrease their posting; presumably these posters were the ones for whom social effects were most important. Examining only new pages, rather than revisions of old pages, gives similar results.

At this point, I have to bring up the obvious critique that any non-empirical person will have about papers of this kind: external validity. To the extent that we care about social effects, we care about how they will manifest themselves on important social questions – for instance, what will happen to volunteer rates after some public policy change – and not about Wikipedia per se. Without some sort of structural or theoretical model, I have no idea how to apply the results of this paper to other, related questions. Even lab experiments, of which I’m also skeptical, provide some sort of gesture toward external validity. Note that this isn’t my critique by any stretch: the “why do we care about sumo wrestling match fixes?” critique has been made by many, many theorists and structural empiricists, and it strikes me as wholly valid.

http://blog.mikezhang.com/files/chinesewikipedia.pdf
