Category Archives: Group Effects

“Valuing Diversity,” G. Loury & R. Fryer (2013)

Glenn Loury, the former chair of my alma mater’s economics department, is somehow wrapped up in a kerfuffle related to the student protests that have broken out across the United States. Loury, who is now at Brown, wrote an op-ed in the student paper which, to an economist, simply says that the major racial problem in the United States is statistical discrimination rather than taste-based discrimination, and hence that the types of protests and the desired recourse of the student protesters are wrongheaded. After being challenged about “what type of a black scholar” he is, Loury wrote a furious response pointing out that he is, almost certainly, the world’s most prominent scholar on the topic of racial discrimination and potential remedies, and has been thinking about how policy can remedy racial injustice since before the students’ parents were even born.

An important aspect of his work is that, under statistical discrimination, there is huge scope for perverse and unintended effects of policies. This idea has been known since Ken Arrow’s famous 1973 paper, but Glenn Loury and Stephen Coate worked it out in greater detail in 1993. Imagine there are black and white workers, and high-paid good jobs, which require skill, and low-paid bad jobs, which do not. Workers make an unobservable investment in skill, and the firm only sees a proxy: sometimes unskilled workers “look like” skilled workers, sometimes skilled workers “look like” unskilled workers, and sometimes we aren’t sure. As in Arrow’s paper, there can be multiple equilibria. If firms assume that workers of uncertain skill are unskilled, then equilibrium investment in skill is low enough that those indeterminate workers indeed can’t profitably be placed in skilled jobs; if firms instead assume all indeterminate workers are skilled, there is enough skill investment to make it worthwhile for firms to place those workers in high-skill, high-wage jobs. Since there are multiple equilibria, if race or some other proxy is observable, one group can be stuck in the low-skill-job, low-investment equilibrium while a different group enjoys the high-skill-job, high-investment equilibrium. That is, even with no ex-ante difference across groups and no taste-based bias, we still wind up with a discriminatory outcome.
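To make the multiplicity concrete, here is a minimal Python sketch with made-up parameter values (uniform investment costs, a three-value signal of skill); the paper is of course far more general. Both the optimistic and the pessimistic firm belief turn out to be self-confirming:

```python
# A minimal sketch of the Coate-Loury multiplicity, under hypothetical
# parameter values chosen only so that both equilibria exist.

W = 0.6      # wage premium of the skilled job
QS = 0.3     # Pr("looks skilled" signal | worker invested)
QN = 0.9     # Pr("looks unskilled" signal | worker did not invest)
THETA = 0.7  # posterior Pr(skilled) the firm needs to assign the skilled job

def investment_share(firm_optimistic):
    """Share of workers investing, with investment cost c ~ U[0,1]:
    a worker invests iff c <= W * (gain in Pr(skilled job) from investing)."""
    if firm_optimistic:
        # Indeterminate signals get the skilled job, so investing helps
        # only by avoiding the "looks unskilled" signal.
        benefit = W * QN
    else:
        # Only "looks skilled" signals get the skilled job.
        benefit = W * QS
    return min(max(benefit, 0.0), 1.0)

def posterior_given_indeterminate(pi):
    """Firm's posterior that an indeterminate-signal worker invested."""
    return pi * (1 - QS) / (pi * (1 - QS) + (1 - pi) * (1 - QN))

for optimistic in (True, False):
    pi = investment_share(optimistic)
    post = posterior_given_indeterminate(pi)
    self_confirming = (post >= THETA) == optimistic
    print(f"optimistic firm belief={optimistic}: invest share={pi:.2f}, "
          f"posterior={post:.2f}, self-confirming={self_confirming}")
```

With these numbers, optimism generates enough investment to justify itself (posterior 0.89 above the 0.7 threshold) and pessimism does likewise (posterior 0.61 below it), which is all the multiplicity claim requires.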

The question Coate and Loury ask is whether affirmative action can fix this negative outcome. Let an affirmative action rule state that the proportion of each group assigned to the skilled job must be equal. Ideally, affirmative action would generate equilibrium beliefs by firms about workers that are the same no matter what group those workers come from, and hence equal skill investment across groups. Will this happen? Not necessarily. Assume we are in the equilibrium where one group is assumed low-skill when their skill is indeterminate, and the other group is assumed high-skill.

In order to meet the affirmative action rule, either more of the discriminated group needs to be assigned to the high-skill job, or more of the favored group needs to be assigned to the low-skill job. Note that in the equilibrium without affirmative action, the discriminated group invests less in skills, and hence the proportion of the discriminated group that tests as unskilled is higher than the proportion of the favored group that does so. The firms can meet the affirmative action rule, then, by keeping the assignment rule for the favored group as before, and by assigning all proven-skilled and indeterminate discriminated workers, as well as some random proportion of proven-unskilled discriminated workers, to the skilled task. This rule decreases the discriminated group’s incentive to invest in skills, so it is no surprise not only that it can be an equilibrium, but that Coate and Loury can show the dynamics of this policy lead to fewer and fewer discriminated workers investing in skills over time: despite identical potential at birth, affirmative action policies can lead to “patronizing equilibria” that exacerbate, rather than fix, differences across groups. The growing skill difference between previously-discriminated-against “Bumiputra” Malays and Chinese Malaysians following affirmative action policies in the 1970s fits this narrative nicely.
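The incentive channel can be seen in isolation by continuing the hypothetical parameterization from the sketch above; this is a static illustration of why the patronizing rule blunts investment, not the paper’s full dynamic analysis:

```python
# Hypothetical numbers carried over from the sketch above; this shows
# only the static incentive effect of the patronizing assignment rule.
W, QN = 0.6, 0.9
PI_F = 0.54   # favored group's investment share (optimistic equilibrium)
PI_D = 0.18   # discriminated group's share (pessimistic equilibrium)

# Favored group's chance at the skilled job under the optimistic rule:
good_share_f = PI_F + (1 - PI_F) * (1 - QN)

# The firm meets the quota by promoting all skilled-looking and
# indeterminate discriminated workers plus a random fraction r of
# proven-unskilled ones; solve for r in
#   PI_D + (1 - PI_D) * ((1 - QN) + QN * r) = good_share_f
r = ((good_share_f - PI_D) / (1 - PI_D) - (1 - QN)) / QN

# A discriminated worker now gets the skilled job with probability 1
# if skilled and (1 - QN) + QN * r if not, so the return to investing
# falls from W * QN to W * QN * (1 - r).
print(f"random promotion rate needed: r = {r:.2f}")
print(f"return to investing: favored {W * QN:.2f}, "
      f"discriminated {W * QN * (1 - r):.2f}")
```

The quota is met, but the discriminated group’s return to investing is now strictly below the favored group’s, so the firm’s pessimism about that group remains self-confirming.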

The broader point here, and one that comes up in much of Loury’s theoretical work, is that because policies affect beliefs even of non-bigoted agents, statistical discrimination is a much harder problem to solve than taste-based or “classical” bias. Consider the job market for economists. If women or minorities have trouble finding jobs because of an “old boys’ club” that simply doesn’t want to hire those groups, then the remedy is simple: require hiring quotas and the like. If, however, the problem is that women or minorities don’t enter economics PhD programs because of a belief that it will be hard to be hired, and that difference in entry leads to fewer high-quality women or minorities come graduation, then remedies like simple quotas may lead to perverse incentives.

Moving beyond perverse incentives, there is also the question of how affirmative action programs should be designed if we want to equate outcomes across groups that face differential opportunities. This question is taken up in “Valuing Diversity”, a recent paper Loury wrote with John Bates Clark medal winner Roland Fryer. Consider Dalits in India or African-Americans: for a variety of reasons, from historic social network persistence to neighborhood effects, the cost of increasing skill may be higher for these groups. We have an opportunity which is valuable, such as slots at a prestigious college. Simply providing equal opportunity may not be feasible because the social reasons why certain groups face higher costs of increasing skill are very difficult to solve. Brown University, or even the United States government as a whole, may be unable to fix the persistent social differences in upbringing between blacks and whites. So what to do?

There are two natural fixes. We can provide a lower bar for acceptance for the discriminated group at the prestigious college, or subsidize skill acquisition for the discriminated group by providing special summer programs, tutoring, etc. If policy can be conditioned on group identity, then the optimal policy is straightforward. First, note that in a laissez faire world, individuals invest in skill until the cost of investment for the marginal accepted student exactly equals the benefit the student gets from attending the fancy college. That is, the equilibrium is efficient: students with the lowest cost of acquiring skill are precisely the ones who invest and are accepted. But precisely that weighing of marginal benefit and cost holds within each group even when the acceptance cutoff differs by group identity, so if policy can condition on group identity, we can get whatever mix of students from different groups we want while still ensuring that the students within each group with the lowest cost of upgrading their skill are precisely the ones who invest and are accepted. The policy change itself, by increasing the quota of slots for the discriminated group, will induce marginal students from that group to upgrade their skills in order to cross the acceptance threshold; that is, quotas at the assignment stage implicitly incentivize higher investment by the discriminated group.
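Here is a small numeric sketch of that within-group efficiency claim, under assumed uniform cost distributions of my own invention: whatever quota split we choose, the students admitted from each group are exactly its lowest-cost investors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: group B faces systematically higher costs of acquiring
# the admission-level skill (e.g., neighborhood effects); 5,000 slots.
N, SLOTS = 10_000, 5_000
cost_a = rng.uniform(0.0, 2.0, N)   # group A investment costs
cost_b = rng.uniform(0.5, 2.5, N)   # group B costs, shifted up

def marginal_cost(costs, quota):
    """Cost of the marginal admitted student when this group gets
    `quota` slots and exactly its lowest-cost members invest."""
    return np.sort(costs)[quota - 1]

# With group-conditioned cutoffs, any quota split is within-group
# efficient: the investors are the cheapest-to-train in each group.
for quota_b in (1_000, 2_500):
    quota_a = SLOTS - quota_b
    print(f"quotas (A={quota_a}, B={quota_b}): marginal cost "
          f"A={marginal_cost(cost_a, quota_a):.2f}, "
          f"B={marginal_cost(cost_b, quota_b):.2f}")
```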

The trickier problem is when policy cannot condition on group identity, as is the case in the United States under current law. I would like somehow to accept more students from the discriminated-against group, and to ensure that those students invest in their skill, but the policy I set needs to treat the favored and discriminated-against groups equally. Any “blind” policy that does not condition on group identity will induce identical investment activity and acceptance probability among agents with identical costs of skill upgrading, and discriminated-against students make up a bigger proportion of those with a high cost of skill acquisition than of those with a low cost. Hence any blind policy that induces more discriminated-against students to attend college must be accepting some students with higher costs of skill acquisition than the marginal accepted student under laissez faire, and rejecting some students whose costs of skill acquisition were at the laissez faire margin. Fryer and Loury show, by solving the relevant linear program, that we can best achieve this by allowing the most productive students to buy their slots, and then randomly assigning slots to everyone else.

Under that policy, students with a very low cost of effort still invest, so that their skill is high enough that buying a guaranteed slot is worth it. I then use either a tax or subsidy on skill investment to adjust how many people find it worthwhile to invest and then buy the guaranteed slot; in conjunction with the randomized assignment of the remaining slots, this ensures that the desired mixture of accepted students across groups is achieved.
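Here is a minimal sketch of that lever, again with made-up functional forms (slot benefit normalized to 1, uniform costs); it illustrates the mechanism’s logic rather than the paper’s general solution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Benefit of a slot normalized to 1; K of N students can be seated.
N, K = 10_000, 4_000
costs = np.sort(rng.uniform(0.0, 2.0, N))   # skill-investment costs

def supporting_tax(m_star):
    """Tax (> 0) or subsidy (< 0) on investment that makes exactly the
    m_star lowest-cost students prefer buying a guaranteed slot over
    the lottery for the K - m_star leftover slots."""
    p_lottery = (K - m_star) / (N - m_star)
    # The marginal buyer is indifferent: c + t = 1 - p_lottery.
    return (1.0 - p_lottery) - costs[m_star - 1]

for m_star in (2_000, 3_000, 3_900):
    t = supporting_tax(m_star)
    print(f"target {m_star} slot-buyers -> set t = {t:+.3f}; "
          f"lottery fills the remaining {K - m_star} slots")
```

The threshold structure is incentive compatible, since the payoff to buying a slot falls with a student’s cost while the lottery payoff does not; needless to say, the actual paper solves the designer’s problem in much greater generality.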

This result resembles certain results in dynamic pricing. How do I get people to pay a high price for airplane tickets while still hoping to sell would-be-empty seats later at a low price? The answer is that I make high-value people worried that if they don’t buy early, the plane may sell out. The high-value people then trade off paying a high price and getting a seat with probability 1 against waiting for a low price but maybe not getting on the plane at all. Likewise, how do I induce people to invest in skills even when some lower-skill people will be admitted? Ensure that lower-skill people are only admitted with some randomness. The folks who can get perfect grades and test scores fairly easily will still exert the effort to do so, ensuring they get into their top-choice college for sure rather than hoping to be admitted subject to some random luck. This type of intuition is non-obvious, which is precisely Loury’s point: racial and other forms of injustice are often due to factors much more subtle than outright bigotry, and the optimal responses to these more subtle causes do not fit easily on a placard or a bullhorn slogan.

Final working paper (RePEc IDEAS version), published in the JPE, 2013. Hanming Fang and Andrea Moro have a nice handbook chapter on theoretical explorations of discrimination. Loury and John McWhorter also have an interesting and provocative dialogue about the student protests at Bloggingheads.


“Collaborating,” A. Bonatti & J. Horner (2011)

(Apologies for the long delay since the last post. I’ve been in that tiniest of Southeast Asian backwaters, East Timor, talking to UN and NGO folks about how the new democracy is coming along. The old rule of thumb is that you need 25 years of free and fair elections before society consolidates a democracy, but we still have a lot to learn about how that process takes place. I have some theoretical ideas about how to avoid cozy/corrupt links between government ministers and the private sector in these unconsolidated democracies, and I wanted to get some anecdotes which might guide that theory. And in case you’re wondering: I would give pretty high odds that, for a variety of reasons, the Timorese economy is going absolutely nowhere fast. Now back to the usual new research summaries…)

Teamwork is essential, you’re told from kindergarten on. But teamwork presents a massive moral hazard problem: how do I make sure the other guy does his share? In the static setting, Alchian and Demsetz (1972) and a series of papers by Holmstrom (May He Win His Deserved Nobel) long ago discussed why people free ride when their effort is hidden, and what contracts can be written to avoid this problem. Bonatti and Horner make the problem dynamic, and with a few pretty standard tricks from optimal control develop some truly counterintuitive results.

The problem is the following. N agents are engaged in working on a project which is “good” with probability p. Agents exert costly effort continuously over time. Depending on the effort exerted by agents at any given time, a breakthrough occurs with some probability if the project is good, but never occurs if the project is bad. Over time, given effort along the equilibrium path, agents become more and more pessimistic about the project being good if no breakthrough occurs. The future is discounted. Agents only observe their own effort choice (but have correct beliefs about the effort of others in equilibrium). This means that off-path, beliefs of effort exertion are not common knowledge: if I deviate and work harder now, and no breakthrough occurs, then I am more pessimistic than others about the goodness of the project since I know, and they don’t, that a higher level of effort was put in.
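The belief dynamics are just Bayes’ rule with exponential arrivals. A tiny sketch, under assumed numbers for the prior, the breakthrough rate, and total effort:

```python
import math

# Assumed numbers: if the project is good, a breakthrough arrives at
# Poisson rate LAMBDA * (total team effort); no breakthrough is bad
# news, so the belief that the project is good drifts down.
P0, LAMBDA = 0.8, 0.5
TEAM_EFFORT = 2.0   # total effort per unit of time (hypothetical)

def belief_no_breakthrough(t):
    """Posterior Pr(project good | no breakthrough by time t)."""
    survival_if_good = math.exp(-LAMBDA * TEAM_EFFORT * t)
    return P0 * survival_if_good / (P0 * survival_if_good + (1 - P0))

for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"t={t:.1f}: Pr(good | no breakthrough yet) = "
          f"{belief_no_breakthrough(t):.3f}")
```

A deviator who secretly worked harder sits below this public belief path, which is exactly the off-path asymmetry just described.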

In this setting, not only do agents shirk (hoping the other agents will pick up the slack), but they also procrastinate. Imagine a two-period world: I can shift some effort to period 2 in the hope that my teammate’s period 1 effort will lead to a success. I don’t want to work extremely hard in period 1 if all that effort is wasted because my teammate has already solved the problem. Note that this procrastination motive disappears when the team is of size 1: you need a coauthor to justify your slacking! Surprisingly, better monitoring does not help here. If each of us can see how much effort the other puts in each period, then what happens? If I observably decrease my period 1 effort, my teammate will be less pessimistic about the success of the project in period 2, and hence will work harder in period 2. Each agent therefore has an incentive to work less in period 1 vis-a-vis the hidden action case. (Of course, you may wonder why this is an equilibrium; that is, why doesn’t the teammate play grim trigger and punish me for shirking? It turns out there are a number of reasonable equilibria in the case with observable actions, some of which give higher welfare and some of which give lower welfare than under hidden action. The point is just that allowing observability doesn’t necessarily help things.)

So what have we learned? Three things in particular. First, work in teams gives extra incentive to procrastinate compared to solo work. Second, this means that setting binding deadlines can be welfare improving; the authors further show that the larger the team, the tighter the deadline necessary. Third, letting teams observe how hard the other is working is not necessarily optimal. Surely observability by a principal would be welfare-enhancing – the contract could be designed to look like dynamic Holmstrom – but observability between the agents is not necessarily so. Interesting stuff.

http://cowles.econ.yale.edu/P/cd/d16b/d1695.pdf (Final Cowles Foundation WP – paper published in April 2011 AER)

“Reviews, Reputation and Revenue: The Case of Yelp.com,” M. Luca (2010)

I’m doing some work related to social learning, and a friend passed along the present paper by a recent job market candidate. It’s quite clever, and a great use of the wealth of data now available to the empirically-minded economist.

Here’s the question: there are tons of ways products, stores and restaurants develop reputation. One of these ways is reviews. How important is that extra Michelin star, or higher Zagat rating, or better word of mouth? And how could we ever separate the effect of reputation from the underlying quality of the restaurant?

Luca scrapes restaurant review data from Yelp, which really began penetrating Seattle in 2005; Yelp data is great because it includes review dates, so you can go back in time and reconstruct, with some error due to deleted reviews, what the review profile used to look like. Luca also has, incredibly, 7 years of restaurant revenue data from the city of Seattle. Just put the two together and you can track how restaurant reviews are correlated with revenue.

But what of causality? Here’s the clever bit. He notes that Yelp aggregates reviews into a star rating, rounded to the nearest half star: a restaurant with average review 3.24 gets 3 stars, while one with 3.25 gets 3.5 stars. Since no one actually reads all 200 reviews of a given restaurant, the star rating can be taken to represent reputation, while the actual review average proxies for underlying restaurant quality. It’s 2011, so this calls for some regression discontinuity (apparently, some grad students at Harvard call the empirical publication gatekeepers “the identification Taliban”; at least the present paper gets the internal validity right and doesn’t seem to have too many interpretive problems with external validity).
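Schematically, the design looks like the following, run on simulated data; the magnitudes are invented, and the real paper uses proper local RD methods rather than this raw comparison of local means:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
avg_rating = rng.uniform(1.0, 5.0, n)     # underlying mean review score
stars = np.round(avg_rating * 2) / 2      # displayed rating: nearest half star

# Simulated revenue: loads smoothly on true quality, plus a discrete
# "reputation" bump from the displayed stars (invented magnitudes).
log_revenue = 0.10 * avg_rating + 0.045 * stars + rng.normal(0, 0.05, n)

# RD at the 3.25 cutoff separating 3 from 3.5 displayed stars:
h = 0.05   # bandwidth
below = (avg_rating >= 3.25 - h) & (avg_rating < 3.25)
above = (avg_rating >= 3.25) & (avg_rating < 3.25 + h)
jump = log_revenue[above].mean() - log_revenue[below].mean()
print(f"local mean jump at 3.25: {jump:.3f} log points "
      f"(true built-in half-star effect: {0.045 * 0.5:.4f})")
```

The raw difference in local means picks up a little smooth quality drift within the bandwidth on top of the true discontinuity, which is why local polynomial fits are the standard here.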

Holding underlying quality constant, the discontinuous jump of a half star is worth a 4.5% increase in revenue in the relevant quarter. This is large, but not crazy: effects of similar magnitude have been found in recent work on restaurants moving from a “B” to an “A” sanitary grade, and on changes in consumption after calorie information was posted in New York City. The effect is close to zero for chain stores; one way this might be interpreted is that no one Yelps restaurants they are already familiar with. I would have liked to see some sort of demographic check here also: is the “Yelp effect” stronger in neighborhoods with younger, more internet-savvy consumers, as you might expect? Also, you may wonder whether there is manipulation by restaurant owners, given the large gains from a tiny jump in star rating. A quick and dirty distributional check doesn’t find any evidence of manipulation, but that may change after this paper gets published!

You may also be wondering why reputation matters at all: why don’t I just go to a good restaurant? The answer is social learning plus costs of experimentation. The paper I’m working on now follows this line of thought toward what I think is a rather surprising policy implication: more on this at a future date.

http://people.bu.edu/mluca/JMP.pdf (Working paper version – Luca was hired at HBS, so savvy use of a great dataset pays off!)

“Who Will Monitor the Monitor?,” D. Rahman (2010)

In any organization, individuals can shirk by taking advantage of the fact that their actions are private; only a stochastic signal of effort can be observed, for instance. Because of this, firms and governments hire monitors to watch, imperfectly, what workers are doing, and to punish the workers if it is believed that the workers are taking actions contrary to what the bosses desire. Even if the monitor observes signals that are not available to the bosses, as long as that observation is free, the monitor has no incentive to lie. But what if monitoring is costly? How can we ensure the monitor has the right incentives to do his job? That is, who shall monitor the monitor? The answer, clearly, isn’t a third level of monitors, since this just pushes the problem back one more level.

In a very interesting new paper, David Rahman extends the group incentive schemes of Holmstrom (who should share the next Nobel with Milgrom; it’s nuts that neither has won yet!). The idea of group incentives is simple, and it works when the monitor’s statements are verifiable. Say it costs 1 to monitor and the agent’s disutility from work is also 1. The principal doesn’t mind an equilibrium of (monitor, work), but better would be the equilibrium (don’t monitor, work), since then I don’t need to pay a monitor to watch my workers. The worker will just shirk if no one watches him, though. Group penalties fix this. Tell the monitor to check only one percent of the time. If he reports (verifiably) that the worker shirked, nobody gets paid. If he reports (verifiably) that the worker worked, the monitor gets $1.02 and the worker gets $100. By increasing the payment to the worker for “good news”, the firm can get arbitrarily close to the payoffs from the “never monitor, work” equilibrium.
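Checking the arithmetic of that example (note the worker’s constraint binds exactly at these numbers, so in practice you would pay a shade over $100 or audit slightly more often):

```python
# The post's numbers: audit 1% of the time, worker's effort cost 1.
CHECK_PROB, WAGE, EFFORT_COST = 0.01, 100.0, 1.0

payoff_work = WAGE - EFFORT_COST           # a verifiable audit finds work
payoff_shirk = (1 - CHECK_PROB) * WAGE     # caught w.p. 1%, then no one is paid

print(f"work: {payoff_work:.2f} vs shirk: {payoff_shirk:.2f} "
      f"-> work is (weakly) optimal: {payoff_work >= payoff_shirk}")

# The firm's expected monitoring bill shrinks with the audit rate:
print(f"expected monitor pay per period: {CHECK_PROB * 1.02:.4f}")
```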

That’s all well and good, but what about when the monitor’s reports are not verifiable? In that case, the monitor would never actually check, would just report that the worker worked, and the worker would always shirk. We can use the same idea as in Holmstrom, though, and sometimes ask the worker to shirk. Keep the group penalties, but pay only when the monitor’s report matches the recommended action: pay for a “shirked” report when shirking was recommended, and for a “worked” report when work was recommended. For the same reason as in the above example, the frequency of monitoring and shirking can both be made arbitrarily small, with the contract still incentive compatible (assuming risk neutrality, of course).
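A numeric check of this scheme with made-up payment levels, where both incentive constraints can be verified directly:

```python
# A numeric check of the unverifiable-report trick, using invented
# payment levels: the worker is secretly told to shirk a small fraction
# of the time, and each side is paid only when the monitor's report
# agrees with the worker's (secret) recommendation.
EPS   = 0.05   # Pr(worker is told to shirk)
DELTA = 0.01   # Pr(monitor is told to check)
M_PAY = 25.0   # monitor's fee when his report matches the recommendation
W_PAY = 110.0  # worker's pay when the report matches the recommendation
MONITOR_COST, EFFORT_COST = 1.0, 1.0

# Monitor (when told to check): checking always matches; blindly
# reporting "worked" matches only when the worker was told to work.
check_payoff = M_PAY - MONITOR_COST
blind_payoff = (1 - EPS) * M_PAY
print(f"monitor checks honestly: {check_payoff:.2f} >= {blind_payoff:.2f}: "
      f"{check_payoff >= blind_payoff}")

# Worker (when told to work): shirking is only caught on the DELTA
# fraction of periods with an actual check.
work_payoff = W_PAY - EFFORT_COST
shirk_payoff = (1 - DELTA) * W_PAY
print(f"worker works when told to: {work_payoff:.2f} >= {shirk_payoff:.2f}: "
      f"{work_payoff >= shirk_payoff}")
```

Because the worker is sometimes genuinely told to shirk, the monitor can no longer rubber-stamp “worked” without occasionally contradicting the recommendation and losing his fee, and that is what makes costly monitoring incentive compatible.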

More generally, a nice use of the minimax theorem shows that we can check for deviations from the bosses’ recommended actions by the monitor and the agent one by one – that is, we needn’t check for all deviations simultaneously. So-called “detectable” deviations are shut down by contracts like the one in the example above. Undetectable deviations by the monitor still fulfill the monitoring role – by virtue of being undetectable, the agent won’t notice the deviation either – but it turns out that finiteness of the action space is enough to save us from an infinite regress of profitable undetectable deviations, and therefore a strategy like the one in the example above does allow for “almost” optimal costly and unverifiable monitoring.

Two quick notes: First, collusion, as Rahman notes, can clearly take place in this model (each agent just tells the other when he is told to monitor or to shirk), so it really speaks only to situations where we don’t expect such collusion. Second, this model is quite nice because it clarifies, again, that monitoring power needn’t be vested in a principal. That is, the monitor here collects no residual profits or anything of that sort – he is merely a “security guard”. Separating the monitoring role of agents in a firm from the management role is particularly important when we talk about more complex organizational forms, and I think it’s clear that the question of how to do so is far from being completely answered.

http://www.econ.umn.edu/~dmr/monitor.pdf (WP – currently R&R at AER and presumably will wind up there…)

“Group Size and Incentives to Contribute: A Natural Experiment at Chinese Wikipedia,” Xiaoquan Zhang and Feng Zhu (2009)

Why do people give? Is giving a purely altruistic act, or is some utility received when those we give to receive utility as a result of our actions? A particularly salient question is whether so-called social effects, or group size effects, can be explained by such a “warm glow” motive. That is, does an individual propensity to give or contribute to a public good depend on the number of people who will be helped by, or will consume, that public good?

Zhang and Zhu consider an interesting natural experiment. Beginning in late 2005, Wikipedia was blocked in mainland China for over a year. Because all changes to Wikipedia pages are saved, if we knew who was posting from China and who was posting from other Chinese-speaking locations (say, Taiwan or Singapore), we could investigate the effect of a massively decreased readership on the willingness to contribute.

The authors identify non-mainland users by checking who uses traditional Chinese script, common in Taiwan and Hong Kong but not in mainland China or Singapore, and by checking who posted both before and after the block, since presumably mainland users would not be able to post after the block went into effect. After controlling for how long a user had been contributing to Wikipedia (posts are most frequent soon after an account’s first edit), they identify a decrease in the propensity to post of more than 40% among the non-blocked contributors.
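Schematically, the estimation is something like the following, run here on simulated data; the magnitudes and functional form are mine, not the authors’:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulate an unbalanced panel of non-mainland contributors: editing
# declines with account tenure, and the block (at t = 12) removes the
# mainland audience, cutting contribution rates (~40% drop built in).
rows = []
for user in range(500):
    start = rng.integers(0, 10)
    for t in range(start, 20):
        post = int(t >= 12)
        lam = np.exp(1.5 - 0.08 * (t - start) - 0.52 * post)
        rows.append({"user": user, "post": post,
                     "tenure": t - start, "edits": rng.poisson(lam)})
df = pd.DataFrame(rows)

# Poisson regression of edit counts on the block indicator, controlling
# for tenure, in the spirit of the paper's before/after comparison.
fit = smf.poisson("edits ~ post + tenure", data=df).fit(disp=0)
print(f"estimated post-block drop: {1 - np.exp(fit.params['post']):.1%}")
```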

Robustness checks found a handful of other interesting results. Removing politically sensitive topics does not change the result (so the decrease isn’t simply reflecting fewer back-and-forth battles over whether Taiwan should be listed as part of China). Those who post most on “Talk” pages, where potential revisions of Wiki pages can be discussed, were most likely to decrease their posting; presumably these posters were the ones for whom social effects were most important. Only examining new pages, rather than revisions of old pages, gives similar results.

At this point, I have to bring up the obvious critique that any non-empirical person will have about papers of this kind: external validity. To the extent that we care about social effects, we care about how they will manifest themselves on important social questions – for instance, what will happen to volunteer rates after some public policy change – and not about Wikipedia per se. Without some sort of structural or theoretical model, I have no idea how to apply the results of this paper to other, related questions. Even lab experiments, of which I’m also skeptical, provide some sort of gesture toward external validity. Note that this isn’t my critique by any stretch: the “why do we care about sumo wrestling match fixes?” critique has been made by many, many theorists and structural empiricists, and it strikes me as wholly valid.

http://blog.mikezhang.com/files/chinesewikipedia.pdf
