Here’s one of those papers that makes you wonder: why didn’t I think of that? Particularly now that it’s been published in Econometrica! Benoit and Dubra noticed that many, many psychological and experimental papers make statements like “More than 50% of the subjects believed their skill at X to be better than the median” and then try to explain such irrational overconfidence. Further, a handful of papers have noted that for some difficult and rare tasks (like living to 100), people often underrate their chances of success. Irrational underconfidence, perhaps?
Not necessarily. While it is true that, by definition, only fifty percent of people can be better than the median, it is not true that only fifty percent of people can rationally believe themselves better than the median. Here’s a quick example from the paper. In 1990, there was about 1 car crash for every 10 young people in the US. Assume car crashes follow some sort of 80-20 rule of thumb, where 80 percent of crashes are caused by 20 percent of the population. That implies “good” drivers have a 2.5% chance, and bad drivers a 40% chance, of being in an accident in any given year. Assume no one knows when they start driving whether they are good or bad, but each simply updates via Bayes’ rule depending on whether or not they crashed in a given year. Working the numbers out for three years, 79% of young drivers will have beliefs about themselves that first-order stochastically dominate the population distribution. And they will hold these beliefs rationally!
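The arithmetic in that example is easy to check in a few lines of Python. This is just a sketch of my reading of it: the numbers (80/20 split, 2.5% and 40% crash rates, three years) come from the paper’s example, while the “no crash in three years” condition for optimistic beliefs and all variable names are mine.

```python
# Benoit-Dubra driving example, sketched out (my reading of the setup).
P_GOOD = 0.8          # 80% of drivers are "good"
P_BAD = 0.2           # 20% are "bad"
CRASH_GOOD = 0.025    # good drivers: 2.5% crash chance per year
CRASH_BAD = 0.40      # bad drivers: 40% crash chance per year
YEARS = 3

# Probability of going three straight years without a crash, by type.
clean_good = (1 - CRASH_GOOD) ** YEARS
clean_bad = (1 - CRASH_BAD) ** YEARS

# Share of all young drivers with a clean three-year record.
p_clean = P_GOOD * clean_good + P_BAD * clean_bad

# Bayes' rule: posterior belief of being "good" after three clean years.
posterior_good = P_GOOD * clean_good / p_clean

print(f"share with no crash in {YEARS} years: {p_clean:.3f}")
print(f"their posterior P(good): {posterior_good:.3f}")
```

The clean-record share comes out to about 0.785, matching the paper’s 79%, and those drivers’ posterior probability of being a good driver rises from the 0.8 prior to roughly 0.945, so their beliefs stochastically dominate the population distribution.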
In particular: given survey answers about how we compare to the median, or about which decile of the distribution we think we occupy, which answers are firm evidence of irrationality, and which can be explained by Bayesian updating of beliefs about one’s place in the population distribution in response to events? Benoit and Dubra construct these bounds, and call the latter explanation median-rationalizing. They show that “rare success” population distributions can rationally lead to underconfidence. They then give examples from psychological studies. A Swedish study of driver confidence, which asked drivers to rate the decile in which they believed their driving to fall, is median-rationalizable even though only 5.7% of drivers put themselves in the bottom 30 percent of the distribution. A similar study in America found 46% of drivers putting themselves in the top 20%, which is not median-rationalizable (and thus is evidence of overconfidence), since the upper bound on the share of rational drivers who can believe they are in the top two deciles is only 40%.
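The 40% cap has, as I read it, a Markov-inequality flavor: across a population of rational Bayesians, posterior probabilities of being in the top two deciles must average to exactly 20%, so if answering “top 20%” means assigning that event better-than-even odds, at most 0.2 / 0.5 = 40% of respondents can answer that way. A tiny sketch under that assumption (the threshold interpretation is my gloss, not the paper’s formal statement):

```python
# Markov-style consistency bound (my gloss on the paper's 40% cap).
# Posteriors of being in the top two deciles must average to TOP_SHARE,
# so the share of people with posterior > THRESHOLD is at most
# TOP_SHARE / THRESHOLD.

TOP_SHARE = 0.20   # fraction actually in the top two deciles
THRESHOLD = 0.5    # posterior needed to report "I'm in the top 20%"

upper_bound = TOP_SHARE / THRESHOLD
print(f"max rationalizable share claiming top 20%: {upper_bound:.0%}")

# The American study's 46% exceeds this cap, so Bayesian updating alone
# cannot rationalize it.
american_share = 0.46
print(american_share > upper_bound)
```

By the same logic the Swedish pattern, lopsided as it looks, never breaches its corresponding bounds, which is exactly why it is median-rationalizable.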
I also really like the conclusion. The authors are not claiming that the Swedish data shows no evidence of overconfidence and the American data does. Rather, they are providing “a proper framework with which to analyze” the data. In that framework, the Swedish data may not be evidence of overconfidence. Complaints that the approach in the present paper is nonsense because, for instance, individuals do not use Bayes’ rule are insufficient. If you buy that argument, then the psychological papers may just be evidence that people are bad at math, not that they are overconfident.
(One last note: as is the case with 99% of economics papers published today, this one is too long. I imagine that Samuelson would have written this in 10 pages, proofs included. Would that editors become firmer with their chopping blocks!)
http://www2.um.edu.uy/dubraj/documentos/Apparentfinal.pdf (Final Econometrica version – big thumbs up to Juan Dubra for putting final published versions of his papers on his personal website!)
I know why I didn’t come up with this idea: I am not that smart. Seriously, when I read the example in the introduction I get it, but I can also tell that I could never have “discovered” it myself. Oh well…
I really like your blog, by the way. If it weren’t for you, I wouldn’t understand this paper.