Neuroeconomics is a slightly odd field. It seems promising to “open up the black box” of choice using evidence from neuroscience, but despite this promise, I don’t see very many terribly interesting economic results. And perhaps this isn’t surprising – in general, economic models are deliberately abstract and do not hinge on the precise reason why decisions are made, so unsurprisingly neuro appears most successful in, e.g., selecting among behavioral models in specific circumstances.
Ryan Webb, a post-doc on the market this year, shows another really powerful use of neuroeconomic evidence: guiding our choice of the supposedly arbitrary parts of our models. Consider empirical models of random utility. Consumers make a discrete choice: the object chosen, i, is the one that maximizes utility v(i). In the data, even the same consumer does not always make the same choice (I love my Chipotle burrito bowl, but I nonetheless will have a different lunch from time to time!). How, then, can we use the standard choice setup in empirical work? Add a random variable n(i) to the decision function, letting agents choose the i that maximizes v(i)+n(i). As n takes different realizations, choice patterns can vary somewhat.
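To fix ideas, here is a minimal simulation of the random utility setup (the utilities and the noise distribution are my own illustrative choices, not Webb's): the same consumer, facing the same v, makes different choices across noise realizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical utilities for three lunch options; option 0 is best on average.
v = np.array([1.0, 0.6, 0.4])

# Each "day", draw fresh noise n(i) and choose the i maximizing v(i) + n(i).
choices = [int(np.argmax(v + rng.gumbel(size=3))) for _ in range(1000)]
shares = np.bincount(choices, minlength=3) / len(choices)
print(shares)  # option 0 is chosen most often, but not always
```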
The question, though, is what distribution n(i) should take? Note that the probability i is chosen is just
P(v(i)+n(i)>=v(j)+n(j)) for all j
P(v(i)-v(j)>=n(j)-n(i)) for all j
If the n(i) are distributed independent normal, then the difference of two such draws is normal, giving the probit model. If they are i.i.d. extreme value type I, the difference is logistic, giving the logit model. Does either of those assumptions, or some alternative, make sense?
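A quick simulation makes the two standard cases concrete (the sample size and seed are arbitrary): the difference of two independent standard normals is normal with variance 2, while the difference of two i.i.d. extreme value type I (Gumbel) draws is standard logistic, with variance pi^2/3.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Difference of two independent standard normal draws is N(0, 2).
normal_diff = rng.normal(size=N) - rng.normal(size=N)

# Difference of two i.i.d. extreme value type I (Gumbel) draws is standard logistic.
gumbel_diff = rng.gumbel(size=N) - rng.gumbel(size=N)

print(normal_diff.var())  # close to 2
print(gumbel_diff.var())  # close to pi^2 / 3, about 3.29
```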
Webb shows that random utility is really just a reduced form of a well-established class of models in psychology called bounded accumulation models. Essentially, you receive a series of sensory inputs stochastically, the evidence adds up in your brain, and you make a decision according to some stopping rule as the evidence accumulates in a drift diffusion process. In a choice model, you might think for a bit, accumulating reasons to choose A or B, then stop at a fixed time T* and choose the object that, after the random drift, has the highest perceived “utility”. Alternatively, you might stop once the gap between the perceived utilities of the alternatives is large enough, or once one alternative has a sufficiently high perceived utility. It is fairly straightforward to show that all of these models collapse to choosing the i that maximizes v(i)+n(i), with differing implications for the distribution of n. Thus, neuroscience evidence about which types of bounded accumulation models appear most realistic can help choose among distributions of n for empirical random utility work.
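The fixed-stopping-time case can be sketched as follows (parameters are illustrative, not from the paper): accumulated evidence after T periods is T times v(i) plus the summed noise, so the chosen alternative is exactly the one maximizing v(i)+n(i), where n(i) is the sample mean of the noise draws.

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulate_choice(v, T=50, noise_sd=1.0):
    """Fixed-stopping-time accumulation: each period the perceived signal for
    each alternative is its true utility plus sensory noise; at time T, choose
    the alternative with the highest accumulated evidence."""
    v = np.asarray(v, dtype=float)
    evidence = np.zeros_like(v)
    for _ in range(T):
        evidence += v + rng.normal(scale=noise_sd, size=v.shape)
    # Dividing evidence by T shows the reduced form: argmax of the accumulated
    # evidence equals argmax of v(i) + n(i), with n(i) the mean noise draw.
    return int(np.argmax(evidence))

# With v = (0.1, 0.0), alternative 0 is chosen more often, but not always.
choices = [accumulate_choice([0.1, 0.0]) for _ in range(2000)]
share_0 = sum(c == 0 for c in choices) / len(choices)
print(share_0)
```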
How, exactly? Well, for any stopping rule, there is an implied distribution of stopping times T*. The reduced form errors n are then essentially the sample mean of random draws from a finite accumulation process, and hence if the rule implies relatively short stopping times, n will be fat-tailed rather than normal. Also, consider letting the difference in underlying utility v(i)-v(j) be large. Then the stopping time under the accumulation models is relatively short, and hence the variance of the reduced form errors (again, essentially the sample mean of random draws, now averaged over fewer draws) is relatively large. Hence, the errors are heteroskedastic in the underlying v(i)-v(j). Webb gives additional results relating to the skew and correlation of n. He further shows that assuming independent normality or independent extreme value type I for the error terms can lead to mistaken inference, using as an example a recent AER paper that infers risk aversion parameters from choices among monetary lotteries. Quite interesting, even for a neuroecon skeptic!
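The heteroskedasticity point can be illustrated with a gap-threshold stopping rule (the threshold, noise scale, and trial counts below are my own arbitrary choices): a larger utility gap triggers the threshold sooner, so the reduced form error averages fewer noise draws and has higher variance.

```python
import numpy as np

rng = np.random.default_rng(2)

def stop_time_and_error(v_diff, threshold=20.0, max_t=10_000):
    """Gap-threshold rule for two alternatives: each period, accumulate the
    utility gap v_diff plus the difference of two unit-variance noises; stop
    when the accumulated gap crosses +/- threshold. Returns the stopping time
    and the reduced form error (the sample mean of the noise differences)."""
    gap = noise_sum = 0.0
    for t in range(1, max_t + 1):
        eps = rng.normal(scale=np.sqrt(2))  # difference of two unit-variance noises
        gap += v_diff + eps
        noise_sum += eps
        if abs(gap) >= threshold:
            return t, noise_sum / t
    return max_t, noise_sum / max_t

small_gap = np.array([stop_time_and_error(0.1) for _ in range(2000)])
large_gap = np.array([stop_time_and_error(1.0) for _ in range(2000)])

print(small_gap[:, 0].mean(), large_gap[:, 0].mean())  # larger gap stops sooner
print(small_gap[:, 1].var(), large_gap[:, 1].var())    # ...and has noisier errors
```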
2013 Working Paper (No IDEAS version).