Paul Samuelson, the greatest economist of the past century (if not of all time), famously proved a “fallacy of large numbers”. Consider a single bet which is rejected (say, a 50/50 chance of winning $11 or losing $10). If an agent's utility rejects that bet at every wealth level, then he must also reject any finite string of such bets. Many people intuitively believe they can accept the string of bets because, by the law of large numbers, the probability of losing money goes to 0 as the number of bets goes to infinity. This is true, but concavity of utility will, in general, punish you for increasing the variance of the bet, and the variance of the total grows linearly in the number of bets. It is simple to prove by induction, working back from the last bet, that any such string must be rejected. Samuelson conjectures that people accept the string while rejecting the single bet because they misunderstand how insurance works: insurance is a service that divides a given risk among many buyers; merely pooling many risks does not make risk-averse agents better off, for precisely the reasons in the fallacy of large numbers.
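The induction is short enough to sketch. Writing x_1, …, x_n for i.i.d. copies of the bet and S_k = x_1 + … + x_k, the assumption that the single bet is rejected at every wealth level gives:

```latex
% Assumption: E[U(w + x)] \le U(w) for every wealth level w.
% Applying it at the (random) wealth w + S_{n-1} and iterating:
E[U(w + S_n)]
  = E\bigl[\, E[\, U(w + S_{n-1} + x_n) \mid S_{n-1} \,] \,\bigr]
  \le E[U(w + S_{n-1})]
  \le \dots \le U(w).
```

The step inside the conditional expectation is exactly where “rejected at every wealth level” does its work: the accumulated wealth w + S_{n-1} can land anywhere, so rejection must hold there too.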
That’s all well and good, but in fact there is a hidden assumption. It is not the case that a risk-averse agent necessarily prefers a bet with lower variance over a similar bet with the same mean yet higher variance (that holds only if preferences can be described using just those two moments, as with quadratic utility). Further, Ross shows that Samuelson’s caveat that the bet must be rejected at all wealth levels is very strong. Note that for a small local bet x with mean m and variance s^2, and wealth w, E[U(w+x)]<=U(w) iff -U''(w)/U'(w)>=2m/s^2. The left-hand side is the coefficient of absolute risk aversion, so only the linear utility u(x)=x and the CARA utility u(x)=-e^-Ax satisfy that condition at every wealth level.
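The local condition comes from a second-order Taylor expansion; a sketch, writing m for the bet's mean and s^2 for its variance and ignoring higher moments:

```latex
% Second-order expansion of E[U(w+x)] for a small bet x
% with mean m and variance s^2:
E[U(w+x)] \approx U(w) + m\,U'(w) + \frac{s^2}{2}\,U''(w)

% Rejection of the bet at wealth w:
E[U(w+x)] \le U(w)
\iff m\,U'(w) + \frac{s^2}{2}\,U''(w) \le 0
\iff -\frac{U''(w)}{U'(w)} \ge \frac{2m}{s^2}
```

Since the right-hand side is a constant of the bet, demanding rejection at every w pins down the coefficient of absolute risk aversion uniformly in wealth.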
If utility is concave, and the bets satisfy some technical conditions, the necessary and sufficient condition for a string of gambles to eventually be accepted, even when a single one is rejected, is that u(x)e^gx goes to zero as x goes to negative infinity, for all g>0. This essentially ensures that as x decreases, utility falls less quickly than an exponential, so the expected-utility loss from the higher variance of the string is not enough to outweigh the string’s higher expected value. Many perfectly normal expected utility functions satisfy that condition.
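Ross's point can be checked numerically. Below is a minimal sketch (the cubic-loss utility and the string lengths are my own illustrative choices, not from Ross's paper): a concave utility whose losses fall only polynomially satisfies the condition above, rejects the single $11/$10 bet at its current wealth, yet accepts a sufficiently long string of them; a CARA utility that rejects the bet rejects every string.

```python
import math

WIN, LOSE = 11, 10  # the 50/50 bet from the post: win $11 or lose $10

def u_poly(x):
    # A concave, everywhere-increasing utility (a toy choice for
    # illustration) whose losses grow only cubically, so it satisfies
    # Ross's condition: u(x)*e^(g*x) -> 0 as x -> -infinity for all g > 0.
    return x if x >= 0 else 1 - (1 - x) ** 3

def expected_utility_of_string(n, u):
    # Exact integer value of E[u(S_n)] * 2^n, where S_n is the sum of n
    # independent bets; only the sign matters for accept/reject at the
    # agent's current wealth (normalized to 0).
    total, c = 0, 1  # c tracks the binomial coefficient C(n, j)
    for j in range(n + 1):  # j = number of winning bets
        total += c * u(WIN * j - LOSE * (n - j))
        c = c * (n - j) // (j + 1)
    return total

# A single bet (and any short string) is rejected, but a long enough
# string is accepted:
print(expected_utility_of_string(1, u_poly) < 0)      # True: one bet rejected
print(expected_utility_of_string(20000, u_poly) > 0)  # True: string accepted

# By contrast, for CARA utility u(x) = -e^(-Ax) the string factorizes:
# E[u(w + S_n)] = -e^(-A*w) * (0.5*e^(-A*WIN) + 0.5*e^(A*LOSE))^n, and the
# per-bet factor exceeds 1 whenever the single bet is rejected, so every
# string, of any length, is rejected too.
A = 0.05
print(0.5 * math.exp(-A * WIN) + 0.5 * math.exp(A * LOSE) > 1)  # True
```

The binomial sum is done in exact integer arithmetic, so the sign of the result is not an artifact of floating-point underflow in the far tail.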
http://www.jstor.org/stable/2676262 (Link to gated JSTOR – I cannot find an ungated version. Senior academics, please work to free academic work! It’s madness to spend so much time working on your research only to give it away to companies like Elsevier whose only “value added” is in making it harder for consumers to view your hard work!)
A strong argument can be made for Friedman as equally great during the 20th century. As for “of all time,” it’s far too soon to tell.