Category Archives: Behavioral Economics

Reinhard Selten and the making of modern game theory

Reinhard Selten, it is no exaggeration, is a founding father of two massive branches of modern economics: experiments and industrial organization. He passed away last week after a long and idiosyncratic life. Game theory as developed by the three co-Nobel laureates Selten, Nash, and Harsanyi is so embedded in economic reasoning today that, to a great extent, it has replaced price theory as the core organizing principle of our field. That this would happen was not always so clear, however.

Take a look at some canonical papers before 1980. Arrow’s Possibility Theorem simply assumed true preferences can be elicited; not until Gibbard and Satterthwaite do we answer the question of whether there is even a social choice rule that can elicit those preferences truthfully! Rothschild and Stiglitz’s celebrated 1976 essay on imperfect information in insurance markets defines equilibria in terms of individual rationality, best responses in the Cournot sense, and free entry. How odd this seems today – surely the natural equilibrium in an insurance market depends on beliefs about the knowledge held by others, and beliefs about those beliefs! Analyses of bargaining before Rubinstein’s 1982 breakthrough nearly always rely on axioms of psychology rather than strategic reasoning. Discussions of predatory pricing until the 1970s, at the very earliest, relied on arguments that we now find unacceptably loose in their treatment of beliefs.

What happened? Why didn’t modern game-theoretic treatment of strategic situations – principally those involving more than one agent but fewer than an infinite number, although even situations of perfect competition are now often motivated game theoretically – arrive soon after the proofs of von Neumann, Morgenstern, and Nash? Why wasn’t the Nash program, of finding justification in self-interested noncooperative reasoning for cooperative or axiom-driven behavior, immediately taken up? The problem was that the core concept of the Nash equilibrium simply permits too great a multiplicity of outcomes, some of which feel natural and others of which do not. As such, a long search, driven essentially by a small community of mathematicians and economists, attempted to find the “right” refinements of Nash. And a small community it was: I recall Drew Fudenberg telling a story about a harrowing bus ride at an early game theory conference, where a fellow rider mentioned offhand that should they crash, the vast majority of game theorists in the world would be wiped out in one go!

Selten’s most renowned contribution came in the idea of perfection. The concept of subgame perfection was first proposed in a German-language journal in 1965 (making it one of the rare modern economic classics inaccessible to English speakers in the original, alongside Maurice Allais’ 1953 French-language paper in Econometrica which introduces the Allais paradox). Selten’s background up to 1965 is quite unusual. A young man during World War II, raised Protestant but with one Jewish parent, Selten fled Germany to work on farms, and only finished high school at 20 and college at 26. His two interests were mathematics, in which he worked on the then-unusual extensive form game for his doctoral degree, and experimentation, inspired by the small team of young professors at Frankfurt trying to pin down behavior in oligopoly through small lab studies.

In the 1965 paper, on demand inertia (paper is gated), Selten wrote down a small game-theoretic model to accompany the experiment, but realized there were many equilibria. The term “subgame perfect” was not introduced until 1975, also by Selten, but the idea itself is clear in the ’65 paper. He proposed that attention should focus on equilibria where, after every action, each player continues to act rationally from that point forward; that is, in every “subgame”, or every game that could conceivably occur after some actions have been taken, the prescribed actions must remain an equilibrium. Consider predatory pricing: a firm considers lowering price below cost today to deter entry. It is a Nash equilibrium for entrants to believe the price would continue to stay low should they enter, and hence to not enter. But it is not subgame perfect: the entrant should reason that, after entering, it is not worthwhile for the incumbent to continue to lose money once the entry has already occurred.
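
To make the backward induction concrete, here is a minimal sketch of that one-shot entry game. The payoff numbers are hypothetical, chosen only so that fighting loses the incumbent money once entry has occurred:

```python
# One-shot entry game; payoffs are (entrant, incumbent) and purely hypothetical:
payoffs = {
    ("out",   None):          (0, 10),   # incumbent keeps monopoly profit
    ("enter", "fight"):       (-2, -1),  # predation: both lose money
    ("enter", "accommodate"): (3, 4),    # duopoly profits
}

# "Stay out, because I will fight you" is a Nash equilibrium: if the entrant
# stays out, the fight is never played, so the threat costs the incumbent nothing.
# Backward induction asks whether the threat survives in the post-entry subgame.

# Step 1: in the subgame after entry, the incumbent picks its best action.
best_after_entry = max(["fight", "accommodate"], key=lambda a: payoffs[("enter", a)][1])

# Step 2: anticipating that response, the entrant compares entering with staying out.
payoff_out = payoffs[("out", None)][0]
payoff_enter = payoffs[("enter", best_after_entry)][0]
entrant_choice = "enter" if payoff_enter > payoff_out else "out"

print(best_after_entry)   # accommodate -- losing money after entry is irrational
print(entrant_choice)     # enter -- the predation threat is not credible
```

The (stay out, fight) profile survives as a Nash equilibrium only because the fight is never actually played; once rationality is required in the post-entry subgame, the threat is not credible and the prediction flips to entry and accommodation.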

Complicated strings of deductions which rule out some actions based on faraway subgames can seem paradoxical, of course, and did even to Selten. In his famous Chain Store paradox, he considers a firm with stores in many locations choosing whether to price aggressively to deter entry, with one potential entrant in each town choosing, one at a time, whether to enter. Entrants prefer to enter if pricing is not aggressive, but prefer to remain out otherwise; incumbents prefer to price nonaggressively whether or not entry occurs. Reasoning backward, in the final town we have the simple one-shot predatory pricing case analyzed above, where we saw that entry is the only subgame perfect outcome. Therefore, the entrant in the second-to-last town knows that the incumbent will not fight entry aggressively in the final town, hence there is no benefit to doing so in the second-to-last town, hence entry occurs again. Reasoning similarly, entry occurs everywhere. But if the incumbent could commit in advance to pricing aggressively in, say, the first 10 towns, it would deter entry in those towns and hence its profits would improve. Such commitment may not be possible, but what if the incumbent’s reasoning ability is limited, and it doesn’t completely understand why aggressive pricing in early stages won’t deter the entrant in the 16th town? And what if entrants reason that the incumbent’s reasoning ability is not perfectly rational? Then aggressive pricing to deter entry can occur.

That behavior may not be perfectly rational but rather bounded had been an idea of Selten’s since he read Herbert Simon as a young professor, but in his Nobel Prize biography, he argues that progress on a suitable general theory of bounded rationality has been hard to come by. The closest Selten comes to formalizing the idea is in his paper on trembling hand perfection in 1975, inspired by conversations with John Harsanyi. The problem with subgame perfection had been noted: if an opponent takes an action off the equilibrium path, it is “irrational”, so why should rationality of the opponent be assumed in the subgame that follows? The solution proposed in that paper is to assume that tiny mistakes can happen, putting even rational players into subgames. Taking the limit as mistakes become infinitesimally rare produces the idea of trembling-hand perfection. The idea of trembles implicitly introduces the idea that players have beliefs at various information sets about what has happened in the game. Kreps and Wilson’s sequential equilibrium recasts trembles as beliefs under uncertainty, and shows that a slight modification of the trembling hand leads to an easier decision-theoretic interpretation of trembles, an easier computation of equilibria, and an outcome that is nearly identical to Selten’s original idea. Sequential equilibrium, of course, goes on to become the workhorse solution concept in dynamic economics, a concept which underlies essentially all of modern industrial organization.

That Harsanyi, inventor of the Bayesian game, is credited by Selten for inspiring the trembling hand paper is no surprise. The two had met at a conference in Jerusalem in the mid-1960s, and they’d worked together both on applied projects for the US military and on pure theory research while Selten was visiting Berkeley. A classic 1972 paper of theirs on Nash bargaining with incomplete information (article is gated) begins the field of cooperative games with incomplete information. And this was no minor field: Roger Myerson, in his paper introducing mechanism design under incomplete information – the famous Bayesian revelation principle paper – shows that there exists a unique Selten-Harsanyi bargaining solution under incomplete information which is incentive compatible.

Myerson’s example is amazing. Consider building a bridge which costs $100. Two people will use the bridge. One values the bridge at $90. The other values the bridge at $90 with probability .9, and $30 with probability .1, where that valuation is the private knowledge of the second person. Note that in either case, the bridge is worth building. But who should pay? If you propose a 50/50 split, the bridge will simply not be built 10% of the time. If you propose an 80/20 split, where even in the worst case each person gets a surplus of ten dollars, the outcome is unfair to player one 90% of the time (where “unfair” will mean: violates certain principles of fairness that Nash, and later Selten and Harsanyi, set out axiomatically). What of the 53/47 split that gives each party, on average, the same expected surplus? Again, this is not “interim incentive compatible”, in that player two will refuse to pay in the case he is the type that values the bridge only at $30. Myerson shows mathematically that both players, once they know their private valuations, will agree to the following deal, and that the deal satisfies the Harsanyi-Selten fairness axioms: when player 2 claims to value at $90, the payment split is 49.5/50.5 and the bridge is always built, but when player 2 claims to value at $30, the entire cost is paid by player 1 but the bridge is built with only probability .439. Under this split, there are correct incentives for player 2 to always reveal his true willingness to pay. The mechanism means that there is a 5.61 percent chance the bridge isn’t built, but the split of surplus from the bridge nonetheless does better than any other split which satisfies all of Harsanyi and Selten’s fairness axioms.
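
A quick arithmetic check of Myerson’s numbers (assuming, as the notation suggests, that the first figure in each split is player 1’s payment and the second is player 2’s) confirms both the interim incentive compatibility and the 5.61 percent figure:

```python
# Myerson's bridge example: cost 100, player 1 values it at 90,
# player 2 values it at 90 with prob 0.9 and at 30 with prob 0.1.
# Proposed mechanism (assuming player 1 pays the first figure, player 2 the second):
#   player 2 reports 90 -> payments (49.5, 50.5), bridge built for sure
#   player 2 reports 30 -> payments (100, 0),     bridge built with prob 0.439

q_high, pay_high = 1.0, (49.5, 50.5)
q_low,  pay_low  = 0.439, (100.0, 0.0)

# Interim incentive compatibility for player 2:
truth_90 = q_high * (90 - pay_high[1])   # 39.5: type-90 reports truthfully
lie_90   = q_low  * (90 - pay_low[1])    # ~39.5: mimicking the low type is no better
truth_30 = q_low  * (30 - pay_low[1])    # ~13.2: type-30 reports truthfully
lie_30   = q_high * (30 - pay_high[1])   # -20.5: mimicking the high type is strictly worse

# Probability the bridge goes unbuilt: only when the low type shows up
p_unbuilt = 0.1 * (1 - q_low)            # 0.0561, i.e. 5.61 percent

print(truth_90, lie_90, truth_30, lie_30, p_unbuilt)
```

Note that the high type’s truth-telling constraint just binds: 0.439 × 90 ≈ 39.5, exactly the surplus from reporting truthfully.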

Selten’s later work is, it appears to me, more scattered. His attempt with Harsanyi to formalize “the” equilibrium refinement, in a 1988 book, was valiant but in the end misguided. His papers on theoretical biology, inspired by his interest in long walks among the wildflowers, are rather tangential to his economics. And what of his experimental work? To understand Selten’s thinking, read this fascinating dialogue with himself that Selten gave as a Schwartz Lecture at Northwestern MEDS. In this dialogue, he imagines a debate between a Bayesian economist, an experimentalist, and an evolutionary biologist. The economist argues that “theory without theorems” is doomed to fail, that Bayesianism is normatively “correct”, and that Bayesian reasoning can easily be extended to include costs of reasoning or reasoning mistakes. The experimentalist argues that ad hoc assumptions are better than incorrect ones: just as human anatomy is complex and cannot be reduced to a few axioms, neither can social behavior. The biologist argues that learning a la Nelson and Winter is descriptively accurate about how humans behave, whereas high-level reasoning is not. The “chairman”, perhaps representing Selten himself, sums up the argument by saying that experiments which simply contradict Bayesianism are a waste of time, but that human social behavior surely depends on bounded rationality and hence empirical work ought to be devoted to constructing a foundation for such a theory (shall we call this the “Selten program”?). And yet, this essay was from 1990, and we seem no closer to having such a theory, nor does it seem to me that behavioral research has fundamentally contradicted most of our core empirical understanding derived from theories of pure rationality. Selten’s program, it seems, remains not only incomplete, but perhaps not even first order; the same cannot be said of his theoretical constructs, as without perfection a great part of modern economics simply could not exist.

Nobel Prize 2014: Jean Tirole

A Nobel Prize for applied theory – now this is something I can get behind! Jean Tirole’s prize announcement credits him for his work on market power and regulation, and there is no question that he is among the leaders, if not the world leader, in the application of mechanism design theory to industrial organization; indeed, the idea of doing IO in the absence of this theoretical toolbox seems so strange to me that it’s hard to imagine anyone had ever done it! Economics is sometimes defined by a core principle that agents – people or firms – respond to incentives. Incentives are endogenous; how my bank or my payment processor or my lawyer wants to act depends on how other banks or other processors or other lawyers act. Regulation is therefore a game. Optimal regulation is therefore a problem of mechanism design, and we now have mathematical tools that allow investigation across the entire space of potential regulating mechanisms, even those that are counterfactual. That is an incredibly powerful methodological advance, so powerful that there will be at least one more Nobel (Milgrom and Holmstrom?) based on this literature.

Because Tirole’s toolbox is theoretical, he has written an enormous amount of “high theory” on the implications of the types of models modern IO economists use. I want to focus in this post on a particular problem where Tirole has stood on both sides of the divide: that of the seemingly obscure question of what can be contracted on.

This literature goes back to a very simple question: what is a firm, and why do firms exist? And when they exist, why don’t they grow so large that they become one giant firm a la Schumpeter’s belief in Capitalism, Socialism, and Democracy? One answer is that given by Coase and updated by Williamson, among many others: transaction costs. There are costs of haggling and the like involved in getting things done with suppliers or independent contractors. When these costs are high, we integrate that factor into the firm. When they are low, we avoid the bureaucratic costs needed to manage all those factors.

For a theorist trained in mechanism design, this is a really strange idea. For one, what exactly are these haggling or transaction costs? Without specifying precisely what is meant, it is very tough to write a model incorporating them and exploring their implications. But worse, why would we think these costs are higher outside the firm than inside? A series of papers by Sandy Grossman, Oliver Hart and John Moore points out, quite rightly, that firms cannot make their employees do anything. They can tell them to do something, but the employees will respond to incentives like anyone else. Given that, why would we think the problem of incentivizing employees within an organization is any easier or harder than incentivizing them outside the organization? The solution they propose is the famous Property Rights Theory of the firm (which could fairly be considered the most important paper ever published in the illustrious JPE). This theory says that firms are defined by the assets they control. If we can contract on every future state of the world, then this control shouldn’t matter, but when unforeseen contingencies arise, the firm still has “residual control” of its capital. Therefore, efficiency depends on the allocation of scarce residual control rights, and hence the allocation of these rights inside or outside of a firm is important. Now that is a theory of the firm – one well-specified and based on incentives – that I can understand. (An interesting sidenote: when people think economists don’t really understand the economy because, hey, they’re not rich, we can at least point to Sandy Grossman. Sandy, a very good theorist, left academia to start his own firm, and as far as I know, he is now a billionaire!)

Now you may notice one problem with Grossman, Hart and Moore’s papers. Just as there was an assumption of nebulous transaction costs in Coase and his followers, there is a nebulous assumption of “incomplete contracts” in GHM. This seems reasonable at first glance: there is no way we could possibly write a contract that covers every possible contingency or future state of the world. I have to imagine everyone who has ever rented an apartment, leased a car, or run a small business has first-hand experience with the nature of residual control rights when some contingency arises. Here is where Tirole comes in. Throughout the 80s and 90s, Tirole wrote many papers using incomplete contracts: his 1994 paper with Aghion on contracts for R&D is right within this literature. In complete contracting, the courts can verify and enforce any contract that relies on observable information, though adverse selection (hidden information by agents) or moral hazard (unverifiable action by agents) may still exist. Incomplete contracting further restricts the set of contracts to a generally simple set of possibilities. In the late 1990s, however, Tirole, along with his fellow Nobel winner Eric Maskin, realized in an absolute blockbuster of a paper that there is a serious problem with these incomplete contracts as usually modeled.

Here is why: even if we can’t ex-ante describe all the future states of the world, we may still ex-post be able to elicit information about the payoffs we each get. As Tirole has noted, firms do not care about indescribable contingencies per se; they only care about how those contingencies affect their payoffs. That means that, at an absolute minimum, the optimal “incomplete contract” had better be at least as good as the optimal contract which conditions on elicited payoffs. These payoffs may of course be stochastic functions of all of our actions, and hence this insight might not actually mean we can achieve first-best efficiency when the future is really hard to describe. Maskin and Tirole’s 1999 paper shows, incredibly, that indescribability of states is irrelevant, and that even if we can’t write down a contract on states of the world, we can contract on payoff realizations in a way that is just as good as if we could actually write the complete contract.

How could this be? Imagine (here via a simple example of Maskin’s) two firms contracting for R&D. Firm 1 exerts effort e1 and produces a good with value v(e1). Firm 2 invests in some process that will lower the production cost of firm 1’s new good, investing e2 to make production cost equal to c(e2). Payoffs, then, are u1(p-c(e2)-e1) and u2(v(e1)-p-e2). If we knew u1 and u2 and could contract upon them, then the usual Nash implementation literature tells us how to generate efficient levels of e1 and e2 (call them e1*, e2*) by writing a contract: if the product doesn’t have the characteristics of v(e1*) or the production process doesn’t have the characteristics of c(e2*), then we fine the person who cheated. If effort generated stochastic values rather than deterministic ones, the standard mechanism design literature tells us exactly when we can still get the first best.
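
As a rough illustration of that fine-based contract, here is a sketch with hypothetical functional forms for v and c (the specific functions, price, and fine are my own choices, not Maskin’s): compute the efficient efforts, then verify that when any deviation from the efficient characteristics triggers a fine, complying is each firm’s best response.

```python
import numpy as np

# Hypothetical functional forms, chosen only to illustrate the logic:
#   value of the good:   v(e1) = 20 * sqrt(e1)
#   production cost:     c(e2) = 10 - 4 * sqrt(e2)
# Efficient efforts maximize total surplus v(e1) - c(e2) - e1 - e2.

def v(e1): return 20 * np.sqrt(e1)
def c(e2): return 10 - 4 * np.sqrt(e2)

grid = np.linspace(0, 150, 3001)
e1_star = grid[np.argmax(v(grid) - grid)]      # ~100
e2_star = grid[np.argmax(-c(grid) - grid)]     # ~4

p, fine = 130.0, 1000.0                        # hypothetical price and penalty

def u1(e1, e2):
    # firm 1 is fined if the delivered good lacks the characteristics of v(e1*)
    penalty = fine if not np.isclose(v(e1), v(e1_star)) else 0.0
    return p - c(e2) - e1 - penalty

def u2(e1, e2):
    # firm 2 is fined if the production process lacks the characteristics of c(e2*)
    penalty = fine if not np.isclose(c(e2), c(e2_star)) else 0.0
    return v(e1) - p - e2 - penalty

# Under the fine contract, complying is a best response for each firm:
best_e1 = grid[np.argmax([u1(e1, e2_star) for e1 in grid])]
best_e2 = grid[np.argmax([u2(e1_star, e2) for e2 in grid])]
print(e1_star, e2_star, best_e1, best_e2)      # both firms choose the efficient efforts
```

With a large enough fine, no unilateral deviation pays, which is the Nash implementation logic the contract relies on.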

Now, what if v and c are state-dependent, and there are a huge number of states of the world? That is, efficient e1* and e2* are now functions of the state of the world realized after we write the initial contract. Incomplete contracting assumed that we cannot foresee all the possible v and c, and hence won’t write a contract incorporating all of them. But, aha!, we can still write a contract that says, look, whatever happens tomorrow, we are going to play a game tomorrow where I say what my v is and you say what your c is. It turns out that there exists such a game which generates truthful revelation of v and c (Maskin and Tirole do this using an idea similar to that of the subgame perfect implementation literature, but the exact features are not terribly important for our purposes). Since the only part of the indescribable state I care about is the part that affects my payoffs, we are essentially done: no matter how many v’s and c’s there could be in the future, as long as I can write a contract specifying how we each get the other to truthfully say what those parameters are, this indescribability doesn’t matter.

Whoa. That is a very, very, very clever insight. Frankly, it is convincing enough that the only role left for property rights theories of the firm is some kind of behavioral theory which restricts even contracts of the Maskin-Tirole type – and since these contracts are in some ways quite a bit simpler than the hundreds of pages of legalese which we see in a lot of real-world contracts on important issues, it’s not clear that bounded rationality or similar theories will get us far.

Where to go from here? Firms, and organizations more generally, exist. I am sure the reason has to do with incentives. But exactly why – well, we still have a lot of work to do on that question, and Tirole has played a major role in the progress made so far.

Tirole’s Walras-Bowley lecture, published in Econometrica in 1999, is a fairly accessible introduction to his current view of incomplete contracts. He has many other fantastic papers, across a wide variety of topics. I particularly like his behavioral theory work, written mainly with Roland Benabou; see, for instance, their 2003 ReStud on when monetary rewards are bad for incentives.

“Seeking the Roots of Entrepreneurship: Insights from Behavioral Economics,” T. Astebro, H. Herz, R. Nanda & R. Weber (2014)

Entrepreneurship is a strange thing. Entrepreneurs work longer hours, make less money in expectation, and have higher variance earnings than those working for firms; if anyone knows of solid evidence to the contrary, I would love to see the reference. The social value of entrepreneurship through greater product market competition, new goods, etc., is very high, so as a society the strange choice of entrepreneurs may be a net benefit. We even encourage it here at UT! Given these facts, why does anyone start a company anyway?

Astebro and coauthors, as part of a new JEP symposium on entrepreneurship, look at evidence from behavioral economics. The evidence isn’t totally conclusive, but it appears entrepreneurs are not any more risk-loving or ambiguity-loving than the average person. Though they are overoptimistic, you still see entrepreneurs in high-risk, low-performance firms even ten years after they are founded, at which point surely any overoptimism must have long since been beaten out of them.

It is, however, true that entrepreneurship is much more common among the well-off. If risk aversion can’t explain things, then perhaps entrepreneurship is in some sense consumption: the founders value independence and control. Experimental evidence provides fairly strong support for this hypothesis. For many entrepreneurs, it is more about not having a boss than about the small chance of becoming very rich.

This leads to a couple of questions: why so many immigrant entrepreneurs, and what are we to make of the declining rate of firm formation in the US? Pardon me if I speculate a bit here. The immigrant story may just be selection; almost by definition, those who move across borders, especially those who move for graduate school, tend to be quite independent! The declining rate of firm formation may be tied to changes in inequality; to the extent that entrepreneurship involves consumption of a luxury good (control over one’s working life) in addition to standard risk-adjusted cost-benefit analysis, changes in the income distribution will change that consumption pattern. More work is needed on these questions.

Summer 2014 JEP (RePEc IDEAS). As always, a big thumbs up to the JEP for being free to read! It is also worth checking out the companion articles by Bill Kerr and coauthors on experimentation, with some amazing stats using internal VC project evaluation data for which ex-ante projections were basically identical for ex-post failures and ex-post huge successes, and one by Haltiwanger and coauthors documenting the important role played by startups in job creation, the collapse in startup formation and job churn which began well before 2008, and the utter mystery about what is causing this collapse (which we can see across regions and across industries).

“Incentives for Unaware Agents,” E.L. von Thadden & X. Zhao (2012)

There is a paradox that troubles a lot of applications of mechanism design: complete contracts (or, indeed, conditional contracts of any kind!) appear to be quite rare in the real world. One reason for this may be that agents are simply unaware of what they can do, an idea explored by von Thadden and Zhao in this article as well as by Rubinstein and Glazer in a separate 2012 paper in the JPE. I like the opening example in Rubinstein and Glazer:

“I went to a bar and was told it was full. I asked the bar hostess by what time one should arrive in order to get in. She said by 12 PM and that once the bar is full you can only get in if you are meeting a friend who is already inside. So I lied and said that my friend was already inside. Without having been told, I would not have known which of the possible lies to tell in order to get in.”

The contract itself gave the agent the necessary information. If I don’t specify the rule that patrons whose friend is inside are allowed entry, then only those who are aware of that possibility will ask. Of course, some patrons whom I do wish to allow in, because their friend actually is inside, won’t know to ask unless I tell them. If the harm to the bar from previously unaware people learning the rule and then lying overwhelms the gain from letting in unaware patrons whose friends really are inside, then the bar is better off not giving an explicit “contract”. Similar problems occur all the time. There are lots of behavioral explanations (recall the famous Israeli daycare which was said to have primed people into an “economic relationship” state of mind by setting a fine for picking kids up late, leading to more lateness, not less). But the bar story above relies on no behavioral assumption aside from agents having a default (ask about the friend clause if aware, or don’t ask if unaware) which can be removed if agents are informed about their real possible actions when given a contract.

When all agents are unaware, the tradeoff is simple, as above: I make everyone aware of their true actions if the cost of providing incentive rents is exceeded by the benefit of agents switching to actions I prefer more. Imagine that agents can leave their tools uncleaned, partially clean them, or fully clean them at the end of the workday (giving some stochastic output of cleanliness). They get no direct utility out of cleaning, and indeed get disutility the more time they spend cleaning. If there is no contract, they default to partially cleaning. If there is a contract and all levels of cleaning pay the same, the agent will exert zero effort and not clean. The only reason I might offer high-powered incentives, then, is if the benefit of getting agents to fully clean their tools exceeds the IC rents I will have to pay them once the contract is in place.

More interesting is the case with aware and unaware agents, when I don’t know which agent is which. The unaware agents get contracts that pay the same wage no matter what their output, and the aware agents can get high-powered incentives. Solving the contracting problem involves a number of technical difficulties (standard envelope theorem arguments won’t work), but the solution is fairly intuitive. Offer two incomplete wage contracts w(x) and v(x). Let v(x) fully insure: no matter what the output, the wage is the same. Let w(x) increase with better outputs. Choose the full-insurance wage v low enough that the unaware agents’ participation constraint just binds. Then offer just enough rents in w(x) that the aware agents, who can take any action they want, actually take the planner-preferred action. Unlike in a standard screening problem, I can manipulate this problem by just telling unaware agents about their possible actions: it turns out that profits only increase by making these agents aware if there are sufficiently few unaware agents in the population.

Some interesting sidenotes. Unawareness is “stable” in the sense that unaware agents will never be told they are unaware, and hence if we played this game for two periods, they would remain unaware. It is not optimal for aware agents to make unaware agents aware, since the aware earn information rents as a result of that unawareness. It is not optimal for the planner to make unaware agents aware: the firm is maximizing total profit, announcements strictly decrease wages of aware agents (by taking their information rents), and don’t change unaware agents’ rents (they get zero since their wage is always chosen to make their PC bind, as is usual for “low types” in screening problems). Interesting.

2009 working paper (IDEAS). Final version in REStud 2012. The Rubinstein/Glazer paper takes a slightly different tack. Roughly, it says that contract designers can write a codex of rules, where you are accepted if you satisfy all the rules. An agent made aware of the rules can figure out how to lie if it involves only lying about one rule. A patient, for instance, may want a painkiller prescription. He can lie about any (unverifiable) condition, but he is only smart enough to lie once. The question is, which codices are not manipulable?

“Until the Bitter End: On Prospect Theory in a Dynamic Context,” S. Ebert & P. Strack (2012)

Let’s kick off job market season with an interesting paper by Sebastian Ebert, a post-doc at Bonn, and Philipp Strack, who is on the job market from Bonn (though this doesn’t appear to be his main job market paper). The paper concerns the implications of Tversky and Kahneman’s prospect theory in its 1992 form. This form of utility is nothing obscure: the 1992 paper has over 5,000 citations, and the original prospect theory paper has substantially more. Roughly, cumulative prospect theory (CPT) says that agents have utility which is concave above a reference point, convex below it, with big losses and gains that occur with small probability weighted particularly heavily. Such preferences are thought to explain, for example, the simultaneous existence of insurance and gambling, or the difference in willingness to pay for objects you possess versus objects you don’t possess.

As Machina, among others, pointed out a couple of decades ago, once you leave expected utility, you are definitely going to be writing down preferences that generate strange behavior at least somewhere. This is a direct result of Savage’s theorem. If you are not an EU-maximizer, then you are violating at least one of Savage’s axioms, and those axioms in their totality are proven to avoid many types of behavior that we find normatively unappealing, such as falling for the sunk cost fallacy. Ebert and Strack write down a really general version of CPT, even more general than the rough definition I gave above. They then note that CPT preferences mean I can always construct a right-skewed gamble with negative expected payout that the agent will accept. Why? Agents like big gains that occur with small probability. Right-skew the gamble so that a big gain occurs with tiny probability, and otherwise the agent loses a tiny amount. An agent with CPT preferences will accept this gamble. Such a gamble exists at any wealth level, no matter what the reference point. Likewise, there is a left-skewed, positive expected payoff gamble that is rejected at any wealth level.
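
To see the construction in numbers, here is a small computation of CPT values using the standard Tversky and Kahneman (1992) parameter estimates; the particular gambles are my own, picked only to illustrate the skewness point, and are not taken from Ebert and Strack:

```python
# Cumulative prospect theory value of a two-outcome gamble relative to the
# reference point, using the standard Tversky-Kahneman (1992) estimates.
alpha, lam = 0.88, 2.25          # value-function curvature and loss aversion
gamma_gain, gamma_loss = 0.61, 0.69

def value(x):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt(gain, p_gain, loss, p_loss):
    # one gain and one loss outcome, so cumulative and simple weights coincide
    return weight(p_gain, gamma_gain) * value(gain) + weight(p_loss, gamma_loss) * value(loss)

# Right-skewed gamble: win 200 with probability 1%, lose 2.50 otherwise.
ev_right = 0.01 * 200 - 0.99 * 2.5        # -0.475: negative expected value
cpt_right = cpt(200, 0.01, -2.5, 0.99)    # ~ +1.1: accepted by the CPT agent

# Mirror image, left-skewed: lose 200 with probability 1%, win 2.50 otherwise.
ev_left = 0.99 * 2.5 - 0.01 * 200         # +0.475: positive expected value
cpt_left = cpt(2.5, 0.99, -200, 0.01)     # ~ -7.4: rejected by the CPT agent

print(ev_right, cpt_right, ev_left, cpt_left)
```

The right-skewed gamble loses about 48 cents in expectation yet has positive CPT value, while its mirror image gains 48 cents in expectation and is rejected; the theorem says such gambles can be constructed at every wealth level and reference point.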

If you take a theory-free definition of risk aversion to mean “risk-averse agents never accept gambles with zero expected payoff” and “risk-loving agents always accept a risk with zero expected payoff”, then the theorem in the previous paragraph means that CPT agents are neither risk-averse nor risk-loving at any wealth level. This is interesting because a naive description of the loss-averse utility function is that CPT agents are “risk-averse above the reference point, and risk-loving below it”. But the fact that small-probability events are given more weight, in Ebert and Strack’s words, dominates whatever curvature the utility function possesses when it comes to some types of gambles.

So what does this mean, then? Let’s take CPT agents into a dynamic framework, and let them be naive about their time inconsistency (since they are non-EU-maximizers, they will be time-inconsistent). Bring them to a casino where a random variable moves with negative drift. Give them an endowment of money and any reference point. The CPT agent gambles at any time t as long as she has some strategy which (naively) increases her CPT utility. By the skewness result above, we know she can, at the very least, gamble a very small amount, plan to stop if she loses, and plan to keep gambling if she wins. There is always such a bet. If she does lose, then tomorrow she will bet again, since there is a gamble with positive expected CPT utility gain no matter her wealth level. Since the process has negative drift, she will continue gambling until she goes bankrupt. This result doesn’t rely on any strange properties of continuous time or infinite state spaces; the authors construct an example on a 37-number roulette wheel, using the original parameterization of Kahneman and Tversky, which has the CPT agent bet all the way to bankruptcy.
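
A crude simulation makes the dynamics vivid. This is not Ebert and Strack’s construction (they work in continuous time and allow history-dependent plans); it only shows that an agent who evaluates each single-number roulette bet from her current reference point, under the same 1992 parameters as in the previous sketch, finds every spin attractive and so never stops:

```python
import random

# CPT evaluation of a single-number bet at a 37-number roulette wheel,
# using the Tversky-Kahneman (1992) estimates.
alpha, lam, g_gain, g_loss = 0.88, 2.25, 0.61, 0.69

def value(x):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, g):
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def cpt_single_number_bet(stake):
    # bet on one number: win 35x the stake with probability 1/37, else lose the stake
    p = 1 / 37
    return weight(p, g_gain) * value(35 * stake) + weight(1 - p, g_loss) * value(-stake)

wealth, spins = 100.0, 0
while wealth >= 1 and cpt_single_number_bet(1.0) > 0:   # every spin looks attractive ex ante
    wealth += 35.0 if random.random() < 1 / 37 else -1.0
    spins += 1

print(spins, wealth)   # the agent stops only once she cannot cover the stake
```

Each spin loses 1/37 of the stake in expectation, so wealth is a negative-drift random walk and bankruptcy arrives with probability one; at no point does stopping ever look better to the agent than one more small, skewed bet.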

What do we learn? Two things. First, a lot of what is supposedly explained by prospect theory may, in fact, be explained by the skewness preference generated by CPT’s heavy weighting on low-probability events, a fact mentioned by a number of papers the authors cite. Second, not to go all Burke on you, but when dealing with qualitative models, we have good reason to stick to the orthodoxy in many cases. The logical consequences of orthodox models will generally have been explored in great depth. The logical consequences of alternatives will not have been explored in the same way. All of our models of dynamic utility are problematic: expected utility falls to the Rabin critique, ambiguity aversion implies sunk cost fallacies, and prospect theory is vulnerable in the ways described here. But any theory which has been used for a long time will have its flaws shown more visibly than newer, alternative theories. We shouldn’t mistake a lack of visible flaws for a lack of flaws more generally.

SSRN Feb. 2012 working paper (no IDEAS version).
