Category Archives: Behavioral Economics

“Seeking the Roots of Entrepreneurship: Insights from Behavioral Economics,” T. Astebro, H. Herz, R. Nanda & R. Weber (2014)

Entrepreneurship is a strange thing. Entrepreneurs work longer hours, make less money in expectation, and have higher variance earnings than those working for firms; if anyone knows of solid evidence to the contrary, I would love to see the reference. The social value of entrepreneurship through greater product market competition, new goods, etc., is very high, so as a society the strange choice of entrepreneurs may be a net benefit. We even encourage it here at UT! Given these facts, why does anyone start a company anyway?

Astebro and coauthors, as part of a new JEP symposium on entrepreneurship, look at evidence from behavioral economics. The evidence isn’t totally conclusive, but it appears entrepreneurs are not any more risk-loving or ambiguity-loving than the average person. And though entrepreneurs do tend to be overoptimistic, overoptimism can’t be the whole story: you still see entrepreneurs in high-risk, low-performance firms even ten years after founding, at which point surely any overoptimism must have long since been beaten out of them.

It is, however, true that entrepreneurship is much more common among the well-off. If risk aversion can’t explain things, then perhaps entrepreneurship is in some sense consumption: the founders value independence and control. Experimental evidence lends fairly strong support to this hypothesis. For many entrepreneurs, it is more about not having a boss than about the small chance of becoming very rich.

This leads to a couple of questions: why so many immigrant entrepreneurs, and what are we to make of the declining rate of firm formation in the US? Pardon me if I speculate a bit here. The immigrant story may just be selection; almost by definition, those who move across borders, especially those who move for graduate school, tend to be quite independent! The declining rate of firm formation may be tied to changes in inequality; to the extent that entrepreneurship involves consumption of a luxury good (control over one’s working life) in addition to a standard risk-adjusted cost-benefit calculation, changes in the income distribution will change that consumption pattern. More work is needed on these questions.

Summer 2014 JEP (RePEc IDEAS). As always, a big thumbs up to the JEP for being free to read! It is also worth checking out the companion articles: one by Bill Kerr and coauthors on experimentation, with some amazing stats from internal VC project evaluation data in which ex-ante projections were basically identical for ex-post failures and ex-post huge successes, and one by Haltiwanger and coauthors documenting the important role startups play in job creation, the collapse in startup formation and job churn that began well before 2008, and the utter mystery about what is causing this collapse (which shows up across regions and across industries).

“Incentives for Unaware Agents,” E.L. von Thadden & X. Zhao (2012)

There is a paradox that troubles a lot of applications of mechanism design: complete contracts (or, indeed, conditional contracts of any kind!) appear to be quite rare in the real world. One reason for this may be that agents are simply unaware of what they can do, an idea explored by von Thadden and Zhao in this article as well as by Rubinstein and Glazer in a separate 2012 paper in the JPE. I like the opening example in Rubinstein and Glazer:

“I went to a bar and was told it was full. I asked the bar hostess by what time one should arrive in order to get in. She said by 12 PM and that once the bar is full you can only get in if you are meeting a friend who is already inside. So I lied and said that my friend was already inside. Without having been told, I would not have known which of the possible lies to tell in order to get in.”

The contract itself gave the agent the necessary information. If I don’t specify the rule that patrons whose friend is inside are allowed entry, then only those who are aware of that possibility will ask. Of course, some patrons whom I do wish to allow in, because their friend actually is inside, won’t know to ask unless I tell them. If the harm to the bar from previously unaware people learning and then lying overwhelms the gain from allowing unaware friends in, then the bar is better off not giving an explicit “contract”. Similar problems occur all the time. There are lots of behavioral explanations (recall the famous Israeli daycare which was said to have primed people into an “economic relationship” state of mind by setting a fine for picking kids up late, leading to more lateness, not less). But the bar story above relies on no behavioral assumption beyond agents having a default action (ask about the friend clause if aware, don’t ask if unaware), a default that disappears once agents are told their real possible actions when handed a contract.

When all agents are unaware, the tradeoff is simple, as above: I make everyone aware of their true actions if the cost of providing incentive rents is exceeded by the benefit of agents switching to actions I prefer more. Imagine that agents can choose not to clean, to partially clean, or to fully clean their tools at the end of the workday (generating some stochastic level of cleanliness). They get no direct utility out of cleaning, and indeed get disutility the more time they spend cleaning. If there is no contract, they default to partially cleaning. If there is a contract that pays the same regardless of output, the now-aware agent will exert zero effort and not clean at all. The only reason I might offer high-powered incentives, then, is if the benefit of getting agents to fully clean their tools exceeds the IC rents I will have to pay them once the contract is in place.
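
To fix ideas, here is a tiny back-of-the-envelope comparison in Python. All of the numbers (the value of partial versus full cleaning, the effort cost, the IC rent) are my own illustrative assumptions, not figures from the paper; the point is only to show the form of the tradeoff.

# Toy version of the all-unaware tradeoff: keep agents unaware (flat wage,
# default partial cleaning) versus make them aware and pay for full cleaning.
# All numbers are made up for illustration.
reservation_utility = 10.0   # agent's outside option
benefit_partial = 50.0       # firm value of the default partial cleaning
benefit_full = 70.0          # firm value of full cleaning
effort_cost_full = 6.0       # agent's disutility from fully cleaning
ic_rent = 8.0                # rent needed to make full cleaning incentive compatible

# Unaware: flat wage that just meets the participation constraint; the agent
# sticks with the default partial cleaning.
profit_unaware = benefit_partial - reservation_utility

# Aware: the wage must cover the outside option, the effort cost, and the IC rent.
profit_aware = benefit_full - (reservation_utility + effort_cost_full + ic_rent)

print(profit_unaware, profit_aware)   # 40.0 versus 46.0: here awareness pays
# Push ic_rent above 14 and the firm prefers to leave agents unaware.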

More interesting is the case with aware and unaware agents, when I don’t know which agent is which. The unaware agents get contracts that pay the same wage no matter what their output, and the aware agents can get high-powered incentives. Solving the contracting problem involves a number of technical difficulties (standard envelope theorem arguments won’t work), but the solution is fairly intuitive. Offer two incomplete wage contracts w(x) and v(x). Let v(x) just fully insure: no matter what the output, the wage is the same. Let w(x) increase with better outputs. Choose the full-insurance wage v low enough that the unaware agents’ participation constraint just binds. Then offer just enough rents in w(x) that the aware agents, who can take any action they want, actually take the planner-preferred action. Unlike in a standard screening problem, I can also manipulate this problem by simply telling unaware agents about their possible actions: it turns out that profits increase from making these agents aware only if there are sufficiently few unaware agents in the population.
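
The threshold logic can be illustrated with another toy calculation in Python. Again, every number and functional form here is my own assumption (in particular, that an announcement wipes out the aware agents’ information rents while forcing the firm to compensate the newly aware agents’ effort), so treat this as a sketch of the comparative static rather than the paper’s model.

# Mixed population: share mu is unaware, 1 - mu is aware.
# All values below are made-up illustrative numbers.
v_pc = 10.0        # flat wage at which the unaware agents' PC just binds
y_partial = 50.0   # output under the unaware agents' default partial cleaning
y_full = 70.0      # output under full cleaning
effort = 25.0      # wage compensation required for full cleaning
info_rent = 8.0    # rent the aware agents capture when others remain unaware

def profit(mu, announce):
    if announce:
        # Everyone aware: full cleaning, effort compensated, no information rents.
        return y_full - v_pc - effort
    # No announcement: unaware agents take the flat contract and partially clean;
    # aware agents take the high-powered contract and keep their rent.
    return mu * (y_partial - v_pc) + (1 - mu) * (y_full - v_pc - effort - info_rent)

for mu in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(mu, "announce" if profit(mu, True) > profit(mu, False) else "stay silent")
# With these made-up numbers the firm announces only when the unaware share is
# below roughly 0.6, matching the "sufficiently few unaware agents" condition.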

Some interesting sidenotes. Unawareness is “stable” in the sense that unaware agents will never be told they are unaware, and hence if we played this game for two periods, they would remain unaware. It is not optimal for aware agents to make unaware agents aware, since the aware earn information rents as a result of that unawareness. It is not optimal for the planner to make unaware agents aware: the firm is maximizing total profit, announcements strictly decrease the wages of aware agents (by taking their information rents), and don’t change the unaware agents’ rents (they get zero since their wage is always chosen to make their PC bind, as is usual for “low types” in screening problems). Interesting.

2009 working paper (IDEAS). Final version in REStud 2012. The Rubinstein/Glazer paper takes a slightly different tack. Roughly, it says that contract designers can write a codex of rules, where you are accepted if you satisfy all the rules. An agent made aware of the rules can figure out how to lie if it involves only lying about one rule. A patient, for instance, may want a painkiller prescription. He can lie about any (unverifiable) condition, but he is only smart enough to lie once. The question is, which codices are not manipulable?

“Until the Bitter End: On Prospect Theory in a Dynamic Context,” S. Ebert & P. Strack (2012)

Let’s kick off job market season with an interesting paper by Sebastian Ebert, a post-doc at Bonn, and Philipp Strack, who is on the job market from Bonn (though this doesn’t appear to be his main job market paper). The paper concerns the implications of Tversky and Kahneman’s prospect theory in its 1992 form. This form of utility is nothing obscure: the 1992 paper has over 5,000 citations, and the original prospect theory paper has substantially more. Roughly, cumulative prospect theory (CPT) says that agents have utility which is concave above a reference point, convex below it, with big losses and gains that occur with small probability weighted particularly heavily. Such loss aversion is thought to explain, for example, the simultaneous existence of insurance and gambling, or the difference in willingness to pay for objects you possess versus objects you don’t possess.
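
For concreteness, the standard 1992 functional forms, stated here in the usual textbook notation rather than in Ebert and Strack’s notation, are

v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda (-x)^{\beta} & x < 0 \end{cases}, \qquad w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}},

with the reference point normalized to zero, estimated curvature \alpha = \beta = 0.88, loss aversion \lambda = 2.25, and probability-weighting parameter \gamma of roughly 0.61 for gains and 0.69 for losses. A binary gamble paying G > 0 with probability p and -L otherwise is then evaluated as w^{+}(p) v(G) + w^{-}(1-p) v(-L).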

As Machina, among others, pointed out a couple decades ago, once you leave expected utility, you are definitely going to be writing down preferences that generate strange behavior at least somewhere. This is a direct result of Savage’s theorem. If you are not an EU-maximizer, then you are violating at least one of Savage’s axioms, and those axioms in their totality are proven to avoid many types of behavior that we find normatively unappealing, such as falling for the sunk cost fallacy. Ebert and Strack write down a really general version of CPT, even more general than the rough definition I gave above. They then note that these preferences imply I can always construct a right-skewed gamble with negative expected payout that the agent will accept. Why? Agents like big gains that occur with small probability. Right-skew the gamble so that a big gain occurs with a tiny probability, and otherwise the agent loses a tiny amount. An agent with CPT preferences will accept this gamble. Such a gamble exists at any wealth level, no matter what the reference point. Likewise, there is a left-skewed, positive expected payoff gamble that is rejected at any wealth level.
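
As a quick sanity check, here is a short Python computation of one such gamble using the 1992 parameter estimates; the particular gamble (win 100 with probability 0.01, lose 1.05 otherwise) is my own illustrative choice, not one from the paper.

# CPT value of a right-skewed, negative-expected-value binary gamble,
# using the Tversky-Kahneman (1992) parameter estimates.
alpha, beta, lam = 0.88, 0.88, 2.25   # value-function curvature and loss aversion
gamma_gain, gamma_loss = 0.61, 0.69   # probability-weighting parameters

def weight(p, gamma):
    # Inverse-S probability weighting: small probabilities are overweighted.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_binary(gain, p_gain, loss):
    # Value of "win gain with probability p_gain, lose loss otherwise", reference point zero.
    return (weight(p_gain, gamma_gain) * gain**alpha
            - weight(1 - p_gain, gamma_loss) * lam * loss**beta)

gain, p, loss = 100.0, 0.01, 1.05
print(p * gain - (1 - p) * loss)   # expected value: about -0.04
print(cpt_binary(gain, p, loss))   # CPT value: about +0.96, so the gamble is accepted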

If you take a theory-free definition of risk aversion to mean “Risk-averse agents never accept gambles with zero expected payoff” and “Risk-loving agents always accept a risk with zero expected payoff”, then the theorem in the previous paragraph means that CPT agents are neither risk-averse, nor risk-loving, at any wealth level. This is interesting because a naive description of the loss-averse utility function is that CPT agents are “risk-averse above the reference point, and risk-loving below it”. But the fact that small-probability events are given more weight, in Ebert and Strack’s words, dominates whatever curvature the utility function possesses when it comes to some types of gambles.

So what does this mean, then? Let’s take CPT agents into a dynamic framework, and let them be naive about their time inconsistency (since they are non-EU-maximizers, they will be time inconsistent). Bring them to a casino where a random variable moves with negative drift. Give them an endowment of money and any reference point. The CPT agent gambles at any time t as long as she has some strategy which (naively) increases her CPT utility. By the skewness result above, we know she can, at the very least, gamble a very small amount, planning to stop if she loses and to keep gambling if she wins. There is always such a bet. If she does lose, then tomorrow she will bet again, since there is a gamble with a positive CPT-utility gain no matter her wealth level. Since the process has negative drift, she will continue gambling until she goes bankrupt. This result doesn’t rely on any strange properties of continuous time or infinite state spaces; the authors construct an example on a 37-number roulette wheel, using the original Tversky and Kahneman parameterization, in which the CPT agent bets all the way to bankruptcy.
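
A minimal simulation makes the roulette version concrete. This is my own simplified construction, not the authors’ exact strategy: the agent just keeps staking one unit on a single number of a 37-slot wheel, a bet with expected value -1/37 whose CPT value under the 1992 parameters is nonetheless positive (plugging it into the cpt_binary function above, cpt_binary(35, 1/37, 1) comes out to roughly +0.16).

import random

def gamble_until_ruin(wealth=100, seed=0):
    # A naive agent repeatedly places the single-number bet: win 35 with
    # probability 1/37, lose the 1-unit stake otherwise. Negative drift means
    # she hits zero wealth with probability one.
    rng = random.Random(seed)
    rounds = 0
    while wealth > 0:
        wealth += 35 if rng.randrange(37) == 0 else -1
        rounds += 1
    return rounds

print(gamble_until_ruin())   # ruin in finitely many rounds, typically a few thousand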

What do we learn? Two things. First, a lot of what is supposedly explained by prospect theory may, in fact, be explained by the skewness preference generated by the heavy weighting on low-probability events in CPT, a fact mentioned in a number of papers the authors cite. Second, not to go all Burke on you, but when dealing with qualitative models, we have good reason to stick to the orthodoxy in many cases. The logical consequences of orthodox models will generally have been explored in great depth. The logical consequences of alternatives will not have been explored in the same way. All of our models of dynamic utility are problematic: expected utility falls to the Rabin critique, ambiguity aversion implies sunk cost fallacies, and prospect theory is vulnerable in the ways described here. But any theory which has been used for a long time will have its flaws exposed more visibly than newer, alternative theories. We shouldn’t mistake the absence of visible flaws in the alternatives for an absence of flaws altogether.

SSRN Feb. 2012 working paper (no IDEAS version).
