For a forthcoming Annual Review of Economics, Itzhak Gilboa attempts to list the most interesting open problems in decision theory. Should we redefine rationality? I can think of three definitions: a maximizer (the game-theory definition), someone with complete and transitive preferences (the classical definition), and someone who satisfies Savage’s subjective expected utility axioms (the most common definition). Gilboa proposes defining as rational someone who, if a “mistake” in their decision-making were pointed out, would not want to change their decision ex post. For instance, if someone had cyclical preferences, and this was pointed out to them, presumably they would change. If they don’t change, then it’s tough to argue that descriptive (or even, in some cases, normative) decision theorists should call that person irrational. I don’t know if I accept Gilboa’s definition as a good one – I would rather have classical Savage minus completeness, and not call someone who hasn’t “realized an implication” irrational. Rather, that person is just someone with bounded knowledge.

A number of similar possible lines of research are introduced in this paper: Where do Bayesian priors come from? Is a precise specification of probabilities useful in many cases, or should we move toward the upper/lower probability set notation? When do groups reach “better” decisions than individuals? When do (and when should) people rely on rules, and when should they rely on induction from similar past events?
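As a side note on the cyclical-preferences example: a strict preference relation is transitive only if it is acyclic, so the “mistake” Gilboa has in mind can be checked mechanically as cycle detection in a directed graph. The sketch below is purely illustrative (the function name and pair encoding are my own, not from the paper):

```python
def has_preference_cycle(prefs):
    """Detect a cycle in a strict preference relation.

    `prefs` is a set of (a, b) pairs meaning "a is strictly preferred
    to b".  A transitive strict preference must be acyclic; a pattern
    like a > b > c > a is exactly the kind of inconsistency that, once
    pointed out, the agent would presumably want to fix.
    """
    # Build an adjacency map: x -> set of things x is preferred to.
    graph = {}
    for a, b in prefs:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on stack / done
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:   # back edge: we returned to the stack
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(dfs(v) for v in graph if color[v] == WHITE)

# a > b, b > c, c > a: cyclical, hence intransitive
print(has_preference_cycle({("a", "b"), ("b", "c"), ("c", "a")}))  # True
# a > b, b > c, a > c: consistent with transitivity
print(has_preference_cycle({("a", "b"), ("b", "c"), ("a", "c")}))  # False
```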
This paper is a mine of ideas for future research.