Induction, as Hume showed, is philosophically invalid. Nonetheless, we must make decisions under uncertainty. Gilboa and coauthors construct a model general enough to encompass many different inductive systems: Bayesian, case-based (2008 looks like 1929, hence the economy in 2009 will be like the economy in 1930), rule-based (if the money supply increases 10%, the price level will increase 10% next period), etc. With a unified framework, we can compare different inductive systems in a decision-theoretic or game-theoretic setting.
In particular, let an agent assign, at time 0, equal probability to all states of the world (Bayesian hypotheses) and equal weight to all singleton analogies (“time period t is like time period 2, hence the outcome in time t will be the same as in time 2”). Note that every period, an enormous number of Bayesian hypotheses are falsified (any hypothesis with positive weight on an outcome in period t that does not actually occur), while only a small number of case-based analogies are falsified (since most case-based analogies make no prediction for a given t; for instance, the hypothesis about the economy above makes no prediction about what happens in 2006). If the finite horizon is long enough, the total weight placed on non-Bayesian beliefs goes to one, since almost all Bayesian beliefs are refuted. The idea that Bayesian reasoning is “pushed out” by simpler reasoning is quite general.
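The weight dynamics can be sketched in a toy simulation. The binary outcome space, the 10-period horizon, and the unit weight per surviving hypothesis are assumptions of this sketch, not the paper's model, but the mechanism is the same: every period halves the set of surviving Bayesian hypotheses, while each case-based analogy can be refuted at most once.

```python
import itertools
import random

random.seed(0)
T = 10
outcomes = [random.randint(0, 1) for _ in range(T)]  # observed binary history

# Bayesian hypotheses: every possible length-T outcome sequence, equal weight.
bayes = list(itertools.product([0, 1], repeat=T))

# Case-based singleton analogies: "period t is like period s" (s < t),
# predicting outcomes[t] == outcomes[s] and silent about every other period.
cases = [(s, t) for t in range(T) for s in range(t)]

for now in range(1, T + 1):
    # A Bayesian hypothesis survives only if it matches the whole prefix seen so far.
    bayes = [h for h in bayes if all(h[i] == outcomes[i] for i in range(now))]
    # An analogy survives if it has not been tested yet (t >= now) or was correct.
    cases = [(s, t) for (s, t) in cases
             if t >= now or outcomes[t] == outcomes[s]]
    frac = len(bayes) / (len(bayes) + len(cases))
    print(f"after period {now}: Bayesian share of surviving weight = {frac:.3f}")
```

By the final period only the one true sequence survives among the 2^10 Bayesian hypotheses, while a sizable fraction of the 45 analogies is never refuted, so the Bayesian share of the surviving weight collapses toward zero.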
There are two ways around the conclusion. First, rather than a uniform prior, allow substantial mass to be placed on a state which turns out to be true. For instance, if all the Bayesian weight in the prior were placed on the state which turns out to be true, then no Bayesian ideas are ever refuted, while some case-based ones are refuted every period, meaning Bayesian beliefs wind up with all the weight. More generally, let a process be parameterized, and put weight only on a finite set of parameterized beliefs which does not grow as the number of periods increases: for instance, if comets are known to reappear cyclically, put 1/2^k weight on a cycle of k years. Then once the true cycle is seen, all other Bayesian beliefs are refuted, but the total weight on Bayesian beliefs never falls below 1/2^k as the total weight on non-Bayesian beliefs goes to zero. Note that none of this argument is normative; it is rather a descriptive dynamics of inductive model choice. The problem with parameterizing is that the ideas we study with decision theory à la Savage do not tend to lend themselves to the type of parameterization open to Bayesians in AI research or statistics: we require beliefs over a grand state space for which there is little reason to assume particular priors.
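The comet example can be sketched concretely. The largest cycle considered (12 years) and the true cycle (5 years) are assumptions of this sketch; the point is only that every wrong cycle hypothesis is refuted in finitely many years, while the true one retains its prior weight 1/2^k forever.

```python
# Parameterized comet-cycle beliefs: hypothesis k says "the comet appears
# exactly in years divisible by k" and receives prior weight 2^{-k}.
K = 12            # largest cycle considered (assumption for this sketch)
true_cycle = 5    # the comet actually appears every 5 years (assumption)
weights = {k: 2.0 ** -k for k in range(1, K + 1)}

for year in range(1, 3 * K + 1):
    seen = (year % true_cycle == 0)        # observation this year
    # Refute every hypothesis whose prediction disagrees with the data.
    weights = {k: w for k, w in weights.items() if (year % k == 0) == seen}

print(weights)  # only the true cycle survives, still with its prior weight
```

Every k < 5 is refuted by year k (it predicts an appearance that never comes), and every k > 5 is refuted in year 5 (it predicts no appearance when one occurs), so the weight on Bayesian beliefs is bounded below by 2^-5 from year 5 on.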