“Fact-Free Learning,” E. Aragones et al. (2005)

The Bayesian model assumes that agents update their beliefs when they learn new information, and that nothing other than new information can change an agent’s mind. In the real world, this is not true; for instance, the entire field of mathematics is essentially the art of finding relationships among already known facts. Verifying such a relationship once it has been pointed out is generally simple, which suggests that learning about relationships among already known facts may lie in the complexity class NP. Indeed, the authors show that finding a parsimonious, accurate relationship among known facts (that is, deciding, for a given pair (r,k), whether there exists a linear relationship using only k variables whose R^2 is at least r) turns out to be NP-complete, meaning easy to verify but, in the worst case, very difficult to discover. This is a computational reason why agents may not act as Bayesians. Further, this result is linked to Aumann’s agreeing-to-disagree result. When discovering relationships is costly, agents may disagree because they have not discovered the same rule, and even after being told what the other agent’s rule is, they may continue to disagree because of the different relative values they place on parsimony and accuracy.
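To make the decision problem concrete, here is a minimal sketch (my own illustration, not code from the paper) of the brute-force approach: given data, a parsimony bound k, and an accuracy bound r, search all k-variable subsets for a linear fit with R^2 at least r. Any single candidate subset is cheap to verify, but the search space has C(n,k) subsets, which is the intuition behind the hardness result. The function names and the synthetic data are assumptions for illustration.

```python
from itertools import combinations
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def exists_parsimonious_fit(X, y, k, r):
    """Brute force: is there a k-variable subset with R^2 >= r?

    Verifying one subset is a single regression (easy); finding one
    may require checking all C(n, k) subsets (hard in the worst case).
    """
    n = X.shape[1]
    for subset in combinations(range(n), k):
        if r_squared(X[:, subset], y) >= r:
            return True, subset
    return False, None

# Synthetic example: y truly depends on variables 1 and 4 plus small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
y = 2 * X[:, 1] - 3 * X[:, 4] + 0.1 * rng.normal(size=50)
found, subset = exists_parsimonious_fit(X, y, k=2, r=0.95)
```

With 8 candidate variables and k=2 there are only 28 subsets to check, but the count grows combinatorially with n, which is why discovery, unlike verification, can be intractable.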

The worry with this paper is whether rational agents should, normatively, care about parsimony in a relationship. When examining the past, it can be argued that we want to “overfit”, to use as much data as possible. As for inducting about the future, it strikes me that there is no philosophical difference between parsimonious and complex models: induction is impossible in either case. All that can be said is that the assumptions which would allow induction are “less complicated” when the model is parsimonious, but again, I know of no philosophical argument that suggests this makes induction somehow more or less accurate.

