Bayesian statistics is so closely linked with induction that one often hears it called "Bayesian induction." What could be more inductive than taking a prior, gathering data, updating the prior with Bayes' law, and converging in the limit to the true distribution of some parameter?
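The inductive picture can be sketched in a few lines. This is a generic illustration, not anything from Gelman and Shalizi: a Beta-Binomial conjugate update in which the posterior mean drifts toward the true parameter as flips accumulate (the true probability and prior here are arbitrary choices for the demonstration).

```python
import random

random.seed(0)
true_p = 0.7            # the "true" parameter the inductivist hopes to recover
alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior

# Observe coin flips one at a time, updating the conjugate posterior.
for _ in range(10_000):
    heads = random.random() < true_p
    alpha += heads
    beta += not heads

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 2))  # lands near true_p
```

The catch, as the next paragraph argues, is that this convergence guarantee only holds when the true data-generating process lies inside the parameter space being updated over.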
Gelman (of the popular statistics blog) and Shalizi point out that, in practice, Bayesian statistics should actually be seen as Popper-style hypothesis-based deduction. The problem is intimately linked to the "taking a prior" above. In practice, the parameter space, or the domain used in estimation, is never broad enough to encompass every possible theory (even nonparametrics makes assumptions about conditional independence). This is for all the standard model-building reasons: tractability, parsimony, etc. Given that this is the case, the authors argue that computing the posterior is only the first step of good Bayesian data work. Once the posterior is estimated, the predictions of the posterior model should be compared with real-world data – not with alternative hypotheses as in classical estimation – and if the replicated data do not fit, the estimation should be redone on a different domain. Replacing the domain is a form of model rejection à la Popper.
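The check described above – simulate replicated data from the fitted posterior and compare it with what was actually observed – can be illustrated with a deliberately misspecified toy model. This is a minimal sketch under assumed ingredients (a Normal model with fixed unit variance, data actually drawn with a larger spread), not the authors' own procedure:

```python
import random
import statistics

random.seed(1)
# "Real" data: far more dispersed than the model will allow.
data = [random.gauss(0.0, 3.0) for _ in range(200)]

# Model: y ~ Normal(mu, 1) with a flat prior on mu,
# so the posterior for mu is Normal(mean(data), 1/sqrt(n)).
n = len(data)
mu_hat = statistics.fmean(data)

# Posterior predictive check on the sample standard deviation:
# how often do replicated datasets look as spread-out as the real one?
obs_sd = statistics.stdev(data)
reps = 500
extreme = 0
for _ in range(reps):
    mu = random.gauss(mu_hat, n ** -0.5)               # draw mu from the posterior
    rep = [random.gauss(mu, 1.0) for _ in range(n)]    # replicated dataset
    extreme += statistics.stdev(rep) >= obs_sd

p_value = extreme / reps
print(p_value)  # essentially zero: the replications never match the data's spread
```

A near-zero posterior predictive p-value here does not select a rival hypothesis; it tells the modeler to go back and enlarge or replace the domain – the Popperian rejection step in the paragraph above.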
(The authors also note the "statistical folklore" that classical statistical tests are in many ways a measure of sample size and nothing more. As social scientists, we know that all of our models, strictly speaking, are wrong: otherwise they would not be models. That being the case, any model in social science will be rejected given enough evidence. I think this point is often misunderstood by economists.)