There is a massive literature on what criteria are relevant when deciding whether a theory “explains” evidence: causal necessity, various forms of statistical relevance, etc. Much less developed is any agreement on how strong an explanation a hypothesis provides, once we agree that it potentially explains the data.
Schupbach and Sprenger axiomatize the idea of explanatory power. Formally, let E(e,h) be a function mapping an event and a hypothesis (perhaps deterministic, perhaps not; regular measurability is all that is required) into [-1,1]. By Bayes’ rule, E(e,h) can immediately be written as a function of Pr(e), Pr(h|e), and Pr(h|~e). Let E be increasing in the statistical relevance of h to e – that is, E is larger the more h raises the probability of e, with Pr(e|h)>Pr(e) being the positive-relevance case. Require that if a second hypothesis h2 is statistically independent of e, then E(e,h^h2)=E(e,h). Finally, require that if the negation of a hypothesis h entails the event e, then the explanatory power E(e,h) does not depend on Pr(h); this essentially prevents the prior probability of h from affecting the explanatory power of h on e.
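As a quick illustration of that rewrite (a minimal sketch; the function name and numbers below are mine, purely illustrative, not from the paper), the three quantities Pr(e), Pr(h|e), and Pr(h|~e) are enough to recover Pr(h) and Pr(e|h), and hence to check statistical relevance:

```python
# Minimal sketch of the Bayes'-rule rewrite: Pr(e), Pr(h|e), Pr(h|~e)
# are enough to recover Pr(h) and Pr(e|h). Numbers are illustrative only.

def recover(p_e, p_h_given_e, p_h_given_not_e):
    # Law of total probability: Pr(h) = Pr(h|e)Pr(e) + Pr(h|~e)Pr(~e)
    p_h = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
    # Bayes' rule: Pr(e|h) = Pr(h|e)Pr(e) / Pr(h)
    p_e_given_h = p_h_given_e * p_e / p_h
    return p_h, p_e_given_h

p_e, p_h_given_e, p_h_given_not_e = 0.55, 0.73, 0.22
p_h, p_e_given_h = recover(p_e, p_h_given_e, p_h_given_not_e)
print(p_h, p_e_given_h)       # Pr(h) and Pr(e|h)
print(p_e_given_h > p_e)      # h is statistically relevant to e (True here)
```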
If those three conditions hold, then E must be a monotonically increasing function of the posterior ratio Pr(h|e)/Pr(h|~e). In particular, E takes its maximal value when h implies e, and its minimal value when h implies ~e. If you further assume symmetry (E(e,h)=-E(~e,h)), impose that E(e,h) is zero when e and h are statistically independent, and make a technical normalizing assumption, then E is pinned down uniquely as (A-B)/(A+B), where A=Pr(h|e) and B=Pr(h|~e). The authors go on to show a handful of nice properties which follow from this axiomatization. For instance, if h does explain e but is totally useless for explaining e2, we would like the “explanatory” power of h on e^e2 to be less than the explanatory power of h on e; the measure above does behave in that manner. Also, if h explains e to some degree and Pr(e2|e^h)<Pr(e2|e), then E(e^e2,h)<E(e,h); that is, if some new evidence e2 is more surprising given e together with hypothesis h than given e alone, then h is less explanatory of the conjunction of e and e2 than of e alone.
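To see the measure and the first property in action, here is a minimal numerical sketch (the toy joint distribution and all numbers are my own, not from the paper): h raises the probability of e, while e2 is independent of both h and e, and the computed explanatory power drops when the irrelevant e2 is conjoined with e.

```python
from itertools import product

# Toy joint distribution over binary (h, e, e2): h raises the probability of e,
# while e2 is independent of h and e, so h is "useless" for explaining e2.
# All numbers are illustrative, not taken from the paper.
P_H = 0.5
P_E_GIVEN_H = {1: 0.8, 0: 0.3}
P_E2 = 0.5

def joint(h, e, e2):
    p_h = P_H if h else 1 - P_H
    p_e = P_E_GIVEN_H[h] if e else 1 - P_E_GIVEN_H[h]
    p_e2 = P_E2 if e2 else 1 - P_E2
    return p_h * p_e * p_e2

def prob(event):
    # Probability of an event given as a predicate on (h, e, e2).
    return sum(joint(h, e, e2)
               for h, e, e2 in product([0, 1], repeat=3)
               if event(h, e, e2))

def explanatory_power(evidence):
    # E(e, h) = (Pr(h|e) - Pr(h|~e)) / (Pr(h|e) + Pr(h|~e))
    a = prob(lambda h, e, e2: h and evidence(h, e, e2)) / prob(evidence)
    b = (prob(lambda h, e, e2: h and not evidence(h, e, e2))
         / prob(lambda h, e, e2: not evidence(h, e, e2)))
    return (a - b) / (a + b)

print(explanatory_power(lambda h, e, e2: e))          # E(e, h)    ~= 0.53
print(explanatory_power(lambda h, e, e2: e and e2))   # E(e^e2, h) ~= 0.27 < 0.53
```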
It seems quite useful to have a tractable measure of the explanatory power of a hypothesis (which, for the purposes of the measure, is assumed to be true). As the authors note, it is not known whether there is any firm link between the probability of a hypothesis given the evidence and the explanatory power of that hypothesis on the evidence. For economists, it seems natural to want theories which are confirmed by most of the events we claim as support, and which are disconfirmed by most of the events we do not claim as support. I would like to see examples of whether this definition of explanatory power is useful for discussing proposed hypotheses even before going to the data – that is, “even before examining the data, the methodology herein constructs a hypothesis which is very/mildly/weakly explanatory.”
One last thing: I’m continually fascinated by what passes muster in different fields. This paper is from the philosophy literature, not strictly economics. Check out footnote 12, which literally made me laugh out loud: “E is closely related to Kemeny and Oppenheim’s (1952) measure of “factual support” F. In fact, these two measures are structurally equivalent; however, regarding the interpretation of the measure, E(e, h) is F(h, e) flip-flopped (h is replaced by e, and e is replaced by h).” Well, I’d say that’s more than closely related!
http://philsci-archive.pitt.edu/5521/1/ExplanatoryPower.pdf (final working paper – final version published in Philosophy of Science, January 2011)