The expert testing literature is really quite cool. The world is throwing out some stochastic process – a string of stock prices, or rain totals, or whatever – and an “expert” claims to know the process. I don’t know anything at all, but I want to “test” whether the expert actually knows the process or is a charlatan. A number of results have been proven here: with tests that decide in finite time, it is generically true that fake experts have a strategy that can pass any test, though this manipulability result disappears if we suitably restrict the class of possible stochastic processes nature can be playing, or if we make the tester a Bayesian, or if we consider certain mechanism design problems where the tester uses the data from the test in particular ways rather than just testing what the expert knows. A particularly interesting note about how experts might be successfully tested came from Fortnow and Vohra, who found that restricting the set of strategies to those which are computable in a “reasonable” amount of time, in complexity terms, was sufficient to make it impossible for false experts to manipulate some tests.
Hu and Shmaya then take the next step and derive precisely the conditions on the “complexity” of strategies which are necessary and sufficient for the manipulability result to hold. They restrict tests to those which are computable, in the sense that they can be described by a Turing machine; if you know the Church-Turing thesis, this essentially means any test which I can describe to you in words, or write down in a contract, is allowed. If supposed experts can only use strategies which are Turing computable, then there does exist a test which is nonmanipulable while still accepting true experts with high probability. This is simply a strengthening of the result in Fortnow and Vohra. But if supposed experts can use slightly more complicated strategies – strategies that are computable with a Turing machine associated with an oracle that can answer the “halting problem” – then they can manipulate any computable test. The word “slightly” in the previous sentence is precise in the sense that, according to something computer scientists call the arithmetical hierarchy, the machine-plus-oracle above is the first class of machines that can answer more complex questions than a simple Turing machine.
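To make the oracle idea concrete, here is a toy sketch in Python of a machine “with an oracle”: an ordinary procedure that may call a black-box answerer for the halting problem. No genuine halting oracle exists as code, of course, so the oracle below is a stub over a made-up program family where halting is stipulated in advance; only the interface – computation that branches on an answer no plain Turing machine can compute – is the point.

```python
# Toy sketch of an oracle machine. The oracle is a stub: we simply
# *declare* that programs with even index halt and odd ones loop.
# A real halting oracle cannot be implemented; this shows the interface.

def fake_halting_oracle(program_index):
    # Stipulated answers for a made-up program family (not a real decider).
    return program_index % 2 == 0

def machine_with_oracle(program_index, oracle):
    # A machine one level up the arithmetical hierarchy: an ordinary
    # computation that may branch on the oracle's answer.
    if oracle(program_index):
        return "halts"
    return "loops"

print(machine_with_oracle(4, fake_halting_oracle))  # "halts"
print(machine_with_oracle(3, fake_halting_oracle))  # "loops"
```

The Hu–Shmaya result says that a fake expert equipped with exactly this extra power – plain computation plus halting-oracle calls – can beat any computable test, while a fake expert without it cannot.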
The heart of the proof involves a famous result in computability called the Enumeration Theorem. Essentially, the set of functions which can be computed by a (necessarily finite) Turing machine is countable, since the set of possible Turing machine programs is just a (countable) set of finite sequences of symbols. The Enumeration Theorem then gives a Universal Program U(m,n) such that if program 11256 in this enumeration outputs f(m) when m is input, then U(m,11256) outputs f(m) as well. It turns out to be impossible for a Turing machine to figure out the domain of its own Universal Program (this is the halting problem). The computability of “winning” strategies for a false expert, given a computable test, is intimately linked to the ability to solve halting problems, which gives us our result.
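The enumeration-plus-diagonalization idea above can be sketched in a few lines. This is a deliberately tiny illustration: the “enumeration” here is a hand-picked finite list of programs, whereas the real theorem enumerates all Turing machines. The diagonal function differs from every listed program at its own index, which is the engine behind the undecidability of the halting problem.

```python
# Toy Enumeration Theorem: index programs by number and define a
# universal program U(m, n) that runs program n on input m.
# The real theorem enumerates *all* Turing machine programs (a
# countable set of finite symbol strings); here we use three.

PROGRAMS = [
    lambda m: m + 1,   # program 0: successor
    lambda m: 2 * m,   # program 1: doubling
    lambda m: m * m,   # program 2: squaring
]

def U(m, n):
    """Universal program: U(m, n) = output of program n on input m."""
    return PROGRAMS[n](m)

def diagonal(m):
    # Diagonalization: differs from program m on input m, so diagonal
    # cannot appear anywhere in the enumeration it is built from.
    return U(m, m) + 1

print(U(5, 2))      # program 2 (squaring) on input 5 -> 25
print(diagonal(1))  # U(1, 1) + 1 = 2*1 + 1 -> 3
```

Note that `diagonal(i) != PROGRAMS[i](i)` for every index i, which is exactly why no total computable function can enumerate and decide facts (like halting) about all programs, itself included.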
It strikes me that the expert testing literature is probably close to tapped out when it comes to “basic” results like “what tests can be manipulated given domain of nature’s strategies X and domain of expert strategies Y?” But broader questions in expert testing – particularly at the intersection of mechanism design and testing, or the links between inductive reasoning and testing – are still wide open. And these are not mere mathematical or philosophical curiosities. Imagine a hedge fund claims to garner above-average risk-adjusted returns using special knowledge of the underlying stochastic process guiding the stock market, and you propose to test this before investing. Many straightforward tests are not only beatable by a fake expert in theory, but the strategy to beat the test is simple to derive: you can be fooled. A presentation I caught by Dean Foster a couple months back – forgive me if I have mentioned this before on this site – argued that some investment firms not only can, but do, take advantage of investors in precisely the way the testing literature says they can. Identifying fakes, or at least not being harmed by them, is a serious question.
Working Paper (IDEAS). Paper forthcoming in Theoretical Economics.