Computer Science Theory Seminar
Predictions about the future are often evaluated through statistical tests. As shown by recent literature, many known tests are subject to adverse selection problems and are ineffective at discriminating between forecasters who are competent and forecasters who are uninformed but predict strategically. This paper presents necessary and sufficient conditions under which it is possible to discriminate between informed and uninformed forecasters. These conditions have a natural Bayesian interpretation. It is shown that optimal tests take the form of simple likelihood-ratio tests comparing forecasters’ predictions against the predictions of a hypothetical outside observer. The result rests on a novel connection between the problem of testing strategic forecasters and the classical Neyman-Pearson paradigm of hypothesis testing.
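The likelihood-ratio idea can be illustrated with a toy sketch. This is an assumed, simplified setup for intuition only, not the paper's actual construction: a forecaster passes when her predictions assign sufficiently higher likelihood to the realized binary outcomes than a hypothetical outside observer's benchmark forecast does. The data-generating process, the uniform benchmark, and all names below are illustrative assumptions.

```python
import math
import random

def log_likelihood(predictions, outcomes):
    # Log-likelihood that a sequence of probabilistic forecasts
    # assigns to the realized binary outcomes.
    return sum(math.log(p if y else 1.0 - p)
               for p, y in zip(predictions, outcomes))

def likelihood_ratio_test(forecaster, observer, outcomes, threshold=0.0):
    # Pass the forecaster iff the log-likelihood ratio of her
    # predictions against the outside observer's clears `threshold`.
    ratio = (log_likelihood(forecaster, outcomes)
             - log_likelihood(observer, outcomes))
    return ratio >= threshold

# Illustrative environment (assumed): each period nature draws a
# success probability, then a binary outcome from it.
random.seed(0)
T = 2000
truth = [random.uniform(0.1, 0.9) for _ in range(T)]
outcomes = [random.random() < p for p in truth]

informed = truth                     # knows the true process
observer = [0.5] * T                 # uninformative outside benchmark
uninformed = [random.uniform(0.1, 0.9) for _ in range(T)]  # blind guesses

print(likelihood_ratio_test(informed, observer, outcomes))
print(likelihood_ratio_test(uninformed, observer, outcomes))
```

Note that the uninformed forecaster here guesses blindly rather than strategically; a genuinely strategic forecaster would randomize her reports to try to pass. The abstract's claim is that optimal tests take this likelihood-ratio form, echoing the Neyman-Pearson view of a most powerful test as a threshold rule on a likelihood ratio.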