Abstract:
Forecasters should be tested by the Brier score and not just by the calibration score, which can always be made arbitrarily small. The Brier score is the sum of the calibration score and the refinement score; the latter measures how well the data are sorted into bins with the same forecast, and thus attests to expertise. This raises the question of whether one can gain calibration without losing expertise, which we refer to as calibeating. We provide an easy way to calibeat any forecast by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
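To make the decomposition concrete, here is a minimal sketch (not from the paper) that computes the three scores for probability forecasts of a binary event, binning observations by distinct forecast value; the helper name `brier_decomposition` and the toy data are illustrative assumptions.

```python
import numpy as np

def brier_decomposition(forecasts, outcomes):
    """Split the Brier score into calibration and refinement.

    forecasts: predicted probabilities, each in [0, 1]
    outcomes:  realized binary outcomes, 0 or 1

    Observations are binned by distinct forecast value; within each
    bin the forecast is compared to the empirical frequency.
    """
    f = np.asarray(forecasts, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    n = len(f)

    brier = np.mean((f - y) ** 2)

    calibration = 0.0
    refinement = 0.0
    for v in np.unique(f):
        mask = f == v
        n_k = mask.sum()
        ybar = y[mask].mean()                      # empirical frequency in the bin
        calibration += n_k / n * (v - ybar) ** 2   # squared gap from calibration
        refinement += n_k / n * ybar * (1 - ybar)  # within-bin outcome variance
    return brier, calibration, refinement

# The identity Brier = calibration + refinement holds exactly:
f = [0.8, 0.8, 0.3, 0.3, 0.3, 0.5]
y = [1,   1,   0,   1,   0,   1  ]
b, c, r = brier_decomposition(f, y)
assert abs(b - (c + r)) < 1e-12
```

The identity holds because, within each bin, the average squared error splits into the squared gap between the forecast and the bin's empirical frequency (calibration) plus the bin's outcome variance (refinement); a refinement score of zero means the bins are perfectly sorted, which is what attests to expertise.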