Hey! Let’s check the calibration of some coronavirus forecasts.

Sally Cripps writes:

I am part of an international group of statisticians and infectious disease/epidemiology experts. We have recently investigated the statistical predictive performance of the UW-IHME model for predicting daily deaths in the US. We have found that the predictions for daily number of deaths provided by the IHME model have been highly inaccurate.

The UW-IHME model has been found to perform poorly even when attempting to predict the number of next-day deaths. In particular, the true number of next-day deaths has fallen outside the IHME prediction intervals as much as 70% of the time. If the model has this much difficulty predicting the next day, we are concerned about how it will perform over longer horizons, and in international locations where the accuracy of the data and the applicability of the model are in question.

The attached manuscript is a result of this collaborative effort and here is the link.

We hope you find this report of interest and that you will share it with your colleagues and readers. As you can see it is largely about uncertainty quantification.

Indeed, we are reconvening the uncertainty conference once COVID-19 has hopefully passed. Apologies for not being in contact sooner, but what with bushfires, floods (in Australia), and COVID-19, it has been a trying last six months.

At least Australia seems to be doing something right about the management of COVID-19. Unfortunately, possible explanations for this, which may help other nations, must remain mere speculation because our government is refusing to release data. I do hope the situation in the US stabilizes soon.

It’s nice to be calibrated. But it’s calibrated to be nice.

Seriously, though . . . this sort of calibration check can only help. It is through criticism that we learn.
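For readers who want to try this sort of calibration check themselves, here is a minimal sketch in Python: count how often the observed next-day value falls inside the model's stated prediction interval and compare that empirical coverage to the nominal level. The interval bounds and observations below are made-up illustration numbers, not the actual IHME forecasts.

```python
def empirical_coverage(lower, upper, observed):
    """Fraction of observations that fall inside [lower, upper]."""
    hits = sum(lo <= y <= hi for lo, hi, y in zip(lower, upper, observed))
    return hits / len(observed)

# Hypothetical 95% prediction intervals for five days, and the
# (also hypothetical) deaths actually observed on each day.
lower    = [100, 120, 150, 180, 200]
upper    = [300, 320, 350, 380, 400]
observed = [250, 400, 140, 200, 390]

coverage = empirical_coverage(lower, upper, observed)
print(f"Empirical coverage: {coverage:.0%}")  # prints "Empirical coverage: 60%"
```

A well-calibrated 95% interval should cover the truth about 95% of the time; coverage far below that, as in the manuscript's finding of misses up to 70% of the time, signals intervals that are too narrow.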