New coronavirus forecasting model

Kostya Medvedovsky writes:

I wanted to direct your attention to the University of Texas COVID-19 Modeling Consortium’s new projections.

They’re very similar to the IHME model you’ve covered before, which had some calibration issues.

However, per the writeup by Spencer Woody et al., they do three things you may be interested in:
1. They fix what looks to be a serial-correlation error in the IHME model.
2. They fit the model in RStan (which you suggested the IHME model should have been written in, rather than the nonlinear least squares approach IHME took).
3. They fit using U.S. data, as opposed to the IHME model, which was originally fit using data from Wuhan.
They also add in some cell phone data, but I frankly don’t know if that’s material or razzle-dazzle. That data could be meaningful, or it could not be. I’m vaguely skeptical, but who knows.
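To make the contrast in point 2 concrete, here is a minimal sketch of what a nonlinear-least-squares curve fit looks like, using a logistic curve fit to synthetic cumulative death counts. All the numbers and parameter values below are made up for illustration; the point is that NLS returns a single point estimate (plus an asymptotic covariance), whereas a Stan fit would return a full posterior over the same parameters.

```python
# Hedged sketch: nonlinear least squares fit of a logistic curve to
# synthetic cumulative-count data. Parameter values are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve: K = final size, r = growth rate, t0 = inflection point."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.arange(40, dtype=float)
true_curve = logistic(t, K=2000.0, r=0.3, t0=20.0)
y = true_curve + rng.normal(0.0, 20.0, size=t.size)  # noisy observations

# curve_fit returns point estimates and an asymptotic covariance matrix;
# that covariance is the only uncertainty NLS gives you.
params, cov = curve_fit(logistic, t, y, p0=[1000.0, 0.1, 10.0])
K_hat, r_hat, t0_hat = params
print(f"K={K_hat:.0f}, r={r_hat:.2f}, t0={t0_hat:.1f}")
```

In a Bayesian fit the same logistic function would appear in the model block, but the output would be draws from the joint posterior of (K, r, t0), which propagate parameter uncertainty directly into the forecast intervals.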

At first pass, their projections look like a clear smell-test improvement over the IHME model because they get less confident in their projections over time:

By contrast, IHME gets more confident over time:

I’m glad they’re using Stan, and I really like the transparency of what they’re doing, but I’m skeptical of these forecasts because of their high certainty level. For example, here’s what’s on the home page right now:

I can believe that their best estimate is that we’ve already reached the peak. But a probability of 97% that the peak will have passed within 7 days? That seems so high. But I guess the point is that (a) they’re curve fitting, and as the curve above shows, the second derivative has gone to zero, and (b) the results depend on policy, so there’s an implicit stationarity assumption: if deaths go up in any particular location, the local government can shut things down for a while to stabilize the rates.
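The “second derivative has gone to zero” reasoning can be sketched numerically. Below is a toy example with an invented logistic cumulative-deaths curve: the first discrete difference is the daily-deaths curve, and the second difference changes sign exactly where daily deaths peak. None of these numbers come from the actual model; they just illustrate the curve-fitting logic.

```python
# Hedged sketch: the peak in daily deaths is where the second derivative
# of the cumulative curve crosses zero. Data here are synthetic.
import numpy as np

t = np.arange(30)
cumulative = 1000.0 / (1.0 + np.exp(-0.4 * (t - 15)))  # invented sigmoid

daily = np.diff(cumulative)   # first difference ~ daily deaths
second = np.diff(daily)       # second difference ~ second derivative

peak_day = int(np.argmax(daily)) + 1  # +1 because diff shifts the index
print("peak day:", peak_day)
# Before the peak the second difference is positive, after it negative:
print("sign change around peak:", second[peak_day - 3], second[peak_day + 3])
```

So once a fitted curve’s second derivative hits zero, the fit mechanically implies the peak is at hand, which is why the model can report such high confidence even if the real-world process is less well behaved.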

Anyway, it’s good to have all the code so we can see the mapping from assumptions and data to conclusions.