This question comes up a lot, in one form or another. Here’s a topical version, from Luigi Leone:
I am writing after three weeks of lockdown.
I would like to bring to your attention this Imperial College report (issued on Monday, I believe).
The report estimates 9.8% of the Italian population (thus 6 million people) and 15% of the Spanish population (thus about 7 million people) as already infected. Their estimates are based on Bayesian models of which I do not know a thing, while you know a lot. Hence, I cannot judge. But on a practical note, I was impressed by the credibility intervals: for Italy between 1.9 million and 15.2 million, and for Spain between 1.7 million and 19 million! What could a normal person make of these estimates, which imply opposite conclusions (for instance for the mortality rate, which could oscillate between the Spanish flu at one end of the interval and the regular flu at the other)? It also seems strange to me that the wider credibility intervals are found for the countries with more data (tests, positives, deaths), not for those with less data.
My reply: When you get this sort of wide interval, the appropriate response is to call for more data. The wide intervals are helpful in telling you that more information will be needed if you want to make an informed decision.
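To see how such wide intervals can arise even with lots of death data, here's a toy simulation (not the Imperial College model, whose structure I'm not reproducing here). The death count and the prior on the infection fatality rate (IFR) are made-up numbers for illustration: if implied infections are roughly deaths divided by IFR, then even modest uncertainty in the IFR gets stretched into an interval spanning nearly an order of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical death count, for illustration only.
deaths = 12_000

# Made-up prior on the infection fatality rate: lognormal centered
# near 0.5%, spanning very roughly 0.2% to 1.5%.
ifr = rng.lognormal(mean=np.log(0.005), sigma=0.5, size=100_000)

# Implied infections inherit the IFR uncertainty, amplified by division.
infections = deaths / ifr
lo, hi = np.percentile(infections, [2.5, 97.5])
print(f"95% interval: {lo/1e6:.1f} million to {hi/1e6:.1f} million infections")
```

The interval's upper bound comes out several times its lower bound, much like the Italy and Spain intervals in the report: the data pin down deaths well, but the uncertain death-to-infection link dominates the posterior width.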
As noted above, this comes up all the time. When we say to accept uncertainty and embrace variation, the point is not that uncertainty (or certainty) is a good in itself, but rather that these should guide our actions. Certainty, or the approximation of certainty, can help in our understanding. Uncertainty can inform our decision making.