David Spiegelhalter wants a checklist for quality control of statistical models?

David Spiegelhalter writes in with a quick question:

Although I don’t do any technical stuff now, I find myself arguing for using quantified expert judgement in assessing a distribution for the size of systematic biases in estimates from lower-quality data sources, particularly for official stats such as migration estimates, but also in other areas.

We have promoted using expert judgement in trying to ‘de-bias’ observational studies in meta-analysis, which has picked up a fair number of citations but has not really caught on. This is essentially the same as assessing proper priors, but since Bayes’ theorem is not used, we can avoid the B-word.

Sander Greenland recently did a review of the whole area of quantifying biases in epidemiology, and questioned why it had not become established.

My experience is that audiences express scepticism about quantified judgement, wondering about the quality control. When I was discussing this at StanCon, I wondered whether checklists for quality control of priors had been established, and whether there was any chance of these becoming standardised and more ‘official’ (like CONSORT, STROBE, etc.). Things like…

What is the prior?
Whose responsibility is it?
When was it assessed?
What sources were used?
What is a reasonable range for sensitivity analysis?
What is its impact on conclusions?
Does the prior-predictive distribution look reasonable?
etc

Maybe this has all been done, in which case I would like to promote such a checklist.

I don’t know of any such checklist. My only comment is that I would change “prior” to “model,” because I’m concerned about model assumptions in general. Indeed, the model for data and measurement is typically much more important than the prior distribution for model parameters.
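
To make the prior-predictive item in the list above a bit more concrete, here’s a minimal sketch in Python. The numbers and the normal forms for the systematic bias and the sampling noise are made up for illustration only (loosely inspired by the migration-estimates example in the email), not taken from any actual analysis:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical setting: a net-migration estimate (in thousands) from a
# lower-quality survey.  Model: observed = true_value + systematic_bias + noise.
# The expert-judgement "prior" is the distribution assumed for the bias.

n_draws = 10_000

# Assumed prior on the systematic bias: experts judge the survey overstates
# migration by around 20 thousand, give or take 15 thousand.
bias = rng.normal(loc=20.0, scale=15.0, size=n_draws)

# Assumed data/measurement model: sampling noise around a notional true value.
true_value = 250.0
noise = rng.normal(loc=0.0, scale=30.0, size=n_draws)

# Prior-predictive draws: what the survey estimate could look like before
# seeing any data, given the assumed bias prior and measurement model.
prior_predictive = true_value + bias + noise

# Checklist item: does the prior-predictive distribution look reasonable?
lo, mid, hi = np.percentile(prior_predictive, [2.5, 50, 97.5])
print(f"prior-predictive 95% interval: [{lo:.0f}, {hi:.0f}], median {mid:.0f}")

# Checklist item: sensitivity analysis -- how much does the implied range
# move if the assumed bias scale is halved or doubled?
for scale in (7.5, 15.0, 30.0):
    draws = true_value + rng.normal(20.0, scale, n_draws) + noise
    print(f"bias sd = {scale:5.1f}: 95% interval "
          f"[{np.percentile(draws, 2.5):.0f}, {np.percentile(draws, 97.5):.0f}]")
```

The point of a sketch like this is that it exercises the whole model, bias prior plus measurement model together, which is exactly why I’d want the checklist to cover the model and not just the prior.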