Panos Toulis writes:

The debate on the Santa Clara study actually got me to think about the problem from a finite-sample inference perspective. In this case, we can fully write down the density

f(S | θ) in known analytic form, where S = (vector of) test positives, θ = parameters (i.e., sensitivity, specificity and prevalence).

Given observed values s_obs, we can invert a test to obtain an exact confidence set for θ. I wrote down one such procedure and its theoretical properties (see Procedure 1). I believe that finite-sample validity is a benefit over asymptotic/approximate procedures such as the bootstrap or Bayes, and may add robustness. I compare results in Section 4.3.
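
To make the test-inversion idea concrete, here is a minimal sketch in Python. It is not Toulis's Procedure 1: it simplifies by treating sensitivity and specificity as known constants (Procedure 1 treats them as unknown parameters), so θ reduces to the prevalence π, and the number of positives S is binomial with apparent positive rate q = π·sens + (1−π)·(1−spec). We invert an exact two-sided binomial test over a grid of π values; the accepted points form the confidence set. The numbers in the usage line are illustrative values loosely in the range discussed for the Santa Clara study, not the study's exact figures.

```python
# Confidence set for prevalence by test inversion -- a simplified sketch,
# assuming sensitivity and specificity are KNOWN (unlike Toulis's Procedure 1,
# which treats them as unknown parameters).
from math import lgamma, log, exp

def confidence_set(s_obs, n, sens, spec, alpha=0.05, grid=401):
    """Invert an exact two-sided binomial test at each grid point for the
    prevalence pi; return the hull (min, max) of the accepted points."""
    lfact = [lgamma(k + 1) for k in range(n + 1)]  # log k!
    accepted = []
    for i in range(grid):
        pi = i / (grid - 1)
        # apparent positive rate implied by (pi, sens, spec)
        q = pi * sens + (1 - pi) * (1 - spec)
        if q <= 0.0 or q >= 1.0:
            # degenerate case: all probability mass on 0 or n positives
            if (q <= 0.0 and s_obs == 0) or (q >= 1.0 and s_obs == n):
                accepted.append(pi)
            continue
        lq, l1q = log(q), log(1 - q)
        # log Binomial(n, q) pmf at every k
        lpmf = [lfact[n] - lfact[k] - lfact[n - k] + k * lq + (n - k) * l1q
                for k in range(n + 1)]
        # "minimum-likelihood" two-sided p-value: total probability of
        # outcomes no more likely than the observed one
        pval = sum(exp(lp) for lp in lpmf if lp <= lpmf[s_obs] + 1e-9)
        if pval >= alpha:
            accepted.append(pi)
    return (min(accepted), max(accepted)) if accepted else None

# Illustrative numbers (roughly the Santa Clara setting):
# 50 positives out of 3330 tests, sensitivity ~0.80, specificity ~0.995.
lo, hi = confidence_set(s_obs=50, n=3330, sens=0.80, spec=0.995)
```

Note that by construction this test is exact in finite samples, which is the property Toulis emphasizes; but the resulting set excludes π = 0 here only because the assumed specificity is taken as known, which is exactly the kind of assumption the full procedure has to handle.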

I recently noticed that in your paper with Bob, you discuss this possibility of test inversion in Section 6. What I propose is just one way to do this style of inference.

From my perspective, I don’t see the point of all these theorems: given that the goal is to generalize to the larger population, I think probability modeling is the best way to go. And I have problems with test inversion for general reasons; see here and, in particular, this comment here. Speaking generally, I am concerned that hypothesis-test inversions will not be robust to assumptions about model error.

But I recognize that other people find the classical hypothesis-testing perspective to be useful, so I’m sharing this paper.

Also relevant is this note by Will Fithian, which uses a hypothesis-testing framework.