Forget about multiple testing corrections. Actually, forget about hypothesis testing entirely.

Tai Huang writes:

I am reading this paper [Why we (usually) don’t have to worry about multiple comparisons, by Jennifer, Masanao, and myself]. I am searching for how to do multiple comparisons correctly under Bayesian inference for A/B/C testing. For the traditional t-test approach, a Bonferroni correction is needed to adjust the alpha level.

I am confused by your suggestion of not worrying about multiple comparisons. For example:

– For A/B testing, if Pr(A>B) > 0.975, I declare A wins.

– For A/B/C testing, does “No Correction Needed” mean that I can still use 0.975 to compare and get the same type 1 error rate as in the A/B testing case?
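To make the question concrete: here is a minimal sketch of how the quantity Pr(A>B) is typically computed in a Bayesian A/B/C test of conversion rates, assuming a Beta-Binomial model and invented data. Note that in the three-arm case the natural summary is the posterior probability that each arm is best; no alpha adjustment appears anywhere in the calculation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical conversion data for three arms: (successes, trials).
# These numbers are made up for illustration.
data = {"A": (120, 1000), "B": (100, 1000), "C": (110, 1000)}

# With a Beta(1, 1) prior, each arm's rate has posterior Beta(1 + s, 1 + n - s).
draws = {
    arm: rng.beta(1 + s, 1 + n - s, size=100_000)
    for arm, (s, n) in data.items()
}

# Pairwise posterior probability, e.g. Pr(A > B):
pr_a_beats_b = (draws["A"] > draws["B"]).mean()

# For three arms, summarize with the posterior probability that each arm
# has the highest rate -- these probabilities sum to 1 by construction.
stacked = np.column_stack([draws[arm] for arm in data])
best = np.argmax(stacked, axis=1)
pr_best = {arm: (best == i).mean() for i, arm in enumerate(data)}

print(pr_a_beats_b)
print(pr_best)
```

The point of the exchange below is that even with this machinery in hand, turning Pr(A>B) > 0.975 into a hard "declare A wins" rule is the step being questioned.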

My reply:

The published version of this paper is here.

The short answer is that I think it’s a mistake to “declare A wins” if Pr(A>B) > 0.975. The problem is with the perceived need for a deterministic conclusion. I don’t think type 1 error rates are relevant for reasons discussed here and here.
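The paper's alternative to corrections is to fit a multilevel model so that the arm-level estimates are partially pooled toward each other, which automatically tempers the extreme comparisons that multiple-comparisons procedures worry about. Here is a minimal sketch of that shrinkage idea under a normal-normal model with a crude moment estimate of the between-arm variance; the effect estimates and standard errors are invented for illustration, and a full analysis would put a prior on tau and integrate over it.

```python
import numpy as np

# Invented raw effect estimates for five arms, with common standard error.
y = np.array([0.28, 0.10, -0.05, 0.02, 0.15])
se = np.full_like(y, 0.10)

# Crude moment estimate of the between-arm variance tau^2.
tau2 = max(y.var(ddof=1) - np.mean(se**2), 0.0)

# Precision-weighted estimate of the common mean mu.
mu = np.average(y, weights=1.0 / (se**2 + tau2))

# Posterior mean for each arm under the normal-normal model:
# each raw estimate is pulled toward mu by the shrinkage factor.
shrink = tau2 / (tau2 + se**2)
posterior_mean = mu + shrink * (y - mu)

print(posterior_mean)
```

The shrunken estimates are less spread out than the raw ones, so apparent "winners" are pulled back toward the pack; this is the mechanism by which the multilevel approach replaces explicit corrections.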

I can see that some readers might think my answer is “cheating”—my way around the type 1 error calibration problem is to say that I don’t care about type 1 error—but I’m serious. Here’s something I wrote about type 1 errors etc. back in 2004, one of our earliest blog posts.