Robert Matthews writes:
Your post on the design and analysis of trials really highlights how, now more than ever, it’s vital that the research community take seriously all that “nit-picking stuff” from statisticians about the dangers of faulty inferences based on null hypothesis significance testing.
These dangers aren’t restricted to the search for new therapies. I’m currently conducting a literature review of existing prophylactics for upper respiratory tract infections that may reduce the risk of SARS-CoV-2 infection. I’ve found a number of studies whose point estimates indicate substantial risk reduction but that have nevertheless been dismissed as failures because they did not achieve statistical significance.
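To make this concrete, here is a toy illustration (all numbers invented, not taken from any of the studies mentioned) of how a trial can show a substantial estimated risk reduction yet still be labeled “not significant” simply because the sample is small and the confidence interval wide:

```python
import math

# Hypothetical counts: infections / participants in each arm.
# These numbers are invented purely for illustration.
treat_events, treat_n = 6, 100
ctrl_events, ctrl_n = 10, 100

# Relative risk: 0.06 / 0.10 = 0.6, i.e. a 40% estimated risk reduction.
rr = (treat_events / treat_n) / (ctrl_events / ctrl_n)

# Standard Wald 95% interval for the log relative risk.
se_log_rr = math.sqrt(
    1 / treat_events - 1 / treat_n + 1 / ctrl_events - 1 / ctrl_n
)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The interval crosses 1 (no effect), so by the usual convention the
# study is declared a "failure" -- even though the point estimate, and
# much of the interval, is consistent with a large risk reduction.
```

With these made-up counts the interval runs from roughly 0.23 to 1.6: the data are compatible with anything from a dramatic benefit to a modest harm, which is a statement about imprecision, not a demonstration of “no effect.”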
Maybe the time has finally come to make the big move into that “post p < 0.05” world?
I agree that the idea of statistical significance continues to create all sorts of problems, both theoretical and practical, and we (the scientific establishment) should move past the practice of using statistical significance to summarize experiments. At best, statistical significance provides some rough guidance on the question, “Are more data needed to make any sort of useful conclusion here?”—but even for that specialized question, there are better tools.
That said, I doubt we’ll see any revolution right now. I expect we’ll muddle through using existing practices, partly because people are in too much of a hurry to change, and partly because of the dominance of classical statistical training. I was just talking with a medical researcher the other day who wanted to do a classical power analysis. The result of the power analysis wasn’t useless; it just had to be interpreted carefully. Interpreted in the conventional way, the power analysis could be worse than useless. I do think we should move beyond statistical significance and that lives could be saved by doing things right; unfortunately I don’t see this happening in general practice in the short term.
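For readers who haven’t run one, a classical power analysis for comparing two proportions can be sketched in a few lines. This is a standard normal-approximation calculation, not the specific analysis the researcher mentioned above, and the effect size and event rates below are invented:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1, p2, n_per_arm, z_alpha=1.96):
    """Approximate power of a two-sided z-test comparing two proportions
    (normal approximation, equal arms, 5% level by default)."""
    p_bar = (p1 + p2) / 2
    # Standard error under the null (pooled) and under the alternative.
    se0 = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se1 = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = (abs(p1 - p2) - z_alpha * se0) / se1
    return norm_cdf(z)

# Invented example: 10% vs 6% infection rates.
print(power_two_proportions(0.10, 0.06, n_per_arm=100))   # low power
print(power_two_proportions(0.10, 0.06, n_per_arm=1000))  # high power
```

With 100 per arm the approximate power comes out well under 50%; with 1000 per arm it exceeds 80%. The number itself isn’t useless, but the conventional reading of it can mislead: in the low-power regime, the studies that do happen to cross the significance threshold will systematically overestimate the effect, which is one reason the result has to be interpreted carefully rather than treated as a simple pass/fail design check.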