Attempts at providing helpful explanations of statistics must avoid instilling misleading or harmful notions: ‘Statistical significance just tells us whether or not something definitely does or definitely doesn’t cause cancer’

Getting across (scientifically) profitable notions of statistics to non-statisticians (as well as fellow statisticians) ain’t easy.

Statistics is what it is, but explaining it as what it ain’t just so it is easy to understand (and thereby likely to make you more popular) should no longer be tolerated. What folks take away from easy-to-understand but incorrect explanations can be dangerous to them and others. Worse, those takeaways can become something even more gruesome: vampirical ideas – false notions that can’t be killed by reason.

I recently came across the explanation quoted in the title of this post in a YouTube video a colleague tweeted: How Not to Fall for Bad Statistics – with Jennifer Rogers.

The offending explanation of statistics as the alchemy of converting uncertainty into certainty occurs at around 7 minutes. Again, “Statistical significance just tells us whether or not something definitely does or definitely doesn’t cause cancer.” So if you were uncertain whether something caused cancer, just use statistical significance to determine that it definitely does or definitely doesn’t. Easy peasy. If p > .05, nothing to worry about. On the other hand, if p < .05, do whatever you can to avoid it. Nooooooo!
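To see how badly the “definitely doesn’t” reading can mislead, here is a small simulation sketch (my own illustration in Python, not anything from the talk; the sample sizes and risk rates are assumed purely for the demonstration): even when an exposure genuinely doubles a rare cancer risk, an underpowered study will usually return p > .05.

```python
# Assumed-for-illustration scenario: exposure doubles a 1% baseline risk,
# but with only 200 people per arm, most studies come out "not significant".
import math
import random

random.seed(1)

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value from a pooled two-proportion z-test."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x2 / n2 - x1 / n1) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

n = 200                       # per-arm sample size (assumed)
base, exposed = 0.01, 0.02    # true risks: exposure really doubles the risk
runs, sig = 2000, 0
for _ in range(runs):
    x1 = sum(random.random() < base for _ in range(n))
    x2 = sum(random.random() < exposed for _ in range(n))
    if two_prop_p(x1, n, x2, n) < 0.05:
        sig += 1
print(f"p < .05 in {sig / runs:.0%} of studies, despite a real doubling of risk")
```

Under these assumed numbers, only a small minority of studies reach significance – “p > .05” here says the study was too small to detect the effect, not that the exposure “definitely doesn’t” cause harm.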

Now, you might wish such a talk were given by a statistician, or better yet a highly credentialed one – and it was: at the time, Jennifer Rogers was the Director of Statistical Consultancy Services at the University of Oxford and an associate professor at Oxford, and she is still Vice President for External Affairs of the Royal Statistical Society, with a list of TEDx talks on her personal page. How could she have gotten statistical significance so wrong?

OK, at another point in the talk she did give a correct definition of p-values, and at yet another point she explained a confidence interval as an interval of plausible values. But then, at around 37 minutes, she claimed for a particular confidence interval that “I would expect 95% of them between 38 and 66”, where she seems to be referring to future estimates or maybe even the “truth”. Again, getting across (scientifically) profitable notions of statistics to non-statisticians (as well as fellow statisticians) ain’t easy. We are all at risk of accidentally giving incorrect definitions and explanations. Unfortunately, those are the ones folks are most likely to take away, as they are much easier to make sense of and seemingly more profitable for what they want to do.
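For contrast, here is a sketch of what the 95% in a confidence interval actually refers to (again my own Python illustration, with an assumed true value; not from the talk): under repeated sampling, roughly 95% of the intervals so constructed cover the fixed true value. It is not a 95% chance that future estimates, or the truth, land inside one particular interval.

```python
# Coverage simulation: build a 95% Wald interval for a proportion over and
# over; count how often the interval covers the (assumed) true value.
import math
import random

random.seed(1)

true_p = 0.52   # assumed true agreement rate, for illustration only
n = 52          # survey size from the talk's example
runs, covered = 10000, 0
for _ in range(runs):
    x = sum(random.random() < true_p for _ in range(n))
    phat = x / n
    half = 1.96 * math.sqrt(phat * (1 - phat) / n)
    if phat - half <= true_p <= phat + half:
        covered += 1
print(f"intervals covering the true value: {covered / runs:.1%}")
```

The count comes out near 95% (a bit under, as Wald intervals are known to undercover at modest n) – the “95%” is a property of the procedure across repetitions, not of any single realized interval.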

So we all need to speak up about such explanations and retract the ones we make ourselves. This video has had almost 50,000 views!!!

Unfortunately, there is more to complain about in the talk. Most of the discussion of confidence intervals seemed to be just a demonstration of how to determine statistical significance with them. The example made this especially perplexing to me, since it addressed a survey to determine how many people agreed with an advertisement’s claim – of 52 surveyed, 52% agreed. Now, when I first went to university, I wanted to go into advertising (there was even a club for that at the University of Toronto). Things may have changed since then, but back then getting even 10% of people to accept an advertising claim would have had to be considered a success.

But here the uncertainty in the survey results is assessed primarily against a null hypothesis of 50% agreement. What? As if we are really worried that the 52 people flipped a fair coin to answer the survey. Really? However, with that convenient assumption it all comes down to whether the confidence interval includes 50% or not. At around 36 minutes: if the confidence interval does not cross 50%, “I say it’s a statistically significant result.” QED.
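For the record, the arithmetic behind that demonstration appears to be an ordinary Wald interval for a proportion (my reconstruction in Python; the talk may have used a different interval): 52% agreement among 52 respondents gives an interval that easily straddles 50% – roughly the 38 to 66 quoted earlier.

```python
# Reconstructing the example's 95% interval (Wald method assumed).
import math

n = 52
phat = 0.52          # reported agreement rate
half = 1.96 * math.sqrt(phat * (1 - phat) / n)
lo, hi = phat - half, phat + half
print(f"95% CI: {lo:.0%} to {hi:.0%}")   # roughly 38% to 66%
print("includes 50%?", lo < 0.50 < hi)   # True -> "not significant" by that logic
```

With n = 52 the interval is so wide that almost any near-even split would be “not significant” against the coin-flip null – which says more about the tiny sample than about the advertisement.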

Perhaps the bottom line here is this: just as journalists would benefit from statisticians’ advice on how to avoid being misled by statistics, all statisticians need other statisticians to help them avoid explanations of statistics that may instil misleading notions of what statistics are, what they can do, and especially what one should make of them. So we all need to speak up about them and retract the ones we make.