# “Inferential statistics as descriptive statistics”

Valentin Amrhein, David Trafimow, and Sander Greenland write:

> Statistical inference often fails to replicate. One reason is that many results may be selected for drawing inference because some threshold of a statistic like the P-value was crossed, leading to biased reported effect sizes. Nonetheless, considerable non-replication is to be expected even without selective reporting, and generalizations from single studies are rarely if ever warranted. Honestly reported results must vary from replication to replication because of varying assumption violations and random variation; excessive agreement itself would suggest deeper problems, such as failure to publish results in conflict with group expectations or desires. A general perception of a “replication crisis” may thus reflect failure to recognize that statistical tests not only test hypotheses, but countless assumptions and the entire environment in which research takes place. Because of all the uncertain and unknown assumptions that underpin statistical inferences, we should treat inferential statistics as highly unstable local descriptions of relations between assumptions and data, rather than as generalizable inferences about hypotheses or models. And that means we should treat statistical results as being much more incomplete and uncertain than is currently the norm. Acknowledging this uncertainty could help reduce the allure of selective reporting: Since a small P-value could be large in a replication study, and a large P-value could be small, there is simply no need to selectively report studies based on statistical results. Rather than focusing our study reports on uncertain conclusions, we should thus focus on describing accurately how the study was conducted, what problems occurred, what data were obtained, what analysis methods were used and why, and what output those methods produced.
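The claim that “a small P-value could be large in a replication study, and a large P-value could be small” is easy to check by simulation. The sketch below is my own illustrative setup, not anything from the article: a one-sample z-test with a modest true effect, replicated many times under identical conditions. Even with nothing varying but sampling noise, the P-values span several orders of magnitude.

```python
import math
import random

def p_value(sample_mean, n, sigma=1.0):
    """Two-sided P-value for H0: mu = 0, known sigma (z-test)."""
    z = sample_mean * math.sqrt(n) / sigma
    # Two-sided normal tail probability: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

def replicate(n=25, true_mu=0.3, reps=1000, seed=1):
    """Run `reps` identical studies and collect their P-values."""
    rng = random.Random(seed)
    ps = []
    for _ in range(reps):
        xs = [rng.gauss(true_mu, 1.0) for _ in range(n)]
        ps.append(p_value(sum(xs) / n, n))
    return ps

ps = replicate()
# Same design, same true effect: the P-values range from tiny to large.
print("min P:", min(ps), " max P:", max(ps))
# Fraction of replications crossing the conventional 0.05 threshold.
print("share with P < 0.05:", sum(p < 0.05 for p in ps) / len(ps))
```

With these (arbitrary) settings the test has well under full power, so only a minority of honest replications of the same true effect yield P < 0.05, and any single study's P-value tells you little about what the next one will show.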

I think the title of their article, “Inferential statistics as descriptive statistics: there is no replication crisis if we don’t expect replication,” is too clever by half: ultimately, we do want to be able to replicate our scientific findings. Yes, the “replication crisis” could be called an “overconfidence crisis,” in that the expectation of high replication rates was itself a mistake. But that’s part of the point: if findings are that hard to replicate, this is a problem for the world of science, and for journals such as PNAS, which routinely publish papers making general claims on the basis of much less evidence than those claims warrant.

Anyway, I agree with just about all of the linked article, aside from my concern about the title.