The Journal of the American Medical Association published a correction notice with perhaps the most boring title ever written:
Incorrect Data Due to Incorrect Conversion Factor
In the Original Investigation entitled “Effect of Intravenous Acetaminophen vs Placebo Combined With Propofol or Dexmedetomidine on Postoperative Delirium Among Older Patients Following Cardiac Surgery: The DEXACET Randomized Clinical Trial,” published in the February 19, 2019, issue of JAMA, an incorrect conversion factor of 2.4 was used to convert fentanyl to morphine equivalents. The correct conversion factor is 100. Data related to postoperative morphine equivalents administered, as reported in the Abstract, Results section of the main article text, Tables 2 and 3, and eFigures 1 and 2 in Supplement 2, were recalculated using the correct conversion factor. This article was corrected online.
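As a side note, this kind of slip is cheap to guard against in code. Here's a minimal sketch (all dose numbers are hypothetical, and the factors are just the two mentioned in the correction notice) showing how keeping conversion factors in one named table makes the 2.4-vs-100 discrepancy easy to spot: the mistaken factor understates opioid exposure by a factor of about 42.

```python
# Illustrative sketch only -- not clinical guidance.
# The correction says fentanyl was converted to morphine equivalents
# with a factor of 2.4 instead of the correct 100.

CORRECT_FACTOR = 100.0   # fentanyl -> morphine equivalents (per the notice)
MISTAKEN_FACTOR = 2.4    # the factor used in the original paper

def morphine_equivalents(fentanyl_dose, factor):
    """Convert a fentanyl dose to morphine equivalents via a named factor."""
    return fentanyl_dose * factor

dose = 0.5  # hypothetical total fentanyl dose for one patient
correct = morphine_equivalents(dose, CORRECT_FACTOR)
mistaken = morphine_equivalents(dose, MISTAKEN_FACTOR)

# A plausibility check against an independent reference value would
# flag a ~42x discrepancy immediately.
print(correct, mistaken, correct / mistaken)
```

A simple assertion comparing computed equivalents against a handful of hand-checked reference cases would have caught this before publication, which is exactly the kind of quality-control step the discussion below is about.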
Maybe next time JAMA runs a correction, they can just pick something from this list of unused titles. “Here Lies Yesterday,” perhaps?
I learned about the above correction notice from David Allison, who wrote:
Not sure what the lesson for data analysis quality control is here, but interesting to wonder about how that mistake was not caught pre-publication.
Hmmm . . . I don’t think it’s surprising at all that this mistake was not caught! I say this for three reasons:
1. It’s a Reinhart-Rogoff Excel error. When the data and code are not public, there’s no reproducible workflow for others to check, and errors can be introduced just about anywhere in the process.
2. Correctness of the published claims is the responsibility of the author, not the journal. If the author didn’t care enough to check, an error can get through. Also, everyone makes mistakes. (Just go here and search for *correction notice*.)
3. There are a few million scientific papers published every year, which means even more submissions. If each submission gets three reviews, and each reviewer checks every detail of the paper . . . do the math. The result is that nobody has time to do any science! That’s why I prefer post-publication review.
Lots of errors do get caught in the review process, and that’s fine—but no surprise that lots more errors never get noticed until later, if at all.