This controversial hydroxychloroquine paper: What’s Lancet gonna do about it?

Peer review is not a form of quality control

In the past month there’s been a lot of discussion of the flawed Stanford study of coronavirus prevalence—it’s even hit the news—and one thing that came up was that the article under discussion was just a preprint—it wasn’t even peer reviewed!

For example, in a NYT op-ed:

This paper, and thousands more like it, are the result of a publishing phenomenon called the “preprint” — articles published long before the traditional form of academic quality control, peer review, takes place. . . . They generally carry a warning label: “This research has yet to be peer reviewed.” To a scientist, this means it’s provisional knowledge — maybe true, maybe not. . . .

That’s fine, as long as you recognize that “peer-reviewed research” is also “provisional knowledge — maybe true, maybe not.” As we’ve learned in recent years, lots of peer-reviewed research is really bad. Not just wrong, as in, hey, the data looked good but it was just one of those things, but wrong, as in, we could’ve or should’ve realized the problems with this paper before anyone even tried to replicate it.

The beauty-and-sex-ratio research, the ovulation-and-voting research, embodied cognition, himmicanes, ESP, air rage, Bible Code, the celebrated work of Andrew Wakefield, the Evilicious guy, the gremlins dude—all peer-reviewed.

I’m not saying that all peer-reviewed work is bad—I’ve published a few hundred peer-reviewed papers myself, and I’ve only had to issue major corrections for 4 of them—but to consider peer review as “academic quality control” . . . no, that’s not right. The quality of the paper has been, and remains, the responsibility of the author, not the journal.


So, a new one came in. A recent paper published in the famous/notorious medical journal Lancet reports that hydroxychloroquine and chloroquine increased the risk of in-hospital death by 30% to 40% and increased arrhythmia by a factor of 2 to 5. The study hit the news with the headline, “Antimalarial drug touted by President Trump is linked to increased risk of death in coronavirus patients, study says.” (Meanwhile, Trump says that Columbia is “a liberal, disgraceful institution.” Good thing we still employ Dr. Oz!)

All this politics . . . in the meantime, this Lancet study has been criticized; see here and here. I have not read the article in detail so I’m not quite sure what to make of the criticisms; I linked to them on Pubpeer in the hope that some experts can join in.

Now we have open review. That’s much better than peer review.

What’s gonna happen next?

I can see three possible outcomes:

1. The criticisms are mistaken. Actually the research in question adjusted just fine for pre-treatment covariates, and the apparent data anomalies are just misunderstandings. Or maybe there are some minor errors requiring minor corrections.

2. The criticisms are valid and the authors and journal publicly acknowledge their mistakes. I doubt this will happen. Retractions and corrections are rare. Even the most extreme cases are difficult to retract or correct. Consider the most notorious Lancet paper of all, the vaccines paper by Andrew Wakefield, which appeared in 1998, and was finally retracted . . . in 2010. If the worst paper ever took 12 years to be retracted, what can we expect for just run-of-the-mill bad papers?

3. The criticisms are valid, the authors dodge and do not fully grapple with the criticism, and the journal stays clear of the fray, content to rack up the citations and the publicity.

That last outcome seems very possible. Consider what happened a few years ago when Lancet published a ridiculous article purporting to explain variation in state-level gun deaths using 25 state-level predictors representing different gun control policies. A regression with 50 data points and 25 predictors and no regularization . . . wait! This was a paper that was so fishy that, even though it was published in a top journal and even though its conclusions were simpatico with the views of gun-control experts, those experts still blasted the paper with “I don’t believe that . . . this is not a credible study and no cause and effect inferences should be made from it . . . very flawed piece of research.” A couple of researchers at Rand (full disclosure: I’ve worked with these two people) followed up with a report concluding:

We identified a number of serious analytical errors that we suspected could undermine the article’s conclusions. . . . appeared likely to support bad gun policies and to hurt future research efforts . . . overfitting . . . clear evidence that its substantive conclusions were invalid . . . factual errors and inconsistencies in the text and tables of the article.

They published a letter in Lancet with their criticisms, and the authors responded with a bunch of words, not giving an inch on any of their conclusions or reflecting on the problems of using multiple regression the way they did. And, as far as Lancet is concerned . . . that’s it! Indeed, if you go to the original paper on the Lancet website, you’ll see no link to this correspondence. Meanwhile, according to Google, the article has been cited 74 times. OK, sure, 74 is not a lot of citations, but still. It’s included in a meta-analysis published in JAMA—and one of the authors of that meta-analysis is the person who said he did not believe the Lancet paper when it came out! The point is, it’s in the literature now and it’s not going away.
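To see why a regression of 50 state-level data points on 25 predictors with no regularization is so fishy, here’s a minimal simulation sketch (not the original paper’s analysis or the Rand reanalysis, just an illustration of the overfitting problem): fit ordinary least squares to an outcome that is pure noise, and you’ll still get a seemingly impressive in-sample fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 25  # 50 "states", 25 predictors, mirroring the setup criticized above
X = rng.standard_normal((n, p))  # predictors with no real effect
y = rng.standard_normal(n)       # outcome is pure noise, unrelated to X

# Fit OLS with an intercept via least squares
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# In-sample R^2: expected to be roughly p/(n-1) ~ 0.5 even though
# there is nothing to find
resid = y - X1 @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"In-sample R^2 on pure noise: {r2:.2f}")
```

With 25 noise predictors and 49 residual degrees of freedom, the expected in-sample R² is about 0.5, which is why unregularized coefficient estimates from such a model can look like strong "effects" of individual gun policies while supporting no causal inference at all.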

A few years ago I wrote, in response to a different controversy regarding Lancet, that journal reputation is a two-way street:

Lancet (and other high-profile journals such as PPNAS) play a role in science publishing that is similar to the Ivy League in universities: It’s hard to get in, but once you’re in, you have that Ivy League credential, and you have to really screw up to lose that badge of distinction.

Or, to bring up another analogy I’ve used in the past, the current system of science publication and publicity is like someone who has a high fence around his property but then keeps the doors of his house unlocked. Any burglar who manages to get inside the estate then has free run of the house. . . .

As Dan Kahan might say, what do you call a flawed paper that was published in a journal with impact factor 50 after endless rounds of peer review? A flawed paper. . . .

My concern is that Lancet papers are inappropriately taken more seriously than they should be. Publishing a paper in Lancet is fine. But then if the paper has problems, it has problems. At that point its authors shouldn’t try to hide behind the Lancet reputation, which seems to be what is happening. And, yes, if that happens enough, it should degrade the journal’s reputation. If a journal is not willing to rectify errors, that’s a problem no matter what the journal is.

Remember Newton’s third law? It works with reputations too. The Lancet editor is using his journal’s reputation to defend the controversial study. But, as the study becomes more and more disparaged, the sharing of reputation goes the other way.

I can imagine the conversations that will occur:

Scientist A: My new paper was published in the Lancet!

Scientist B: The Lancet, eh? Isn’t that the journal that published the discredited Iraq survey, the Andrew Wakefield paper, and that weird PACE study?

A: Ummm, yeah, but my article isn’t one of those Lancet papers. It’s published in the serious, non-politicized section of the magazine.

B: Oh, I get it: The Lancet is like the Wall Street Journal—trust the articles, not the opinion pages?

A: Not quite like that, but, yeah: If you read between the lines, you can figure out which Lancet papers are worth reading.

B: Ahhh, I get it.

Now we just have to explain this to journalists and policymakers and we’ll be in great shape. Maybe the Lancet could use some sort of tagging system, so that outsiders can know which of its articles can be trusted and which are just, y’know, there?

Long run, reputation should catch up to reality. . . .

I don’t think the long run has arrived yet. Almost all the press coverage of this study seemed to be taking the Lancet label as a sign of quality.

Speaking of reputations . . . the first author of the Lancet paper is from Harvard Medical School, which sounds pretty impressive, but then again we saw that seriously flawed paper that came out of Stanford Medical School, and a few months ago we heard about a bungled job from the University of California medical school. These major institutions are big places, and you can’t necessarily trust a paper just because it comes from a generally respected medical center.

Again, I haven’t looked at the article in detail, nor am I any kind of expert on hydro-oxy-chloro-whatever-it-is, so let me say one more time that outcome 1 above is still a real possibility to me. Just cos someone sends me some convincing-looking criticisms, and there are data availability problems, that doesn’t mean the paper is no good. There could be reasonable explanations for all of this.