MIT’s science magazine misrepresents critics of Stanford study

I’m disappointed. MIT can and should do better. I know MIT is not perfect—even setting aside Jeffrey Epstein and the Media Lab more generally, it’s just an institution, and all institutions have flaws. But they should be able to run a competent science magazine, for chrissake.

Scene 1

Last month, I received the following query by email:

I have interviewed John Ioannidis and Eran Bendavid regarding the Santa Clara study. I am writing for Undark (MIT’s science magazine) and wonder if you would be willing to chat briefly today or tomorrow morning?


I agreed and spoke with the reporter for about half an hour. In the conversation I emphasized that I had no particular issues with Ioannidis and, indeed, I hadn’t mentioned Ioannidis even once in my post on that study.

After the conversation, I remembered one more thing so I sent the reporter an email:

I remember that I did write one thing about Ioannidis; see here. I disagreed with his statement that their study was “the best science can do.” I also disagreed with coauthor Sood’s statement, “I don’t want ‘crowd peer review’ or whatever you want to call it.” I think crowd peer review is a good thing.

And a few days later I followed up with a link to one of my posts on how to do Bayesian analysis of the study.

Scene 2

The Undark article came out.

And it had problems.

The authors did not talk about the benefits of crowd peer review. That’s fine. It’s their article, not mine.

What bothered me is that they misrepresented the critics of the Stanford study, taking careful scientific criticism (and a bit of annoyance at sloppy science being promoted in the news media) as being political and personal.

1. The Undark article said: “Other critics said the antibody test used in the Santa Clara study was so unreliable that it was possible none of the 50 participants who tested positive had actually been infected. This, despite the fact that almost all surveys to date suffer similar test-reliability problems in low-prevalence areas.”

But the data in that study really are consistent with a very low rate of true positives in that sample. The critics are correct here! In our analysis of these data (https://www.medrxiv.org/content/10.1101/2020.05.22.20108944v2.full.pdf), we summarized as follows:

For now, we do not think the data support the claim that the number of infections in Santa Clara County was between 50 and 85 times the count of cases reported at the time, or the implied interval for the IFR of 0.12–0.2%. These numbers are consistent with the data, but the data are also consistent with a near-zero infection rate in the county. The data of Bendavid et al. (2020a,b) do not provide strong evidence about the number of people infected or the infection fatality ratio; the number of positive tests in the data is just too small, given uncertainty in the specificity of the test.

The fact that other surveys have similar problems with test reliability . . . sure, other surveys have these problems too! The lower the rate of positive tests in the data, the more you have to be concerned about false positives. The Santa Clara study had only 1.5% positive tests. That’s a really low number.

This is a technical point. It has to do with the possible false positive rate. It depends on the numbers. Talky-talk won’t do it. If you’re gonna do journalism on this one, it has to be quantitative journalism. Again, if any magazine should be able to handle this, it’s MIT’s magazine.
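To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The total of roughly 3,330 tests is implied by 50 positives at a 1.5% positive rate; the specificity values in the loop are illustrative, not the study’s actual calibration numbers:

```python
# Back-of-the-envelope false-positive arithmetic for the Santa Clara numbers:
# 50 positives out of roughly 3,330 tests, a 1.5% raw positive rate.
# The specificity values below are illustrative, not the study's calibration data.

n_tests = 3330
n_positive = 50

for specificity in (0.995, 0.990, 0.985):
    false_positive_rate = 1 - specificity
    # Expected false positives if the true infection rate were exactly zero:
    expected_false_positives = n_tests * false_positive_rate
    print(f"specificity {specificity:.1%}: expect ~{expected_false_positives:.0f} "
          f"false positives out of {n_tests} tests, even with zero true infections")
```

At 99.5% specificity you’d expect about 17 false positives; at 98.5%, about 50, which is every positive in the sample. With a raw positive rate this low, a one-percentage-point uncertainty in specificity is the difference between the study’s headline estimate and a near-zero infection rate.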

2. The Undark article said: “The attacks on Ioannidis continued to snowball. In a recent blog post about the Stanford study, Columbia University statistician Andrew Gelman wrote that Ioannidis and his co-authors ‘owe an apology not just to us, but to Stanford.’”

The misleading thing about the way this is presented is that they got the time sequence wrong. In the previous paragraph, they linked to a 20 May article from the Nation and a 15 May article from Buzzfeed. Then, after giving some background, they wrote, “The attacks on Ioannidis continued to snowball” and mentioned my post. But that post of mine was from 19 Apr. To present my post as part of a snowball of attacks . . . that’s just wrong.

Also, I did not attack Ioannidis in any way. Indeed, my post does not mention Ioannidis even once! In comments some people bring up Ioannidis, and at one point I noted that he was author #16 of a 17-author paper. This has never been about Ioannidis for me.

I do think the authors of the Stanford paper owe us an apology, but this has nothing to do with politics or Ioannidis or the ideological leanings of Buzzfeed or whatever. As I wrote in my post, “Everyone makes mistakes. I don’t think the authors need to apologize just because they screwed up. I think they need to apologize because these were avoidable screw-ups. They’re the kind of screw-ups that happen if you want to leap out with an exciting finding and you don’t look too carefully at what you might have done wrong.”

3. Undark wrote: “Ioannidis, right or wrong, has raised difficult questions, in the best tradition of science. Silencing him is an enormous risk to take.” Just to say this again: (a) my writings on this topic are not about Ioannidis et al.; (b) Bendavid et al. made avoidable statistical errors in their papers, errors that many people pointed out, and they did not take the opportunity to reassess their conclusions; and (c) I think the “silencing him” thing makes no sense. Nobody’s about to silence these Stanford professors, who can continue to post their preprints, etc.

The Stanford team made strong public claims, and other scientists pointed out errors in their claims. Meanwhile the news media got involved. If someone on Fox News said one thing or someone at the Nation said another . . . that’s fine, it’s the free press, they can feel free to share their takes on the news, but that’s not science. Just cos the Stanford study was featured on Fox News, that doesn’t make it wrong. Also, just cos some critics were praised in the Nation, that doesn’t mean it’s right for MIT’s science magazine to ignore the substance of the criticisms.

Scene 3

I sent off a polite note to the reporter who’d interviewed me with the three points above. The author responded that they did not imply that our statistical points were incorrect, so I followed up:

Thanks for the quick reply. I think it would help if the article clarified the point. Something like, “The critics were right on this one. The Stanford team really did make several mistakes in their statistics. And, just to be clear, the statistical criticisms by Will Fithian, Andrew Gelman, and others did not focus on or even mention Ioannidis. That said, it’s notable that the Stanford research got caught up in a political and media firestorm in a way that other, similar studies did not.”

They asked why that happened. I have three quick answers. First, the Stanford paper got tons of media attention. This has nothing to do with Fox News; it’s just that if a paper gets tons of attention, then tons of people will read it, so if a paper does have problems, they might well be found. Second, the Stanford paper had some clear statistical errors. Third, the fraction of positive tests was low, which makes the results more sensitive to statistical assumptions.
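To illustrate that third point, here is a minimal Monte Carlo sketch, in the spirit of (but far simpler than) the Bayesian analysis linked above. The Beta priors for sensitivity and specificity are hypothetical stand-ins chosen for illustration; they are not the study’s actual calibration data:

```python
# A minimal Monte Carlo sketch of how the implied prevalence depends on
# assumptions about test sensitivity and specificity. The priors are
# hypothetical, for illustration only; this is far simpler than a full
# Bayesian model of the calibration data.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000
raw_rate = 50 / 3330  # observed positive rate, about 1.5%

sensitivity = rng.beta(80, 20, n_sims)   # hypothetical prior centered near 0.80
specificity = rng.beta(200, 2, n_sims)   # hypothetical prior centered near 0.99

# Standard correction: raw_rate = prev * sens + (1 - prev) * (1 - spec),
# so prev = (raw_rate - (1 - spec)) / (sens + spec - 1).
prevalence = (raw_rate - (1 - specificity)) / (sensitivity + specificity - 1)
prevalence = np.clip(prevalence, 0, 1)  # negative draws mean "consistent with zero"

print(f"share of draws implying near-zero prevalence: "
      f"{np.mean(prevalence < 0.001):.0%}")
print(f"90% interval for prevalence: "
      f"[{np.quantile(prevalence, 0.05):.4f}, {np.quantile(prevalence, 0.95):.4f}]")
```

Under these made-up priors, a nontrivial share of the simulation draws are consistent with essentially zero infections, and the interval for the prevalence stretches from zero up to around the raw positive rate. Shift the specificity prior a little and the answer swings; that is what “sensitive to statistical assumptions” means in practice.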

Again, I bring in a technical point, naively thinking that a reporter for MIT’s science magazine will want to get the technical details right.

They also misrepresented the investigative reporting of Stephanie Lee; see this thread.

Scene 4

They edited the article in response to one of my points. But the edit just made things worse!

In the earlier version of the article, they wrote, “The attacks on Ioannidis continued to snowball. In a recent blog post about the Stanford study, Columbia University statistician Andrew Gelman wrote that Ioannidis and his co-authors ‘owe an apology not just to us, but to Stanford.’”

This has been changed to: “Attacks on Ioannidis came early and often. Just days after the study published, Columbia University statistician Andrew Gelman wrote that Ioannidis and his co-authors ‘owe an apology not just to us, but to Stanford.’”

First, I never attacked anyone. I pointed out errors in a much-discussed paper. I wrote that a group of authors made avoidable errors and I thought they should apologize for wasting our time with sloppy work. That is not an attack.

Second, my post was not about Ioannidis. Indeed, I did not mention Ioannidis at all in that post. Nor was my post about Ioannidis and his co-authors: Ioannidis was author #16 out of 17. There was no “Ioannidis and his co-authors.”

In addition, in making these changes they still did not anywhere in the article acknowledge that Will Fithian, I, and other critics (including Stanford’s own Trevor Hastie and Rob Tibshirani) were correct that the Stanford paper had errors that invalidated its main claims.

Look, I get it. MIT’s science magazine wants to write a story about Ioannidis. That’s fine. He’s had a busy career. I have nothing against him. What I don’t like is that they are taking open scientific discussion by the scientific community, and investigative reporting by Stephanie Lee and others, and inappropriately labeling them as political.

I don’t think Undark is doing Ioannidis any favors here either. He’s a busy person, he was author #16 on a 17-author paper that had no statisticians on it and that made serious statistical errors. That’s fine! Statistics is hard. Why go to so much effort to misrepresent the critics of this paper? This is how science works: when people make mistakes, we point them out. It’s not personal. It’s not about who is author #16 or whatever.

A more accurate story would state very clearly that (a) the Stanford paper had serious statistical errors, (b) the critics were correct, and (c) the scientific criticisms of that paper had nothing to do with its sixteenth author. Then they could go to town on the whole politics thing.

A science-empty take on science

Who cares?

I care, partly because Fithian, Lee, and many others put in a ton of work. Bob Carpenter and I were inspired to write a whole goddam paper on how to actually analyze this sort of data.

Yah, yah, you’re saying: It’s my fault because I said the authors of the Stanford study should apologize. Well, no, it’s their fault for not checking their statistics. I’m not saying they’re evil, I’m just saying an apology is in order, given all the time they wasted. I’m not saying they did a bad analysis on purpose; I’m saying they should’ve known enough to know that they didn’t know how to analyze these data. If you lend me your car and I try to fix it and instead I make things worse, leaving a pile of random parts and a pool of oil in the driveway, and it turns out I don’t know much about how to fix foreign cars, then, yeah, I should apologize, even if I was really really honestly trying my best.

But the Stanford team messed up. That happens. I’m mad at MIT’s science magazine because they just published the sort of article that sets back science journalism, an article that presents legitimate and open scientific criticism as being political. This is Lysenko-style science reporting: it’s not what you know, it’s who you know that counts.

I don’t think the authors of the Undark article are evil either. They’re journalists, they have a good story, and they want to go with it. They’re just doing that thing that storytellers sometimes do, of folding up the truth so it will fit better in their container. This is bad journalism, in the same way that the Stanford study used bad statistics. That doesn’t make these journalists bad people, any more than those Stanford doctors are bad people. They just got carried away; they’re misrepresenting the data in a way that better tells their story. And I’m not even saying the misrepresentation is on purpose; these things happen when you write an argument and then fill in the facts to make your case.

But I do blame them for not fixing things after I pointed out the problems in their story. This is MIT; the data should matter.

I do a lot of science, and I do a lot of science reporting. I’m not a fan of scientist-as-hero journalism, and I think science is well served by reporters such as Stephanie Lee who dig deep. I don’t think science is well served when the top engineering school in the world (I say as an alum, class of ’85) is promoting a science-empty take on science, a narrative which is full of politics but can’t find the time to establish the scientific facts.

I guess I’m just naive. Last decade I was getting angry that prestigious journals such as PNAS were publishing absolute crap. Now I’m used to it, but it’s time for me to be angry at prestigious science journalism. So laff at me, call me naive to think that MIT’s science magazine would want to get things right. I’m still fuming about Scene 4 above, where they politely listen to my criticism and then double down on their bullshit framing of the story.

P.S. David Hogg sends in the above picture of a NYC cat making its way out of quarantine.