Can the science community help journalists avoid science hype? It won’t be easy.

tl;dr: Selection bias.

The public letter

Michael Eisen and Rob Tibshirani write:

Researchers have responded to the challenge of the coronavirus with a commitment to speed and cooperation, featuring the rapid sharing of preliminary findings through “preprints,” scientific manuscripts that have not yet undergone formal peer review. . . .

But the open dissemination of early versions of papers has created a challenge: how to ensure that policymakers and the public do not act too hastily on early studies that are soon shown to have serious errors. . . .

That is why we and a group of over 100 scientists are calling for American scientists and journalists to join forces to create a rapid-review service for preprints of broad public interest. It would corral a diverse contingent of scientists ready to comment on new preprints and to be responsive to reporters on deadline. . . .

My concerns

I think this proposed service could be a good idea. I have only three concerns:

1. The focus on peer review. Given all the problems we’ve seen with peer-reviewed papers, I don’t think preprints create a new challenge. Indeed, had peer review been some sort of requirement for attention, I’m pretty sure that the authors of that Santa Clara paper, with their connections, could’ve rushed it through an expedited peer review at PNAS or JAMA or Lancet or some other tabloid-style journal.

To put it another way, peer review is not generally done by “experts”; it’s done by “peers,” who often have the exact same blind spots as the authors of the papers being reviewed.

Remember Surgisphere? Remember Pizzagate? Remember himmicanes, air rage, ESP, ages ending in 9, beauty and sex ratio, etc etc etc?

2. This new service has to somehow stay independent of the power structure of academic science. For example, you better steer clear of the National Academy of Sciences, no joke, as they seem pretty invested in upholding the status of their members.

3. My biggest concern has to do with the stories that journalists like to tell. Or, maybe I should say, stories that audiences like to hear.

One story people like is the scientist as hero. Another is the science scandal, preferably with some fake data.

But what about the story of scientists who are trying their best but are slightly in over their heads: no fake data, but they’re going too far with their claims? This is a story that can be hard to tell.

For example, consider those Stanford medical researchers. They did a reasonable study but then they botched the statistics and hyped their claims. But their claims might be correct! As I and others have written a few thousand times by now, the Stanford team’s data are consistent with their story of the world—along with many other stories. The punchline is not that their claims about coronavirus are wrong; it’s that their study does not provide the evidence that they have claimed (and continue to claim). It’s the distinction between evidence and truth—and that’s a subtle distinction!
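
To see how the same data can support many stories, here’s a minimal sketch with made-up numbers (not the actual Santa Clara data; the positive rate, sensitivity, and specificity range below are all hypothetical). With an imperfect antibody test, a small raw positive rate is consistent with a wide range of true prevalences, depending on what you assume about the test’s specificity:

```python
# Standard correction for test error: the expected raw positive rate is
#   p_obs = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
# Inverting for the prevalence implied by an observed positive rate:
def implied_prevalence(p_obs, sensitivity, specificity):
    false_pos_rate = 1 - specificity
    return max(0.0, (p_obs - false_pos_rate) / (sensitivity - false_pos_rate))

# Hypothetical survey: 1.5% of samples test positive, sensitivity 0.85,
# and the specificity is known only to lie somewhere in [0.985, 1.0].
p_obs = 0.015
for specificity in [0.985, 0.990, 0.995, 1.000]:
    prev = implied_prevalence(p_obs, sensitivity=0.85, specificity=specificity)
    print(f"specificity {specificity:.3f} -> implied prevalence {prev:.2%}")
```

With these numbers the implied prevalence runs from 0% up to almost 1.8%. Same data, many stories; that’s the evidence-versus-truth gap in one loop.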

Another example came up a few years ago, when two economists published a paper claiming that death rates among middle-aged non-Hispanic whites were increasing. It turned out they were wrong: death rates had first increased and then gone flat over the period of their study. And, more relevantly, death rates had been steadily increasing among women in that demographic category but not among men. The economists had forgotten to do age adjustment in their analysis, and it just happened that the baby boom passed through their age window during the period under study, causing the average age within their category to increase by just enough to produce an artifactual rise in the death rate.
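
Here’s a toy version of that artifact, with hypothetical rates rather than the actual mortality data: hold every age-specific death rate perfectly flat over time, let the average age inside the 45-54 window drift upward, and the crude rate for the group rises even though no individual’s risk has changed.

```python
# Made-up numbers for illustration: deaths per 100,000 at each single
# year of age, constant across calendar years (rates rise with age, as
# mortality does, but nothing changes over time).
rate_by_age = {age: 300 + 15 * (age - 45) for age in range(45, 55)}

def crude_rate(mean_age):
    """Unadjusted rate for the 45-54 group when its age distribution is
    centered at mean_age (simple triangular weights, for illustration)."""
    weights = {a: max(0.0, 5 - abs(a - mean_age)) for a in range(45, 55)}
    total = sum(weights.values())
    return sum(rate_by_age[a] * w for a, w in weights.items()) / total

# The group "ages" from a mean of 49 to 50 as a large birth cohort
# passes through the window.
for year, mean_age in [(1999, 49.0), (2006, 49.5), (2013, 50.0)]:
    print(f"{year}: crude rate {crude_rate(mean_age):.1f} per 100,000")
```

In this sketch the unadjusted rate climbs about 4 percent across the period, purely from the shifting age mix; an age-adjusted analysis would report a flat line.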

Anyway, I had a hard time talking with reporters about this study when it came out. I made it clear on the blog that the economists had messed up by not age adjusting—but, at the same time, their key point, which was a comparison of the U.S. to other countries, still seemed to hold up.

I recall talking with a respected journalist from a major news outlet who just didn’t know what to do with this. He had three story templates in mind:

1. Brilliant Nobel-prize-winning economist makes major discovery, or

2. Bigshot Nobel-prize-winning economist gets it all wrong, or

3. Food fight in academia.

I wouldn’t give him any of the three stories, for the following reasons:

1. The published paper really was flawed, especially given that it was often taken to imply that there was an increasing mortality rate among middle-aged white men, which really wasn’t the case. This myth continues to be believed by major economists (see here, for example), I guess because it’s such a great story.

2. The paper had this big mistake, but its main conclusion, the comparison with other countries, seemed to hold up. So I refused to tell the reporter that the paper was wrong.

3. I didn’t want a food fight. I wanted to say that the authors of the paper made some good points, but there was this claim about increasing death rates that wasn’t quite right.

I wouldn’t play ball and create a fight, so the journalist went with storyline 1, the scientist-as-hero.

It can be hard to report on a study that has flaws but is not an absolute train wreck of a scandal. Surgisphere—that’s easy to write about. The latest bit of messed-up modeling—not so much.

So I support Eisen and Tibshirani’s efforts. But I don’t think it’ll be easy, especially given that there are news outlets that will print misinformation put out by reporters who have an interest in creating science heroes. Yeah, I’m looking at you, “MIT’s science magazine.”

Selection bias

We’ve talked about this before; see here and here. Here’s the logic:

Suppose you’re a journalist and you hear about some wild claim made by some scientist somewhere. If you talk with some outside experts who convince you that the article is flawed, you’ll decide not to write about it. But somewhere else there is a reporter who swallows the press release hook, line, and sinker, and that reporter will of course run a big story. Hence the selection bias: the stories that do get published are likely to repeat the hype. Which in turn gives researchers and public relations people a motivation to do the hype in the first place.
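
Here’s a minimal simulation of that logic, with made-up parameters (the number of reporters, the credulity rate, and the debunk rate below are all hypothetical). Skepticism mostly produces silence rather than stories, so hype dominates the published record even when credulous reporters are a small minority:

```python
import random

random.seed(1)

# An overhyped preprint is pitched to many reporters. Most check with
# outside experts, conclude the claims don't hold up, and usually write
# nothing; a few run the press release as-is, and once in a while a
# skeptic writes a critical piece.
n_reporters = 1000
p_credulous = 0.2   # fraction who skip the expert check and run the story
p_debunk = 0.05     # chance a skeptical reporter writes a critical piece

stories = []
for _ in range(n_reporters):
    if random.random() < p_credulous:
        stories.append("hype")        # press release, repeated verbatim
    elif random.random() < p_debunk:
        stories.append("critical")    # rare: the skeptic writes it up
    # otherwise the skeptical reporter simply moves on to another story

hype_share = stories.count("hype") / len(stories)
print(f"credulous reporters:    {p_credulous:.0%}")
print(f"hype share of coverage: {hype_share:.0%}")  # roughly 80%, not 20%
```

Only 20 percent of the reporters are credulous here, but more than 80 percent of the published stories repeat the hype, because the skeptics’ conclusions mostly never make it into print.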

P.S. Steve shares the above photo of Hopscotch, who seems skeptical about some of the claims being presented to him.