Someone pointed me to this post by a doctor named Daniel Hopkins on a site called KevinMD.com, expressing skepticism about a new study of remdesivir. I guess some work has been done following up on that trial on 18 monkeys. From the KevinMD post:
On April 29th Anthony Fauci announced the National Institute of Allergy and Infectious Diseases, an institute he runs, had completed a study of the antiviral remdesivir for COVID-19. The drug reduced time to recovery from 15 to 11 days, he said, a breakthrough proving “a drug can block this virus.” . . .
While the results were preliminary, unpublished, and unconfirmed by peer review, Fauci felt an obligation, he said, to announce them immediately. Indeed, he explained, remdesivir trials “now have a new standard,” a call for researchers everywhere to consider halting any studies, and simply use the drug as routine care.
Hopkins has some specific criticisms of how the results of the study were reported:
Let us focus on something Fauci stressed: “The primary endpoint was the time to recovery.” . . . Unfortunately, the trial registry information, data which must be entered before and during the trial’s actual execution, shows Fauci’s briefing was more than just misleading. On April 16th, just days before halting the trial, the researchers changed their listed primary outcome. This is a red flag in research. . . . In other words they shot an arrow and then, after it landed, painted their bullseye. . . .
OK, this might be a fair description, or maybe not. You can click through and follow the links and judge for yourself.
Here I want to talk about two concerns that came up in this discussion which arise more generally when considering this sort of wide-open problem where many possible treatments are being considered.
I think these issues are important in many settings, so I’d like to talk about them without thinking too much about remdesivir or that particular study or the criticisms on that website. The criticisms could all be valid, or they could all be misguided, and it would not really affect the points I will make below.
Here are the two issues:
1. How to report and analyze data with multiple outcomes.
2. How to make decisions about when to stop a trial and use a drug as routine care.
1. In the above-linked post, Hopkins writes:
This choice [of primary endpoint], made in the planning stages, was the project’s defining step—the trial’s entry criteria, size, data collection, and dozens of other elements, were tailored to it. This is the nature of primary outcomes: they are pivotal, studies are built around them. . . .
Choosing any primary outcome means potentially missing other effects. Research is hard. You set a goal and design your trial to reach for it. This is the beating heart of the scientific method. You can’t move the goalposts. That’s not science.
I disagree. Yes, setting a goal and designing your trial to reach for it is one way to do science, but it’s not the only way. It’s not “the beating heart of the scientific method.” Science is not a game. It’s not about “goalposts”; it’s about learning how the world works.
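That said, there is a real statistical reason why post-hoc endpoint selection raises a red flag: if you look at several outcomes and report whichever one came out best, the usual error rates no longer apply. Here is a minimal simulation sketch (not from the post; the numbers of outcomes and patients are arbitrary assumptions) showing how the false-positive rate inflates when the endpoint is chosen after seeing the data, even though there is no true treatment effect at all:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def null_trial_pvalues(n=100, k=5):
    """Simulate one trial under the null: k outcomes, zero treatment effect.
    Returns a two-sided normal-approximation p-value for each outcome."""
    treat = rng.normal(size=(n, k))
    control = rng.normal(size=(n, k))
    se = np.sqrt(treat.var(axis=0, ddof=1) / n + control.var(axis=0, ddof=1) / n)
    z = (treat.mean(axis=0) - control.mean(axis=0)) / se
    return [2 * (1 - 0.5 * (1 + math.erf(abs(zj) / math.sqrt(2)))) for zj in z]

n_sims = 2000
fixed_hits = 0  # endpoint chosen in advance (always outcome 0)
best_hits = 0   # endpoint chosen after looking (smallest p-value of the 5)
for _ in range(n_sims):
    p = null_trial_pvalues()
    fixed_hits += p[0] < 0.05
    best_hits += min(p) < 0.05

print("false positive rate, prespecified endpoint:", fixed_hits / n_sims)
print("false positive rate, best-of-5 endpoint:   ", best_hits / n_sims)
```

With five independent outcomes, the prespecified endpoint rejects at roughly the nominal 5% rate, while cherry-picking the best of five rejects at roughly 1 − 0.95⁵ ≈ 23%. None of this means a changed endpoint is necessarily dishonest, but it is why the convention exists — and a better response than rigid goalpost rules is to report all the outcomes and analyze them jointly.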
2. Lots is going on with coronavirus, and doctors will be trying all sorts of different treatments in different situations. If there are treatments that people will be trying anyway, I don’t see why they shouldn’t be used as part of experimental protocols. My point is that, even if the available evidence suggests remdesivir should be used as routine care, it doesn’t follow that all the studies should be halted. More needs to be learned, and any study is just a formalization of the general idea that different people will be given different treatments.
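One concrete way to reconcile “use the drug” with “keep learning” is adaptive allocation. Below is a minimal Thompson-sampling sketch with made-up recovery probabilities (the arm names and numbers are illustrative assumptions, not data from any remdesivir trial): as evidence accumulates, most new patients are assigned to the apparently better arm, yet randomization — and therefore learning — never stops.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recovery probabilities, assumed for illustration only.
p_true = {"drug": 0.65, "standard": 0.55}

# Beta(1, 1) priors on each arm's recovery probability.
successes = {arm: 0 for arm in p_true}
failures = {arm: 0 for arm in p_true}
assigned = {arm: 0 for arm in p_true}

for _ in range(5000):
    # Thompson sampling: draw a plausible recovery rate for each arm from
    # its posterior, and give the next patient whichever arm draws higher.
    draws = {arm: rng.beta(1 + successes[arm], 1 + failures[arm])
             for arm in p_true}
    arm = max(draws, key=draws.get)
    assigned[arm] += 1
    if rng.random() < p_true[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

for arm in p_true:
    n = successes[arm] + failures[arm]
    print(arm, "patients:", assigned[arm],
          "observed recovery rate:", round(successes[arm] / max(n, 1), 3))
```

In this sketch the better arm ends up treating the large majority of patients — close to what “routine care” would do — while the trickle of patients still assigned to the other arm keeps the comparison alive. This is only one design among many; the point is that “halt all studies” and “withhold a promising drug” are not the only two options.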
Again, this is not a post about remdesivir. I’m talking about more general issues of experimentation and learning from data.