“Curing Coronavirus Isn’t a Job for Social Scientists”

Anthony Fowler wrote a wonderful op-ed.

You have to read the whole thing, but let me start with his most important point, about “the temptation to overclaim” in social science:

One study estimated the economic value of the people spared through social-distancing efforts. Essentially, the authors took estimates from epidemiologists about the number of lives that could be saved, then multiplied them with estimates of the statistical value of a life from economists. The researchers admittedly did not consider any of the potential costs of social distancing. Yet, in the concluding sentence of their abstract, they write, “Overall, the analysis suggests that social distancing initiatives and policies in response to the Covid-19 epidemic have substantial economic benefits.” To an economist, this sentence might simply convey that they computed large benefits but did not consider costs. But to a layperson or policy maker, it sounds like they have conducted a thorough analysis and concluded that social distancing is, on net, economically beneficial. Not surprisingly, many news outlets have cited this study to support the claim that there is no trade-off between saving lives and economic recovery.

We discussed some of the problems with the dollar-value-of-life analysis in the comments section here. But the key “sociological” point here is that social scientists are trading off our collective reputation—they’re spending our cultural capital on these claims. If some dude on the street said that social distancing was worth 8 trillion dollars, you (or a newspaper editor) would be like, huh? But if a respected social scientist says it, then, hmmm . . . maybe we’re on to something here. I also have a problem with this sort of crude dollars-and-cents reasoning because it ignores how things are implemented. A pause in the economy, if done haphazardly and with uncertainty, can wreak economic havoc. A pause that’s well coordinated could cause much less disruption. The relevant social science subdiscipline here is not cost-benefit analysis, it’s whatever it is that helps you understand how different levels of government and different private-sector entities can coordinate. The problems are solved by cooperation, not by turning on the money spigot.

This is all to say that “social science expertise” or “management expertise,” considered broadly, is extremely important here. Political science is relevant too! But not so much the kind of social and political science that I do, or the sorts of studies that Fowler critiques in his op-ed.

OK, now back to the beginning. Fowler writes:

The public appetite for more information about Covid-19 is understandably insatiable. Social scientists have been quick to respond. . . . While I understand the impulse, the rush to publish findings quickly in the midst of the crisis does little for the public and harms the discipline of social science.

Even in normal times, social science suffers from a host of pathologies. Results reported in our leading scientific journals are often unreliable because researchers can be careless, they might selectively report their results, and career incentives could lead them to publish as many exciting results as possible, regardless of validity. . . .

He gives some examples:

In one recent study, survey respondents were asked to self-report their social distancing, but people often misreport their beliefs and behaviors in political surveys. Another study used GPS data to measure visits to places of interest like restaurants and movie theaters, but this seems like a poor test of social distancing at a time when many such places are closed (especially in more Democratic places). A second challenge is that even if we find a clear difference between Democratic and Republican behavior, it’s difficult to say whether this difference is explained by political attitudes or other factors. Democrats tend to live in more urban places, where the pandemic has been more severe and local governments have implemented more stringent policies and guidelines; neither of these studies accounted for these alternative explanations.

Another recent study [parodied here] investigated the extent to which watching “Hannity” versus “Tucker Carlson Tonight” may have increased the spread of Covid-19. This is the kind of study that might make one skeptical in normal times. An extra concern now is that the paper was likely written in just a few days. Although the authors write that they used variation in sunset times to estimate the effect of watching “Hannity,” a closer reading suggests that they’re mostly using variation in how much people in different media markets watch television and how much Fox News they watch. Maybe conservative commentators like Sean Hannity have exacerbated the spread of Covid-19, but it’s dangerous for social scientists to publicize these kinds of results before they have been carefully vetted.

But what about the value of this work? Fowler writes:

One possible reason for rushing science in the midst of a crisis is that the benefits of quickly getting new information to the public and to policy makers outweigh the potential costs of giving them less reliable information. Perhaps one could make this argument for those studying how to cure or prevent the spread of Covid-19. But most of the work being done by social scientists on Covid-19, while interesting and important, is not urgent. Understanding how political attitudes affect social distancing may be relevant for understanding political psychology, for example, and it might even help us design better solutions in a future pandemic, but it doesn’t significantly benefit society to have this information today.

Good point. This work could be valuable, but it’s not urgent.

Fowler continues:

The second troubling trend is the temptation of social scientists to speak outside their areas of expertise. . . . I’ve recently seen scholars in fields as varied as political philosophy and macroeconomics giving public-health advice and predicting the future trajectory of the pandemic without seriously discussing the limits of their knowledge or the credibility of their assumptions. A legal scholar first predicted 500 deaths in the U.S., then appeared to revise that to 5,000, and most recently revised it again to 50,000. Despite the scholar’s lack of any relevant expertise or experience, these woefully optimistic early projections reportedly influenced decisions in the White House.

Just one thing. Law school professors get lots of attention, which maybe they deserve and maybe they don’t. But no need to call them “social scientists.” Social science has enough problems without shouldering the burdens of those smooth-talking b.s. artists.

And let me again remind you of the problems with the scientist-as-hero narrative. Some scientists really can be heroes right now. Those of us who run social experiments, analyze surveys, and publish papers using cool datasets, maybe not so much.