2 perspectives on the relevance of social science to our current predicament: (1) social scientists should back off, or (2) social science has a lot to offer

Perspective 1: Social scientists should back off

This is what the political scientist Anthony Fowler wrote the other day:

The public appetite for more information about Covid-19 is understandably insatiable. Social scientists have been quick to respond. . . . While I understand the impulse, the rush to publish findings quickly in the midst of the crisis does little for the public and harms the discipline of social science. Even in normal times, social science suffers from a host of pathologies. Results reported in our leading scientific journals are often unreliable because researchers can be careless, they might selectively report their results, and career incentives could lead them to publish as many exciting results as possible, regardless of validity. A global crisis only exacerbates these problems. . . . and the promise of favorable news coverage in a time of crisis further distorts incentives. . . .

Perspective 2: Social science has a lot to offer

42 people published an article that begins:

The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping.

The author list includes someone named Nassim, but not Taleb, and someone named Fowler, but not Anthony. It includes someone named Sander but not Greenland. Indeed it contains no authors with names of large islands. It includes someone named Zion but no one who, I’d guess, can dunk. Also no one from Zion. It contains someone named Dean and someone named Smith but . . . ok, you get the idea. It includes someone named Napper but no sleep researchers named Walker. It includes someone named Rand but no one from Rand. It includes someone named Richard Petty but not the Richard Petty. It includes Cass Sunstein but not Richard Epstein. Make of all this what you will.

As befits an article with 42 authors, there are a lot of references: 6.02 references per author, to be precise. But, even with all these citations, I’m not quite sure where this research can be used to “support COVID-19 pandemic response,” as promised in the title of the article.
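(Working backwards from that per-author figure, and this is just my own back-of-the-envelope guess at the count, the reference list would run to something like 253 entries:

    253 references / 42 authors ≈ 6.02 references per author

Make of the third significant digit what you will.)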

The trouble is that so many of the claims are so open-ended that they don’t tell us much about policy. For example, I’m not sure what we can do with a statement such as this:

Negative emotions resulting from threat can be contagious, and fear can make threats appear more imminent. A meta-analysis found that targeting fears can be useful in some situations, but not others: appealing to fear leads people to change their behaviour if they feel capable of dealing with the threat, but leads to defensive reactions when they feel helpless to act. The results suggest that strong fear appeals produce the greatest behaviour change only when people feel a sense of efficacy, whereas strong fear appeals with low-efficacy messages produce the greatest levels of defensive responses.

Beyond the very indirect connection to policy, I’m also concerned because, of the three references cited in the above passage, one is from PNAS in 2014 and one is from Psychological Science in 2013. That’s not a good sign!

Looking at the papers in more detail . . . The PNAS study found that if you manipulate people’s Facebook news feeds by increasing the proportion of happy or sad stories, people will post more happy or sad things themselves. The Psychological Science study is based on two lab experiments: 101 undergraduates who “participated in a study ostensibly measuring their thoughts about ‘island life,’” and 48 undergraduates who were “randomly assigned to watch one of three videos” of a shill. Also a bunch of hypothesis tests with p-values like 0.04. Anyway, the point here is not to relive the year 2013 but rather to note that the relevance of these p-hacked lab experiments to policy is pretty low.
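To make the p-hacking worry concrete, here’s a minimal simulation, entirely my own sketch and not anything from these papers: suppose a pure-noise study, with no true effect anywhere, gives researchers ten outcome measures or subgroup comparisons to try. The chance of at least one nominally significant result is large:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2020)

    def null_study(n=100, n_tests=10, alpha=0.05):
        """One pure-noise study: n subjects split into two groups,
        n_tests independent outcome measures, no true effect anywhere.
        Returns True if any comparison comes out 'significant'."""
        for _ in range(n_tests):
            control = rng.normal(size=n // 2)
            treatment = rng.normal(size=n // 2)
            _, p = stats.ttest_ind(control, treatment)
            if p < alpha:
                return True
        return False

    hits = sum(null_study() for _ in range(2000))
    print(f"Share of null studies with a p < 0.05 result: {hits / 2000:.2f}")
    # With ten shots at alpha = 0.05, about 1 - 0.95**10, or 40%, of
    # pure-noise studies produce a publishable-looking p-value.

The numbers (N = 100, ten tests) are hypothetical, but the arithmetic is why a p-value of 0.04 from a small lab experiment with flexible analysis carries so little information.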

Also, the abstract of the 42-author paper says, “In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues.” But then the paper goes on to make unqualified statements that the authors don’t even seem to agree with.

For example, from the article, under the heading, “Disaster and ‘panic’” [scare quotes in original]:

There is a common belief in popular culture that, when in peril, people panic, especially when in crowds. That is, they act blindly and excessively out of self-preservation, potentially endangering the survival of all. . . . However, close inspection of what happens in disasters reveals a different picture. . . . Indeed, in fires and other natural hazards, people are less likely to die from over-reaction than from under-reaction, that is, not responding to signs of danger until it is too late. In fact, the concept of ‘panic’ has largely been abandoned by researchers because it neither describes nor explains what people usually do in disaster. . . . use of the notion of panic can be actively harmful. News stories that employ the language of panic often create the very phenomena that they purport to condemn. . . .

But, just a bit over two months ago, one of the authors of this article wrote an op-ed titled, “The Cognitive Bias That Makes Us Panic About Coronavirus”—and he cited lots of social-science research in making that argument.

Now, I don’t think social science research has changed so much between 28 Feb 2020 (when this pundit wrote about panic and backed it up with citations) and 30 Apr 2020 (when this same pundit coauthored a paper saying that researchers shouldn’t be talking about panic). And, yes, I know that the author of an op-ed doesn’t write the headline. But, for a guy who thinks that “the concept of ‘panic’” is not useful in describing behavior, it’s funny how quickly he leaps to use that word. A quick google turned up this from 2016: “How Pro Golf Explains the Stock Market Panic.”

All joking aside, this just gets me angry. These so-called behavioral scientists are so high and mighty, with big big plans for how they’re going to nudge us to do what they want. Bullfight tickets all around! Any behavior they see, they can come up with an explanation for. They have an N=100 lab experiment for everything. They can go around promoting themselves and their friends with the PANIC headline whenever they want. But then in their review article, they lay down the law and tell us how foolish we are to believe in “‘panic.’” They get to talk about panic whenever they want, but when we want to talk about it, the scare quotes come out.

Don’t get me wrong. I’m sure these people mean well. They’re successful people who’ve climbed to the top of the greasy academic pole; their students and colleagues tell them, week after week and month after month, how brilliant they are. We’re facing a major world event, they want to help, so they do what they can do.

Fair enough. If you’re an interpretive dancer like that character from Jules Feiffer, and you want to help with a world crisis, you do an interpretive dance. If you’re a statistician, you fit models and make graphs. If you’re a blogger, you blog. If you’re a pro athlete, you wait until you’re allowed to play again, and then you go out and entertain people. You do what you can do.

The problem is not with social scientists doing their social science thing; the problem is with them overclaiming, overselling, and then going around telling people what to do.

A synthesis?

Can we find any overlap between the back-off recommendation of Fowler and the we-can-do-it attitude of the 42 authors? Maybe.

Back to Fowler:

Social scientists have for decades studied questions of great importance for pandemics and beyond: How should we structure our political system to best respond to crises? How should responses be coordinated between local, state and federal governments? How should we implement relief spending to have the greatest economic benefits? How can we best communicate health information to the public and maximize compliance with new norms? To the extent that we have insights to share with policy makers, we should focus much of our energy on that.

Following Fowler, maybe the 42 authors and their brothers and sisters in the world of social science should focus not on “p less than 0.05” psychology experiments, Facebook experiments, and ANES crosstabs, but on some more technical work on political and social institutions, tracing where people are spending their money, and communicating health information.

On the plus side, I didn’t notice anything in that 42-authored article promoting B.S. social science claims such as beauty and sex ratio, ovulation and voting, himmicanes, Cornell students with ESP, the critical positivity ratio, etc etc. I choose these particular claims as examples because they weren’t just mistakes—like, here’s a cool idea, too bad it didn’t replicate—but they were quantitatively wrong, and no failed replication was needed to reveal their problems. A little bit of thought and real-world knowledge was enough. Also, these were examples with no strong political content, so there’s no reason to think the journals involved were “doing a Lancet” and publishing fatally flawed work because it pushed a political agenda.
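To spell out what “quantitatively wrong” means in the beauty-and-sex-ratio case: plausible effects on the human sex ratio are tiny, a fraction of a percentage point, while the studies in question had samples in the low thousands. A rough power calculation, using hypothetical but plausible numbers of my own choosing, shows that such a design has essentially no chance of detecting a real effect:

    from math import sqrt
    from scipy.stats import norm

    # Hypothetical but plausible numbers: a 0.3-percentage-point effect
    # of parental attractiveness on Pr(daughter), ~1500 births per group.
    effect = 0.003
    n_per_group = 1500
    p = 0.49  # approximate baseline Pr(daughter)

    se = sqrt(2 * p * (1 - p) / n_per_group)  # s.e. of a difference in proportions
    z = effect / se                           # standardized true effect
    power = norm.sf(1.96 - z) + norm.cdf(-1.96 - z)  # two-sided test at 0.05
    print(f"s.e. = {se:.3f}, power = {power:.3f}")
    # s.e. ≈ 0.018, power ≈ 0.05: barely better than the type-1 error
    # rate, so a "significant" result tells you nothing about the effect.

No failed replication needed: a design like this couldn’t have detected the effect in the first place.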

So, yeah, it’s good that they didn’t promote any of these well-publicized bits of bad science. On the other hand, it’s not so clear from reading the article that all the science they do promote can be trusted.

Also, remember the problems with the scientist-as-hero narrative.