When speculating about causes of trends in mortality rates: (a) make sure that what you’re trying to explain has actually been happening, and (b) be clear where your data end and your speculations begin.

A reporter writes:

I’d be very interested in getting your take on this recent paper. I am immensely skeptical of it. That’s not to say many Trump supporters aren’t racist! But we’re now going to claim that this entire rise in all-cause mortality can be attributed to the false sense of lost status? So so so so so skeptical.

You’re cited, and the headline takeaway is about perceived racialized threat to social status. But threat to social status isn’t measured — % of GOP voteshare is taken as a straightforward proxy of this. But doesn’t voteshare % jump around for a million reasons, often in reaction to the most recent election?

I took a look. I don’t see how they can say “For these reasons (and for the sake of parsimony), like Case and Deaton (2017), our starting premise is to examine as a singular phenomenon; the rise in national mortality rates of working-age white men and women.” Just look at figure 2C here. They cite this paper but they don’t seem to get the point that the rate among middle-aged men was going down, not up, from 2005-2015. This is important because much of the decline-of-status discussion centers on men.


Also, see here (which links to an unpublished report with tons more graphs). Some lines go up and some lines go down. “For the sake of parsimony” just doesn’t cut it here. Later in the paper they write that the rise in white mortality “is more accentuated in women than in men.” But “more accentuated” seems wrong. According to the statistics, the mortality rate among 45-54-year-old non-Hispanic white men was declining from 2005 to 2015.

This is a big problem in social science: lots of effort expended to explain some phenomenon, without it being clear exactly what is being explained. So you have to be careful about statements such as, “A valid causal story must explain something that is occurring widely among whites and also explain why it is not occurring among blacks.” I don’t think that kind of monocausal thinking is helpful.

The comparisons by education group are tricky because average education levels have been increasing over time. That’s not to say the authors should not break things down by education group, just that it’s tricky.
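To see why shifting composition makes these comparisons tricky, here is a sketch with invented numbers (not the paper’s data): group-specific mortality rates can rise in every education group while the overall rate falls, simply because the share of people in the higher-mortality group shrinks and the remaining members of that group become more selected.

```python
# Invented illustrative numbers: within-group rates go UP in both
# education groups, yet the overall population rate goes DOWN,
# because the composition of the groups shifts over time.

def overall_rate(shares, rates):
    """Population mortality rate as a share-weighted average of group rates."""
    return sum(s * r for s, r in zip(shares, rates))

# Period 1: 60% of the cohort has no college degree.
shares_1990 = [0.60, 0.40]        # [no degree, degree]
rates_1990 = [0.00400, 0.00200]   # deaths per person-year (invented)

# Period 2: only 30% has no degree; that smaller group is more
# selected, so its rate is higher, and the degree group's rate
# ticks up a bit too.
shares_2010 = [0.30, 0.70]
rates_2010 = [0.00450, 0.00210]

r1 = overall_rate(shares_1990, rates_1990)
r2 = overall_rate(shares_2010, rates_2010)

print(f"1990 overall rate: {r1:.5f}")  # 0.00320
print(f"2010 overall rate: {r2:.5f}")  # 0.00282
# Both group-specific rates rose, but the overall rate fell.
```

So a time trend “within education group” mixes a real change in risk with a change in who is in the group, which is why such breakdowns need careful handling.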

Regarding their county-level analysis: it seems that what they find is that Republican vote share in 2016 is predictive of trends in white mortality rates. This is similar to other correlations that we’ve been seeing: in short, Trump did well (and Clinton poorly) among white voters in certain rural and low-income places in the country. I don’t see that this gives any direct evidence regarding status threat. Also I don’t think the following statement makes sense: “In the absence of an instrumental variable, or of a natural experiment, our study provides a conservative estimate of the effect of the Republican vote share by controlling for a host of economic and social factors.” First, “conservative” is a kind of weasel word that allows people to imply without evidence that true effects are higher than what they found; second, “effect of the Republican vote share” doesn’t make sense. A vote share doesn’t kill people. It doesn’t make sense to say that person X died because a greater percentage of people in person X’s county voted for Trump.
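The ecological problem above can be made concrete with a toy simulation (my construction, not the paper’s model): give each county a latent factor — call it rural/economic distress — that drives both its Republican vote share and its mortality trend. By construction, vote share has no effect on anyone’s risk of death, yet the two county-level quantities come out strongly correlated.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

counties = 500
# Latent county factor (e.g., rural/low-income) drives BOTH variables:
latent = [random.gauss(0, 1) for _ in range(counties)]
vote_share = [0.5 + 0.10 * z + random.gauss(0, 0.05) for z in latent]
mortality_trend = [0.002 * z + random.gauss(0, 0.001) for z in latent]

# A sizable county-level correlation appears even though, by
# construction, vote share affects no individual's death risk.
print(f"county-level correlation: {corr(vote_share, mortality_trend):.2f}")
```

Controlling for “a host of economic and social factors” only removes this confounding to the extent those controls capture the latent factor, which is exactly what cannot be checked from the correlation itself.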

Finally, they put this in italics: “For perhaps the first time, we are suggesting that a major population health phenomenon – a widespread one – cannot be explained by actual social or economic status disadvantage but instead is driven by perceived threat to status.” But I don’t see the evidence for it. They don’t supply any data on “perceived threat to status.” At least, I didn’t see anything in the data. So, sure, they can suggest what they want, but I don’t find it convincing.

All that said, I have general positive feelings about the linked paper, in the sense that they’re studying something worth looking into. Social scientists including myself spend lots of time on fun topics like golf putting and sumo wrestling, and this can be a great way to develop and understand research methods; but it’s also good for people to take a shot at more important problems, even if the data aren’t really there to address the questions we’d like to ask.

There should be a way for researchers to study these issues without feeling the need to exaggerate what they’ve found, as in this press release, which speaks of “a striking reversal [in mortality rate trends] among working-age whites, which seems to be driven principally by anxiety among whites about losing social status to Blacks,” without mentioning that (a) the trends go in opposite directions for men and women, and (b) their research offers no evidence that anything is being driven, principally or otherwise, by anxiety or social status.

P.S. I can understand my correspondent’s desire for anonymity here. A couple years ago I got blasted on twitter by a leading public health researcher for my response to Case and Deaton. He wrote that I had “scoffed at the Case/Deaton finding about U.S. life expectancy . . . Has he ever admitted he was wrong about that?” I sent him an email saying, “Whenever I am wrong in public, I always announce my error in public too. I’ve corrected four of my published papers and have corrected many errors or unclear points in my other writings. But I can only issue a correction if I know where I was wrong. Can you please explain where I was wrong regarding the work of Case and Deaton? I am not aware of any errors that I made in that regard. Thank you.” We did a few emails back and forth and at no time did he give any examples of where I’d “scoffed” or where I’d been wrong. He wrote that I spent most of my time “carping about compositional effects” and that my efforts “helped spread the idea that Case and Deaton were wrong, that there was nothing to see here, that it was all liberal whining about inequality, etc., etc.” When the facts get in the way of the story, shoot the messenger.