Don’t say your data “reveal quantum nature of human judgments.” Be precise and say your data are “consistent with a quantum-inspired model of survey responses.” Yes, then your paper might not appear in PNAS, but you’ll feel better about yourself in the morning.

This one came up in a blog comment by Carlos; it’s an article from PNAS (yeah, I know) called “Context effects produced by question orders reveal quantum nature of human judgments.” From the abstract:

In recent years, quantum probability theory has been used to explain a range of seemingly irrational human decision-making behaviors. The quantum models generally outperform traditional models in fitting human data, but both modeling approaches require optimizing parameter values. However, quantum theory makes a universal, nonparametric prediction for differing outcomes when two successive questions (e.g., attitude judgments) are asked in different orders. Quite remarkably, this prediction was strongly upheld in 70 national surveys carried out over the last decade (and in two laboratory experiments) and is not one derivable by any known cognitive constraints.

This set off a bunch of alarm bells:

1. “Universal, nonparametric prediction”: I’m always suspicious of claims of universality in psychology.

2. “Quite remarkably, this prediction was strongly upheld in 70 national surveys”: Quite remarkably, indeed. This just seems a bit too good to be true.

3. And the big thing . . . how can quantum theory make a prediction about survey responses? Quantum theory is about little particles and, indirectly, about big things made from little particles. For example, quantum theory explains, in some sense, the existence of rigid bodies such as tables, chairs, and billiard balls.

From reading the paper, it’s my impression that they’re not talking about quantum theory, as it’s usually understood in physics, at all. Rather, they’re talking about a statistical model for survey responses, a model which is inspired by analogy to certain rules of quantum mechanics. That’s fine—I’m on record as offering tentative support to this general line of research—I just want to be clear on what we’re talking about. I think it might be clearer to call these “quantum-inspired statistical models” rather than “quantum probability theory.”

As for the model itself: I took a quick look and it seems like it could make sense. It’s a latent-variable multidimensional model of attitudes, with the twist that whatever question was asked before could affect the salience of the different dimensions. The model makes a particular prediction which they call the QQ equality and which they claim is supported in their 70 surveys. I did not look at that evidential claim in detail. One thing that confuses me is why they are treating this QQ equality as evidence for their particular quantum-inspired model. Wouldn’t it be evidence for any model, quantum-inspired or otherwise, that makes this particular prediction?
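For what it’s worth, here’s a minimal sketch of what the QQ equality amounts to, as I read the paper: for two yes/no questions A and B, the total probability of giving the same answer to both questions (yes-yes plus no-no) should be the same whichever question is asked first. The function and the counts below are my own illustration, not the authors’ code or data:

```python
import numpy as np

def qq_statistic(counts_ab, counts_ba):
    """QQ equality check for a pair of yes/no questions A and B.

    counts_ab: 2x2 array of response counts when A is asked first
               (rows = answer to the first question: yes, no;
                cols = answer to the second question: yes, no).
    counts_ba: same layout, but with B asked first.

    The QQ equality predicts q = 0: the probability of answering the
    same way to both questions is unaffected by question order.
    """
    p_ab = counts_ab / counts_ab.sum()
    p_ba = counts_ba / counts_ba.sum()
    return (p_ab[0, 0] + p_ab[1, 1]) - (p_ba[0, 0] + p_ba[1, 1])

# Made-up counts for illustration (not from the paper's surveys):
counts_ab = np.array([[300, 150], [100, 450]])  # A asked first
counts_ba = np.array([[320,  90], [130, 460]])  # B asked first
print(qq_statistic(counts_ab, counts_ba))
```

Testing whether q is distinguishable from zero across 70 surveys is then an ordinary statistical exercise; nothing in the arithmetic above is specifically quantum.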

It’s not clear to me that the quantum-inspired nature of the model is what is relevant here, so I think the title of the paper is misleading.

Here it is again:

Context effects produced by question orders reveal quantum nature of human judgments

I think a more accurate title would be:

Context effects produced by question orders are consistent with a quantum-inspired model of survey responses

Here are the explanations for my corrections:

1. Changed “reveal” to “are consistent with” because the data are, at best, consistent with a particular model. This is not the same as revealing some aspect of nature.

2. Changed “quantum nature” to “quantum-inspired model” because, as discussed above, it’s not a quantum model, it’s only quantum-inspired; also, it’s just a particular model, it’s not a property of nature. If I were to fit a logistic regression to some test questions—that’s standard practice in psychometrics, it’s called the Rasch model (see the sketch after this list)—and the model were to fit the data well, it would not be correct for me to say that I’ve revealed the logistic nature of test taking.

3. Changed “human judgments” to “survey responses” because there’s nothing in the data about judgments; it’s all survey responses. It would be ok with me if they wanted to say “attitudes” instead. But “judgments” doesn’t seem quite right.
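To make point 2 concrete, here’s the Rasch model in one line: the probability that person i answers item j correctly is logistic(theta_i − b_j), where theta is ability and b is item difficulty. A good fit tells you the logistic curve is a serviceable description, not that test taking has a “logistic nature.” The numbers here are made up:

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: P(correct) = logistic(theta - b), where theta is
    the person's ability and b is the item's difficulty."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# A person of ability 1.0 facing items of difficulty 0.0 and 2.0:
print(rasch_prob(1.0, np.array([0.0, 2.0])))  # approx [0.73, 0.27]
```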

Anyway, there might be something there. Too bad about all the hype. I guess the justification for the hype is that, without the hype, the paper probably wouldn’t’ve been published in a tabloid; and without the tabloid credentials, maybe our blog readers would never have heard about this work, and then we wouldn’t’ve heard about it either.