Marc Hauser: Victim of statistics?

I have no idea; this is just a theory.

In the past, when disgraced primatologist Marc Hauser has come up in this space, it’s been because he “fabricated data, manipulated experimental results, and published falsified findings” (in the words of the Department of Health and Human Services, as quoted by Wikipedia), juxtaposed with the whole “Evilicious” thing.

My take on Hauser as a scientist has been that, if his work has value, it is in its theoretical contributions. It’s clear that Hauser’s ideas about quantitative research are all screwed up (that’s how you get behavior like this: “The committee painstakingly reconstructed the process of data analysis and determined that Hauser had changed values, causing the result to be statistically significant, an important criterion showing that findings are probably not due to chance.”), but he might be a wonderful qualitative researcher. Perhaps he constructed good theories based on his careful observations of monkey behavior.

But what about Hauser’s own behavior? What went wrong there? A few months ago I conjectured that Hauser was a victim of the “great man” theory of science. The great man theory, Harvard snobbery, and generic sexism combined when he analogized boring, data-crunching scientists to “schoolmarms.” On one hand, it’s horrible that someone with this sort of attitude and behavior had power and influence in a major educational institution. At the same time, it’s kinda sad that he was trapped in his macho, Edge Foundation ideology. If Hauser really was talented at qualitative observation and theorizing, it’s a pity that he couldn’t contextualize his strengths and weaknesses, rather than first disparaging quantitative researchers as “schoolmarms” and then turning around and faking his data. Qualitative theories were not enough; he had to rig his data too. The Great Man can do it all, right?

OK, that’s all well and good, but a recent exchange in comments led me to another thought. Here’s what I wrote:

Maybe Hauser had excellent qualitative understanding and was able to come up with excellent theories, and maybe it was just his statistical naivety that led him to expect every experiment to turn out just as predicted, which in turn motivated cheating.

We’ve talked about this before, that people want their theory to be something it can’t be; they want it to be a universal explanation that works in every example.

This is the sense in which Hauser was a victim of statistics. More precisely, he was a victim of the attitude that, if a theory is correct, it should work in every example, the fallacy that Tversky and Kahneman called belief in the law of small numbers. (We discussed an extreme example of this fallacy a few years ago.) Hauser was a victim of statistics in the way that Evel Knievel was a victim of gravity.
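To put a number on that attitude, here’s a minimal simulation sketch in Python. The effect size and sample size are my own illustrative assumptions, nothing from Hauser’s papers: the theory is true in every run, yet most individual experiments fail to reach statistical significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setup (my numbers, not Hauser's): the theory is true --
# treatment shifts the outcome by half a standard deviation -- but each
# experiment is small, with 20 subjects per group.
true_effect = 0.5
n_per_group = 20
n_experiments = 10_000

significant = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:
        significant += 1

print(f"Share of experiments reaching p < 0.05: {significant / n_experiments:.2f}")
# Prints roughly 0.33: two-thirds of these experiments "fail,"
# even though the theory behind them is exactly right.
```

A believer in the law of small numbers looks at that two-thirds failure rate and concludes that the data, not the expectation, must be wrong.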

What happens if (a) based on your qualitative understanding of the world, you feel that your theory is true, (b) you have a naive belief in the so-called law of small numbers, and (c) your data don’t support your theory (in the conventional sense of providing “statistically significant” evidence)? It’s natural, then, to move to (d) adjust your data to match the higher truth, and then (e) lie about it. OK, most researchers don’t go to steps (d) and (e), as they violate various norms of science, but you can see how such steps can seem to make sense.

Also, there are lots of incentives not to be honest about your data. If you’re honest and say something like, “We have this great theory, it makes qualitative sense, but our hard data show no statistical significance,” then I think it’s a lot, lot harder to get published in Science, Nature, PNAS, Psychological Science, or even a lower-ranked field journal.

Marc Hauser: victim of an unrealistic expectation that, if a theory has value, every experiment (or nearly every experiment) should confirm it. The problem for him was that 80% power was not just a slogan, not just a way to get grants. He really believed it.
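A quick back-of-the-envelope sketch of that last point (my arithmetic, nothing more): even if every study in a line of research genuinely had 80% power, a string of uniformly successful experiments would still be unlikely.

```python
from math import comb

power = 0.8    # the standard grant-application power target
n_studies = 5  # a hypothetical series of experiments

# Chance that all five reach significance, assuming independence:
print(f"P(5 of 5 significant) = {power**n_studies:.2f}")  # 0.33

# Full distribution of how many of the five "work":
for k in range(n_studies + 1):
    p_k = comb(n_studies, k) * power**k * (1 - power)**(n_studies - k)
    print(f"P({k} of 5 significant) = {p_k:.2f}")
```

So a lab whose experiments essentially always confirm the theory is reporting something that honest 80%-power research can’t deliver.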

But I have no idea; this is just a theory.

P.S. Why write about a former TED-talk / Harvard professor whose theories have now been forgotten? It’s the usual story. This particular Edge Foundation ubermensch may have left the scene, but I suspect the general modes of thinking are as much of a problem today as they were in 1971, when Tversky and Kahneman published that paper. One reason for focusing on extreme cases is that they are good stories; another reason is that they give a clue about how strong these cognitive biases can be. If belief in the law of small numbers is so strong that it can destroy an illustrious career . . . that’s a big deal.