What are my statistical principles?

Jared Harris writes:

I am not a statistician but am a longtime reader of your blog and have strong interests in most of your core subject matter, as well as scientific and social epistemology.

I’ve been trying for some time to piece together the broader implications of your specific comments, and have finally arrived at a perspective that seems implicit in a lot of your writing but deserves to be made more explicit. (Or if you’ve already made it explicit, I want to find out where!)

My sense is that many see statistics as essentially defensive: helping us *not* to believe things that are likely to be wrong. While this is clearly part of the story, it is not an adequate mission statement.

Your interests seem much wider: consider, for example, your advocacy of maximally informative graphs and of multilevel models. I’d just like to have a clearer and more explicit statement of the broad principles.

An attempted summary: Experimental design and analysis, including statistics, should help us learn as much as we can from our work:
– Frame and carry out experiments that help us learn as much as possible.
– Analyze the results of the experiments to learn as much as possible.

One obstacle to learning from experiments is the way we talk and think about experimental outcomes. We say an experiment succeeded or failed, but this framing is not aligned with maximizing learning. Naturally we want to minimize or hide failures, and this leads to the file drawer problem among many others. Conversely, we are inclined to maximize success, so we are motivated to produce and trumpet “successful” results even when they are uninformative.

We’d be better aligned if we judged experiments on whether they are informative or uninformative (a matter of degree). Negative results can be extremely informative, and the cost to the community of missing or suppressing them can be enormous because of the effort that others will waste. Negative results can also help delimit “negative space” and contribute to seeing important patterns.

I’m not at all experienced with experimental design, but I’d guess that designing experiments to be maximally informative would lead to a very different approach from designing experiments to have the best possible chance of yielding positive results, and could produce much more useful negative results.
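To make that contrast concrete, here is a minimal sketch of one standard way to score designs for informativeness, expected information gain. Everything in it is hypothetical: a toy linear-Gaussian dose-response model with made-up priors, noise level, and sample sizes, not anything from the letter or from this blog. For such a model the posterior covariance, and hence the information gain about the parameters, depends only on the design, so we can directly compare putting the whole budget at the single most promising dose (the choice that maximizes the chance of a headline “significant effect” there) against spreading the same budget across several doses.

```python
import numpy as np

# Toy dose-response model: y = a + b*x + Normal(0, sigma) noise.
# Prior on (a, b): independent standard normals (a made-up assumption).
sigma = 1.0
prior_cov = np.eye(2)

def posterior_cov(doses, n_per_dose):
    """For a linear-Gaussian model the posterior covariance of (a, b) is
    (prior_prec + X'X / sigma^2)^{-1}, regardless of the observed data."""
    X = np.array([[1.0, x] for x in doses for _ in range(n_per_dose)])
    precision = np.linalg.inv(prior_cov) + X.T @ X / sigma**2
    return np.linalg.inv(precision)

def info_gain(doses, n_per_dose):
    """Expected information gain (in nats) about (a, b):
    0.5 * log(det(prior_cov) / det(posterior_cov))."""
    post = posterior_cov(doses, n_per_dose)
    return 0.5 * np.log(np.linalg.det(prior_cov) / np.linalg.det(post))

# Same budget of 40 subjects, two designs:
# A: everyone at the dose where the effect should be largest (best chance
#    of a "positive" result at that one dose);
# B: the budget spread across four doses (learns about the whole curve).
print("Design A, one dose:   %.2f nats" % info_gain([1.0], 40))
print("Design B, four doses: %.2f nats" % info_gain([0.25, 0.5, 0.75, 1.0], 10))
```

In this toy setup, design B comes out more informative about the dose-response relation (about 2.6 vs. 2.2 nats), even though design A concentrates all of its power on detecting an effect at a single dose. That is the sense in which optimizing for informativeness and optimizing for the chance of a positive result can pull in different directions.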

This approach has some immediate normative implications:

One grave sin is wasting effort on uninformative experiments and analyses when we could have gotten informative outcomes, even negative ones. Design errors such as poor measurement and forking paths lead to uninformative results. This seems like a stronger position than just avoiding poorly grounded positive results.
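As a concrete illustration of how forking paths yield “positive” but uninformative results, here is a toy simulation (all sample sizes, counts, and thresholds are made up for illustration): data with no true effects anywhere, analyzed the way a motivated researcher might, trying several outcomes and several subgroups and stopping at the first p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def finds_something(n=50, n_outcomes=5, n_subgroups=4):
    """One null study: no true effects anywhere. Returns True if any
    outcome-by-subgroup comparison comes out 'significant' at p < 0.05."""
    group = rng.integers(0, 2, size=n)            # treatment vs. control
    subgroup = rng.integers(0, n_subgroups, size=n)
    ys = rng.normal(size=(n, n_outcomes))         # all outcomes are pure noise
    for j in range(n_outcomes):
        # The analyst's forking paths: full sample, then each subgroup in turn.
        for s in [None] + list(range(n_subgroups)):
            keep = np.ones(n, bool) if s is None else (subgroup == s)
            a = ys[keep & (group == 0), j]
            b = ys[keep & (group == 1), j]
            if len(a) > 1 and len(b) > 1 and stats.ttest_ind(a, b).pvalue < 0.05:
                return True
    return False

rate = np.mean([finds_something() for _ in range(2000)])
print(f"Share of null studies yielding a 'significant' finding: {rate:.0%}")
```

Each individual test has its nominal 5% false positive rate, but with five outcomes and five analysis paths per outcome, a large fraction of these pure-noise studies (far more than 5%) hand the analyst something “significant” to report, none of it informative.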

Another grave sin is suppressing informative results, whether negative or positive. The file drawer problem should be seen as a moral failure, and a partly collective one, because most disciplines and publishing venues share the bias against negative results.

I was going to respond to this with some statement of my statistical principles and priorities, but then I thought maybe all of you could make more sense out of this than I can. You tell me what you think my principles and priorities are, based on what you’ve read from me; then I’ll see what you say and react to it. It might be that what you think are my priorities are not my actual priorities. If so, that implies that some of what I’ve written has been misfocused, and it would be good for us to know that!