Conditioning on a statistical method as a “meta” version of conditioning on a statistical model

When I do applied statistics, I follow Bayesian workflow: construct a model, ride it hard, assess its implications, add more information, and so on. I have lots of doubt in my models, but when I'm fitting any particular model, I condition on it. The idea is that we take our models seriously, as that's the best way to learn from them.
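
To make that loop concrete, here's a minimal sketch in Python with numpy, using a made-up dataset and a deliberately too-simple conjugate normal model. Everything in it (the data, the priors, the fixed sigma) is my own illustrative assumption, not anything from the post or the talk discussed below:

```python
# A rough sketch of the construct / fit / check / expand loop described above,
# using only numpy and a toy conjugate normal model. The data and numbers are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.5, size=50)      # hypothetical data

# Step 1: condition on a simple model: y_i ~ Normal(mu, sigma), with sigma
# fixed at 1 (deliberately too small, so the check below has something to find)
# and prior mu ~ Normal(0, 10).
sigma, prior_mean, prior_sd = 1.0, 0.0, 10.0
post_prec = 1.0 / prior_sd**2 + len(y) / sigma**2
post_mean = (prior_mean / prior_sd**2 + y.sum() / sigma**2) / post_prec
post_sd = post_prec ** -0.5

# Step 2: take the fitted model seriously and assess its implications with a
# posterior predictive check on the spread of the data.
mu_draws = rng.normal(post_mean, post_sd, size=1000)
y_rep = rng.normal(mu_draws[:, None], sigma, size=(1000, len(y)))
print("observed sd:", round(y.std(), 2),
      "| replicated sd:", round(y_rep.std(axis=1).mean(), 2))

# Step 3: the replicated datasets come out too narrow, so expand the model
# (for example, give sigma its own prior) and go around the loop again.
```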

When I talk about statistical methods, though, I'm much more tentative or pluralistic: I use Bayesian inference, but I'm wary of its pitfalls (for example, here, here, and here), and I'm always looking over my shoulder.

I was thinking about this because I recently heard a talk by a Bayesian fundamentalist—one of those people (in this case, a physicist) who was selling the entire Bayesian approach, all the way down to the use of Bayes factors for comparing models. OK, I don’t like Bayes factors, but the larger point is that I was a little bit put off by what seemed to be evangelism, the proffered idea that Bayes is dominant.
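
For readers who haven't run into the term: the Bayes factor comparing two models is the ratio of their marginal likelihoods, each of which integrates the likelihood over that model's prior. This is just the standard definition, included here for reference:

$$
\mathrm{BF}_{12} = \frac{p(y \mid M_1)}{p(y \mid M_2)}, \qquad
p(y \mid M_k) = \int p(y \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k .
$$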

But then, a while afterward, I reflected that this presenter has an attitude about statistical methods that I have about statistical models. His attitude is to take the method (Bayes, all the way through Bayes factors) as given and push it as far as possible. Which is what I do with models. The only difference is that my thinking is at the scale of months, learning from fitted models, while he's thinking at the scale of decades, his entire career. I guess both perspectives are legitimate.