“Are Relational Inferences from Crowdsourced and Opt-in Samples Generalizable? Comparing Criminal Justice Attitudes in the GSS and Five Online Samples”

https://statmodeling.stat.columbia.edu/2020/03/15/are-relational-inferences-from-crowdsourced-and-opt-in-samples-generalizable-comparing-criminal-justice-attitudes-in-the-gss-and-five-online-samples/

Justin Pickett writes:

You’ve blogged a good bit on MTurk, weighting, and model-based inference. Drawing heavily on your work (Gelman, 2007; Gelman and Carlin, 2002; Wang et al., 2015), Andrew Thompson and I [Pickett] just published a study that largely confirms your concerns about MTurk (and opt-in samples) but also emphasizes the promise of model-based adjustments. The article focuses on the bias in regression coefficients that results when selection is a collider variable. It attempts to pull together insights from Gelman (2007), Solon et al. (2015), and Winship and Radbill (1994) in one place and apply them to online sampling.
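The collider problem Pickett describes can be illustrated with a small simulation (a hedged sketch, not the paper's analysis; all variable names and the selection rule below are hypothetical). Two independently generated variables become spuriously associated once the analysis is restricted to cases selected into the sample based on both of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data: x (say, a predictor) and y (say, an attitude
# outcome) are generated independently, so the true regression
# slope of y on x is zero in the full population.
x = rng.normal(size=n)
y = rng.normal(size=n)

# Selection into the sample depends on BOTH x and y, making
# selection a collider. Analyzing only the selected cases
# (conditioning on the collider) induces a spurious association.
selected = (x + y + rng.normal(size=n)) > 0

slope_full = np.polyfit(x, y, 1)[0]              # near zero
slope_selected = np.polyfit(x[selected], y[selected], 1)[0]  # biased negative

print(f"full-sample slope:     {slope_full:.3f}")
print(f"selected-sample slope: {slope_selected:.3f}")
```

The selected-sample slope comes out clearly negative even though x and y are unrelated in the population, which is the kind of coefficient bias the study examines when opt-in participation depends on the variables being modeled.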

Coincidentally, a couple of days earlier Paul Alper sent an email pointing us to this news article by Andy Newman about Mechanical Turk participants.