Are Population-Based Survey Experiments Truly Unparalleled? A Test Using Parallel Designs
Sample size: 659
Field period: 12/18/2012-02/25/2013
Much individual-level experimental research employs convenience samples. A familiar critique of this approach concerns how samples and features of experimental settings constrain the generalizability of findings. Increasing attention has therefore been placed on population-based survey experiments because they are representative of a national population. Are the characteristics of research participants and settings consequential for inferences about phenomena like political communication? To answer this question, we analyze and compare effect estimates from three framing experiments executed in parallel across six samples: students and nonstudent adults in a lab, Amazon Mechanical Turk workers, online opt-in recruits (from Craigslist and Facebook), exit-poll respondents, and a nationally representative sample (TESS). Additionally, we explore the utility of a matching technique for estimating population-representative causal effects from our convenience samples.
How does the sample used in an experimental study affect the generalizability of its results to the national population? More specifically, does inference about political issue framing effects depend on the subject pool on which a framing study is conducted?
The study involved three separate experiments implemented in parallel across six samples (TESS provided the nationally representative sample). The first experiment focused on the salient issue of student loan forgiveness: a control group received basic information about a policy proposal, while a treatment group received the same information plus an argument against the proposal framed in terms of individual responsibility.
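A framing effect in a design like this is conventionally estimated as the difference in mean policy support between the treatment and control groups. The following sketch illustrates that calculation on simulated 7-point support scores; the data, effect size, and variable names are all invented for illustration, not taken from the study.

```python
import random
import statistics

random.seed(42)

# Simulated 7-point policy-support responses (hypothetical data,
# not the study's actual measurements).
control = [random.gauss(4.5, 1.2) for _ in range(300)]
# Counter-frame assumed to lower mean support slightly.
treated = [random.gauss(4.0, 1.2) for _ in range(300)]

# Framing effect: difference in mean support (treatment - control).
effect = statistics.mean(treated) - statistics.mean(control)

# Standard error of the difference in means (unequal-variance form),
# for a rough sense of estimation uncertainty.
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"estimated framing effect: {effect:.2f} (SE {se:.2f})")
```

Running the same estimator on each of the six samples is what makes the parallel design informative: the quantity being compared across samples is identical.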
The second experiment replicates a recent study of the DREAM Act (see Druckman, Peterson, and Slothuus 2013). In addition to a control group that received only basic information about the policy proposal, treatment groups received information about the degree of partisan polarization on the issue and an argument either in favor of or opposed to the DREAM Act.
The third experiment examines tolerance of a hate group rally and involves two manipulations: whether or not the description of the rally emphasized free speech concerns, and the location of the rally (local or distant).
Support for student loan forgiveness, support for the DREAM Act, and tolerance of a hate group rally.
In all three experiments, we find statistically significant effects of framing on policy support, in line with expectations. The results of our cross-sample comparisons suggest that convenience samples provide an adequate source of experimental data, despite comprising significantly different types of respondents, and we are able to use convenience-sample data to extrapolate effects quite accurately to the nationally representative sample provided by TESS. The results bolster the value of much extant experimental evidence collected from convenience samples, while also highlighting the broad and consistent effects of message framing on political opinions.
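The extrapolation described here relies on a matching technique; a simpler stand-in that conveys the underlying idea is post-stratification, in which effects are estimated within demographic cells of the convenience sample and then reweighted by each cell's share of the target population. The sketch below uses invented respondent records and invented population shares, not the study's data or actual Census figures.

```python
from collections import defaultdict

# Hypothetical convenience-sample records: (demographic cell, treated?, outcome).
sample = [
    ("college", True, 4.1), ("college", False, 4.8),
    ("college", True, 3.9), ("college", False, 4.6),
    ("no_college", True, 4.4), ("no_college", False, 4.7),
    ("no_college", True, 4.2), ("no_college", False, 4.9),
]

# Assumed population shares for the same cells (illustrative numbers only).
pop_share = {"college": 0.30, "no_college": 0.70}

# Group outcomes by cell and treatment status.
cells = defaultdict(lambda: {"t": [], "c": []})
for cell, treated, y in sample:
    cells[cell]["t" if treated else "c"].append(y)

def mean(xs):
    return sum(xs) / len(xs)

# Cell-specific effects: mean(treated) - mean(control) within each cell.
cell_effect = {cell: mean(g["t"]) - mean(g["c"]) for cell, g in cells.items()}

# Population-representative effect: cell effects weighted by population shares.
pop_effect = sum(pop_share[cell] * eff for cell, eff in cell_effect.items())
print(f"population-weighted effect: {pop_effect:.3f}")
```

The key assumption is that treatment effects vary across cells only along the stratifying variables, so that reweighting a convenience sample's cell-level effects can recover the effect in the target population.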
The question of how different experimental samples affect causal inferences in the social sciences is an important one. Much social science research employs experimental methods with a variety of samples, yet there are few systematic assessments of how these samples shape the inferences researchers draw. Although focused on one type of experimental effect (political communication and framing effects), this study incorporates six of the most common samples used in social science experimentation and thus presents one of the most comprehensive analyses of the consequences of sample choice for estimates of treatment effects. The results suggest that, at times, convenience samples can provide an adequate alternative to population-based survey experiments. We suggest that future research continue along this path and further isolate the conditions under which different samples produce similar and dissimilar effects. In doing so, we can better evaluate the quality and generalizability of experimental research, as well as provide a guide for the implementation and funding of social science research.
Leeper, Thomas J., and Kevin J. Mullinix. 2014. "To Whom, with What Effect?: Parallel Experiments on Framing." Paper presented at the Annual Meeting of the Midwest Political Science Association, Chicago, IL.