Vote Over-Reporting: A Test of the Social Desirability Hypothesis

Download data and study materials from OSF

Principal investigators:

Allyson L. Holbrook

University of Illinois at Chicago

Email: allyson@uic.edu

Homepage: http://www.uic.edu/cuppa/pa/faculty_holbrook.htm

Jon A. Krosnick

Stanford University

Email: krosnick@stanford.edu

Homepage: http://comm.stanford.edu/faculty-krosnick/


Sample size: 6094

Field period: 11/15/2002-12/06/2002

Abstract

Our experimental study tests the widely presumed notion that the tendency to present oneself in an admirable light (social desirability bias) leads respondents to over-report turnout in surveys. In other words, surveys are presumed to consistently overestimate voter turnout because respondents feel that voting is socially desirable and falsely report voting when they did not. Most previous attempts to reduce social desirability response bias have not reduced over-reporting. Here, we test two methods shown to reduce social desirability response bias in other domains, the list method and randomized response, in an experiment on vote over-reporting. Respondents were randomly assigned to one of three conditions: direct reports of turnout, the list method, and randomized response. Our inferences about the alternate methods' power to reduce over-reporting come from comparisons of the three experimental groups.

Hypotheses

H1: If vote over-reporting is at least partly the result of social desirability response bias, estimates of turnout will be lower when assessed by the list method than when assessed by direct report.
H2: If vote over-reporting is at least partly the result of social desirability response bias, estimates of turnout will be lower when measured via randomized response than when measured via direct report.

Experimental Manipulations

Shortly after the 2002 national elections, respondents' turnout in the election was assessed via one of three methods (respondents were randomly assigned to condition):
(1) direct reports of turnout
(2) the list method
(3) randomized response
The first method simply asked respondents whether or not they had voted. Half of respondents in the list method group were randomly assigned to report how many of a list of three behaviors (not including voter turnout) they had performed. The other half of list method respondents were asked to report how many of a list of four behaviors (the same three behaviors plus voting in the 2002 election) they had performed. An aggregate estimate of turnout comes from comparing the mean number of behaviors reported by the two groups, with the difference attributable to turnout. Respondents assigned to the randomized response condition were asked to flip a coin. All respondents were provided the options "yes" and "no." If the coin came up heads, they were asked to report whether they had voted in the 2002 election. If the coin came up tails, they were instructed to choose "no."
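The two indirect designs above imply simple aggregate estimators. As a minimal sketch (the function names and any inputs other than the study's design parameters are illustrative, not from the study materials):

```python
def list_method_estimate(mean_short_list, mean_long_list):
    """List method: the turnout estimate is the difference between the
    mean count of behaviors reported on the 4-item list (which includes
    voting) and the mean count on the 3-item list (which excludes it)."""
    return mean_long_list - mean_short_list

def randomized_response_estimate(prop_yes, p_heads=0.5):
    """Randomized response, coin-flip variant: tails-flippers must answer
    'no', so P(yes) = p_heads * P(voted). Solving for turnout:
    P(voted) = P(yes) / p_heads."""
    return prop_yes / p_heads

# Hypothetical inputs, purely to show the arithmetic:
print(list_method_estimate(2.0, 2.5))        # difference of means -> 0.5
print(randomized_response_estimate(0.25))    # 0.25 / 0.5 -> 0.5
```

Both functions return a population-level turnout proportion; neither design yields an individual-level turnout measure, which is the price of concealing each respondent's answer.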

Outcomes

Estimates of voter turnout.

Summary of Results

Direct report
Fifty-nine percent of respondents in this condition reported that they had voted, and this figure served as the baseline for comparison with the other two conditions.
List Method
Respondents in the three-item list condition reported that they had performed 2.16 of the behaviors on average, and respondents in the four-item list condition reported that they had performed 2.74 of the behaviors on average. Thus, the aggregate estimate of turnout from this condition was 58%, which is nearly identical to the proportion of respondents who said they voted in response to a direct question.
Randomized Response
Fifty-one percent of respondents in this condition chose the "yes" response option. However, at least half of respondents should have responded "no," because approximately half should have flipped tails and been instructed to answer "no" regardless of whether they voted. The fact that fewer than 50% of respondents answered "no" suggests that respondents did not successfully implement the randomized response procedure.
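To see the failure numerically: under the coin-flip design, P("yes") = 0.5 × P(voted), so the observed "yes" proportion implies an impossible turnout estimate (this back-of-the-envelope calculation is ours, derived from the figures above):

```python
# Implied turnout under the randomized response design:
# P("yes") = 0.5 * P(voted)  =>  P(voted) = P("yes") / 0.5
prop_yes = 0.51
implied_turnout = prop_yes / 0.5
print(implied_turnout)  # prints 1.02 -- turnout above 100% is impossible,
# consistent with respondents not following the coin-flip instructions
```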

Conclusions

We tested two methods for reducing social desirability response bias to determine whether they would reduce estimates of voter turnout. If either had led to lower turnout estimates, it would have suggested that over-reporting of voter turnout is, at least in part, due to social desirability bias. Neither method led to reduced estimates of turnout, but the lessons to be learned from each are different.
The list method did not reduce estimates of voter turnout. The estimate of turnout from the list method is nearly identical to that from the direct report condition, providing the strongest evidence to date that social desirability concerns are not a cause of vote over-reporting.
The randomized response method appeared not to work in this format. The proportions of respondents who reported "no" suggest that respondents did not successfully implement the randomized response technique.