Download data and study materials from OSF
University of Michigan
Sample size: 2078
Field period: 03/27/2016-06/30/2016
People pick and choose which polls to believe based, in large part, on the favorability of the polls’ results. Yet how these biases play out within the competitive information ecosystem of horserace coverage remains to be investigated. First, many election polls yield consistent or contradictory results even when conducted within the same time frame. Second, polls vary in methodological quality: some are methodologically robust, others poor. Third, even identical poll results can be interpreted in completely different ways. Media coverage of polls is heavily filtered through pundit commentary: pundits are mostly experts commenting on the methodological aspects of specific polls, experts critiquing polls in general, or partisans engaging in subjective (non-methodological) discrediting of unfavorable poll results. All of these features of polls, and of how they are interpreted in news reports, could either mitigate or bolster individuals’ biases in processing polling evidence. Drawing from the literatures on motivated reasoning, corrective attempts/fact-checking, and opinionated news, the current study leverages a survey experiment designed to tap these dynamic and competitive aspects of horserace coverage in the 2016 election. We examine individuals’ perceived accuracy of polls, as well as their subsequent electoral predictions, by manipulating multiple polls’ results, their methodological quality, and partisan and expert interpretations of the findings in hypothetical news reports.
See the attached Table of Experimental Conditions for the hypotheses and research questions below.
The 12 proposed conditions will allow us to test eight substantive hypotheses and two research questions. Our first set of hypotheses concerns the presence of motivated reasoning and how it depends on poll quality. First, we expect to find evidence that motivated reasoning operates when respondents simultaneously encounter conflicting poll results (H1). We plan to test this by examining differences in perceptions of the polls in condition C1 by partisanship (see Table 1). Next, we expect partisans to differ from one another in the credibility they ascribe to polls with consistent results (H2; C2 and C3 by partisanship). These same conditions will allow us to answer our first research question: do people recognize methodological quality differences between polls with consistent results? (RQ1; C2 and C3). When polls are inconsistent, however, we expect individuals to be responsive to methodological quality (H3; C4 and C5). Whether variation in methodological quality enhances or mitigates motivational biases when polls conflict is a key question (RQ2; C4 and C5 by partisanship vs. C1 by partisanship). Compared to conditions where poll results are consistent, we expect inconsistent results to trigger stronger methods-based assessments (H4; C4 and C5 by partisanship vs. C2 and C3 by partisanship).
Our second set of hypotheses concerns how commentary on polls might further moderate poll evaluations. Because out-partisans can latch onto expert critiques to dismiss disliked results, we expect expert commentary to enhance motivated reasoning for polls with consistent findings (H5; C6 and C7 by partisanship vs. C2 and C3 by partisanship). In contrast, we expect expert commentary to reduce motivational biases when polls’ results are inconsistent, because it will serve as an informational corrective (H6; C8 and C9 by partisanship vs. C4 and C5 by partisanship). Partisan commentary, on the other hand, should enhance motivated reasoning (H7; C10 and C11 by partisanship vs. C1). Finally, we included an additional condition in which an expert critiques conflicting polls that are both high quality. In this condition, we expect the effects of motivational biases to be stronger than in a condition without expert commentary (H8; C12 vs. C1). This final condition is also important for ecological validity, as the entire polling industry has increasingly come under fire.
Across all conditions, we expect to replicate earlier studies’ findings that more politically aware respondents will be the most susceptible to motivational biases; we plan to use education and party strength to test this. Beyond respondents’ perceptions of poll accuracy, we will also examine whether our manipulations alter respondents’ perceptions of the likely election outcome.
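As an illustration only (not the authors’ analysis code), the H1-style comparison above amounts to a two-sample test on the 7-point comparative-accuracy item, split by partisanship, within condition C1. The sketch below simulates such data and computes a Welch t statistic; the variable names, effect sizes, and simulated ratings are all hypothetical.

```python
import random
import statistics


def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se


random.seed(2016)

# Simulated 7-point ratings (1 = first poll much more accurate,
# 7 = second poll much more accurate). Suppose the first poll favors
# Clinton: motivated reasoning (H1) implies Democrats score lower
# (pro-first-poll) than Republicans in the conflicting-polls condition.
dems = [max(1, min(7, round(random.gauss(3.0, 1.2)))) for _ in range(300)]
reps = [max(1, min(7, round(random.gauss(5.0, 1.2)))) for _ in range(300)]

t = welch_t(reps, dems)
print(f"Welch t (Republicans - Democrats): {t:.2f}")
```

A positive t here would indicate the partisan gap in perceived accuracy that H1 predicts; the actual study would estimate this within each experimental condition and compare gaps across conditions.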
1. Consistent (vs inconsistent) Results
2. Consistent (vs inconsistent) Methodological Quality
3. Presence (vs absence) of Corrective Expert Commentary
4. Presence (vs absence) of General Polling Critique
5. Presence (vs absence) of Partisan Commentary
Comparing the two polls directly, which poll do you think is more accurate in representing the public support for the likely candidates in this election?
(1) The first poll (KnowPolitics) is much more accurate than the second one (Public-Metrics)
(2) The first poll (KnowPolitics) is somewhat more accurate than the second one (Public-Metrics)
(3) The first poll (KnowPolitics) is a little more accurate than the second one (Public-Metrics)
(4) Neither poll is more accurate than the other poll
(5) The second poll (Public-Metrics) is a little more accurate than the first one (KnowPolitics)
(6) The second poll (Public-Metrics) is somewhat more accurate than the first one (KnowPolitics)
(7) The second poll (Public-Metrics) is much more accurate than the first one (KnowPolitics)
If the election were held tomorrow and it was between Hillary Clinton and Donald Trump, which candidate do you think would win and become the next President of the U.S.?
(1) Clinton is much more likely to win
(2) Clinton is somewhat more likely to win
(3) Clinton is a little more likely to win
(4) Both candidates are equally likely to win
(5) Trump is a little more likely to win
(6) Trump is somewhat more likely to win
(7) Trump is much more likely to win
This study is preregistered at EGAP: Evidence in Governance and Politics (ID: 20160629AA).