Download data and study materials from OSF
Sciences Po Paris
Texas A&M University
Temple University and Focus for Democracy
College of William & Mary
Texas A&M University
Sample size: 803
Field period: 04/02/2020-05/06/2021
The core experiment aimed to measure the extent to which subjects found counter-attitudinal and/or explicitly partisan messages to be credible and persuasive. The design was a straightforward 2×2 factorial: the social media post provided was either liberal or conservative, and the argument it made was either explicitly partisan or not.
For the purpose of our experiment, we simulated a series of posts on the social media platform Twitter. Each respondent was presented with four posts ("tweets"), and randomization was constrained to ensure that each subject observed exactly one message from each cell of the 2×2 matrix. To further strengthen the generalizability of our study, we created liberal and conservative tweets on four topics salient to both adults and teenagers: school prayer, minimum wage, police shootings, and marijuana legalization. Implicitly partisan tweets made no reference to a political party, while explicitly partisan tweets connected liberal tweets to the Democratic Party and conservative tweets to the Republican Party.
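The constrained randomization described above can be sketched as follows. This is a hypothetical illustration, not the authors' actual assignment code: the function name `assign_tweets` and the cell labels are invented for the example, and the real study may have implemented the constraint differently.

```python
import random

# The four issue topics named in the design.
TOPICS = ["school prayer", "minimum wage", "police shootings",
          "marijuana legalization"]

# The four cells of the 2x2 (ideology x partisanship) matrix.
CELLS = [(ideology, partisanship)
         for ideology in ("liberal", "conservative")
         for partisanship in ("explicitly partisan", "implicitly partisan")]

def assign_tweets(rng=random):
    """Assign one topic to each cell of the 2x2 design, so a subject
    sees exactly one tweet per cell, with topics shuffled at random."""
    topics = TOPICS[:]
    rng.shuffle(topics)
    return [(topic, ideology, partisanship)
            for topic, (ideology, partisanship) in zip(topics, CELLS)]
```

Under this scheme every subject sees all four topics and all four treatment cells, but which topic lands in which cell varies randomly across subjects.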
We asked subjects their general attitudes about the four issue areas at the beginning of the survey. Subjects were asked their level of agreement on a 4-point Likert scale (strongly agree; somewhat agree; somewhat disagree; strongly disagree) with the following four statements:
If the policy position randomly assigned to be expressed in the tweet agreed with the respondent's preference, it was coded as pro-attitudinal. If the tweet espoused the position contrary to the respondent's preference, it was coded as counter-attitudinal. This coding allows the analysis to highlight how respondents evaluate content they disagree (or agree) with, rather than conditioning on ideological match per se.
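The coding rule above reduces to a simple comparison. In this hypothetical sketch, both the tweet's assigned position and the respondent's pre-treatment preference are assumed to have already been collapsed to "liberal" or "conservative"; the function name is invented for illustration.

```python
def code_attitudinal(tweet_position, respondent_position):
    """Code a tweet as pro- or counter-attitudinal for one respondent.

    Both arguments are assumed to be "liberal" or "conservative":
    the tweet's randomly assigned position and the respondent's
    pre-treatment issue preference, respectively.
    """
    if tweet_position == respondent_position:
        return "pro-attitudinal"
    return "counter-attitudinal"
```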
The second add-on experiment varied the extent to which a tweet was shared, liked, and referenced in comments. As mentioned in the literature review, the extent to which other people engaged with the content could influence how people gauge the credibility of online opinions. To generate believable engagement numbers, one of the authors sampled 500 tweets in his feed and calculated the mean and standard deviation for likes, retweets, and comments among the 90th to 95th percentiles of engagement. From these distributions, four “high engagement” profiles were randomly generated and randomly appended to tweets.
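The profile-generation step can be sketched as draws from per-metric normal distributions. The means and standard deviations below are placeholder values for illustration only; the authors' actual parameters were computed from the 90th–95th percentile of the 500 sampled tweets and are not reported here.

```python
import random

# Placeholder (mean, standard deviation) pairs per engagement metric.
# NOT the authors' actual values, which came from sampled tweets.
ENGAGEMENT_PARAMS = {
    "likes": (850, 200),
    "retweets": (210, 60),
    "comments": (95, 30),
}

def high_engagement_profile(rng=random):
    """Draw one plausible 'high engagement' count per metric,
    truncating draws at zero since counts cannot be negative."""
    return {metric: max(0, round(rng.gauss(mean, sd)))
            for metric, (mean, sd) in ENGAGEMENT_PARAMS.items()}
```

Generating four such profiles and appending them at random to tweets mirrors the procedure described above.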
Our primary outcome variable in the experiment was the subject’s perception of the credibility of the tweet. We assessed this using two questions, both on an ordinal (1-5) scale. We first asked, “How accurate is the argument in this tweet?” (1 = not at all accurate, 2, 3, 4, 5= very accurate). We then asked “How convincing is the argument in this tweet?” (1 = not at all convincing, 2, 3, 4, 5= very convincing). These adjectives (accurate, convincing) were selected in conjunction with representatives from NORC who specialize in reading-level appropriate survey question wording. To construct our dependent variable—credibility—we took the average of the subjects’ responses to the two questions.
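The construction of the credibility index is a simple average of the two items. A minimal sketch, with an invented function name and basic range checking:

```python
def credibility(accuracy, convincing):
    """Average the two 1-5 items (accuracy, convincingness) into the
    credibility index used as the primary dependent variable."""
    for response in (accuracy, convincing):
        if not 1 <= response <= 5:
            raise ValueError("responses must fall on the 1-5 scale")
    return (accuracy + convincing) / 2
```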
We also asked subjects a question related to a secondary outcome of interest: persuasiveness. Here, we asked subjects to report their attitude on the policy issue, allowing us to examine the difference in their pre-stimulus versus post-stimulus responses.
We find mixed support for our preregistered hypotheses (H1-H6). First, while the effect posited in Hypothesis 1 is in the expected direction, whether a tweet cites a reliable news source does not meaningfully (0.004 for adults and 0.030 for teenagers) or significantly (p = 0.906 for adults and p = 0.588 for teenagers) affect credibility assessments among adults or teenagers. Ideological messages that cite unknown Twitter users are assessed as no less credible than ideological messages that cite the AP wire service, Reuters, UPI, or USA Today. Second, we find strong support for Hypothesis 2. Counter-attitudinal messages are perceived as substantially less credible than pro-attitudinal messages by adults (-1.123, which is 90 percent of a standard deviation in the credibility outcome) and teenagers (-1.205, which is 97 percent of a standard deviation in the credibility outcome). The sizes of these effects are statistically indistinguishable across the adult and teenager samples.
Third, providing mixed support for Hypotheses 3 and 4, we find that adults and teenagers assess partisan messages as less credible than non-partisan messages. However, there is not a significant interactive effect when adults or teenagers receive partisan and counter-attitudinal messages (p = 0.283 and p = 0.562, respectively). Partisan cues lead to lower credibility assessments regardless of whether the message is counter-attitudinal or pro-attitudinal. Counter-attitudinal messages that include a party cue receive slightly lower credibility assessments (-1.257 for adults and -1.289 for teenagers) than counter-attitudinal messages without a party cue (-1.123 for adults and -1.205 for teenagers). There are not meaningful differences in the effect of party cues on adult or teenager credibility assessments, contrary to our expectation that teenagers would evaluate partisan messages as less credible than adults would.
The effects (or lack thereof) of high engagement are less ambiguous. For both the adult and teenager samples, tweet popularity did not improve the perceived credibility of a tweet (-0.022 and 0.048, respectively). Thus, Hypotheses 5 and 6 are not supported by these data. This null finding is reassuring because it suggests that while high-engagement tweets may be more likely to be promoted by algorithms and seen, users will not find them more credible on the basis of high engagement metrics alone.
We observe similar findings when respondent political attitudes serve as our dependent variable. A positive effect means the treatment moves respondents closer to the position of the ideological message; a negative effect means the treatment moves respondents further away. Whether a tweet cites a mainstream news source does not affect respondent political attitudes. Counter-attitudinal messages move respondents in the opposite direction of the tweet's ideological message. Messages that are both counter-attitudinal and partisan have a larger effect than messages that are only counter-attitudinal; however, this difference is statistically significant only for teenagers (p<0.002). In contrast to the analyses with credibility assessments as the dependent variable, party cues alone have no effect on political attitudes. In line with the previous analyses, a highly engaged tweet is no more likely to move political attitudes toward or away from the tweet's ideological message. Overall, adults' and teenagers' political attitudes are affected by message cues in mostly the same way.
Arceneaux, Kevin, Johanna Dunaway, David Nickerson, Jaime Settle, and Spencer Goidel. "Political Cue Taking on Social Media Among Teens." American Political Science Association, September 2022.
Arceneaux, Kevin, Johanna Dunaway, David Nickerson, Jaime Settle, and Spencer Goidel. "Political Cue Taking on Social Media Among Teens." American Political Science Association, October 2021.