All Articles
Misinformation interventions are common, divisive, and poorly understood
Emily Saltz, Soubhik Barari, Claire Leibowicz and Claire Wardle
Social media platforms label, remove, or otherwise intervene on thousands of posts containing misleading or inaccurate information every day. Who encounters these interventions, and how do they react? A demographically representative survey of 1,207 Americans reveals that 49% have been exposed to some form of online misinformation intervention.
Review of social science research on the impact of countermeasures against influence operations
Laura Courchesne, Julia Ilhardt and Jacob N. Shapiro
Despite ongoing discussion of the need for increased regulation and oversight of social media, and debate over how far the platforms themselves should be responsible for containing misinformation, there is little consensus on which interventions actually work against influence operations and disinformation campaigns.
Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform
Zeve Sanderson, Megan A. Brown, Richard Bonneau, Jonathan Nagler and Joshua A. Tucker
We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels.
Happiness and surprise are associated with worse truth discernment of COVID-19 headlines among social media users in Nigeria
Leah R. Rosenzweig, Bence Bago, Adam J. Berinsky and David G. Rand
Do the emotions we experience after reading headlines help us discern true from false information, or do they cloud our judgement? Understanding whether emotions are associated with distinguishing truth from fiction and with sharing information has implications for interventions designed to curb the spread of misinformation.

Research note: This photograph has been altered: Testing the effectiveness of image forensic labeling on news image credibility
Cuihua Shen, Mona Kasra and James F. O’Brien
Despite the ubiquity of images and videos in online news environments, most existing research on misinformation and its correction focuses solely on textual misinformation. Little is known about how ordinary users evaluate fake or manipulated images, or about the most effective ways to label and correct such falsehoods.
Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online
Ziv Epstein, Adam J. Berinsky, Rocky Cole, Andrew Gully, Gordon Pennycook and David G. Rand
Recent research suggests that shifting users’ attention to accuracy increases the quality of news they subsequently share online. Here we help develop this initial observation into a suite of deployable interventions for practitioners. We ask (i) how prior results generalize to other approaches for prompting users to consider accuracy, and (ii) for whom these prompts are more versus less effective.
Source alerts can reduce the harms of foreign disinformation
Jason Ross Arnold, Alexandra Reckendorf and Amanda L. Wintersieck
Social media companies have begun to use content-based alerts in their efforts to combat mis- and disinformation, including fact-check corrections and warnings of possible falsity, such as “This claim about election fraud is disputed.” Another harm reduction tool, source alerts, can be effective when a hidden foreign hand is known or suspected.
How COVID drove the evolution of fact-checking
Samikshya Siwakoti, Kamya Yadav, Nicola Bariletto, Luca Zanotti, Ulas Erdogdu and Jacob N. Shapiro
With the outbreak of the coronavirus pandemic came a flood of novel misinformation. Ranging from harmless false cures to dangerous rhetoric targeting minorities, coronavirus-related misinformation spread quickly wherever the virus itself did. Fact-checking organizations around the world took up the charge against misinformation, essentially crowdsourcing the task of debunking false narratives.

Research note: Likes, sarcasm and politics: Youth responses to a platform-initiated media literacy campaign on social media
Ioana Literat, Abubakr Abdelbagi, Nicola YL Law, Marcus Y-Y Cheung and Rongwei Tang
To better understand youth attitudes towards media literacy education on social media, and the opportunities and challenges inherent in such initiatives, we conducted a large-scale analysis of user responses to a recent media literacy campaign on TikTok. We found that reactions to the campaign were mixed, and highly political in nature.