All Articles

Research Note

Feedback and education improve human detection of image manipulation on social media

Adnan Hoq, Matthew J. Facciani and Tim Weninger

This study investigates the impact of educational interventions and feedback on users’ ability to detect manipulated images on social media, addressing a gap in research that has primarily focused on algorithmic approaches. In a pre-registered randomized controlled experiment, we found that feedback and educational content significantly improved participants’ ability to detect manipulated images on social media.


How spammers and scammers leverage AI-generated images on Facebook for audience growth

Renée DiResta and Josh A. Goldstein

Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false information into political discourse. We show that spammers and scammers—seemingly motivated by profit or clout, not ideology—are already using AI-generated images to gain significant traction on Facebook.


How do social media users and journalists express concerns about social media misinformation? A computational analysis

Jianing Li and Michael W. Wagner

This article describes partisan-based, accuracy-based, and action-based discussions through which U.S. social media users and journalists express concerns about social media misinformation. While platform policy stands out as the most highly discussed topic by both social media users and journalists, much of it is cast through a party politics lens.


Who reports witnessing and performing corrections on social media in the United States, United Kingdom, Canada, and France?

Rongwei Tang, Emily K. Vraga, Leticia Bode and Shelley Boulianne

Observed corrections of misinformation on social media can encourage more accurate beliefs, but for these benefits to occur, corrections must happen. By exploring people’s perceptions of witnessing and performing corrections on social media, we find that many people say they observe and perform corrections across the United States, the United Kingdom, Canada, and France.


Journalistic interventions matter: Understanding how Americans perceive fact-checking labels

Chenyan Jia and Taeyoung Lee

While algorithms and crowdsourcing have been increasingly used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels created by algorithms or other users.


How different incentives reduce scientific misinformation online

Piero Ronzani, Folco Panizza, Tiffany Morisseau, Simone Mattavelli and Carlo Martini

Several social media platforms employ, or are considering, user recruitment as a defense against misinformation. Yet it is unclear how to encourage users to make accurate evaluations. Our study shows that presenting the performance of previous participants increases discernment of science-related news. Making participants aware that their evaluations would be used by future participants had no effect on accuracy.

Research Note

User experiences and needs when responding to misinformation on social media

Pranav Malhotra, Ruican Zhong, Victor Kuan, Gargi Panatula, Michelle Weng, Andrea Bras, Connie Moon Sehat, Franziska Roesner and Amy Zhang

This study examines the experiences of those who participate in bottom-up, user-led responses to misinformation on social media and outlines how they can be better supported via software tools. Findings show that users want support tools that minimize the time and effort needed to identify misinformation and that provide tailored suggestions for crafting responses that account for emotional and relational context.


Assessing misinformation recall and accuracy perceptions: Evidence from the COVID-19 pandemic

Sarah E. Kreps and Douglas L. Kriner

Misinformation is ubiquitous; however, the extent and heterogeneity of its public uptake remain a matter of debate. We address these questions by exploring Americans’ ability to recall prominent misinformation during the COVID-19 pandemic and the factors associated with accuracy perceptions of these claims.


Who knowingly shares false political information online?

Shane Littrell, Casey Klofstad, Amanda Diekman, John Funchion, Manohar Murthi, Kamal Premaratne, Michelle Seelig, Daniel Verdear, Stefan Wuchty and Joseph E. Uscinski

Some people share misinformation accidentally, but others do so knowingly. To fully understand the spread of misinformation online, it is important to analyze those who purposely share it. Using a 2022 U.S. survey, we found that 14 percent of respondents reported knowingly sharing misinformation, and that these respondents were more likely to also report support for political violence, a desire to run for office, and warm feelings toward extremists.
