Prebunking misinformation techniques in social media feeds: Results from an Instagram field study
Sander van der Linden, Debra Louison-Lavoy, Nicholas Blazer, Nancy S. Noble and Jon Roozenbeek
Boosting psychological defenses against misleading content online is an active area of research, but the transition from the lab to real-world uptake remains a challenge. We developed a 19-second prebunking video about emotionally manipulative content and showed it as a Story Feed ad to N = 375,597 Instagram users in the United Kingdom.

Information control on YouTube during Russia’s invasion of Ukraine
Yevgeniy Golovchenko, Kristina Aleksandrovna Pedersen, Jonas Skjold Raaschou-Pedersen and Anna Rogers
This research note investigates the aftermath of YouTube’s global ban on Russian state-affiliated media channels in the wake of Russia’s full-scale invasion of Ukraine in 2022. Using over 12 million YouTube comments across 40 Russian-language channels, we analyzed the effectiveness of the ban and the shifts in user activity before and after the platform’s intervention.

People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence to headlines
Sean Guo, Yiwen Zhong and Xiaoqing Hu
The development of artificial intelligence (AI) allows rapid creation of AI-synthesized images. In a pre-registered experiment, we examine how properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines.

LLMs grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation
Maxim Alyukov, Mykola Makhortykh, Alexandr Voronovici and Maryna Sydorova
Some of today’s most popular large language model (LLM)-powered chatbots occasionally reference Kremlin-linked disinformation websites, but it might not be for the reasons many fear. While some recent studies have claimed that Russian actors are “grooming” LLMs by flooding the web with disinformation, our small-scale analysis finds little evidence for this.

Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits
Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo
As language models (LMs) continue to develop, concerns over foreign misinformation through models developed in authoritarian countries have emerged. Do LMs favor their home countries? This study audits four frontier LMs by evaluating their favoritism toward world leaders, then measuring how favoritism propagates into misinformation belief.

The small effects of short user corrections on misinformation in Brazil, India, and the United Kingdom
Sacha Altay, Simge Andı, Sumitra Badrinathan, Camila Mont’Alverne, Benjamin Toff, Rasmus Kleis Nielsen and Richard Fletcher
How effective are user corrections in combating misinformation on social media, and does adding a link to a fact check improve their effectiveness? We conducted a pre-registered online experiment on representative samples of the online population in Brazil, India, and the United Kingdom (N participants = 3,000, N observations = 24,000).

Feedback and education improve human detection of image manipulation on social media
Adnan Hoq, Matthew J. Facciani and Tim Weninger
This study investigates the impact of educational interventions and feedback on users’ ability to detect manipulated images on social media, addressing a gap in research that has primarily focused on algorithmic approaches. Through a pre-registered randomized and controlled experiment, we found that feedback and educational content significantly improved participants’ detection ability.

The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election
Harry Yaojun Yan, Garrett Morrow, Kai-Cheng Yang and John Wihbey
We surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation.

Conspiracy Theories
Understanding climate change conspiracy beliefs: A comparative outlook
Daniel Stockemer and Jean-Nicolas Bordeleau
Are climate change conspiracy theories widespread across the world, or are they more prevalent in some countries than in others? This research note explores the prevalence of conspiracy beliefs that identify climate change as a hoax across eight geographically and culturally diverse countries.

A playbook for mapping adolescent interactions with misinformation to perceptions of online harm
Gowri S. Swamy, Morgan G. Ames and Niloufar Salehi
Digital misinformation is rampant, and understanding how exposure to misinformation affects the perceptions and decision-making processes of adolescents is crucial. In a four-part qualitative study with 25 college students 18–19 years old, we found that participants first assess the severity of harms (e.g.,