Recent Articles
Emotional resonance and participatory misinformation: Learning from a K-pop controversy
Sungha Kang, Rachel E. Moran and Jin Ha Lee
In today’s digital media environment, emotionally resonant narratives often spread faster and stick more firmly than verifiable facts. This paper explores how emotionally charged communication in online controversies fosters not only widespread engagement but also the participatory nature of misinformation. Through a case study of a K-pop controversy, we show how audiences act not just as consumers but as co-authors of alternative narratives in moments of uncertainty.

Prebunking misinformation techniques in social media feeds: Results from an Instagram field study
Sander van der Linden, Debra Louison-Lavoy, Nicholas Blazer, Nancy S. Noble and Jon Roozenbeek
Boosting psychological defences against misleading content online is an active area of research, but the transition from the lab to real-world uptake remains a challenge. We developed a 19-second prebunking video about emotionally manipulative content and showed it as a Story Feed ad to N = 375,597 Instagram users in the United Kingdom.

Reframing misinformation as informational-systemic risk in the age of societal volatility
Nuurrianti Jalli
When a bank run, a pandemic, or an election spirals out of control, the spark is often informational. In 2023, rumors online helped accelerate the collapse of Silicon Valley Bank. During COVID-19, false claims about vaccines fueled preventable harms by undermining public trust in health guidance, and election lies in the United States fed into the broader dynamics that culminated in the January 6 Capitol attack.

Towards the study of world misinformation
Piero Ronzani
What if nearly everything we think we know about misinformation came from just a sliver of the world? When research leans heavily on online studies from a few wealthy nations, we risk drawing global conclusions from local noise. A WhatsApp group of fishermen, a displaced community in a refugee camp, and a bustling market in the Global South are not marginal examples of information environments; such contexts call for an evolution of how we study misinformation.

Information control on YouTube during Russia’s invasion of Ukraine
Yevgeniy Golovchenko, Kristina Aleksandrovna Pedersen, Jonas Skjold Raaschou-Pedersen and Anna Rogers
This research note investigates the aftermath of YouTube’s global ban on Russian state-affiliated media channels in the wake of Russia’s full-scale invasion of Ukraine in 2022. Using over 12 million YouTube comments across 40 Russian-language channels, we analyzed the effectiveness of the ban and the shifts in user activity before and after the platform’s intervention.

People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence to headlines
Sean Guo, Yiwen Zhong and Xiaoqing Hu
The development of artificial intelligence (AI) allows rapid creation of AI-synthesized images. In a pre-registered experiment, we examine how properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines.
Explore by Topic
- Artificial Intelligence
- Asia
- Big Data
- China
- Conspiracy Theories
- Content Moderation
- COVID-19
- Debunking
- Disinformation
- Editorial
- Education
- Elections
- Emotion
- Europe
- Fact-checking
- Fake News
- Gaming
- Healthcare
- Impact
- Information Bias
- Law & Government
- Mainstream Media
- Media Literacy
- Partisan Issues
- Philosophy
- Platform Regulation
- Platforms
- Politics
- Prebunking
- Propaganda
- Psychology
- Public Health
- Public Opinion
- Public Relations
- Research
- Russia
- Search Engines
- Social Media
- Sources
- Technology
- Twitter/X
- Vaccines
- Youth
- YouTube