
People rely on their existing political beliefs to identify election misinformation

Sora Park, Jee Young Lee, Kieran McGuinness, Caroline Fisher and Janet Fulton

Rather than assuming that people are motivated to fact-check, we investigated the process people go through if and when they encounter political misinformation. Using a digital diary method, we asked 38 participants to collect examples of political misinformation during Australia’s 2025 federal election and to explain why they judged each item to be misinformation (n = 254).


Emotional resonance and participatory misinformation: Learning from a K-pop controversy

Sungha Kang, Rachel E. Moran and Jin Ha Lee

In today’s digital media environment, emotionally resonant narratives often spread faster and stick more firmly than verifiable facts. This paper explores how emotionally charged communication in online controversies fosters not only widespread engagement but also the participatory nature of misinformation. Through a case study of a K-pop controversy, we show how audiences act not just as consumers but as co-authors of alternative narratives in moments of uncertainty.

Commentary

Towards the study of world misinformation

Piero Ronzani

What if nearly everything we think we know about misinformation came from just a sliver of the world? When research leans heavily on online studies from a few wealthy nations, we risk drawing global conclusions from local noise. A WhatsApp group of fishermen, a displaced community in a refugee camp, or a bustling market in the Global South are not marginal examples of information environments; such contexts call for an evolution of how we study misinformation.

Research Note

People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence to headlines

Sean Guo, Yiwen Zhong and Xiaoqing Hu

The development of artificial intelligence (AI) allows rapid creation of AI-synthesized images. In a pre-registered experiment, we examine how properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines.


Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Fan Yang, Luke Heemsbergen and Robbie Fordyce

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Research Note

Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits

Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo

As language models (LMs) continue to develop, concerns over foreign misinformation through models developed in authoritarian countries have emerged. Do LMs favor their home countries? This study audits four frontier LMs by evaluating their favoritism toward world leaders, then measuring how favoritism propagates into misinformation belief.


Toxic politics and TikTok engagement in the 2024 U.S. election

Ahana Biswas, Alireza Javadian Sabet and Yu-Ru Lin

What kinds of political content thrive on TikTok during an election year? Our analysis of 51,680 political videos from the 2024 U.S. presidential cycle reveals that toxic and partisan content consistently attracts more user engagement—despite ongoing moderation efforts. Posts about immigration and election fraud, in particular, draw high levels of toxicity and attention.


Declining information quality under new platform governance

Burak Özturan, Alexi Quintana-Mathé, Nir Grinberg, Katherine Ognyanova and David Lazer

Following the leadership transition on October 27, 2022, Twitter/X underwent a notable change in platform governance. This study investigates how these changes influenced information quality for registered U.S. voters and the platform more broadly. We address this question by analyzing two complementary datasets—a Twitter panel and a Decahose sample.


State media tagging does not affect perceived tweet accuracy: Evidence from a U.S. Twitter experiment in 2022

Claire Betzer, Montgomery Booth, Beatrice Cappio, Alice Cook, Madeline Gochee, Benjamin Grayzel, Leyla Jacoby, Sharanya Majumder, Michael Manda, Jennifer Qian, Mitchell Ransden, Miles Rubens, Mihir Sardesai, Eleanor Sullivan, Harish Tekriwal, Ryan Waaland and Brendan Nyhan

State media outlets spread propaganda disguised as news online, prompting social media platforms to attach state-affiliated media tags to their accounts. Do these tags reduce belief in state media misinformation? Previous studies suggest the tags reduce misperceptions but focus on Russia, and current research does not compare these tags with other interventions.
