Explore All Articles

Research Note

Information control on YouTube during Russia’s invasion of Ukraine

Yevgeniy Golovchenko, Kristina Aleksandrovna Pedersen, Jonas Skjold Raaschou-Pedersen and Anna Rogers

This research note investigates the aftermath of YouTube’s global ban on Russian state-affiliated media channels in the wake of Russia’s full-scale invasion of Ukraine in 2022. Using over 12 million YouTube comments across 40 Russian-language channels, we analyze the effectiveness of the ban and shifts in user activity before and after the platform’s intervention.

Research Note

People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence for headlines

Sean Guo, Yiwen Zhong and Xiaoqing Hu

The development of artificial intelligence (AI) has enabled the rapid creation of AI-synthesized images. In a pre-registered experiment, we examine how properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines.


Not so different after all? Antecedents of believing in misinformation and conspiracy theories on COVID-19

Florian Wintterlin

Misinformation and conspiracy theories are often grouped together, but do people believe in them for the same reasons? This study examines how these conceptually distinct forms of deceptive content are processed and believed, using the COVID-19 pandemic as context. Surprisingly, despite their theoretical differences, belief in both is predicted by similar psychological factors, particularly conspiracy mentality and the perception that truth is politically constructed. This suggests that underlying distrust in institutions may outweigh differences between types of deceptive content in shaping susceptibility.

Research Note

LLM grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation

Maxim Alyukov, Mykola Makhortykh, Alexandr Voronovici and Maryna Sydorova

Some of today’s most popular large language model (LLM)-powered chatbots occasionally reference Kremlin-linked disinformation websites, but it might not be for the reasons many fear. While some recent studies have claimed that Russian actors are “grooming” LLMs by flooding the web with disinformation, our small-scale analysis finds little evidence for this.


When knowing more means doing less: Algorithmic knowledge and digital (dis)engagement among young adults

Myojung Chung

What if knowing how social media algorithms work doesn’t make you a more responsible digital citizen, but a more cynical one? A new survey of U.S. young adults finds that while higher algorithmic awareness and knowledge are linked to greater concern about misinformation and filter bubbles, the same individuals are less likely to correct misinformation or engage with opposing viewpoints on social media, possibly reflecting limited algorithmic agency.

Commentary

A dual typology of social media interventions and deterrence mechanisms against misinformation

Amir Karami

In response to the escalating threat of misinformation, social media platforms have introduced a wide range of interventions aimed at reducing the spread and influence of false information. However, a coherent macro-level perspective explaining how these interventions operate, both independently and collectively, has been lacking.


Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Fan Yang, Luke Heemsbergen and Robbie Fordyce

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Research Note

Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits

Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo

As language models (LMs) continue to advance, concerns have emerged over foreign misinformation spread through models developed in authoritarian countries. Do LMs favor their home countries? This study audits four frontier LMs, first evaluating their favoritism toward world leaders and then measuring how that favoritism propagates into misinformation belief.

Commentary

New sources of inaccuracy? A conceptual framework for studying AI hallucinations

Anqi Shao

In February 2025, Google’s AI Overview fooled itself and its users when it cited an April Fools’ satire about “microscopic bees powering computers” as factual in search results (Kidman, 2025). Google did not intend to mislead, yet the system produced a confident falsehood.
