
Research Note

LLM grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation

Maxim Alyukov, Mykola Makhortykh, Alexandr Voronovici and Maryna Sydorova

Some of today’s most popular large language model (LLM)-powered chatbots occasionally reference Kremlin-linked disinformation websites, but it might not be for the reasons many fear. While some recent studies have claimed that Russian actors are “grooming” LLMs by flooding the web with disinformation, our small-scale analysis finds little evidence for this.

When knowing more means doing less: Algorithmic knowledge and digital (dis)engagement among young adults

Myojung Chung

What if knowing how social media algorithms work doesn’t make you a more responsible digital citizen, but a more cynical one? A new survey of U.S. young adults finds that while higher algorithmic awareness and knowledge are linked to greater concern about misinformation and filter bubbles, those same individuals are less likely to correct misinformation or engage with opposing viewpoints on social media, possibly reflecting limited algorithmic agency.

Commentary

A dual typology of social media interventions and deterrence mechanisms against misinformation

Amir Karami

In response to the escalating threat of misinformation, social media platforms have introduced a wide range of interventions aimed at reducing the spread and influence of false information. However, the field still lacks a coherent macro-level perspective that explains how these interventions operate, both independently and collectively.

Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Fan Yang, Luke Heemsbergen and Robbie Fordyce

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Research Note

Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits

Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo

As language models (LMs) continue to develop, concerns have emerged about foreign misinformation spreading through models developed in authoritarian countries. Do LMs favor their home countries? This study audits four frontier LMs, first evaluating their favoritism toward world leaders and then measuring how that favoritism propagates into misinformation belief.

Commentary

New sources of inaccuracy? A conceptual framework for studying AI hallucinations

Anqi Shao

In February 2025, Google’s AI Overview fooled itself and its users when it cited an April Fools’ satire about “microscopic bees powering computers” as factual in search results (Kidman, 2025). Google did not intend to mislead, yet the system produced a confident falsehood.

Toxic politics and TikTok engagement in the 2024 U.S. election

Ahana Biswas, Alireza Javadian Sabet and Yu-Ru Lin

What kinds of political content thrive on TikTok during an election year? Our analysis of 51,680 political videos from the 2024 U.S. presidential cycle reveals that toxic and partisan content consistently attracts more user engagement—despite ongoing moderation efforts. Posts about immigration and election fraud, in particular, draw high levels of toxicity and attention.

The unappreciated role of intent in algorithmic moderation of abusive content on social media

Xinyu Wang, Sai Koneru, Pranav Narayanan Venkit, Brett Frischmann and Sarah Rajtmajer

A significant body of research is dedicated to developing language models that can detect various types of online abuse, such as hate speech and cyberbullying. However, there is a disconnect between platform policies, which often treat the author’s intention as a criterion for content moderation, and the current capabilities of detection models, which typically make no attempt to capture intent.

Research Note

The small effects of short user corrections on misinformation in Brazil, India, and the United Kingdom

Sacha Altay, Simge Andı, Sumitra Badrinathan, Camila Mont’Alverne, Benjamin Toff, Rasmus Kleis Nielsen and Richard Fletcher

How effective are user corrections in combating misinformation on social media, and does adding a link to a fact check improve their effectiveness? We conducted a pre-registered online experiment on representative samples of the online population in Brazil, India, and the United Kingdom (N participants = 3,000, N observations = 24,000).
