
Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits
Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo
As language models (LMs) continue to develop, concerns over foreign misinformation through models developed in authoritarian countries have emerged. Do LMs favor their home countries? This study audits four frontier LMs by evaluating their favoritism toward world leaders, then measuring how favoritism propagates into misinformation belief.

New sources of inaccuracy? A conceptual framework for studying AI hallucinations
Anqi Shao
In February 2025, Google’s AI Overview fooled itself and its users when it cited an April Fool’s satire about “microscopic bees powering computers” as factual in search results (Kidman, 2025). Google did not intend to mislead, yet the system produced a confident falsehood.

Toxic politics and TikTok engagement in the 2024 U.S. election
Ahana Biswas, Alireza Javadian Sabet and Yu-Ru Lin
What kinds of political content thrive on TikTok during an election year? Our analysis of 51,680 political videos from the 2024 U.S. presidential cycle reveals that toxic and partisan content consistently attracts more user engagement—despite ongoing moderation efforts. Posts about immigration and election fraud, in particular, draw high levels of toxicity and attention.

The unappreciated role of intent in algorithmic moderation of abusive content on social media
Xinyu Wang, Sai Koneru, Pranav Narayanan Venkit, Brett Frischmann and Sarah Rajtmajer
A significant body of research is dedicated to developing language models that can detect various types of online abuse, such as hate speech and cyberbullying. However, there is a disconnect between platform policies, which often consider the author’s intention as a criterion for content moderation, and the current capabilities of detection models, which typically make no effort to capture intent.

The small effects of short user corrections on misinformation in Brazil, India, and the United Kingdom
Sacha Altay, Simge Andı, Sumitra Badrinathan, Camila Mont’Alverne, Benjamin Toff, Rasmus Kleis Nielsen and Richard Fletcher
How effective are user corrections in combating misinformation on social media, and does adding a link to a fact-check improve their effectiveness? We conducted a pre-registered online experiment on representative samples of the online population in Brazil, India, and the United Kingdom (N participants = 3,000, N observations = 24,000).

Disparities by design: Toward a research agenda that links science misinformation and socioeconomic marginalization in the age of AI
Miriam Schirmer, Nathan Walter and Emőke-Ágnes Horvát
Misinformation research often draws optimistic conclusions, with fact-checking, for example, established as an effective means of reducing false beliefs. However, it rarely considers the socioeconomic disparities that shape who is most vulnerable to science misinformation. Historical and systemic inequalities have fostered mistrust in institutions and limited access to credible information, for example, when Black patients distrust public health guidance due to past medical racism.

Declining information quality under new platform governance
Burak Özturan, Alexi Quintana-Mathé, Nir Grinberg, Katherine Ognyanova and David Lazer
Following the leadership transition on October 27, 2022, Twitter/X underwent a notable change in platform governance. This study investigates how these changes influenced information quality for registered U.S. voters and for the platform more broadly. We address this question by analyzing two complementary datasets: a Twitter panel and a Decahose sample.

Gendered disinformation as violence: A new analytical agenda
Marília Gehrke and Eedan R. Amit-Danhi
The potential for harm entrenched in mis- and disinformation content, regardless of intentionality, opens space for a new analytical agenda to investigate the weaponization of identity-based features like gender, race, and ethnicity through the lens of violence. Therefore, we lay out the triangle of violence to support new studies aiming to investigate multimedia content, victims, and audiences of false claims.

Our journal statistics for 2024
HKS Misinformation Review Editorial Staff
This editorial provides an overview of the key statistics for Volume 5 (2024) of the HKS Misinformation Review, including submission and acceptance rates, accepted article types, publication speed and frequency, citation impact, most-viewed articles, engagement and readership, as well as author and reviewer demographics.