Explore All Articles
Not so different after all? Antecedents of believing in misinformation and conspiracy theories on COVID-19
Florian Wintterlin
Misinformation and conspiracy theories are often grouped together, but do people believe in them for the same reasons? This study examines how these conceptually distinct forms of deceptive content are processed and believed, using the COVID-19 pandemic as context. Surprisingly, despite their theoretical differences, belief in both is predicted by similar psychological factors, particularly conspiracy mentality and the perception that truth is politically constructed. This suggests that underlying distrust in institutions may matter more than the type of deceptive content in shaping susceptibility.

LLMs grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation
Maxim Alyukov, Mykola Makhortykh, Alexandr Voronovici and Maryna Sydorova
Some of today’s most popular large language model (LLM)-powered chatbots occasionally reference Kremlin-linked disinformation websites, but it might not be for the reasons many fear. While some recent studies have claimed that Russian actors are “grooming” LLMs by flooding the web with disinformation, our small-scale analysis finds little evidence for this.

Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights
Fan Yang, Luke Heemsbergen and Robbie Fordyce
Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Gendered disinformation as violence: A new analytical agenda
Marília Gehrke and Eedan R. Amit-Danhi
The potential for harm embedded in mis- and disinformation content, regardless of intent, opens space for a new analytical agenda: investigating the weaponization of identity-based features such as gender, race, and ethnicity through the lens of violence. We therefore lay out the triangle of violence to support new studies of the multimedia content, victims, and audiences of false claims.

Structured expert elicitation on disinformation, misinformation, and malign influence: Barriers, strategies, and opportunities
Ariel Kruger, Morgan Saletta, Atif Ahmad and Piers Howe
We used a modified Delphi method to elicit and synthesize experts’ views on disinformation, misinformation, and malign influence (DMMI). In a three-part process, experts first independently generated a range of effective strategies for combating DMMI, then identified the most impactful barriers to combating it, and finally proposed areas for future research.

Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine
Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova
Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.

Framing disinformation through legislation: Evidence from policy proposals in Brazil
Kimberly Anastácio
This article analyzes 62 bills introduced in the Brazilian Chamber of Deputies between 2019 and 2022 to understand how legislators frame disinformation as distinct problems with corresponding solutions. The timeframe coincides with the administration of right-wing President Jair Bolsonaro. The study shows a tendency among legislators from parties opposed to Bolsonaro to attempt to criminalize the creation and spread of health-related and government-led disinformation.

Seeing lies and laying blame: Partisanship and U.S. public perceptions about disinformation
Kaitlin Peach, Joseph Ripberger, Kuhika Gupta, Andrew Fox, Hank Jenkins-Smith and Carol Silva
Using data from a nationally representative survey of 2,036 U.S. adults, we analyze partisan perceptions of the risk disinformation poses to the U.S. government and society, as well as the actors viewed as responsible for and harmed by disinformation. Our findings indicate relatively high concern about disinformation across a variety of societal issues, with broad bipartisan agreement that disinformation poses significant risks and causes harm to several groups.

A pro-government disinformation campaign on Indonesian Papua
Dave McRae, Maria del Mar Quiroga, Daniel Russo-Batterham and Kim Doyle
This research identifies an Indonesian-language Twitter disinformation campaign posting pro-government material on Indonesian governance in Papua, the site of a protracted ethno-nationalist, pro-independence insurgency. Curiously, the campaign does not employ common disinformation tactics such as hashtag flooding or posting clickbait with high engagement potential, nor does it seek to build user profiles that would make the accounts posting this material appear to be important participants in the debate over Papua’s status.