Explore All Articles

People are more susceptible to misinformation with realistic AI-synthesized images that provide strong evidence for headlines
Sean Guo, Yiwen Zhong and Xiaoqing Hu
The development of artificial intelligence (AI) allows the rapid creation of AI-synthesized images. In a pre-registered experiment, we examine how properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines.

LLM grooming or data voids? LLM-powered chatbot references to Kremlin disinformation reflect information gaps, not manipulation
Maxim Alyukov, Mykola Makhortykh, Alexandr Voronovici and Maryna Sydorova
Some of today’s most popular large language model (LLM)-powered chatbots occasionally reference Kremlin-linked disinformation websites, but it might not be for the reasons many fear. While some recent studies have claimed that Russian actors are “grooming” LLMs by flooding the web with disinformation, our small-scale analysis finds little evidence for this.

Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits
Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo
As language models (LMs) continue to develop, concerns have emerged over foreign misinformation spread through models developed in authoritarian countries. Do LMs favor their home countries? This study audits four frontier LMs by evaluating their favoritism toward world leaders, then measuring how that favoritism propagates into misinformation belief.

New sources of inaccuracy? A conceptual framework for studying AI hallucinations
Anqi Shao
In February 2025, Google’s AI Overview fooled itself and its users when it cited an April Fools’ satire about “microscopic bees powering computers” as factual in search results (Kidman, 2025). Google did not intend to mislead, yet the system produced a confident falsehood.

The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election
Harry Yaojun Yan, Garrett Morrow, Kai-Cheng Yang and John Wihbey
We surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation.

Using an AI-powered “street epistemologist” chatbot and reflection tasks to diminish conspiracy theory beliefs
Marco Meyer, Adam Enders, Casey Klofstad, Justin Stoler and Joseph Uscinski
Social scientists, journalists, and policymakers are increasingly interested in methods to mitigate or reverse the public’s beliefs in conspiracy theories, particularly those associated with negative social consequences, including violence. We contribute to this field of research using an artificial intelligence (AI) intervention that prompts individuals to reflect on the uncertainties in their conspiracy theory beliefs.

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation
Jutta Haider, Kristofer Rolf Söderström, Björn Ekström and Malte Rödl
Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.

Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine
Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova
Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.

Beyond the deepfake hype: AI, democracy, and “the Slovak case”
Lluis de Nadal and Peter Jančárik
Was the 2023 Slovakia election the first swung by deepfakes? Did the victory of a pro-Russian candidate, following the release of a deepfake allegedly depicting election fraud, herald a new era of disinformation? Our analysis of the so-called “Slovak case” complicates this narrative, highlighting critical factors that made the electorate particularly susceptible to pro-Russian disinformation.