Research Note

The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election

Harry Yaojun Yan, Garrett Morrow, Kai-Cheng Yang and John Wihbey

We surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation.

Conspiracy Theories

Using an AI-powered “street epistemologist” chatbot and reflection tasks to diminish conspiracy theory beliefs

Marco Meyer, Adam Enders, Casey Klofstad, Justin Stoler and Joseph Uscinski

Social scientists, journalists, and policymakers are increasingly interested in methods to mitigate or reverse the public’s beliefs in conspiracy theories, particularly those associated with negative social consequences, including violence. We contribute to this field of research using an artificial intelligence (AI) intervention that prompts individuals to reflect on the uncertainties in their conspiracy theory beliefs.
Research Note

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation

Jutta Haider, Kristofer Rolf Söderström, Björn Ekström and Malte Rödl

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.

Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine

Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova

Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.
Commentary

Beyond the deepfake hype: AI, democracy, and “the Slovak case”

Lluis de Nadal and Peter Jančárik

Was the 2023 Slovakia election the first swung by deepfakes? Did the victory of a pro-Russian candidate, following the release of a deepfake allegedly depicting election fraud, herald a new era of disinformation? Our analysis of the so-called “Slovak case” complicates this narrative, highlighting critical factors that made the electorate particularly susceptible to pro-Russian disinformation.

How spammers and scammers leverage AI-generated images on Facebook for audience growth

Renée DiResta and Josh A. Goldstein

Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false information into political discourse. We show that spammers and scammers—seemingly motivated by profit or clout, not ideology—are already using AI-generated images to gain significant traction on Facebook.

The spread of synthetic media on X

Giulio Corsi, Bill Marino and Willow Wong

Generative artificial intelligence (AI) models have introduced new complexities and risks to information environments, as synthetic media may facilitate the spread of misinformation and erode public trust. This study examines the prevalence and characteristics of synthetic media on social media platform X from December 2022 to September 2023.
Commentary

Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown

Felix M. Simon, Sacha Altay and Hugo Mercier

Many observers of the current explosion of generative AI worry about its impact on our information environment, with concerns being raised about the increased quantity, quality, and personalization of misinformation. We assess these arguments with evidence from communication studies, cognitive science, and political science.