Trump, Twitter, and truth judgments: The effects of “disputed” tags and political knowledge on the judged truthfulness of election misinformation
John C. Blanchar and Catherine J. Norris
Misinformation has sown distrust in the legitimacy of American elections. Nowhere has this been more concerning than in the 2020 U.S. presidential election, in which Donald Trump falsely declared that it was stolen through fraud. Although social media platforms attempted to dispute Trump’s false claims by attaching soft moderation tags to his posts, little is known about the effectiveness of this strategy.
GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation
Jutta Haider, Kristofer Rolf Söderström, Björn Ekström and Malte Rödl
Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.
The algorithmic knowledge gap within and between countries: Implications for combatting misinformation
Myojung Chung and John Wihbey
While understanding how social media algorithms operate is essential to protect oneself from misinformation, such understanding is often unevenly distributed. This study explores the algorithmic knowledge gap both within and between countries, using national surveys in the United States (N = 1,415), the United Kingdom (N = 1,435), South Korea (N = 1,798), and Mexico (N = 784).
Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine
Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova
Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.
Beyond the deepfake hype: AI, democracy, and “the Slovak case”
Lluis de Nadal and Peter Jančárik
Was the 2023 Slovakia election the first swung by deepfakes? Did the victory of a pro-Russian candidate, following the release of a deepfake allegedly depicting election fraud, herald a new era of disinformation? Our analysis of the so-called “Slovak case” complicates this narrative, highlighting critical factors that made the electorate particularly susceptible to pro-Russian disinformation.
How spammers and scammers leverage AI-generated images on Facebook for audience growth
Renée DiResta and Josh A. Goldstein
Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false information into political discourse. We show that spammers and scammers—seemingly motivated by profit or clout, not ideology—are already using AI-generated images to gain significant traction on Facebook.