Misinformed about misinformation: On the polarizing discourse on misinformation and its consequences for the field
Irene V. Pasquetto, Gabrielle Lim and Samantha Bradshaw
The field of misinformation is facing several challenges, from attacks on academic freedom to polarizing discourse about the nature and extent of the problem for elections and digital well-being. However, we see this as an inflection point and an opportunity to chart a more informed and contextual research practice.
The role of narrative in misinformation games
Nisha Devasia and Jin Ha Lee
Several existing media literacy games aim to increase resilience to misinformation, but they lack variety in their approaches. The vast majority focus on assessing information accuracy, with limited exploration of the socio-emotional factors that influence misinformation adoption. Misinformation-correction and educational games have explored how narrative persuasion shapes personal beliefs, as identification with certain narratives can frame how information is interpreted.
Trump, Twitter, and truth judgments: The effects of “disputed” tags and political knowledge on the judged truthfulness of election misinformation
John C. Blanchar and Catherine J. Norris
Misinformation has sown distrust in the legitimacy of American elections. Nowhere has this been more concerning than in the 2020 U.S. presidential election wherein Donald Trump falsely declared that it was stolen through fraud. Although social media platforms attempted to dispute Trump’s false claims by attaching soft moderation tags to his posts, little is known about the effectiveness of this strategy.
GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation
Jutta Haider, Kristofer Rolf Söderström, Björn Ekström and Malte Rödl
Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.
The algorithmic knowledge gap within and between countries: Implications for combatting misinformation
Myojung Chung and John Wihbey
While understanding how social media algorithms operate is essential to protect oneself from misinformation, such understanding is often unevenly distributed. This study explores the algorithmic knowledge gap both within and between countries, using national surveys in the United States (N = 1,415), the United Kingdom (N = 1,435), South Korea (N = 1,798), and Mexico (N = 784).
Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine
Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova
Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.
Explore by Topic
- Artificial Intelligence
- Asia
- Big Data
- China
- Conspiracy Theories
- Content Moderation
- COVID-19
- Cybersecurity
- Debunking
- Defense
- Disinformation
- Editorial
- Education
- Elections
- Emotion
- Ethics
- Europe
- Fact-checking
- Fake News
- Gaming
- Healthcare
- Impact
- Information Bias
- Information Security
- Infrastructure
- IRA
- Law & Government
- Mainstream Media
- Media Literacy
- Memes
- Partisan Issues
- Philosophy
- Platform Regulation
- Platforms
- Political Economy
- Politics
- Prebunking
- Privacy
- Propaganda
- Psychology
- Public Health
- Public Opinion
- Public Relations
- Research
- Russia
- Search Engines
- Social Media
- Sources
- Trolls
- Twitter/X
- Vaccines
- Youth
- YouTube