Explore All Articles

The role of narrative in misinformation games

Nisha Devasia and Jin Ha Lee

Several existing media literacy games aim to increase resilience to misinformation, but they lack variety in their approaches: the vast majority focus on assessing information accuracy, with limited exploration of the socio-emotional influences on misinformation adoption. Misinformation correction and educational games have explored how narrative persuasion influences personal beliefs, as identification with certain narratives can frame the interpretation of information.

The algorithmic knowledge gap within and between countries: Implications for combatting misinformation

Myojung Chung and John Wihbey

While understanding how social media algorithms operate is essential to protect oneself from misinformation, such understanding is often unevenly distributed. This study explores the algorithmic knowledge gap both within and between countries, using national surveys in the United States (N = 1,415), the United Kingdom (N = 1,435), South Korea (N = 1,798), and Mexico (N = 784).

Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine

Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova

Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.

How spammers and scammers leverage AI-generated images on Facebook for audience growth

Renée DiResta and Josh A. Goldstein

Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false information into political discourse. We show that spammers and scammers—seemingly motivated by profit or clout, not ideology—are already using AI-generated images to gain significant traction on Facebook.

The consequences of misinformation concern on media consumption

Elizabeth A. Harris, Stephanie L. DeMora and Dolores Albarracín

For the last decade, policymakers, journalists, and scientists have continued to alert us to the threat that misinformation poses to sound decision-making in the political, health, and environmental domains. In this study, we evaluate whether perceiving misinformation as a threat affects media use, particularly the selection of politically aligned media sources.

How do social media users and journalists express concerns about social media misinformation? A computational analysis

Jianing Li and Michael W. Wagner

This article describes the partisan-based, accuracy-based, and action-based discussions through which U.S. social media users and journalists express concerns about social media misinformation. While platform policy stands out as the topic most discussed by both groups, much of that discussion is cast through a party-politics lens.

Who reports witnessing and performing corrections on social media in the United States, United Kingdom, Canada, and France?

Rongwei Tang, Emily K. Vraga, Leticia Bode and Shelley Boulianne

Observed corrections of misinformation on social media can encourage more accurate beliefs, but for these benefits to occur, corrections must happen. By exploring people’s perceptions of witnessing and performing corrections on social media, we find that many people say they observe and perform corrections across the United States, the United Kingdom, Canada, and France.

The spread of synthetic media on X

Giulio Corsi, Bill Marino and Willow Wong

Generative artificial intelligence (AI) models have introduced new complexities and risks to information environments, as synthetic media may facilitate the spread of misinformation and erode public trust. This study examines the prevalence and characteristics of synthetic media on the social media platform X from December 2022 to September 2023.

US-skepticism and transnational conspiracy in the 2024 Taiwanese presidential election

Ho-Chun Herbert Chang, Austin Horng-En Wang and Yu Sunny Fang

Taiwan scores among the highest on freedom-of-speech indexes, yet it also encounters the largest amount of foreign interference due to its contentious history with China. In response to this large influx of misinformation, Taiwan has taken a public crowdsourcing approach to combatting it, using both fact-checking chatbots and a public dataset called CoFacts.

Journalistic interventions matter: Understanding how Americans perceive fact-checking labels

Chenyan Jia and Taeyoung Lee

While algorithms and crowdsourcing have been increasingly used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels created by algorithms or other users.
