All Articles
The consequences of misinformation concern on media consumption
Elizabeth A. Harris, Stephanie L. DeMora and Dolores Albarracín
For the last decade, policymakers, journalists, and scientists have warned of the threat misinformation poses to sound decision-making in the political, health, and environmental domains. In this study, we evaluate whether perceiving misinformation as a threat affects media use, particularly the selection of politically aligned media sources.
How do social media users and journalists express concerns about social media misinformation? A computational analysis
Jianing Li and Michael W. Wagner
This article describes partisan-based, accuracy-based, and action-based discussions through which U.S. social media users and journalists express concerns about social media misinformation. While platform policy stands out as the most highly discussed topic among both social media users and journalists, much of it is cast through a party-politics lens.
Who reports witnessing and performing corrections on social media in the United States, United Kingdom, Canada, and France?
Rongwei Tang, Emily K. Vraga, Leticia Bode and Shelley Boulianne
Observed corrections of misinformation on social media can encourage more accurate beliefs, but for these benefits to occur, corrections must happen. By exploring people’s perceptions of witnessing and performing corrections on social media, we find that many people say they observe and perform corrections across the United States, the United Kingdom, Canada, and France.
The spread of synthetic media on X
Giulio Corsi, Bill Marino and Willow Wong
Generative artificial intelligence (AI) models have introduced new complexities and risks to information environments, as synthetic media may facilitate the spread of misinformation and erode public trust. This study examines the prevalence and characteristics of synthetic media on social media platform X from December 2022 to September 2023.
US-skepticism and transnational conspiracy in the 2024 Taiwanese presidential election
Ho-Chun Herbert Chang, Austin Horng-En Wang and Yu Sunny Fang
Taiwan has one of the world's highest freedom-of-speech indexes, yet it also faces a large amount of foreign interference due to its contentious history with China. In response to this influx of misinformation, Taiwan has taken a public crowdsourcing approach to combating misinformation, using both fact-checking chatbots and a public dataset called CoFacts.
Journalistic interventions matter: Understanding how Americans perceive fact-checking labels
Chenyan Jia and Taeyoung Lee
While algorithms and crowdsourcing have been increasingly used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels by algorithms and other users.
Brazilian Capitol attack: The interaction between Bolsonaro’s supporters’ content, WhatsApp, Twitter, and news media
Joao V. S. Ozawa, Josephine Lukito, Felipe Bailez and Luis G. P. Fakhouri
Bolsonaro’s supporters used social media to spread content during key events related to the Brasília attack. An unprecedented analysis of more than 15,000 public WhatsApp groups showed that these political actors tried to manufacture consensus both before and after the attack. A cross-platform time series analysis showed that the spread of content on Twitter predicted the spread of content on WhatsApp.
Fact-opinion differentiation
Matthew Mettler and Jeffery J. Mondak
Statements of fact can be proved or disproved with objective evidence, whereas statements of opinion depend on personal values and preferences. Distinguishing between these types of statements contributes to information competence. Conversely, failure at fact-opinion differentiation can foster resistance to corrections of misinformation and susceptibility to manipulation.
Debunking and exposing misinformation among fringe communities: Testing source exposure and debunking anti-Ukrainian misinformation among German fringe communities
Johannes Christiern Santos Okholm, Amir Ebrahimi Fard and Marijn ten Thij
Through an online field experiment, we test traditional and novel counter-misinformation strategies among fringe communities. Although generally effective elsewhere, traditional strategies have not been tested in fringe communities, and they do not address the online infrastructure of misinformation sources that supports such consumption. Instead, we propose activating source criticism by exposing sources’ unreliability.
Seeing lies and laying blame: Partisanship and U.S. public perceptions about disinformation
Kaitlin Peach, Joseph Ripberger, Kuhika Gupta, Andrew Fox, Hank Jenkins-Smith and Carol Silva
Using data from a nationally representative survey of 2,036 U.S. adults, we analyze partisan perceptions of the risk disinformation poses to the U.S. government and society, as well as the actors viewed as responsible for and harmed by disinformation. Our findings indicate relatively high concern about disinformation across a variety of societal issues, with broad bipartisan agreement that disinformation poses significant risks and causes harms to several groups.