Who reports witnessing and performing corrections on social media in the United States, United Kingdom, Canada, and France?
Rongwei Tang, Emily K. Vraga, Leticia Bode and Shelley Boulianne
Observed corrections of misinformation on social media can encourage more accurate beliefs, but for these benefits to occur, corrections must happen. By exploring people’s perceptions of witnessing and performing corrections on social media, we find that many people say they observe and perform corrections across the United States, the United Kingdom, Canada, and France.
The spread of synthetic media on X
Giulio Corsi, Bill Marino and Willow Wong
Generative artificial intelligence (AI) models have introduced new complexities and risks to information environments, as synthetic media may facilitate the spread of misinformation and erode public trust. This study examines the prevalence and characteristics of synthetic media on social media platform X from December 2022 to September 2023.
US-skepticism and transnational conspiracy in the 2024 Taiwanese presidential election
Ho-Chun Herbert Chang, Austin Horng-En Wang and Yu Sunny Fang
Taiwan has one of the highest freedom-of-speech indexes, yet it also encounters some of the heaviest foreign interference due to its contentious history with China. Because of this large influx of misinformation, Taiwan has taken a public crowdsourcing approach to combating it, using both fact-checking chatbots and a public dataset called CoFacts.
Journalistic interventions matter: Understanding how Americans perceive fact-checking labels
Chenyan Jia and Taeyoung Lee
While algorithms and crowdsourcing have been increasingly used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels by algorithms and other users.
Brazilian Capitol attack: The interaction between Bolsonaro’s supporters’ content, WhatsApp, Twitter, and news media
Joao V. S. Ozawa, Josephine Lukito, Felipe Bailez and Luis G. P. Fakhouri
Bolsonaro’s supporters used social media to spread content during key events related to the Brasília attack. An unprecedented analysis of more than 15,000 public WhatsApp groups showed that these political actors tried to manufacture consensus both in the lead-up to and in the aftermath of the attack. A cross-platform time series analysis showed that the spread of content on Twitter predicted the spread of content on WhatsApp.
Fact-opinion differentiation
Matthew Mettler and Jeffery J. Mondak
Statements of fact can be proved or disproved with objective evidence, whereas statements of opinion depend on personal values and preferences. Distinguishing between these types of statements contributes to information competence. Conversely, failure at fact-opinion differentiation potentially brings resistance to corrections of misinformation and susceptibility to manipulation.
Debunking and exposing misinformation among fringe communities: Testing source exposure and debunking anti-Ukrainian misinformation among German fringe communities
Johannes Christiern Santos Okholm, Amir Ebrahimi Fard and Marijn ten Thij
Through an online field experiment, we test traditional and novel counter-misinformation strategies among fringe communities. Although generally effective, traditional strategies have not been tested in fringe communities, and they do not address the online infrastructure of misinformation sources that supports such consumption. Instead, we propose activating source criticism by exposing sources’ unreliability.
Seeing lies and laying blame: Partisanship and U.S. public perceptions about disinformation
Kaitlin Peach, Joseph Ripberger, Kuhika Gupta, Andrew Fox, Hank Jenkins-Smith and Carol Silva
Using data from a nationally representative survey of 2,036 U.S. adults, we analyze partisan perceptions of the risk disinformation poses to the U.S. government and society, as well as the actors viewed as responsible for and harmed by disinformation. Our findings indicate relatively high concern about disinformation across a variety of societal issues, with broad bipartisan agreement that disinformation poses significant risks and causes harms to several groups.
Measuring what matters: Investigating what new types of assessments reveal about students’ online source evaluations
Joel Breakstone, Sarah McGrew and Mark Smith
A growing number of educational interventions have shown that students can learn the strategies fact checkers use to efficiently evaluate online information. Measuring the effectiveness of these interventions has required new approaches to assessment because extant measures reveal too little about the processes students use to evaluate live internet sources.
Correcting campaign misinformation: Experimental evidence from a two-wave panel study
Laszlo Horvath, Daniel Stevens, Susan Banducci, Raluca Popp and Travis Coan
In this study, we used a two-wave panel and a real-world intervention during the 2017 UK general election to investigate whether fact-checking can reduce beliefs in an incorrect campaign claim, source effects, the duration of source effects, and how predispositions including political orientations and prior exposure condition them.