All Articles

Journalistic interventions matter: Understanding how Americans perceive fact-checking labels

Chenyan Jia and Taeyoung Lee

While algorithms and crowdsourcing have increasingly been used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels created by algorithms or other users.

Brazilian Capitol attack: The interaction between Bolsonaro’s supporters’ content, WhatsApp, Twitter, and news media

Joao V. S. Ozawa, Josephine Lukito, Felipe Bailez and Luis G. P. Fakhouri

Bolsonaro’s supporters used social media to spread content during key events related to the Brasília attack. An unprecedented analysis of more than 15,000 public WhatsApp groups showed that these political actors tried to manufacture consensus both in preparation for the attack and in its aftermath. A cross-platform time series analysis showed that the spread of content on Twitter predicted the spread of content on WhatsApp.

Fact-opinion differentiation

Matthew Mettler and Jeffery J. Mondak

Statements of fact can be proved or disproved with objective evidence, whereas statements of opinion depend on personal values and preferences. Distinguishing between these types of statements contributes to information competence. Conversely, failure at fact-opinion differentiation can foster resistance to corrections of misinformation and susceptibility to manipulation.

Debunking and exposing misinformation among fringe communities: Testing source exposure and debunking anti-Ukrainian misinformation among German fringe communities

Johannes Christiern Santos Okholm, Amir Ebrahimi Fard and Marijn ten Thij

Through an online field experiment, we test traditional and novel counter-misinformation strategies among fringe communities. Though generally effective, traditional strategies have not been tested in fringe communities, and they do not address the online infrastructure of misinformation sources that supports such consumption. Instead, we propose activating source criticism by exposing sources’ unreliability.

Seeing lies and laying blame: Partisanship and U.S. public perceptions about disinformation

Kaitlin Peach, Joseph Ripberger, Kuhika Gupta, Andrew Fox, Hank Jenkins-Smith and Carol Silva

Using data from a nationally representative survey of 2,036 U.S. adults, we analyze partisan perceptions of the risk disinformation poses to the U.S. government and society, as well as the actors viewed as responsible for and harmed by disinformation. Our findings indicate relatively high concern about disinformation across a variety of societal issues, with broad bipartisan agreement that disinformation poses significant risks and causes harms to several groups.

Measuring what matters: Investigating what new types of assessments reveal about students’ online source evaluations

Joel Breakstone, Sarah McGrew and Mark Smith

A growing number of educational interventions have shown that students can learn the strategies fact checkers use to efficiently evaluate online information. Measuring the effectiveness of these interventions has required new approaches to assessment because extant measures reveal too little about the processes students use to evaluate live internet sources.

Correcting campaign misinformation: Experimental evidence from a two-wave panel study 

Laszlo Horvath, Daniel Stevens, Susan Banducci, Raluca Popp and Travis Coan

In this study, we used a two-wave panel and a real-world intervention during the 2017 UK general election to investigate whether fact-checking can reduce belief in an incorrect campaign claim, whether the source of the fact-check matters, how long such source effects last, and how predispositions, including political orientations and prior exposure, condition these effects.

How different incentives reduce scientific misinformation online

Piero Ronzani, Folco Panizza, Tiffany Morisseau, Simone Mattavelli and Carlo Martini

Several social media platforms employ, or are considering, user recruitment as a defense against misinformation. Yet it is unclear how to encourage users to make accurate evaluations. Our study shows that presenting the performance of previous participants increases discernment of science-related news. Making participants aware that their evaluations would be used by future participants had no effect on accuracy.

What do we study when we study misinformation? A scoping review of experimental research (2016-2022)

Gillian Murphy, Constance de Saint Laurent, Megan Reynolds, Omar Aftab, Karen Hegarty, Yuning Sun and Ciara M. Greene

We reviewed 555 papers published between 2016 and 2022 that presented misinformation to participants. We identified several trends in the literature, including an increasing frequency of misinformation studies over time, a wide variety of topics covered, and a significant focus on COVID-19 misinformation since 2020. We also identified several important shortcomings, including the overrepresentation of samples from the United States and Europe and an excessive emphasis on the short-term consequences of brief, text-based misinformation.

Increasing accuracy motivations using moral reframing does not reduce Republicans’ belief in false news

Michael Stagnaro, Sophia Pink, David G. Rand and Robb Willer

In a pre-registered survey experiment with 2,009 conservative Republicans, we evaluated an intervention that framed accurate perceptions of information as consistent with a conservative political identity and conservative values (e.g., patriotism, respect for tradition, and religious purity). The intervention caused participants to report placing greater value on accuracy, and placing greater value on accuracy was correlated with successfully rating true headlines as more accurate than false headlines.
