
State media tagging does not affect perceived tweet accuracy: Evidence from a U.S. Twitter experiment in 2022

Claire Betzer, Montgomery Booth, Beatrice Cappio, Alice Cook, Madeline Gochee, Benjamin Grayzel, Leyla Jacoby, Sharanya Majumder, Michael Manda, Jennifer Qian, Mitchell Ransden, Miles Rubens, Mihir Sardesai, Eleanor Sullivan, Harish Tekriwal, Ryan Waaland and Brendan Nyhan

State media outlets spread propaganda disguised as news online, prompting social media platforms to attach state-affiliated media tags to their accounts. Do these tags reduce belief in state media misinformation? Previous studies suggest that such tags reduce misperceptions, but they focus on Russia and do not compare the tags with other interventions.


How alt-tech users evaluate search engines: Cause-advancing audits

Evan M. Williams and Kathleen M. Carley

Search engine audit studies—where researchers query a set of terms in one or more search engines and analyze the results—have long been instrumental in assessing the relative reliability of search engines. However, on alt-tech platforms, users often conduct a different form of search engine audit.

Research Note

Conservatives are less accurate than liberals at recognizing false climate statements, and disinformation makes conservatives less discerning: Evidence from 12 countries

Tobia Spampatti, Ulf J. J. Hahnel and Tobias Brosch

Competing hypotheses exist on how conservative political ideology is associated with susceptibility to misinformation. We performed a secondary analysis of responses from 1,721 participants from 12 countries in a study that investigated the effects of climate disinformation and six psychological interventions to protect participants against such disinformation.


Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine

Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova

Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.

Research Note

Misinformation perceived as a bigger informational threat than negativity: A cross-country survey on challenges of the news environment

Toni G. L. A. van der Meer and Michael Hameleers

This study integrates research on negativity bias and misinformation, comparing how audiences perceive systematic (negativity) and incidental (misinformation) challenges to the news. Through a cross-country survey, we found that both challenges are perceived as highly salient and disruptive.

Research Note

Gamified inoculation reduces susceptibility to misinformation from political ingroups

Cecilie Steenbuch Traberg, Jon Roozenbeek and Sander van der Linden

Psychological inoculation interventions, which seek to pre-emptively build resistance against unwanted persuasion attempts, have shown promise in reducing susceptibility to misinformation. However, many people receive news from popular, mainstream ingroup sources (e.g., a left-wing person consuming left-wing media) that may host misleading or false content, and ingroup sources may be more persuasive; the impact of source effects on inoculation interventions therefore demands attention.


Fact-opinion differentiation

Matthew Mettler and Jeffery J. Mondak

Statements of fact can be proved or disproved with objective evidence, whereas statements of opinion depend on personal values and preferences. Distinguishing between these types of statements contributes to information competence. Conversely, failure at fact-opinion differentiation potentially brings resistance to corrections of misinformation and susceptibility to manipulation.


Increasing accuracy motivations using moral reframing does not reduce Republicans’ belief in false news

Michael Stagnaro, Sophia Pink, David G. Rand and Robb Willer

In a pre-registered survey experiment with 2,009 conservative Republicans, we evaluated an intervention that framed accurate perceptions of information as consistent with a conservative political identity and conservative values (e.g., patriotism, respect for tradition, and religious purity). The intervention caused participants to report placing greater value on accuracy, and placing greater value on accuracy was correlated with successfully rating true headlines as more accurate than false headlines.


Exploring partisans’ biased and unreliable media consumption and their misinformed health-related beliefs

Natasha Strydhorst, Javier Morales-Riech and Asheley R. Landrum

This study explores U.S. adults' media consumption—in terms of the average bias and reliability of the media outlets participants report referencing—and the extent to which those participants hold inaccurate beliefs about COVID-19 and vaccination. Notably, we used a novel means of capturing the (left-right) bias and reliability of audiences' media consumption, leveraging Ad Fontes Media's ratings of 129 news sources along each dimension.
