All Articles

Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Fan Yang, Luke Heemsbergen and Robbie Fordyce

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Research Note

Do language models favor their home countries? Asymmetric propagation of positive misinformation and foreign influence audits

Ho-Chun Herbert Chang, Tracy Weener, Yung-Chun Chen, Sean Noh, Mingyue Zha and Hsuan Lo

As language models (LMs) continue to develop, concerns over foreign misinformation through models developed in authoritarian countries have emerged. Do LMs favor their home countries? This study audits four frontier LMs by evaluating their favoritism toward world leaders, then measuring how favoritism propagates into misinformation belief.

Toxic politics and TikTok engagement in the 2024 U.S. election

Ahana Biswas, Alireza Javadian Sabet and Yu-Ru Lin

What kinds of political content thrive on TikTok during an election year? Our analysis of 51,680 political videos from the 2024 U.S. presidential cycle reveals that toxic and partisan content consistently attracts more user engagement—despite ongoing moderation efforts. Posts about immigration and election fraud, in particular, draw high levels of toxicity and attention.

Declining information quality under new platform governance

Burak Özturan, Alexi Quintana-Mathé, Nir Grinberg, Katherine Ognyanova and David Lazer

Following the leadership transition on October 27, 2022, Twitter/X underwent a notable change in platform governance. This study investigates how these changes influenced information quality for registered U.S. voters and the platform more broadly. We address this question by analyzing two complementary datasets—a Twitter panel and a Decahose sample.

State media tagging does not affect perceived tweet accuracy: Evidence from a U.S. Twitter experiment in 2022

Claire Betzer, Montgomery Booth, Beatrice Cappio, Alice Cook, Madeline Gochee, Benjamin Grayzel, Leyla Jacoby, Sharanya Majumder, Michael Manda, Jennifer Qian, Mitchell Ransden, Miles Rubens, Mihir Sardesai, Eleanor Sullivan, Harish Tekriwal, Ryan Waaland and Brendan Nyhan

State media outlets spread propaganda disguised as news online, prompting social media platforms to attach state-affiliated media tags to their accounts. Do these tags reduce belief in state media misinformation? Previous studies suggest the tags reduce misperceptions but focus on Russia, and current research does not compare these tags with other interventions.

How alt-tech users evaluate search engines: Cause-advancing audits

Evan M. Williams and Kathleen M. Carley

Search engine audit studies—where researchers query a set of terms in one or more search engines and analyze the results—have long been instrumental in assessing the relative reliability of search engines. However, on alt-tech platforms, users often conduct a different form of search engine audit.

Research Note

Conservatives are less accurate than liberals at recognizing false climate statements, and disinformation makes conservatives less discerning: Evidence from 12 countries

Tobia Spampatti, Ulf J. J. Hahnel and Tobias Brosch

Competing hypotheses exist on how conservative political ideology is associated with susceptibility to misinformation. We performed a secondary analysis of responses from 1,721 participants from twelve countries in a study that investigated the effects of climate disinformation and six psychological interventions to protect participants against such disinformation.

Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine

Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova

Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.

Research Note

Misinformation perceived as a bigger informational threat than negativity: A cross-country survey on challenges of the news environment

Toni G. L. A. van der Meer and Michael Hameleers

This study integrates research on negativity bias and misinformation, comparing how audiences perceive systematic (negativity) and incidental (misinformation) challenges to the news. Through a cross-country survey, we found that both challenges are perceived as highly salient and disruptive.
