Explore All Articles


Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Fan Yang, Luke Heemsbergen and Robbie Fordyce

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Toxic politics and TikTok engagement in the 2024 U.S. election

Ahana Biswas, Alireza Javadian Sabet and Yu-Ru Lin

What kinds of political content thrive on TikTok during an election year? Our analysis of 51,680 political videos from the 2024 U.S. presidential cycle reveals that toxic and partisan content consistently attracts more user engagement—despite ongoing moderation efforts. Posts about immigration and election fraud, in particular, draw high levels of toxicity and attention.

The unappreciated role of intent in algorithmic moderation of abusive content on social media

Xinyu Wang, Sai Koneru, Pranav Narayanan Venkit, Brett Frischmann and Sarah Rajtmajer

A significant body of research is dedicated to developing language models that can detect various types of online abuse, such as hate speech and cyberbullying. However, there is a disconnect between platform policies, which often treat the author’s intent as a criterion for content moderation, and current detection models, which typically make no attempt to capture intent.

Declining information quality under new platform governance

Burak Özturan, Alexi Quintana-Mathé, Nir Grinberg, Katherine Ognyanova and David Lazer

Following the leadership transition on October 27, 2022, Twitter/X underwent a notable change in platform governance. This study investigates how these changes influenced information quality for registered U.S. voters and for the platform more broadly. We address this question by analyzing two complementary datasets: a Twitter panel and a Decahose sample.

Disagreement as a way to study misinformation and its effects

Damian Hodel and Jevin D. West

Experts consider misinformation a significant societal concern because of its association with problems such as political polarization, erosion of trust, and public health challenges. However, these broad effects can occur independently of misinformation, revealing a misalignment between these concerns and the narrow focus of the prevailing misinformation concept.

State media tagging does not affect perceived tweet accuracy: Evidence from a U.S. Twitter experiment in 2022

Claire Betzer, Montgomery Booth, Beatrice Cappio, Alice Cook, Madeline Gochee, Benjamin Grayzel, Leyla Jacoby, Sharanya Majumder, Michael Manda, Jennifer Qian, Mitchell Ransden, Miles Rubens, Mihir Sardesai, Eleanor Sullivan, Harish Tekriwal, Ryan Waaland and Brendan Nyhan

State media outlets spread propaganda disguised as news online, prompting social media platforms to attach state-affiliated media tags to their accounts. Do these tags reduce belief in state media misinformation? Previous studies suggest that the tags reduce misperceptions, but they focus on Russia and do not compare the tags with other interventions.

How alt-tech users evaluate search engines: Cause-advancing audits

Evan M. Williams and Kathleen M. Carley

Search engine audit studies—where researchers query a set of terms in one or more search engines and analyze the results—have long been instrumental in assessing the relative reliability of search engines. However, on alt-tech platforms, users often conduct a different form of search engine audit.

Google allows advertisers to target the sensitive informational queries of cancer patients

Marco Zenone, Alessandro Marcon, Nora Kenworthy, May van Schalkwyk, Timothy Caulfield, Greg Hartwell and Nason Maani

Alternative cancer treatments are associated with earlier death when used without evidence-based treatments. Our study suggests that alternative cancer clinics offering scientifically unsupported treatments spent an estimated $15,839,504 on Google ads targeting users in the United States between 2012 and 2023.

Structured expert elicitation on disinformation, misinformation, and malign influence: Barriers, strategies, and opportunities

Ariel Kruger, Morgan Saletta, Atif Ahmad and Piers Howe

We used a modified Delphi method to elicit and synthesize experts’ views on disinformation, misinformation, and malign influence (DMMI). In a three-part process, experts first independently generated a range of effective strategies for combating DMMI, then identified the most significant barriers to combating it, and finally proposed areas for future research.

Conspiracy Theories

Using an AI-powered “street epistemologist” chatbot and reflection tasks to diminish conspiracy theory beliefs

Marco Meyer, Adam Enders, Casey Klofstad, Justin Stoler and Joseph Uscinski

Social scientists, journalists, and policymakers are increasingly interested in methods to reduce or reverse the public’s beliefs in conspiracy theories, particularly those associated with negative social consequences, including violence. We contribute to this research with an artificial intelligence (AI) intervention that prompts individuals to reflect on the uncertainties in their conspiracy theory beliefs.
