Explore All Articles

Not so different after all? Antecedents of believing in misinformation and conspiracy theories on COVID-19

Florian Wintterlin

Misinformation and conspiracy theories are often grouped together, but do people believe in them for the same reasons? This study examines how these conceptually distinct forms of deceptive content are processed and believed using the COVID-19 pandemic as context. Surprisingly, despite their theoretical differences, belief in both is predicted by similar psychological factors—particularly conspiracy mentality and the perception that truth is politically constructed—suggesting that underlying distrust in institutions may outweigh differences in types of deceptive content in shaping susceptibility.

When knowing more means doing less: Algorithmic knowledge and digital (dis)engagement among young adults

Myojung Chung

What if knowing how social media algorithms work doesn’t make you a more responsible digital citizen, but a more cynical one? A new survey of U.S. young adults finds that while higher algorithmic awareness and knowledge are linked to greater concern about misinformation and filter bubbles, these same individuals are less likely to correct misinformation or engage with opposing viewpoints on social media, possibly reflecting limited algorithmic agency.

Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Fan Yang, Luke Heemsbergen and Robbie Fordyce

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources.

Toxic politics and TikTok engagement in the 2024 U.S. election

Ahana Biswas, Alireza Javadian Sabet and Yu-Ru Lin

What kinds of political content thrive on TikTok during an election year? Our analysis of 51,680 political videos from the 2024 U.S. presidential cycle reveals that toxic and partisan content consistently attracts more user engagement—despite ongoing moderation efforts. Posts about immigration and election fraud, in particular, draw high levels of toxicity and attention.

The unappreciated role of intent in algorithmic moderation of abusive content on social media

Xinyu Wang, Sai Koneru, Pranav Narayanan Venkit, Brett Frischmann and Sarah Rajtmajer

A significant body of research is dedicated to developing language models that can detect various types of online abuse, such as hate speech and cyberbullying. However, there is a disconnect between platform policies, which often treat the author’s intent as a criterion for content moderation, and the current capabilities of detection models, which typically make no attempt to capture intent.

Declining information quality under new platform governance

Burak Özturan, Alexi Quintana-Mathé, Nir Grinberg, Katherine Ognyanova and David Lazer

Following the leadership transition on October 27, 2022, Twitter/X underwent a notable change in platform governance. This study investigates how these changes influenced information quality for registered U.S. voters and for the platform more broadly. We address this question by analyzing two complementary datasets: a Twitter panel and a Decahose sample.

Disagreement as a way to study misinformation and its effects

Damian Hodel and Jevin D. West

Experts consider misinformation a significant societal concern because of associated problems such as political polarization, erosion of trust, and public health challenges. However, these broad effects can occur independently of misinformation, revealing a mismatch between these concerns and the narrow focus of the prevailing misinformation concept.

State media tagging does not affect perceived tweet accuracy: Evidence from a U.S. Twitter experiment in 2022

Claire Betzer, Montgomery Booth, Beatrice Cappio, Alice Cook, Madeline Gochee, Benjamin Grayzel, Leyla Jacoby, Sharanya Majumder, Michael Manda, Jennifer Qian, Mitchell Ransden, Miles Rubens, Mihir Sardesai, Eleanor Sullivan, Harish Tekriwal, Ryan Waaland and Brendan Nyhan

State media outlets spread propaganda disguised as news online, prompting social media platforms to attach state-affiliated media tags to their accounts. Do these tags reduce belief in state media misinformation? Previous studies suggest the tags reduce misperceptions, but they focus on Russia and do not compare the tags with other interventions.

How alt-tech users evaluate search engines: Cause-advancing audits

Evan M. Williams and Kathleen M. Carley

Search engine audit studies—where researchers query a set of terms in one or more search engines and analyze the results—have long been instrumental in assessing the relative reliability of search engines. However, on alt-tech platforms, users often conduct a different form of search engine audit.

Google allows advertisers to target the sensitive informational queries of cancer patients

Marco Zenone, Alessandro Marcon, Nora Kenworthy, May van Schalkwyk, Timothy Caulfield, Greg Hartwell and Nason Maani

Alternative cancer treatments are associated with a shorter time to death when used without evidence-based treatments. Our study suggests that alternative cancer clinics offering scientifically unsupported treatments spent an estimated $15,839,504 on Google ads targeting users in the United States from 2012 to 2023.
