Explore All Articles

Information control on YouTube during Russia’s invasion of Ukraine
Yevgeniy Golovchenko, Kristina Aleksandrovna Pedersen, Jonas Skjold Raaschou-Pedersen and Anna Rogers
This research note investigates the aftermath of YouTube’s global ban on Russian state-affiliated media channels in the wake of Russia’s full-scale invasion of Ukraine in 2022. Using over 12 million YouTube comments across 40 Russian-language channels, we analyzed the effectiveness of the ban and the shifts in user activity before and after the platform’s intervention.

The unappreciated role of intent in algorithmic moderation of abusive content on social media
Xinyu Wang, Sai Koneru, Pranav Narayanan Venkit, Brett Frischmann and Sarah Rajtmajer
A significant body of research is dedicated to developing language models that can detect various types of online abuse, such as hate speech and cyberbullying. However, there is a disconnect between platform policies, which often consider the author’s intention as a criterion for content moderation, and the current capabilities of detection models, which typically do not attempt to capture intent.

Who reports witnessing and performing corrections on social media in the United States, United Kingdom, Canada, and France?
Rongwei Tang, Emily K. Vraga, Leticia Bode and Shelley Boulianne
Observed corrections of misinformation on social media can encourage more accurate beliefs, but for these benefits to occur, corrections must happen. By exploring people’s perceptions of witnessing and performing corrections on social media, we find that many people say they observe and perform corrections across the United States, the United Kingdom, Canada, and France.

User experiences and needs when responding to misinformation on social media
Pranav Malhotra, Ruican Zhong, Victor Kuan, Gargi Panatula, Michelle Weng, Andrea Bras, Connie Moon Sehat, Franziska Roesner and Amy Zhang
This study examines the experiences of those who participate in bottom-up, user-led responses to misinformation on social media and outlines how they can be better supported via software tools. Findings show that users want tools that minimize the time and effort of identifying misinformation and that provide tailored suggestions for crafting responses that account for emotional and relational context.

Did the Musk takeover boost contentious actors on Twitter?
Christopher Barrie
After his acquisition of Twitter, Elon Musk pledged to overhaul its verification and moderation policies. These events sparked fears of a rise in the influence of contentious actors, notably from the political right. I investigated whether these actors did receive increased engagement over this period by gathering tweet data for accounts that purchased blue-tick verification before and after the Musk takeover.

How effective are TikTok misinformation debunking videos?
Puneet Bhargava, Katie MacDonald, Christie Newton, Hause Lin and Gordon Pennycook
TikTok provides an opportunity for citizen-led debunking, where users correct other users’ misinformation. In the present study (N=1,169), participants watched and rated the credibility of (1) a misinformation video, (2) a correction video, or (3) a misinformation video followed by a correction video (“debunking”).

Examining accuracy-prompt efficacy in combination with using colored borders to differentiate news and social content online
Venya Bhardwaj, Cameron Martel and David G. Rand
Recent evidence suggests that prompting users to consider the accuracy of online posts increases the quality of news they share on social media. Here we examine how accuracy prompts affect user behavior in a more realistic context, and whether their effect can be enhanced by using colored borders to differentiate news from social content.

Measuring the effect of Facebook’s downranking interventions against groups and websites that repeatedly share misinformation
Emmanuel M. Vincent, Héloïse Théro and Shaden Shabayek
Facebook claims to fight misinformation, notably by reducing the virality of posts shared by “repeat offender” websites. The platform recently extended this policy to groups. We identified websites and groups that repeatedly publish false information according to fact-checkers and investigated the implementation and impact of Facebook’s measures against them.

Research note: Examining how various social media platforms have responded to COVID-19 misinformation
Nandita Krishnan, Jiayan Gu, Rebekah Tromble and Lorien C. Abroms
We analyzed community guidelines, official news releases, and blog posts from 12 leading social media and messaging platforms (SMPs) to examine their responses to COVID-19 misinformation. While most platforms stated that they prohibited COVID-19 misinformation, many of their responses lacked clarity and transparency.