Hide and seek: The connection between false beliefs and perceptions of government transparency
Mathieu Lavigne, Éric Bélanger, Richard Nadeau, Jean-François Daoust and Erick Lachapelle
This research examines how false beliefs shape perceptions of government transparency in times of crisis. Measuring transparency perceptions using both closed- and open-ended questions drawn from a Canadian panel survey, we show that individuals holding false beliefs about COVID-19 are more likely to have negative perceptions of government transparency.

A story of (non)compliance, bias, and conspiracies: How Google and Yandex represented Smart Voting during the 2021 parliamentary elections in Russia
Mykola Makhortykh, Aleksandra Urman and Mariëlle Wijermars
On 3 September 2021, a Russian court forbade Google and Yandex from displaying search results for “Smart Voting,” a query referring to a tactical voting project by the jailed Russian opposition leader Alexei Navalny. To examine whether the two search engines complied with the court order, we collected top search outputs for the query from Google and Yandex.

Ridiculing the “tinfoil hats:” Citizen responses to COVID-19 misinformation in the Danish facemask debate on Twitter
Nicklas Johansen, Sara Vera Marjanovic, Cathrine Valentin Kjaer, Rebekah Brita Baglini and Rebecca Adler-Nissen
We study how citizens engage with misinformation on Twitter in Denmark during the COVID-19 pandemic. We find that misinformation regarding facemasks is not corrected through counter-arguments or fact-checking. Instead, many tweets rejecting misinformation use humor to mock misinformation spreaders, whom they pejoratively label wearers of “tinfoil hats.”

Chinese state media Facebook ads are linked to changes in news coverage of China worldwide
Arjun M. Tambe and Toni Friedman
We studied the effect of Facebook advertisements from Chinese state media on the global media environment by examining the link between advertisements and online news coverage of China by other countries. We found that countries that see a large increase in views of Facebook advertisements from Chinese state media also see news coverage of China become more positive.

Digital literacy is associated with more discerning accuracy judgments but not sharing intentions
Nathaniel Sirlin, Ziv Epstein, Antonio A. Arechar and David G. Rand
It has been widely argued that social media users with low digital literacy—who lack fluency with basic technological concepts related to the internet—are more likely to fall for online misinformation, but surprisingly little research has examined this association empirically. In a large survey experiment involving true and false news posts about politics and COVID-19, we found that digital literacy is indeed an important predictor of the ability to tell truth from falsehood when judging headline accuracy.

Misinformation interventions are common, divisive, and poorly understood
Emily Saltz, Soubhik Barari, Claire Leibowicz and Claire Wardle
Social media platforms label, remove, or otherwise intervene on thousands of posts containing misleading or inaccurate information every day. Who encounters these interventions, and how do they react? A demographically representative survey of 1,207 Americans reveals that 49% have been exposed to some form of online misinformation intervention.

Vaccine hesitancy in online spaces: A scoping review of the research literature, 2000-2020
Timothy Neff, Jonas Kaiser, Irene Pasquetto, Dariusz Jemielniak, Dimitra Dimitrakopoulou, Siobhan Grayson, Natalie Gyenes, Paola Ricaurte, Javier Ruiz-Soler and Amy Zhang
We review 100 articles published from 2000 to early 2020 that research aspects of vaccine hesitancy in online communication spaces and identify several gaps in the literature prior to the COVID-19 pandemic. These gaps relate to five areas: disciplinary focus; specific vaccine, condition, or disease focus; stakeholders and implications; research methodology; and geographical coverage.

Review of social science research on the impact of countermeasures against influence operations
Laura Courchesne, Julia Ilhardt and Jacob N. Shapiro
Despite ongoing discussion of the need for increased regulation and oversight of social media, as well as debate over the extent to which the platforms themselves should be responsible for containing misinformation, there is little consensus on which interventions work to address the problem of influence operations and disinformation campaigns.

The battleground of COVID-19 vaccine misinformation on Facebook: Fact checkers vs. misinformation spreaders
Aimei Yang, Jieun Shin, Alvin Zhou, Ke M. Huang-Isherwood, Eugene Lee, Chuqing Dong, Hye Min Kim, Yafei Zhang, Jingyi Sun, Yiqi Li, Yuanfeixue Nan, Lichen Zhen and Wenlin Liu
Our study examines Facebook posts containing nine prominent COVID-19 vaccine misinformation topics that circulated on the platform between March 1st, 2020 and March 1st, 2021. We first identify misinformation spreaders and fact checkers. A fact checker in our study is defined as any public account (including both individual and organizational accounts) that posts factual information about the COVID-19 vaccine or posts debunking information about COVID-19 vaccine misinformation.

Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform
Zeve Sanderson, Megan A. Brown, Richard Bonneau, Jonathan Nagler and Joshua A. Tucker
We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels.