We analyzed community guidelines and official news releases and blog posts from 12 leading social media and messaging platforms (SMPs) to examine their responses to COVID-19 misinformation. While the majority of platforms stated that they prohibited COVID-19 misinformation, the responses of many platforms lacked clarity and transparency. Facebook, Instagram, YouTube, and Twitter had largely consistent responses, but other platforms varied with regard to types of content prohibited, criteria guiding responses, and remedies developed to address misinformation. Only Twitter and YouTube described their systems for applying various remedies. These differences highlight the need to establish general standards across platforms to address COVID-19 misinformation more cohesively.
- Do SMPs prohibit COVID-19 misinformation? What types of COVID-19 content do they prohibit, and what criteria do they use to inform action on misleading content?
- What remedies have SMPs developed to address COVID-19 misinformation? How are different remedies applied?
- We conducted a content analysis of community guidelines, official news releases, and blog posts published between February 1, 2020, and April 1, 2021, for 10 social media platforms (Facebook, YouTube, Twitter, Instagram, Reddit, Snapchat, LinkedIn, TikTok, Tumblr, and Twitch) and two messaging platforms (Messenger and WhatsApp). This initial analysis was updated by a rapid review of the same data sources on November 23, 2021.
- The majority of the SMPs explicitly prohibited COVID-19 misinformation (N = 8), but only four (Facebook, Instagram, YouTube, and Twitter) had an independent COVID-19 misinformation policy. Twitch, Tumblr, Messenger, and WhatsApp did not prohibit COVID-19 misinformation.
- The majority of SMPs (N = 10) developed one or more remedies to address misinformation and connect users with credible information. Remedies included soft measures, such as attaching warning labels, and hard measures, such as content removal and account bans. Only YouTube and Twitter described their systems for applying these different remedies, with progressively harsher remedies applied to repeat violators.
- The lack of clarity and transparency from many SMPs regarding their responses to COVID-19 misinformation and systems for applying various remedies makes it difficult for policymakers, researchers, and the general public to determine whether platforms are doing enough to address COVID-19 misinformation.
- Establishing a general set of policies and practices might be necessary to address COVID-19 misinformation in the broader social media ecosystem.
Efforts to contain the COVID-19 pandemic have been complicated by a parallel infodemic, defined as an overabundance of information, including misinformation, which occurs during a disease outbreak (WHO, 2021). With over 3.8 billion users globally (DataReportal, 2021), social media and messaging platforms (hereafter referred to as SMPs) have become one of the major means of seeking and sharing COVID-19–related information. While the wide reach of SMPs has benefits in democratizing information access, it has also facilitated the rapid spread of mis- and disinformation (Cinelli et al., 2020; Kouzy et al., 2020). Examples of COVID-19 misinformation that have spread widely on SMPs include claims that 5G causes the virus and that COVID-19 vaccines alter DNA (Islam et al., 2021, 2020; Naeem et al., 2021). (In the interest of brevity, we use the term “misinformation” broadly in this paper to cover information and claims that are both intentionally and unintentionally false and misleading. As noted elsewhere in this paper, the platforms we examine define the scope of “misinformation” in various ways.) Such misinformation has been widely viewed and shared on SMPs (Nielsen et al., 2020). A recent study found that exposure to such misinformation was negatively associated with vaccination intention (Loomba et al., 2021), illustrating that misinformation has real consequences for containing the pandemic.
The magnitude of the COVID-19 pandemic has resulted in urgent calls from all sectors of society for SMPs to do more to address COVID-19 misinformation (Disinformation nation: Social media’s role in promoting extremism and misinformation, 2021; Donovan, 2020). However, SMPs face a number of challenges in determining how best to do this, including the massive volume of content and the nuanced nature of misinformation. Thus, SMPs will need to decide what types of content to prioritize, when to take action, and what type of action to take. Legal scholars have proposed typologies of content moderation remedies on internet platforms (Goldman, 2021; Grimmelmann, 2015). These include more lenient “soft” remedies (e.g., warning labels) and more stringent “hard” remedies (e.g., removing content) (Goldman, 2021; Grimmelmann, 2015). Given that the different types of remedies have different consequences for users, it is important to understand how platforms apply these remedies. Additionally, as many SMP users use multiple platforms (DataReportal, 2021), as newer platforms are gaining popularity, and as platforms are increasingly interconnected (e.g., TikTok videos can be shared on Facebook), effectively addressing misinformation requires responses from all platforms.
We examined responses to COVID-19 misinformation by 12 leading SMPs and found that the majority (eight out of 12) prohibited COVID-19 misinformation. This finding by itself is noteworthy, as for several SMPs, it represents a stark reversal from their previous stance on content regulation. As recently as two years ago, companies such as Facebook and Twitter refused to act on misinformation, arguing that doing so would infringe on free speech and was outside of their platform mission (Conger, 2019; Kang & Isaac, 2019). (Facebook, Inc. changed its name to Meta on October 28, 2021. As the company’s name was Facebook at the time of our initial analysis, we use the name Facebook when referring to Facebook, Inc.) While this change is encouraging, there is much room for improvement in how SMPs address COVID-19 misinformation.
Descriptions of the responses of many SMPs lacked clarity, and their implementation lacked transparency. Many SMPs did not clearly articulate the types of content prohibited. This might be partly attributable to the challenges inherent in moderating scientific content, which is dynamic and often equivocal (Baker et al., 2020). In the COVID-19 context, this has been reflected in evolving scientific positions regarding the nature of COVID-19 transmission (WHO, 2020) and lack of consensus on boosters (Krause et al., 2021), to give just a few examples. Challenges notwithstanding, clearly outlining prohibited content is important. It can help researchers assess whether the types of content SMPs prohibit correspond with the types of content associated with real-world harm. Such evidence could guide SMPs on prioritizing and updating the types of content they prohibit. For instance, Facebook initially did not prohibit personal anecdotes or first-person accounts, but evidence showed that this type of content might be contributing to vaccine hesitancy (Dwoskin, 2021). Facebook has since updated its policy on content related to COVID-19 vaccines and now reduces the distribution of alarmist or sensationalist content about vaccines. Additionally, few platforms have a specific COVID-19 misinformation policy. During an evolving emergency such as a pandemic, such a policy could make it possible for platforms to communicate their stance on various types of content, changes to their responses, and the consequences of violating policies in a transparent and accessible manner.
Similar to previous studies (Nunziato, 2020; Sanderson et al., 2021), we found that most SMPs employed a combination of soft and hard remedies to address misinformation. Most platforms, however, did not clearly describe their systems for applying these different types of remedies. Clear visibility of consequences can deter bad behavior (Seering et al., 2017), and the U.S. surgeon general’s report on health misinformation calls for platforms to “impose clear consequences for accounts that repeatedly violate platform policies” (Office of the Surgeon General, 2021). YouTube’s and Twitter’s strike systems are consistent with the responsive regulation approach, which advocates using soft remedies before escalating to hard remedies (Ayres & Braithwaite, 1992). Given the findings of a recent report that 12 people were responsible for 73% of misinformation on SMPs (Center for Countering Digital Hate, 2021; Dwoskin, 2021), this approach could be well suited to deter superspreaders and repeat violators. Interestingly, Facebook refuted the conclusion of that report, claiming that these 12 individuals were responsible for only 0.05% of all views of vaccine-related content on that platform (Bickert, 2021). Additional analyses by independent researchers could provide more evidence regarding the extent to which specific individuals contribute to the spread of misinformation and the effectiveness of responsive regulation in curbing the spread of misinformation. More transparency from platforms regarding their application of different remedies is key to determining whether they are doing enough to address misinformation.
Facebook, Instagram, YouTube, and Twitter had largely similar responses, but the responses of other platforms differed. There is recognition among scholars that a one-size-fits-all approach to content moderation is not practical as SMPs vary with regard to their functions, audiences, and capacity to moderate content (Gillespie et al., 2020; Goldman, 2021). For instance, the encrypted nature of communication on messaging platforms means that content moderation approaches for these platforms will need to preserve user privacy (Gillespie et al., 2020). Thus, total alignment of responses across platforms is not feasible. However, previous research has shown that even if some platforms act on misinformation, it can continue to spread on other platforms (Sanderson et al., 2021). Additionally, conspiracy theorists might be migrating to alternative platforms (e.g., Parler, Twitch) as a result of mainstream platforms (e.g., Facebook, YouTube) cracking down on misinformation (Browning, 2021). Therefore, some degree of synergy in platform responses might be necessary to address COVID-19 misinformation in the broader social media ecosystem. Public–private co-regulation models are considered well suited to achieve this (Gillespie, 2018; Gorwa, 2019). Platforms could collaborate for mutual benefit and have taken steps in that direction (Shu & Shieber, 2020). Public policies could outline general standards that apply to all SMPs.
While the policies and remedies identified by this analysis represent an important first step, enforcement is vital for them to have an impact. Some platforms, such as Facebook, Instagram, YouTube, and Twitter, release some enforcement metrics in their transparency reports, such as the number of posts taken down or accounts disabled. However, without a meaningful denominator, it is impossible to determine the extent to which these policies are being enforced. External analyses indicate poor enforcement of misinformation policies by SMPs. For instance, one analysis found that a large proportion of content rated false by fact-checkers remained up on Twitter (59%), and a substantial amount also remained up on YouTube (27%) and Facebook (24%) (Brennen et al., 2020). A number of analyses have examined enforcement of misinformation policies by Facebook and identified several gaps. One report found that only 16% of all health information analyzed on Facebook had a warning label (Avaaz, 2020a). Even when Facebook took action, it was slow in doing so, taking up to 22 days to downgrade or attach warning labels to misleading content (Avaaz, 2020b). There are also significant language gaps in enforcement of policies. One analysis found that approximately 70% of misleading COVID-19 content in Spanish and Italian (vs. 29% of misleading content in English) had not received warning labels on Facebook (Avaaz, 2020b). Enforcement also varies by type of content. Although Facebook’s vaccine misinformation policy applies to all content on the platform, warning labels are not applied to content in the comments section. A case study found that misinformation is rife in the Facebook comments section, with roughly one in five comments found to contain misinformation about the vaccines or the pandemic (Chan et al., 2021).
Additionally, recent revelations in the Facebook Papers, such as the special treatment Facebook gave to high-profile users when it came to content moderation (Allyn et al., 2021), undermine the credibility of any enforcement claims made by SMPs. There are growing calls for platforms to make their responses more transparent and their data more easily available to researchers (Doctors for America, 2021; MacCarthy, 2020; Office of the Surgeon General, 2021). Such independent monitoring of the enforcement of policies and remedies, and evaluation of specific remedies (e.g., labeling content) and their systems of application (e.g., strike systems) identified by this analysis, could hold SMPs accountable. By laying out COVID-19 misinformation responses and remedies used by a wide range of SMPs, this work can also serve as a starting point to help policymakers identify a general set of policies that might be applicable to all platforms, as well as those that might be applicable to similar types of platforms (e.g., messaging platforms).
Table 1 summarizes SMP responses to COVID-19 misinformation (see Appendix Table 1 for examples). The majority of SMPs explicitly prohibited COVID-19 misinformation (N = 8), but only four (Facebook, Instagram, YouTube, and Twitter) had an independent COVID-19 misinformation policy. Twitch, Tumblr, Messenger, and WhatsApp did not prohibit COVID-19 misinformation. Half the SMPs delineated one or more types of COVID-19 content that they prohibited (Table 1, Appendix Table 2). Facebook, Instagram, YouTube, and Twitter all prohibited misinformation regarding the nature of the virus, the efficacy and safety of prevention and treatment measures, COVID-19 vaccines, and restrictions and health advisories, as well as content that misrepresented data. LinkedIn prohibited content that fell under the first three of the above categories. TikTok did not publish a detailed list of types of COVID-19 misinformation it prohibited, but it stated that COVID-19 vaccine misinformation was prohibited. Snapchat and Reddit did not articulate the specific types of COVID-19 content they prohibited.
Most platforms (N = 9) listed at least one criterion they used to guide action on COVID-19–related content (Table 1, Appendix Table 3). A majority of SMPs indicated that they would take action on content that had the potential to cause significant harm (N = 7) or was inauthentic (N = 8). Five SMPs also acted on COVID-19 content if it contained claims that had been debunked by public health authorities. Some platforms outlined other criteria. For instance, Facebook and Instagram prohibited content that was considered to be hate speech or harassment, YouTube also acted on borderline content (i.e., content that came close to, but did not actually, violate its misinformation policy), and Twitter took action against misleading COVID-19 content that could cause social unrest or was expressed as an assertion of fact.
The majority of SMPs developed one or more remedies to address misinformation (N = 10) and proactively connect users with credible information (N = 10). Table 2 summarizes the former type of remedies (see Appendix Table 4 for examples). Seven SMPs used labels, warnings, notifications, and links. A different group of seven SMPs imposed restrictions on advertisements that had the potential to spread misinformation or cause harm. Eight SMPs modified their search and recommendation algorithms or disabled “like,” “share,” and forwarding features to reduce the visibility and distribution of misleading content. However, platforms differed slightly in how they used this remedy. For instance, Facebook and Instagram stated that they would reduce the distribution of content from users that had repeatedly shared misleading content or previously violated their policies, while YouTube also applied this remedy to borderline content. Eight SMPs removed some types of content, and seven SMPs temporarily suspended or permanently banned accounts or groups.
Most SMPs did not clearly describe the manner in which they implemented these wide-ranging remedies, except for YouTube and Twitter, which used a strike system. Under this system, individuals received strikes for each violation. For instance, YouTube provided a warning to first-time violators with no strikes. However, if a user had previously violated the COVID-19 content policy, each subsequent violation resulted in a strike, with three strikes leading to channel termination. On Twitter, content that was labeled received one strike, while content that was deleted received two strikes. For the first strike, no account-level action was taken. For each subsequent strike, the following actions were taken: 12-hour account lock (two or three strikes), seven-day account lock (four strikes), and permanent suspension (five or more strikes). Reddit did not have a formal strike system but implied that it also took action in a graduated manner by stating that “our goal is always to start with education and cooperation and only escalate to quarantine or ban if necessary.” Similarly, Facebook stated that profiles “that repeatedly post misinformation related to COVID-19, vaccines, and health may face restrictions, including (but not limited to) reduced distribution, removal from recommendations, or removal from our site,” but it was unclear on what basis these different remedies were applied.
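Twitter’s published strike ladder can be summarized as a mapping from a user’s cumulative strike count to an account-level action. The sketch below is purely illustrative; the names and structure are ours and are not drawn from any platform’s actual implementation:

```python
# Illustrative sketch of the five-strike ladder Twitter described for its
# COVID-19 misleading information policy: labeled content adds one strike,
# deleted content adds two, and account-level actions escalate with the total.
STRIKES_PER_REMEDY = {"label": 1, "delete": 2}  # strikes added per violation

def account_action(total_strikes: int) -> str:
    """Map a cumulative strike count to the account-level action described."""
    if total_strikes <= 1:
        return "no account-level action"
    if total_strikes <= 3:
        return "12-hour account lock"
    if total_strikes == 4:
        return "7-day account lock"
    return "permanent suspension"

# Example: a user whose content was labeled once and deleted twice
# accumulates 1 + 2 + 2 = 5 strikes.
strikes = STRIKES_PER_REMEDY["label"] + 2 * STRIKES_PER_REMEDY["delete"]
print(account_action(strikes))  # permanent suspension
```

The mapping makes the responsive-regulation logic explicit: early violations draw only soft, reversible consequences, and hard remedies are reserved for sustained repeat behavior.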
Actions to proactively connect users with credible information are summarized in Table 3 (see Appendix Table 5 for examples). Common measures included information curation (N = 9); labels, banners and links (N = 8); and Q&As with experts (N = 7). Six SMPs used health promotion campaigns to increase awareness and promote appropriate practices advised by health authorities, such as mask-wearing, handwashing, and social distancing. The nature of campaigns tended to vary by platform design. For instance, platforms that allowed users to post temporary stories, such as Instagram and Snapchat, launched stickers and filters to help people share recommended COVID-19 prevention practices and encourage vaccination. Video-based platforms, such as YouTube, launched video campaigns in collaboration with external public health partners to reach underserved populations. Meanwhile, Twitter promoted hashtags, such as #vaccinated and #WearAMask, and TikTok used hashtags to promote challenges, such as the #safehandschallenge. Facebook, Instagram, YouTube, Twitter, and LinkedIn described adjustments to their algorithms to increase visibility of authoritative content in searches and recommendations. These five platforms and TikTok also provided free advertising credits to public health partners for disseminating COVID-19 information.
Because of end-to-end encryption and the manner in which information is shared on messaging platforms, these platforms had different strategies for dealing with misinformation than traditional social media platforms. While Messenger and WhatsApp did not explicitly state that they prohibited COVID-19 misinformation, they implemented some actions to limit its spread. These included attaching forwarding labels to messages that did not originate with the sender and introducing forwarding limits to reduce the spread of viral messages. Additionally, both platforms collaborated with the WHO to provide users with accurate and timely information about COVID-19 via free messaging. WhatsApp also collaborated with external fact-checking organizations and used advanced machine learning approaches to identify and ban accounts engaged in mass messaging. It was unclear whether Facebook account bans also applied to Messenger.
Given the evolving SMP policy environment, we undertook a rapid review of community guidelines, news releases, and blog posts for these 12 SMPs on November 23, 2021, to determine whether there were any changes to their responses to COVID-19 misinformation since our initial analysis. While the majority of SMPs did not have major updates to their responses, a few platforms had some noteworthy changes. Facebook had begun to reduce the distribution of misleading vaccine-related content that did not violate its policies but had the potential to discourage vaccinations. Facebook also stated that it would target certain remedies at misinformation superspreaders. These primarily included soft remedies, such as providing a label or warning when users liked a page found to repeatedly share misinformation, decreasing visibility of posts in the News Feed from individual Facebook accounts repeatedly spreading misinformation, and notifying users that their posts might be demoted in the News Feed if they repeatedly shared misinformation. Facebook also applied some hard remedies, such as removal of some Facebook pages and groups and Instagram accounts linked to 12 high-volume superspreaders identified in an external report (Center for Countering Digital Hate, 2021), despite questioning the findings of the report.
Other platforms that have updated or provided more transparency about their responses to addressing COVID-19 misinformation since our initial analysis are YouTube, Snapchat, Twitter, and Reddit. YouTube extended its ban on COVID-19 vaccine misinformation to misinformation about all vaccines that are approved and considered safe and effective by the WHO and other health authorities. In August 2021, Snapchat released more information regarding its approach to handling misinformation, which differs from other platforms’ approaches in a few ways. For example, Snapchat’s newsfeed is proactively moderated; group chats are not recommended by algorithms; and, rather than applying soft remedies, all content that violates guidelines is removed. Twitter announced that it was testing a new feature that allowed users to report content as misleading. Additionally, Twitter began collaborating with the Associated Press and Reuters to provide more context to conversations and improve information curation. Reddit reversed its decision to not act on misinformation by banning a subreddit. However, the subreddit was banned on the grounds of harassment rather than misinformation, and it remains to be seen whether Reddit takes further steps to address misinformation.
We conducted a content analysis of documents from 10 leading social media platforms and two messaging platforms in the United States (U.S.). There is no single database that monitors the volume of users across all SMPs, making it difficult to rank platforms in terms of popularity. However, a recent nationally representative survey assessed prevalence of use of 10 social media platforms (Facebook, YouTube, Twitter, Instagram, Reddit, Snapchat, LinkedIn, TikTok, Tumblr, and Twitch) and one messaging platform (WhatsApp) (Shearer & Mitchell, 2021). Prevalence of use ranged from 74% for YouTube to 4% for Tumblr (Shearer & Mitchell, 2021). This group of SMPs was chosen because prevalence data was available. (We analyzed Messenger independently from Facebook, as Messenger is a messaging platform, and approaches to addressing misinformation by messaging platforms will likely differ from approaches used by social media platforms. Hence, our sample consisted of 12 SMPs.)
Development of the codebook
As there are no well-established frameworks for misinformation classification and intervention in the digital environment, a codebook was developed using an inductive approach. One analyst reviewed documents from a single platform (Twitter) and developed an initial codebook.
Description of codes
- Response to COVID-19 misinformation: Codes under this category included whether the platform prohibited COVID-19 misinformation and whether it had a COVID-19 misinformation policy.
- Types of COVID-19 misinformation prohibited: included false or misleading content about (i) the nature of the virus (including the existence, origin, causes, diagnosis, and transmission of COVID-19), (ii) the efficacy and safety of prevention and treatment measures (e.g., hydroxychloroquine), (iii) COVID-19 vaccines, (iv) restrictions and health advisories (including established mitigation measures such as masks, social distancing and handwashing), and (v) misrepresentation of data (e.g., prevalence of COVID-19 in an area or availability of resources such as hospital beds).
- Criteria used to inform action on COVID-19 related content: included (i) debunking of claims by public health authorities, (ii) potential of the content to lead to significant harm, (iii) authenticity of content (e.g., platform manipulation, fake accounts, or deep fakes), and (iv) other.
- Actions to address misinformation: included (i) labels, warnings, and notifications to inform users that the content they are viewing or sharing is misleading, and links to credible organizations such as the World Health Organization (WHO); (ii) decreasing visibility and spread of misleading content by lowering such content in searches and newsfeeds, and by restricting a user’s ability to engage with or share such content; (iii) content removal; (iv) temporary account bans or permanent account suspensions; and (v) advertising restrictions to limit the promotion of unverified cures or products with unverified claims regarding their prevention and treatment efficacy.
- Actions to promote access to credible information: included (i) information curation, whereby SMPs curate and compile credible information that is easily accessible to users (e.g., a resource center or newsletter); (ii) health promotion and communication campaigns to promote evidence-based measures to contain the pandemic, such as vaccination, social distancing, and mask-wearing; (iii) labels, banners and links to credible organizations, such as WHO or curated information hubs within the platform proactively provided to users; (iv) increasing visibility of authoritative content by elevating such content in searches and newsfeeds; (v) Q&As with public health experts through chats and live streams; and (vi) advertising credits to government and public health organizations to disseminate COVID-19 information.
Coding and analysis
For each SMP, each of the codes described above was marked as present if it was described in their documents at least once, and absent if it was not mentioned. As we were interested in identifying all types of responses used by SMPs to address COVID-19 misinformation, we coded a policy or remedy as present if it was in effect at any point during our time frame of interest (February 1, 2020, to April 1, 2021), even if it was subsequently removed. It should be noted that Instagram, Messenger, and WhatsApp are owned by Facebook, and while some Facebook policies apply to all these platforms, it is not clear which do. Therefore, we treated all these platforms as separate entities. When coding for Instagram, Messenger, and WhatsApp, we applied text segments from Facebook’s documents to these platforms only if the text explicitly referenced the platforms. Coding and analysis occurred in an iterative fashion. Two analysts independently coded documents from six SMPs using the initial version of the codebook, met to discuss their findings, and modified the codebook. Both analysts then independently coded documents from all 12 SMPs using the modified codebook. Cohen’s kappa was calculated to assess intercoder reliability for coding pertaining to each table and for all tables combined, with raters achieving high agreement (overall κ: 0.94, 95% CI: 0.90–0.99; Table 1 κ: 0.95, 95% CI: 0.89–1.00; Table 2 κ: 0.96, 95% CI: 0.88–1.00; Table 3 κ: 0.93, 95% CI: 0.83–1.00). For Table 1, the code labeled “other” was not included in the calculation of κ, as this code was added to the codebook and defined through discussion between the two analysts after the second round of coding. The analysts met again to discuss their coding, made minor changes to the codebook, and resolved all disagreements through discussion. The presence of each code was summed across platforms to facilitate comparisons.
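The intercoder reliability statistic used here can be computed from first principles. The sketch below calculates Cohen’s kappa for two coders’ binary present/absent judgments; the example ratings are hypothetical and do not reproduce our actual coding data:

```python
# Cohen's kappa for two coders' binary (present = 1 / absent = 0) codes:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) and coder_a  # equal-length, non-empty
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over labels of the product of each coder's
    # marginal proportion for that label.
    labels = set(coder_a) | set(coder_b)
    expected = sum(
        (coder_a.count(label) / n) * (coder_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical ratings for 10 codes: 8/10 raw agreement drops to kappa ≈ 0.52
# once agreement expected by chance is accounted for.
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.52
```

The correction for chance agreement is what distinguishes kappa from simple percent agreement, which is why reporting kappa (with confidence intervals, as above) is the standard for content analyses of this kind.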
A few limitations must be noted. We did not analyze news articles, which might have captured additional SMP responses and remedies not noted on their websites. However, media coverage could be uneven and biased towards bigger platforms, such as Facebook and Twitter. We therefore decided to limit our analysis to documents published by SMPs. Some of these policies were dynamic and changed over the course of the pandemic, but our analysis was not designed to capture such changes over time. Lack of clarity from Facebook regarding how its Community Standards and Community Guidelines relate to its other platforms, such as Instagram, may have resulted in miscoding certain responses and remedies as absent when they were actually present on these platforms. It is possible that our search strategy missed certain documents, which could also have resulted in miscoding certain responses or remedies as absent when they might have been present. Finally, the descriptive nature of this analysis precludes us from drawing any conclusions about the effectiveness or implementation of policies for any platform, but these findings can inform future work on these topics.