The battleground of COVID-19 vaccine misinformation on Facebook: Fact checkers vs. misinformation spreaders

Our study examines Facebook posts containing nine prominent COVID-19 vaccine misinformation topics that circulated on the platform between March 1, 2020 and March 1, 2021. We first identify misinformation spreaders and fact checkers, further dividing the latter group into those who repeat misinformation to debunk the false claim and those who share correct information without repeating the misinformation. Our analysis shows that, on Facebook, there are almost as many fact checkers as misinformation spreaders. In particular, fact checkers’ posts that repeat the original misinformation received significantly more comments than posts from misinformation spreaders. However, we found that misinformation spreaders were far more likely to take on central positions in the misinformation URL cosharing network than fact checkers. This demonstrates the remarkable ability of misinformation spreaders to coordinate communication strategies across topics.


Essay summary
• This study used social network analysis and ANOVA tests to analyze English-language posts on public Facebook accounts that mentioned COVID-19 vaccine misinformation between March 1, 2020 and March 1, 2021, and user reactions to such posts.
• Our analysis found that approximately half of the posts (46.6%) that discussed COVID-19 vaccines were misinformation, and the other half (47.4%) were fact-checking posts. Of the fact-checking posts, 28.5% repeated the original false claim within their correction, while 18.9% listed facts without repeating the misinformation.
• Additionally, we found that people were more likely to comment on fact-checking posts that repeated the original false claims than on other types of posts.
• Fact checkers' posts were mostly connected with other fact checkers rather than with misinformation spreaders.
• The accounts with the largest number of connections, and those connected with the most diverse contacts, were fake news accounts, Trump-supporting groups, and anti-vaccine groups.
• This study suggests that when public accounts debunk misinformation on social media, repeating the original false claim in the debunking post can be an effective strategy, at least for generating user engagement.
• Organizational and individual fact checkers need to strategically coordinate their actions, diversify their connections, and occupy more central positions in URL co-sharing networks. They can achieve these goals through network intervention strategies such as promoting similar URLs as a fact checker community.

Overview
The spread of misinformation on social media has been identified as a major threat to public health, particularly to the uptake of COVID-19 vaccines (Burki, 2020; Loomba et al., 2021). The World Health Organization warned that the "infodemic," that is, the massive dissemination of false information, is one of the "most concerning" challenges of our time alongside the pandemic. Rumor-mongering in times of crisis is nothing new. However, social media platforms have exacerbated the problem to a different level. Social media's network features can easily amplify the voice of conspiracy theorists and give credence to fringe beliefs that would otherwise remain obscure. These platforms can also function as an incubator for anti-vaxxers to circulate ideas and coordinate their offline activities (Wilson & Wiysonge, 2020). Our study addresses a timely public health concern regarding misinformation about the COVID-19 vaccine circulating on social media and provides four major implications that can inform the strategies fact checkers adopt to combat misinformation. Our findings also have implications for public health authorities and social media platforms in devising intervention strategies. Each of the four implications is discussed below.

Prevalence of COVID-19 misinformation
First, our study confirms the prevalence of COVID-19 vaccine misinformation. Approximately 10% of COVID-19 vaccine-related engagement (e.g., comments, shares, likes) on Facebook went to posts containing misinformation. A close look at the posts shared by public accounts containing vaccine misinformation suggests that there are about equivalent numbers of posts spreading misinformation and combating such rumors. This finding contrasts with prior research describing a misinformation landscape in which fact checkers are heavily outnumbered by anti-vaxxers (Evanega et al., 2020; Shin & Valente, 2020; Song & Gruzd, 2017). This result may be because we focused on popular misinformation narratives that received much attention from fact checkers and health authorities. Additionally, due to social pressure, social media platforms such as Facebook have been taking action to suspend influential accounts that share vaccine-related misinformation. We acknowledge that fact-checking posts do not necessarily translate into better-informed citizens. Prior research points to the limitations of fact-checking in that fact-checking posts are selectively consumed and shared by those who already agree with the post (Brandtzaeg et al., 2018; Shin & Thorson, 2017). Thus, more efforts should be directed towards reaching a wider audience and moving beyond preaching to the choir. Nonetheless, our study reveals a silver lining: social media platforms can serve as a battleground for fact checkers and health officials to combat misinformation and share facts. This finding calls for social media platforms and fact checkers to continue their proactive approach by providing regular fact-checking, promoting verified information, and educating the public about public health knowledge.

Repeater fact checkers are most engaging
Second, our study reveals that, on social media, fact checkers' posts that repeated the misinformation were significantly more likely to receive comments than misinformation spreaders' posts. It is likely that posts containing both misinformation and facts are more complex and interesting, and therefore invite audiences to comment and even discuss the topics with each other. In contrast, one-sided posts, such as pure facts or straightforward misinformation, may leave little room for debate. This finding offers some evidence that fact-checking can be more effective in triggering engagement when it includes the original misinformation. Future research may further examine whether greater engagement leads to cognitive benefits such as long-term recollection of vaccine facts.
This finding also has implications for fact checkers. One concern for fact checkers has been whether to repeat the original false claim in a correction. Until recently, practitioners were advised not to repeat the false claim for fear of backfire effects, whereby exposure to the false claim within the correction inadvertently makes the misconception more familiar and memorable. However, recent studies show that backfire effects are minimal (Ecker et al., 2020; Swire-Thompson et al., 2020). Our analysis, along with other recent studies, suggests that repetition can be used in fact-checking, as long as the false claim is clearly and saliently refuted.

Non-repeater fact checkers' posts tend to trigger sad reactions
Third, our study finds that posts that provide fact-checking without repeating the original misinformation are the most likely to trigger sad reactions. Emotions are an important component of how audiences respond to and process misinformation. Extensive research shows that emotional events are remembered better than neutral events (Scheufele & Krause, 2019; Vosoughi et al., 2018). In addition, the misinformation literature has well documented the interactions between misinformation and emotion. For example, Scheufele and Krause (2019) found that people who felt anger from misinformation were more likely to accept it. Vosoughi et al. (2018) found that misinformation elicited more surprise and attracted more attention than non-misinformation, which may be explained by a potentially evolved human attraction to novelty. Vosoughi et al. (2018) also found that higher sadness responses were associated with truthful information. Our study finds similar results in that non-repeater fact checkers' posts are significantly more likely to trigger feelings of sadness. One possible explanation is that the sad reaction may be associated with the identities (the type of accounts, such as nonprofits, media, etc.) of post providers. Our analysis shows that, among all types of accounts, healthcare organizations and government agencies are most likely to provide fact-checking without repeating the original misinformation. The sadness reaction may be a sign of declining public trust in these institutions or of growing pessimism over the pandemic. Future studies may compare a range of posts provided by healthcare organizations and government agencies to see if their posts generally receive more sad reactions. Overall, since previous studies suggest that negative emotions often lead people's memories to distort facts (Porter et al., 2010), it is likely that non-repeaters' posts may not lead to desirable outcomes in the long run.
Taken together, our findings suggest that, on a platform such as Facebook, fact-checking with repetition may be an effective messaging strategy for achieving greater user engagement. Despite the potential to cause confusion, the benefits may outweigh the costs.

Network disparity
Finally, our study finds that, despite the considerable presence of fact checkers in terms of their absolute numbers, misinformation spreaders are much better coordinated and strategic. It is important to note that the spreading and consumption of misinformation is embedded in the complex networks connecting information and users on social media (Budak et al., 2011). URLs are often incorporated into Facebook posts to provide in-depth information or further evidence to support post providers' views. It is a way for partisans or core community members to express their partisanship and promote their affiliated groups or communities. The structure of the URL network is instrumental for building the information warehouses that power selective information sharing.
We find that those public accounts that spread misinformation display a strong community structure, likely driven by common interests or shared ideologies. In comparison, public accounts engaging in fact-checking seem to mainly react to different misinformation while lacking coordination in their rebuttals. Johnson et al. (2020) found that anti-vaccination clusters on Facebook occupied central network positions, whereas pro-vaccination clusters were more peripheral and confined to small patches. Consistently, our study also finds this alarming structural pattern, which suggests that the posts of misinformation spreaders could penetrate more diverse social circles and reach broader audiences.
This network perspective is vital to examining misinformation on social media since misinformation on platforms such as Facebook and Twitter requires those structural conduits in order to permeate through various social groups, while fact checkers also need networks to counter misinformation with their posts (Del Vicario et al., 2016). Thus, contesting for strategic network positions is important because such positions allow social media accounts to bridge different clusters of publics and facilitate the spread of their posts.
Based on this finding, social media platforms might need to purposely break the network connections of misinformation spreaders by banning or removing some of the most central URLs. In addition, fact checkers should better coordinate their sharing behavior, and boost the overall centrality and connectivity of their content by embracing, for instance, the network features of social media, and leveraging the followership of diverse contacts to break insular networks. Fact checkers should go beyond simply reacting to misinformation. Such a reactive, "whac-a-mole" approach may largely explain why fact checkers' networks lack coordination and central structure. Instead, fact checkers may coordinate their efforts to highlight some of the most important or timely facts proactively. This recommendation extends beyond the COVID-19 context and applies to efforts aimed at combating misinformation in general (e.g., political propaganda and disinformation campaigns).

Findings
Finding 1: The landscape of misinformation and fact-checking posts is very much intertwined.
We found that the landscape of vaccine misinformation on Facebook was almost split in half between misinformation spreaders and fact checkers. 46.6% of information sources (N = 707) addressing COVID-19 vaccine misinformation were misinformation spreaders, referring to accounts that distribute false claims about the COVID-19 vaccine without correcting them. The other 47.4% were fact checkers, with 28.5% (N = 462) repeating the original misinformation and 18.9% (N = 307) reporting facts without repeating it (see Figure 1 for example posts). The remaining 3.5% of accounts had been deleted by the time of data analysis.
In addition, among public accounts that discussed COVID-19 vaccine misinformation, 81.5% were organizational accounts, with nonprofits (25.1%) and media (21.7%) as the most prominent organization types (media here is broadly defined to include any public account that claims to be news media or perform news media functions based on its self-generated description). A factorial ANOVA (i.e., an analysis of variance test that includes more than one independent variable, or "factor") found significant differences among organization types in their attitudes towards misinformation (F(9, 1428) = 29.57, p < .001). Healthcare agencies and government agencies were most likely to be fact checkers who did not repeat misinformation, whereas anti-vaxxers and the news media were most likely to be misinformation spreaders. Among individual public accounts (15.9%), the most prominent individuals were journalists (4.1%) and politicians (2.1%).
Finding 2: Different sources' posts yielded different emotional and behavioral engagement.

We also found that different information sources' posts yielded different emotional and behavioral engagement outcomes. Under each Facebook post, the public could respond by clicking on different emojis. Emoji reactions are mutually exclusive, meaning that if a user clicks, for instance, on the sad emoji, they cannot click on another emoji, such as haha. Specifically, we ran an ANOVA test to see if there were significant differences in the public responses to posts from different sources. Among behavioral responses, we found significant differences in terms of comments (F(3, 732) = 2.863, p = .036). A Tukey post-hoc test (used to assess the significance of differences between pairs of group means) revealed a significant difference (p = .003) between the number of public comments on misinformation spreaders (M = 1.114) and fact checkers who repeat (M = 1.407). That is, the publics were more likely to comment on posts from fact checkers who repeat misinformation and then correct it. We also found that different types of misinformation posts yielded different public emotional responses. Among emotional responses, there was a significant difference in terms of the sadness reaction (F(3, 355) = 3.308, p = .02). A Tukey post-hoc test revealed a significant difference (p = .003) between how people responded with the sad emoji to misinformation spreaders (M = .617) and non-repeater fact checkers (M = .961). Another significant difference (p = .031) was observed between how people responded to non-repeater fact checkers (M = .961) and fact checkers who repeated misinformation (M = .682). In general, Facebook users were most likely to respond with the sad emoji to non-repeater fact checkers.
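The ANOVA logic behind these group comparisons can be sketched from scratch. The following is a minimal illustration with hypothetical per-post comment counts (not the study's data); the three group names mirror the account types, and the F statistic is computed directly from between- and within-group sums of squares.

```python
# Hedged sketch: a one-way ANOVA computed from scratch on hypothetical data.
# The study used a factorial ANOVA plus Tukey post-hoc tests; this shows only
# the basic F-statistic mechanics on three groups of comment counts.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of sample lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted squared deviations of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations of observations from their group mean.
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical comment counts per post for the three account types.
spreaders = [1.0, 1.2, 0.9, 1.3, 1.1]
repeaters = [1.5, 1.4, 1.3, 1.6, 1.2]
non_repeaters = [1.1, 1.0, 1.2, 0.9, 1.3]

f_stat, df_b, df_w = one_way_anova([spreaders, repeaters, non_repeaters])
print(f"F({df_b}, {df_w}) = {f_stat:.3f}")  # → F(2, 12) = 6.000
```

In practice one would use a statistics library (e.g., `scipy.stats.f_oneway` with `scipy.stats.tukey_hsd` for the pairwise comparisons) rather than hand-rolled sums of squares.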

Finding 3: Different types of accounts held different network positions.
Results showed that different types of accounts held different network positions on Facebook. More specifically, as Figure 2 illustrates, we found that misinformation spreaders (green dots) occupied the most coordinated and centralized positions in the whole network, whereas fact checkers with repetitions (yellow dots) took peripheral positions. Importantly, fact checkers without repetitions (red dots, and many of them are healthcare organizations and government agencies) were mostly talking to themselves, exerting little influence on the overall URL co-sharing network, and conceding important network positions to misinformation spreaders.

Figure 2. COVID-19 vaccine misinformation URL co-sharing network. Green dots represent misinformation spreaders, yellow dots represent fact checkers with repetitions, and red dots represent fact checkers without repetitions. Links represent URL co-sharing relationships among nodes.
Additionally, Figure 3 visualizes the whole network of accounts that connected multiple misinformation themes. Interestingly, the accounts that enjoyed central positions were fake news accounts that spread conspiracy theories (e.g., "Or Bar Magazine," "Orwellian Times Daily"), or groups that support Donald Trump (e.g., "Biafrans in Support of Donald Trump," "Trump Cat," "Asians for Donald Trump," "Light up Trump's Christmas Caboose," "Mesa County Patriots"). To further examine their network characteristics, we calculated the betweenness centrality (i.e., the extent to which an account in the network lies between other accounts), hub centrality (i.e., the extent to which an account is connected to nodes pointing to other nodes), and total degree centrality (i.e., the extent to which an account is interconnected with others) of each account. As Figure 4 illustrates, across all three network measures (betweenness centrality, hub centrality, and total degree centrality), the top accounts were fake news accounts, Trump-supporting groups, and anti-vaccine groups (e.g., "Children's health defense," "The Microchipping Agenda").
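Two of these centrality measures can be illustrated in plain Python on a toy co-sharing graph. The account names and ties below are hypothetical, not drawn from the study's network; total degree is a direct neighbor count, hub scores are approximated with a HITS-style power iteration, and betweenness is omitted for brevity.

```python
# Hedged sketch: centrality on a toy undirected co-sharing network stored as an
# adjacency dict. All node names and edges are invented for illustration.

graph = {
    "fake_news_page": {"trump_group", "anti_vax_group", "local_page", "patriot_group"},
    "trump_group":    {"fake_news_page", "anti_vax_group"},
    "anti_vax_group": {"fake_news_page", "trump_group", "local_page"},
    "local_page":     {"fake_news_page", "anti_vax_group"},
    "fact_checker":   {"local_page"},
    "patriot_group":  {"fake_news_page"},
}
# Make the toy graph symmetric (co-sharing ties are undirected).
for node, nbrs in list(graph.items()):
    for nbr in nbrs:
        graph[nbr].add(node)

# Total degree centrality: number of direct connections.
degree = {node: len(nbrs) for node, nbrs in graph.items()}

# Hub scores via HITS-style power iteration; on an undirected graph this
# converges to eigenvector centrality, normalized so the top score is 1.0.
hubs = {node: 1.0 for node in graph}
for _ in range(50):
    new = {node: sum(hubs[nbr] for nbr in graph[node]) for node in graph}
    norm = max(new.values())
    hubs = {node: score / norm for node, score in new.items()}

top_by_degree = max(degree, key=degree.get)
print(top_by_degree, degree[top_by_degree])  # → fake_news_page 4
```

For a real network of thousands of accounts, library routines such as NetworkX's `betweenness_centrality`, `hits`, and `degree_centrality` would be the practical choice.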

Overall, posts containing misinformation were prevalent on Facebook. Out of all COVID-19 vaccine-related posts extracted over one year (between March 1, 2020 and March 1, 2021), 8.97% were identified as containing misinformation. We note that even though the amount of COVID-19 vaccine misinformation found in this study (8.97%) is concerning, the number is still relatively low compared to what previous studies have found. For instance, according to a systematic review of 69 studies (Suarez-Lledo & Alvarez-Galvez, 2021), the lowest level of health misinformation circulating on social media was 30%. It is possible that this difference is due to our focus on public accounts rather than private accounts; public accounts may care more about their reputation. In addition, the heightened attention to popular COVID-19 misinformation during the pandemic may have motivated official sources and fact checkers to fight vaccine misinformation with fact-checking to a greater extent. Finally, it may also be due to targeted efforts made by Facebook to combat vaccine-related misinformation.

Sample
To identify COVID-19 vaccine-related misinformation, we first reviewed popular COVID-19 vaccine misinformation mentioned by the most recent articles and the CDC (Centers for Disease Control and Prevention, 2021; Hotez et al., 2021; Loomba et al., 2021) and identified nine popular themes. We then used keywords associated with these themes to track all unique Facebook posts that contained these keywords over the one-year period of March 1, 2020 to March 1, 2021 (between when COVID-19 was first confirmed in the U.S. and when the COVID-19 vaccine became widely available in the country). Facebook was chosen for two reasons. First, previous studies confirm that there is a substantial volume of misinformation circulating on Facebook (Burki, 2020; World Health Organization, 2020). Second, Facebook's user base is one of the largest and most diverse among all social media platforms, which makes the platform ideal for studying infodemics. See Table 3 for a summary of these themes and associated keywords. The tracking was done through Facebook's internal data archive hosted by CrowdTangle, which hosts over 7 million public accounts' communication records on public Facebook pages, groups, and verified profiles. As an internal service, CrowdTangle has full access to Facebook's stored historical data on public accounts. Any public account that mentioned these keywords in English during the search period was captured by our data collection. Our sample is thus representative of public accounts that mentioned the keywords listed in Table 3. Overall, 53,719 unique public accounts mentioned these keywords. Among them, 5,597 unique accounts shared URLs.
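The keyword-based tracking step can be sketched as a simple theme tagger. The themes and keywords below are illustrative placeholders, not the study's actual keyword list from Table 3.

```python
# Hedged sketch: tag a post with the misinformation themes whose keywords it
# mentions. Theme names and keyword lists here are invented examples.
import re

THEME_KEYWORDS = {
    "microchip": ["microchip", "tracking chip"],
    "dna_alteration": ["alter dna", "change your dna"],
    "infertility": ["infertility", "sterilize"],
}

def tag_themes(post_text):
    """Return the set of themes whose keywords appear in the post."""
    text = post_text.lower()
    return {
        theme
        for theme, keywords in THEME_KEYWORDS.items()
        # Word boundaries avoid matching keywords inside longer words.
        if any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords)
    }

post = "They say the vaccine contains a microchip that can alter DNA."
print(sorted(tag_themes(post)))  # → ['dna_alteration', 'microchip']
```

In the study itself, matching was performed by CrowdTangle's search over public accounts rather than by local code; this sketch only illustrates the keyword-to-theme mapping logic.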

Research questions
Our research questions are: 1) Which types of accounts engage in spreading and debunking vaccine misinformation? 2) How do the publics react to different types of accounts in terms of emotional responses and behavioral responses? And 3) Are there any differences among these accounts in occupying positions in the URL-sharing network?
Analytic strategies

To answer these questions, we identified accounts that had shared at least one URL across the nine themes, yielding 5,597 unique accounts that met the criteria. Next, we manually coded all accounts into one of three categories: 1) misinformation spreaders, referring to accounts that distribute false claims about the COVID-19 vaccine without correcting them; 2) fact checkers who debunk false claims while repeating the original false claim; and 3) fact checkers who provide accurate information about the COVID-19 vaccine without repeating misinformation. 3.5% of accounts had been deleted by the time of data analysis. Further, when two accounts shared the same URL, we considered them as forming a co-sharing tie. We constructed a one-mode network based on co-sharing ties, which formed a sparse network (ties = 28,648, density = .00091) with many isolates or accounts connected to only one other account (known as pendants). Although isolates and pendants also shared URLs, with such low centrality, their URLs were unlikely to be influential. As such, we removed isolates, pendants, and self-loops (accounts sharing the same URL more than once), which revealed a core network of 1,648 accounts connected by 23,940 ties (density = .00942). This core network was the focus of our analysis. Together, the 1,648 accounts had a total of 245,495,995 followers (Mean = 30,331, SD = 62,853.345). To understand how these accounts discuss misinformation and influence the public's engagement outcomes, we ran ANOVA tests and examined whether there were significant differences in how the publics respond to different misinformation posts. To compare these accounts' network positions, we calculated the accounts' network measures and also used network visualization to illustrate how the accounts were interconnected via link sharing.
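The network construction and filtering steps described above can be sketched as follows. The share records are hypothetical, and the pendant/isolate filter is a single pass rather than an iterative prune.

```python
# Hedged sketch of the co-sharing network construction: project hypothetical
# (account, URL) share records into account-account ties, then drop self-loops,
# isolates, and pendants to keep a core network.
from collections import defaultdict
from itertools import combinations

# Hypothetical share records: (account, url). Names are invented.
shares = [
    ("acct_a", "u1"), ("acct_b", "u1"), ("acct_c", "u1"),
    ("acct_a", "u2"), ("acct_b", "u2"),
    ("acct_d", "u3"),                      # isolate: shares a URL alone
    ("acct_e", "u4"), ("acct_f", "u4"),    # pendant pair: one tie each
    ("acct_a", "u1"),                      # duplicate share (self-loop source)
]

# Accounts per URL; using a set drops duplicate shares (self-loops).
url_to_accounts = defaultdict(set)
for account, url in shares:
    url_to_accounts[url].add(account)

# Co-sharing ties: any two distinct accounts that shared the same URL.
ties = defaultdict(set)
for accounts in url_to_accounts.values():
    for a, b in combinations(sorted(accounts), 2):
        ties[a].add(b)
        ties[b].add(a)

# Keep the core: drop isolates (no ties) and pendants (exactly one tie).
core = {a: nbrs for a, nbrs in ties.items() if len(nbrs) >= 2}
print(sorted(core))  # → ['acct_a', 'acct_b', 'acct_c']
```

This single-pass filter matches the described procedure in spirit; a stricter implementation would re-check for newly created pendants after each removal round.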