Ridiculing the “tinfoil hats”: Citizen responses to COVID-19 misinformation in the Danish facemask debate on Twitter

We study how citizens engage with misinformation on Twitter in Denmark during the COVID-19 pandemic. We find that misinformation regarding facemasks is not corrected through counter-arguments or fact-checking. Instead, many tweets rejecting misinformation use humor to mock misinformation spreaders, whom they pejoratively label wearers of “tinfoil hats.” Tweets rejecting misinformation project a superior social position and leave the concerns of misinformation spreaders unaddressed. Our study highlights the role of status in people’s engagement with online misinformation.

of humor. Finally, tweets were qualitatively analyzed in terms of themes, style, rhetoric, and addressee.
• Misinformation accounts for a small portion of the overall facemask-related tweets, with an almost equal number of misinformation spreaders and rejectors. In the first phase of the pandemic, the number of tweets rejecting misinformation exceeded the number of tweets spreading misinformation; over time, however, tweets spreading misinformation outnumbered those rejecting it.
• While other studies show people spread misinformation to appeal to their own social circles, we found that status concerns also characterize tweets rejecting misinformation. In most cases, tweets rejecting misinformation do not engage with substantive claims but instead stigmatize and ridicule misinformation spreaders.
• Further studies are needed to assess the generalizability of these patterns, but our analysis suggests that future initiatives to limit online misinformation should consider status-seeking dynamics among both misinformation spreaders and rejectors.

Implications
At the start of the coronavirus pandemic, the World Health Organization warned that an infodemic jeopardized pandemic-quelling efforts and encouraged social media platforms to counter the spread of online misinformation (WHO, 2020). Here, we understand misinformation as verifiably false claims presented as factually true, regardless of the disseminators' cognizance of the falsehood (Allcott & Gentzkow, 2017). By focusing on misinformation rather than disinformation (false information created for the strategic purpose of deceit; Allcott & Gentzkow, 2017), we explored users' interactions with false claims regardless of the motivation behind their spread. We simply investigated whether the tweet text supported or countered a false claim. These false claims were drawn from the largest independent Danish fact-checking institution, TjekDet. We focused on the facemask debate, the misinformation theme identified via TjekDet that engaged the most tweets. Additionally, we suspected this debate to be fertile ground for misinformation due to the Danish authorities' change of stance on facemasks partway through the pandemic (Krakov, 2020; Statsministeriet, 2020b) and the often-misinterpreted inconclusive mask study (see Abbasi, 2020), which left citizens to navigate changing and conflicting statements regarding the efficacy of facemasks.
Most studies of misinformation during the pandemic focus on the disseminators (Caldarelli et al., 2021; Cinelli et al., 2020; Gallotti et al., 2020), while few scholars have explored how citizens combat false information online (Abidin, 2020; Micallef et al., 2020; Pulido et al., 2020). We investigated all tweets engaging with misinformation. We found that misinformation-related discussion accounted for just 5.04% of the Danish Twitter debate on facemasks. Moreover, stigmatizing tweets, either ridiculing or criticizing their opponents, were created not only by people spreading false claims about COVID-19 but also by those rejecting false claims. Our findings stem from a limited Twitter dataset with a narrow focus within the COVID-19 debate in a high-trust environment (see Methods section). They may not, therefore, be directly applicable to countries with lower trust, other misinformation topics, or different social media platforms. However, they do suggest an interesting pattern that may have wider implications for our understanding of digital misinformation and the role citizens can be expected to play in quelling it.
Other studies have shown that misinformation is spread by people who are in opposition to the established "system" and seek to defend their social status (Petersen et al., 2020). We found a similar dynamic among tweets rejecting misinformation: they do not correct false information but fortify the poster's status and devalue those who believe in false stories. When tweets reject misinformation, they are, to a large extent, appealing to those already critical of misinformation rather than converting those they label as wearers of tinfoil hats or similar derogatory terms. While misinformation spreaders have previously been painted as the bullies of the internet (Petersen et al., 2020), our study suggests that this also holds true for rejectors. Tweets spreading misinformation stories make consistent arguments (on their own skewed terms), but only 28% of those rejecting misinformation explicitly address false or misleading claims. Most tweets rejecting misinformation mock, ridicule, or stigmatize those spreading misinformation stories, often through irony or sarcasm. Using ironic or humorous comments to correct misinformation can be counterproductive. For example, Abidin (2020) shows that originally satirical Instagram memes shared by young people evolved into misinformation among the elderly on WhatsApp. Our results suggest that rejection is largely aimed at the rejector's own audience, not the misinformation spreaders. Future initiatives and research on misinformation would benefit from investigating status-seeking attempts to increase the respect one has in the eyes of others as a driver for people fighting online misinformation (Magee & Galinsky, 2008).
In this study, we identify one key argumentative strategy in tweets rejecting misinformation: stigmatization. Existing literature is inconclusive when it comes to the effects of misinformation correction. Some scholars find that only corrections from public institutions or organizations are effective (Van der Meer & Jin, 2020; Vraga & Bonde, 2017), highlighting the importance of the corrector's credibility. Other studies argue that combating online hostility requires the mobilization of a sense of connection and we-feeling (Berinsky, 2017; Hannak et al., 2014; Malhotra, 2020; Margolin et al., 2018; Munger, 2017). This suggests that misinformation correction could work only if those spreading misinformation perceive correction as a peer dialogue. In contrast, other scholars argue that corrections of citizens' false claims by strangers can have a positive effect. This is important because, as Micallef et al. (2020) show, 96% of all tweets combating misinformation (and those most retweeted) are posted by concerned citizens (not professional fact checkers). Focusing on the arguments presented by the corrector and their group membership, one study finds that effective correction requires a proper explanation of why the claim is false (Nyhan & Reifler, 2015) and that logic-based arguments correct misinformation better than arguments using humor (Vraga et al., 2019). However, Karlsen et al. (2017) show that neither confirmation nor contradiction appears to effectively change people's attitudes. Online debates, instead, tend to reinforce preexisting beliefs. Presenting people with two-sided arguments can, however, in some instances alter people's attitudes.
While counter-arguments are unlikely to succeed in changing the fundamental attitude of misinformation spreaders, perhaps the criticism and ridiculing of misinformation signals to passive observers that spreading misinformation is unacceptable (for the importance of passive audiences, see Marett & Joshi, 2009; Schmidt et al., 2021). However, stigmatization could also harden the position of those being stigmatized (Goffman, 1963). If this is the case, the challenge going forward is to explore how citizens can correct misinformation without stigmatizing or ridiculing opponents. Our findings suggest a need for more research on which types of arguments, tactics, and issue positions are most likely to create backfire effects when engaging with misinformation (see Bail et al., 2018).

Findings
Finding 1: Misinformation accounts for a small portion of the overall facemask-related tweets in Denmark during the pandemic; we observed slightly more tweets spreading misinformation than rejecting it.
Out of 9,345 sampled Danish tweets about COVID-19 and facemasks, only 5.04% (471 tweets and retweets) engage with misinformation. A closer investigation of these misinformation-linked tweets shows that slightly more tweets spreading misinformation were created than tweets rejecting misinformation (see Figure 1). As we live-collected the tweets (to collect all misinformation tweets before their potential deletion), we did not capture the full reach of each tweet (i.e., the population that actually saw the tweet). The raw number of tweets (which includes retweets) shows the relative proportions of tweets spreading and rejecting misinformation and was our first step in mapping the arguments used.
When mapping the arguments over time, we saw an initial jump in facemask-related misinformation tweets in late March 2020 with the onset of the pandemic, when the overall number of facemask-related tweets was low. We saw an additional spike across all arguments in August, just before facemasks became mandatory on public transportation (Statsministeriet, 2020b). While misinformation rejection initially outpaced the growth of misinformation, the share of misinformation tweets gradually increased just before the first mandatory facemask requirement and continued to increase every time the government announced new facemask-related regulations (Statsministeriet, n.d.). By October, the number of tweets spreading misinformation exceeded those rejecting it. By December, 2.97% of all Danish tweets about facemasks in the time period spread misinformation, and only 2.07% rejected it.

Finding 2: There is a near equal number of users spreading and rejecting misinformation, but those spreading misinformation are more active on the topic.
The number of unique users rejecting (n = 161) and spreading misinformation (n = 158) is almost equal. However, Student's t-tests show that users spreading misinformation tweet significantly more about facemasks (M = 1.76 ± 2.24 tweets) than users rejecting it (M = 1.20 ± 0.67 tweets), t(318) = 3.04, p = .003, d = 0.34. The size of this effect is moderate, and we see no difference between the two groups in their overall frequency of posts. In sum, the groups in our dataset resemble each other in size, but the misinformation spreaders tweet more about the subject.
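A comparison like the one above can be sketched in a few lines of Python. The function below computes an equal-variance (Student's) two-sample t statistic and a pooled-SD Cohen's d; it is a minimal illustration of the reported statistics, not the authors' analysis script, and the sample data in the usage note is invented.

```python
import math

def students_t_and_cohens_d(a, b):
    """Equal-variance two-sample Student's t statistic, Cohen's d
    (pooled SD), and degrees of freedom for samples a and b, e.g.
    per-user tweet counts for spreaders vs. rejectors."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Pooled variance with (na + nb - 2) degrees of freedom.
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    d = (ma - mb) / math.sqrt(sp2)  # Cohen's d on the pooled SD
    return t, d, na + nb - 2
```

Usage with hypothetical per-user counts: `t, d, df = students_t_and_cohens_d(spreader_counts, rejector_counts)`, reported as t(df), p, and d per APA convention (the p-value would come from the t distribution, e.g. via `scipy.stats`).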
Finding 3: Tweets rejecting misinformation are three times more likely to use humor than tweets spreading misinformation.
We find that those rejecting misinformation are over three times more likely to use humor (via emoji use, image use, or jokes) (33.2%) than those spreading misinformation (8.3%), t(469) = 7.20, p < .0001, d = 0.67. Our coders found posts using humor difficult to interpret (they are 14.3 times more likely to be coded as difficult to score, t(469) = 15.07, p < .0001, d = 1.62); however, there is no significant difference (p > .01) in scoring difficulty between tweets spreading and rejecting misinformation.

Finding 4: Users spreading misinformation put forward explicit and agitated arguments for why COVID-19 isn't real or, more often, that the use of facemasks is dangerous.
The majority of unique misinformation tweets (non-retweets) warn against using facemasks. As shown in Figure 3, these arguments claim either that facemasks are unnecessary because COVID-19 doesn't exist (39%) or that using facemasks makes one sick (55%). Among the latter, many tweets rely on research and technical explanations to prove their point: "Check the documentation! It's harmful to wear facemasks as CO2 levels rise to toxic levels within seconds of wearing one." A smaller fraction of tweets do not contain concrete arguments against the use of facemasks but instead blame immigrants for improper donning of facemasks (6%). As shown in Figure 2, only 8.3% of misinformation tweets use humor to support their arguments, of which some ridicule those following corona guidelines using words such as "lemmings," "fakefluenze," "selfish boomers," and "corona-mafia." However, these ridiculing comments are mainly present in tweets propagating racist arguments or denying the existence of COVID-19. They are rarely present in tweets claiming that facemasks make one sick. Across all arguments, the discussions can become charged and use profanity (e.g., "All the facemask bullshit is just symbolic politics of the worst caliber," "tear off the mouth diaper and burn it") or aggressive punctuation ("when will we drop the need to wear facemasks?!"). Overall, tweets spreading misinformation often have a condescending tone and aim to refute the dominant discourse.
Finding 5: Most tweets rejecting misinformation didn't address the misinformation explicitly; they instead joked to their own followers about people believing in misinformation.
The majority of tweets rejecting misinformation are not aimed at correcting false or misleading claims. As shown in Figure 3, more than half of the tweets rejecting misinformation do not argue explicitly against misinformation; instead, they stigmatize or mock the misinformation spreaders (62%). Some tweets put forward arguments criticizing misguided newspaper articles (10%), and just over a quarter of the tweets actually counter-argue misinformation claims (28%).
Crucially, the majority of tweets rejecting misinformation talked about the misinformation spreaders, not with them. In these tweets, misinformation spreaders were described as "idiots" and "wearers of tinfoil hats" or stigmatized as an "anti-facemask faction." Most of these tweets solely contained ridiculing comments like this example: "Now, due to coronavirus, we don't just need to wear a facemask outside, but also to walk around carrying our tinfoil hats... #URL#." These tweets are not characterized by engaging in debates or putting forward arguments to convince misinformation spreaders of their misguided positions, but rather by stigmatizing and ridiculing.

Methods
We studied facemask-related misinformation on Twitter in Denmark. The nature of our single case study limits generalizability (see Appendix A).

Data collection
Our data was collected from February 1 to November 30, 2020. We primarily live-collected data from Twitter via its API to ensure that we did not underestimate the number of tweets spreading misinformation. In recent years, Twitter has introduced several defense mechanisms to hinder the spread of misinformation by banning or deleting content. Therefore, we live-collected as many tweets as possible and relied on historical tweets from the Premium Twitter API only to fill gaps.
The live collection of tweets occurred from April 15 to June 23 and August 3 to November 30, 2020. These tweets were queried using the most common Scandinavian words from the Opensubtitles word frequency lists (Lison & Tiedemann, 2016). We removed words non-specific to Scandinavian languages and combined the 100 highest-frequency unique words from each language to query live-streamed tweets via the Twitter API using DMI-TCAT (Borra & Rieder, 2014). Finally, Twitter's native classifier was used to identify Danish tweets.
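The query-list construction described above can be sketched as follows. The function name and the toy frequency data are hypothetical; the actual lists come from the Opensubtitles frequency data cited in the text.

```python
def build_query_terms(freq_by_lang, non_specific, top_n=100):
    """Sketch of the query-list construction: for each Scandinavian
    language, drop words that are not specific to Scandinavian
    languages, then pool each language's top-N remaining words
    into one set of query terms."""
    terms = set()
    for lang, freqs in freq_by_lang.items():
        specific = {w: f for w, f in freqs.items() if w not in non_specific}
        ranked = sorted(specific, key=specific.get, reverse=True)
        terms.update(ranked[:top_n])
    return terms
```

The pooled set would then be passed as tracking keywords to the streaming endpoint (via DMI-TCAT in the authors' setup), with language identification applied afterwards to keep only Danish tweets.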
The Premium Twitter API was used to fill in the remainder of the dataset (February 1 to April 15, 2020; June 23 to August 3, 2020; and October 28, 2020). These tweets were queried using the most frequent Danish words obtained from the Snowball library (Snowball, n.d.). We used language as a proxy for country, given the local specificity of the Scandinavian languages.
Our dataset may contain bots and cyborgs. Although methods have been developed to detect bots (Davis et al., 2016; Wojcik et al., 2018), many are not yet available for the Danish language.

Coding
To collect misinformation, we identified all Danish tweets containing at least one COVID-related and one facemask-related keyword, leaving 9,345 tweets (5,712 unique tweets) (see Appendix B).
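This keyword co-occurrence filter can be sketched as below; the keyword lists in the usage example are illustrative stand-ins, not the actual lists from Appendix B.

```python
def engages_facemask_covid(text, covid_keywords, mask_keywords):
    """Keep a tweet only if it contains at least one COVID-related
    AND one facemask-related keyword (case-insensitive substring
    match; the real keyword lists are given in Appendix B)."""
    t = text.lower()
    return (any(k in t for k in covid_keywords)
            and any(k in t for k in mask_keywords))
```

For example, with hypothetical lists `["covid", "corona"]` and `["mundbind", "facemask"]`, a tweet mentioning both "corona" and "mundbind" passes the filter, while one mentioning only "mundbind" does not.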
Six coders were trained on a pre-defined codebook to score each unique tweet as irrelevant, spreading misinformation, or rejecting misinformation. Additionally, for each unique tweet, they noted whether a) it contained humor and b) it was difficult to code (see Appendix D). The codebook describes every verified Danish misinformation story from the fact-checking site TjekDet.dk (see Appendices C & D).
Intercoder reliability was calculated on fifty randomly selected tweets from our dataset; the coders reached a Krippendorff's alpha of 0.81 for the codes spreading misinformation, rejecting misinformation, and irrelevant; a value of 0.80 is sufficient to suggest coder agreement (Krippendorff, 2004). Humor annotation had a Krippendorff's alpha of 1.0, suggesting perfect agreement. Finally, we matched the coded unique tweets to the duplicate tweets to determine the total number of tweets spreading and rejecting facemask-related misinformation.
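A nominal-data Krippendorff's alpha of the kind reported above can be computed as in the from-scratch sketch below (the coincidence-matrix formulation; the exact software the authors used is not stated, and dedicated packages exist).

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal codes (e.g. spreading /
    rejecting / irrelevant). `units` is a list of per-tweet lists of
    codes, one code per coder; units with fewer than two codes are
    skipped as unpairable."""
    o = Counter()       # coincidence matrix o[(c, k)]
    n = 0               # total number of pairable values
    for values in units:
        m = len(values)
        if m < 2:
            continue
        n += m
        for c, k in permutations(values, 2):
            o[(c, k)] += 1 / (m - 1)
    marg = Counter()    # marginal totals n_c per category
    for (c, _k), w in o.items():
        marg[c] += w
    d_obs = sum(w for (c, k), w in o.items() if c != k)
    d_exp = sum(marg[c] * marg[k]
                for c in marg for k in marg if c != k) / (n - 1)
    if d_exp == 0:      # no variation at all: agreement is trivially perfect
        return 1.0
    return 1 - d_obs / d_exp
```

Perfect agreement across units yields alpha = 1.0, and the conventional threshold of 0.80 (Krippendorff, 2004) is then checked against the returned value.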

Quantitative and qualitative analysis
We used Student's t-tests (the standard test for differences in means between two groups) with a p-value threshold of 0.01 to compare misinformation spreaders and rejectors. We report our t values with the degrees of freedom alongside the obtained p-value, following APA convention. In addition, Cohen's d was calculated to determine the effect size for each comparison. During our qualitative analysis, each tweet was given only one code category. The six categories were identified through an open coding of a sample of tweets. Based on these categories, we analyzed the full sample of misinformation-related tweets (n = 218 unique tweets) (see Appendix E).

Figure 1. The growth of misinformation-related tweets during 2020. Tweets engaging with misinformation account for only a small portion of facemask-related tweets in Denmark. In the first phase of the pandemic, the number of tweets rejecting misinformation exceeded the number of tweets spreading misinformation; over time, tweets spreading misinformation outnumbered those rejecting it.

Figure 2. Percentage of humorous tweets among misinformation spreaders and rejectors. The proportion of tweets using humor is larger among tweets rejecting misinformation than among tweets spreading misinformation.

Figure 3. The distribution of arguments across unique tweets spreading and rejecting misinformation. The majority of tweets spreading misinformation explicitly argue against the use of facemasks, while the majority of tweets rejecting misinformation do not address these substantive concerns; instead, they ridicule the misinformation believers.