Peer Reviewed
The small effects of short user corrections on misinformation in Brazil, India, and the United Kingdom
How effective are user corrections in combatting misinformation on social media, and does adding a link to a fact check improve their effectiveness? We conducted a pre-registered online experiment on representative samples of the online population in Brazil, India, and the United Kingdom (N participants = 3,000, N observations = 24,000). We found that in India and Brazil, short user corrections slightly, but often not significantly, reduced belief in misinformation and participants’ willingness to share it. In the United Kingdom, these effects were even smaller and not significant. We found little evidence that fact-check links made user corrections more effective. Overall, our results suggest that short user corrections have small effects and that adding a fact-check link is unlikely to make user corrections much more effective.

Research Questions
- How do user corrections influence the perceived accuracy of social media posts containing false COVID-19 information, and how do they influence participants’ willingness to share them?
- Are user corrections more effective when they contain links to news organizations’ fact checks?
- Do user corrections have effects beyond corrected posts?
Essay Summary
- Our experimental design randomly assigned respondents to one of three conditions. Participants rated nine social media posts about COVID-19 (three true, six false). In the control condition, the posts had no comment. In the correction condition, four false posts included a short user comment debunking them. In the correction with fact-check condition, the user comment included a link to a fact check.
- User corrections had small, and often non-significant, effects on the perceived accuracy of false posts and participants’ willingness to share them. The effects were largest in India and Brazil and smallest in the United Kingdom.
- In India and Brazil, the effect sizes of corrections ranged from 0.07 to 0.17 on the 4-point scale. In the United Kingdom, they ranged from 0.01 to 0.07 on the 4-point scale.
- The difference between corrections without a link and corrections with a link was not statistically significant.
- The corrections had no spillover effects on uncorrected true and false posts.
Implications
Professional corrections performed by fact checkers are effective at correcting misperceptions and belief in false claims (Porter & Wood, 2021, 2024). However, the scale and speed of misinformation production often outpace fact-checking efforts, resulting in corrections that arrive only after much of the damage has been done. Additionally, fact checks face significant dissemination problems. Few people voluntarily seek them out (Graham & Porter, 2025; Guess et al., 2020; Porter & Wood, 2024), and some major social media platforms have recently scaled back systems that automatically display fact checks alongside questionable posts (Kaplan, 2025). Moreover, in contexts where the main modes of information sharing are encrypted chat apps, professional fact checks cannot be issued at scale within these closed networks, not to mention the poor state of content moderation in non-English languages (Okong’o, 2025). In these contexts, the burden of providing fact checks increasingly falls on platform users themselves.
Past work has shown that user corrections (i.e., users correcting misinformation by refuting it on social media) can be effective at reducing misperceptions (Bode et al., 2024; Bode & Vraga, 2018; Yang et al., 2022). Yet, little is known about the effects of user corrections in Global South countries (for exceptions, see: Badrinathan & Chauchard, 2024; Blair et al., 2024), although expert corrections have been shown to be effective in these contexts (Porter & Wood, 2021).
Our findings suggest that short user corrections have only small effects on the perceived accuracy of social media posts containing false COVID-19 information and participants’ willingness to share it. In the United Kingdom, correction effects ranged from 0.01 to 0.07 on the 4-point scale, corresponding to reductions in belief and sharing of 1.2% to 8.4%. In Brazil, correction effects ranged from 0.07 to 0.11 on the 4-point scale, corresponding to reductions in belief and sharing of 5.7% to 11.2%. In India, correction effects ranged from 0.07 to 0.17 on the 4-point scale, corresponding to reductions in belief and sharing of 5.0% to 11.6%. The effects of corrections were particularly small in the United Kingdom, potentially because belief in COVID-19 misinformation was so low that corrections had almost no scope to be effective: Before being exposed to corrections, participants did not believe the COVID-19 misinformation and were not willing to share it online.
We also tested whether adding a link to a fact check from a news organization would strengthen the effect of corrections on the perceived accuracy of misinformation and participants’ willingness to share it. The links may give credence to the correction, either by signaling that it is backed up by reliable sources or that there is evidence supporting it. Prior work suggests that some corrections are more effective than others. For instance, between-study evidence from a meta-analysis shows that corrections performed by experts are more effective than those performed by non-experts (Walter et al., 2020). Yet, within-study evidence, in which the source of the correction is experimentally manipulated (e.g., in one condition the post is attributed to an expert while in the other it is not), tends to show that the content of corrections matters more than their source. For example, corrections performed by the World Health Organization or anonymous Facebook users show similar effects (Vraga & Bode, 2021). In general, messages are more persuasive when they come from sources people trust and are backed up by evidence (Mercier, 2020; Petty & Cacioppo, 1986).
However, we found that links to fact checks are unlikely to make user corrections more effective. As shown in Figure 3, the links to fact checks in user corrections were not merely hyperlinks. When we conducted the study in 2021, Facebook previewed these links, prominently displaying the title of the fact check and its source. This presentation potentially provided complementary information about why the information was false and clearly signaled that a reputable news outlet had refuted it and that there was evidence supporting the correction. Thus, the absence of clear added benefits of fact-check links can hardly be attributed to a lack of visibility or usefulness: They were clearly visible and contained relevant information.
In line with past work (Bode & Vraga, 2018; Coppock, 2023; Martel & Rand, 2024), in Appendix D, we show that the corrections were not more or less effective depending on the tendency to believe in conspiracy theories, trust in social media, or trust in the news. Moreover, while previous research (Pennycook et al., 2020) has shown that fact-checking warnings can have spillover effects on uncorrected posts (e.g., by increasing the perceived accuracy of false posts or decreasing the perceived accuracy of true posts), we found no evidence of spillover effects. In Appendix B, we show that the main conclusion of the article holds when excluding participants who failed the pre-treatment attention check—that is, the effects of user corrections are small, and adding a link to a fact check is unlikely to make them more effective.
The main limitation is that user corrections were short, and participants could not actually click on the links to the fact checks in the experiment. Longer, more detailed corrections and clickable links may have yielded stronger effects. Another limitation is that, like many interventions against misinformation, our treatments are bundled (Guess et al., 2024), meaning that the corrections with and without a link differ in many ways, and our experimental design does not allow us to isolate which specific feature is responsible for any observed effects. For example, comments with a link may not necessarily draw attention to a reputable source but simply make the correction more visible. The differences between our treatment conditions mirror actual platform design: as of June 2025, Facebook continues to display comments with and without links in the same manner as our experimental treatments. And many social media and messaging platforms, like LinkedIn or WhatsApp, also offer a similar link preview with a title and an image. Given the applied focus of our research, we prioritized an intervention that mirrors real-world platform design. Finally, our measures of accuracy and sharing may not reflect people’s actual behaviors on social media. For instance, the mere fact of asking participants to rate the accuracy of a post shifts their attention to accuracy, which is unlikely to be top of mind for people when scrolling through social media. Moreover, it has been shown that prompting participants to think about accuracy increases their sharing discernment (Pennycook & Rand, 2021). It is also far from certain that changes in belief induced by corrections result in changed attitudes or behaviors (Porter & Wood, 2024). Regarding sharing, it is not clear whether self-reported measures of sharing are representative of people’s actual sharing behaviors, given that most social media users are “lurkers” who avoid sharing news or information about politics and social issues (McClain, 2021).
A key implication of our work is that user corrections are no panacea and that efforts to fight misinformation cannot rest entirely on the shoulders of social media users. Effective interventions against misinformation require a combination of strategies as well as reaching and targeting vulnerable populations (Bak-Coleman et al., 2022; Brashier, 2024; Budak et al., 2024). Social media users and ordinary citizens can meaningfully contribute, but institutional and platform-level interventions are likely to be much more impactful. For instance, while it is important to find ways to motivate users to perform corrections online, it may be more important to change the affordances of social media to make those corrections more prominent and impactful. Here, it may be useful to distinguish between organically occurring user corrections (like the corrections in this study) and institutionalized forms of user corrections that are integrated into platforms in ways that affect display decisions. While the former likely have small effects, the latter may be more impactful. For example, initiatives along the lines of Community Notes (formerly Birdwatch) could, in theory, offer a promising model for users to write corrections while leveraging collective intelligence to filter the highest quality corrections for readers (Drolsbach et al., 2024; Martel et al., 2024; Renault et al., 2024). Such initiatives could display longer corrections more prominently than organically occurring user corrections, and they have been shown to be effective at reducing the spread of misinformation under the right conditions (i.e., a politically balanced crowd; Drolsbach et al., 2024; Martel et al., 2024; Renault et al., 2024). While these efforts are not a direct substitute for professional fact-checking collaborations, they can complement institutional measures.
Findings
Finding 1: User corrections had small and inconsistent effects on the perceived accuracy of false information.
We first tested whether user corrections decreased the perceived accuracy of COVID-19 misinformation relative to no corrections (see Figure 1). We report the effect of corrections on the 4-point scale (b) and the percentage change relative to the control-condition baseline (∆).
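To make this reporting convention concrete, the minimal sketch below computes ∆ from b and a control-group mean. The baseline value is hypothetical (chosen so the output matches the United Kingdom estimate reported below); the actual control-condition means are not reported in this section.

```python
# How the reported delta follows from b: percentage change relative to the
# control-condition baseline mean. The baseline here is hypothetical.
control_mean = 0.85  # hypothetical mean accuracy rating (0-3) in the control condition
b = -0.07            # correction effect on the 4-point (0-3) scale

delta = 100 * b / control_mean
print(f"delta = {delta:.1f}%")  # -> delta = -8.2%
```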
In the United Kingdom, corrections with a link (b = -0.07, p = .16, ∆ = -8.2%) and without a link (b = -0.01, p = .84, ∆ = -1.2%) had no statistically significant effects on belief in misinformation. In Brazil, corrections with a link (b = -0.10, p = .054, ∆ = -6.9%) and without a link (b = -0.08, p = .11, ∆ = -5.7%) had no statistically significant effects on belief in misinformation. In India, corrections with a link significantly reduced belief in COVID-19 misinformation (b = -0.16, p = .016, ∆ = -10.8%), while corrections without a link did not significantly reduce it (b = -0.08, p = .26, ∆ = -5.0%).

Finding 2: User corrections had small and inconsistent effects on participants’ willingness to share false information.
We tested whether user corrections decreased participants’ willingness to share the posts containing false COVID-19 information relative to no corrections. In the United Kingdom, corrections with a link (b = -0.04, p = .46, ∆ = -8.4%) and without a link (b = -0.01, p = .88, ∆ = -1.8%) had no statistically significant effects on sharing intentions. In Brazil, corrections with a link (b = -0.07, p = .23, ∆ = -7.4%) and without a link (b = -0.11, p = .051, ∆ = -11.2%) had no statistically significant effects on sharing intentions. In India, corrections with a link (b = -0.17, p = .021, ∆ = -11.6%) reduced participants’ willingness to share COVID-19 misinformation, while corrections without a link did not significantly reduce it (b = -0.07, p = .32, ∆ = -5.7%). Note that these differences between countries are not statistically significant, even when merging accuracy ratings and sharing intentions.

Finding 3: Corrections with a link to a fact check were not significantly more effective than corrections without one.
We tested whether corrections with a link to a fact check were more effective than corrections with no link. Using the combined data across countries, we did not find any evidence that corrections with a link were significantly more effective than corrections without one at reducing belief in COVID-19 misinformation (b = -0.05, p = .10, ∆ = -4.9%) or participants’ willingness to share it (b = -0.03, p = .45, ∆ = -3.2%). The same was true in each individual country.
Finding 4: Corrections did not have spillover effects on uncorrected (true or false) posts.
We also investigated whether correcting some false posts, but not others, increases the perceived accuracy of the uncorrected false posts. Across countries, corrections did not have statistically significant effects on the accuracy ratings of uncorrected false posts (bno link = 0.04, p = .29, ∆ = 3.8%; blink = -0.06, p = .14, ∆ = -5.3%) or on participants’ willingness to share uncorrected false posts (bno link = 0.06, p = .18, ∆ = 6.3%; blink = -0.06, p = .13, ∆ = -7.0%).
Second, we examined whether exposure to corrected false posts increases the perceived accuracy of the true posts. Across countries, corrections did not have statistically significant effects on the accuracy ratings of true posts (bno link = 0.001, p = .94, ∆ = 0.1%; blink = 0.04, p = .16, ∆ = 1.9%) or on participants’ willingness to share true posts (bno link = -0.01, p = .79, ∆ = -0.7%; blink = -0.02, p = .67, ∆ = -1.2%). In Appendix C, we explore the determinants of belief in false COVID-19 information and participants’ willingness to share it.
Methods
Participants
We accessed Kantar Media’s online survey panels to recruit 1,000 participants each in the United Kingdom (52% women, Mdnage group = 45–54, Mdneducation = post-secondary), Brazil (55% women, Mdnage group = 35–44, Mdneducation = short-cycle tertiary education, i.e., about two years after high school), and India (46% women, Mdnage group = 25–34, Mdneducation = short-cycle tertiary education). Data collection took place March 12–17, 2021, in the United Kingdom and India, and March 12–24, 2021, in Brazil. The participants were distributed across the Control, Correction, and Correction with Link conditions as follows: (United Kingdom) 337, 321, 342; (Brazil) 326, 336, 338; (India) 330, 335, 335. Per country, this sample size allowed us to reliably detect effect sizes as small as Cohen’s f ≈ 0.10 (assuming 80% power and α = 0.05). In the combined data, we were able to reliably detect even smaller effect sizes (Cohen’s f ≈ 0.057). We used interlocking quotas for age, gender, region, and income in Brazil and the United Kingdom, and for age, gender, and region in India (the full breakdown is available on the Open Science Framework). Quota targets were based on the online population rather than the national population to avoid overrepresenting groups that are not connected to the internet, which is particularly important in countries like India with relatively low internet penetration. The survey was designed by the authors and built using Qualtrics. We focus on these three countries because India is characterized by high levels of belief in conspiracy theories, the United Kingdom by particularly low levels, and Brazil falls in between (Kirk, 2022). Moreover, the authors possess in-depth expertise in each of these countries, including knowledge of the language and cultural context.
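The stated sensitivity figures can be reproduced with a standard one-way ANOVA power calculation. Below is a minimal sketch using statsmodels; this is an illustration of the arithmetic, not necessarily the software used for the pre-registered power analysis.

```python
# Smallest detectable effect (Cohen's f) for a three-condition design at
# 80% power and alpha = 0.05, per country (~1,000) and combined (~3,000).
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

f_country = analysis.solve_power(effect_size=None, nobs=1000,
                                 alpha=0.05, power=0.80, k_groups=3)
f_combined = analysis.solve_power(effect_size=None, nobs=3000,
                                  alpha=0.05, power=0.80, k_groups=3)

print(f"per country: Cohen's f ~ {f_country:.3f}")   # ~ 0.10
print(f"combined:    Cohen's f ~ {f_combined:.3f}")  # ~ 0.057
```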
Design
Participants were presented with nine real Facebook posts. Six of the posts contained information that had been fact-checked and found to be false, while three contained true information. The false posts were all contemporary real-world examples of misinformation sourced from International Fact-Checking Network (IFCN)-accredited fact-checking organizations that partnered with Facebook to provide ratings that directly inform if and how Facebook labels content. The true posts were sourced from the Facebook pages of health authorities in each country. The posts were country-specific: the posts displayed to participants in Brazil were in Portuguese, while in the United Kingdom and India they were in English. This makes comparisons between countries confounded, but it increases the real-world relevance of our findings, as the posts were actually circulating in the countries we investigated. Given the applied focus of our research, we chose to prioritize country-specific conclusions over cross-country comparisons.
Participants were randomly assigned to one of three conditions (see Figure 2). Participants in the Control (no correction) condition saw the nine posts (three true, six false) without any corrections. Participants in the Correction condition saw the nine posts (three true, six false) with a user correction under four of the six false posts. Two false posts were left uncorrected to make the experiment more realistic (i.e., only some of the false posts people encounter on Facebook are likely to have been corrected by another user). We discuss these posts in Finding 4. Finally, the Correction with a link to a fact check condition was identical to the Correction condition, except that the user correction was paired with a link to a fact check from a news organization (see panel C of Figure 2). At the end of the survey, all participants were debriefed, and all false posts were corrected (with links to fact checks).
In the United Kingdom, the fact checks were from the BBC or Reuters. In Brazil, they were from Aos Fatos, O Estado de S. Paulo, or Folha de S. Paulo. In India, they were from the BBC, The Quint, AFP, or Factly.

Measures
We first measured participants’ demographics, trust, attitudes towards the news, news use, and belief in conspiracy theories (the full survey is available on the Open Science Framework). Before and after the treatment, participants completed an attention check (see Appendix B). Then, we measured the perceived accuracy of all posts using the following question, in which a placeholder was filled with a description of the claim in the post being viewed: To the best of your knowledge, how accurate is the claim that <insert claim>? (0 = not at all accurate, 1 = not very accurate, 2 = somewhat accurate, 3 = very accurate). We measured participants’ willingness to share the posts using the following question: How likely would you be to share this post on social media (e.g., on Facebook, Twitter, WhatsApp, etc.)? (0 = not at all likely, 1 = not very likely, 2 = somewhat likely, 3 = very likely).
We analyzed the data at the response level using linear mixed-effects models. We included fixed effects for condition and post (and country in the combined data) and random intercepts for respondent ID to account for clustering (i.e., multiple answers per participant). We report the estimates (b). In Appendix F, we show that the effect sizes remain unchanged when using OLS linear regression with standard errors clustered on participants and posts, but all p-values are smaller, such that in Brazil, the effects of corrections become significant in three instances.
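For concreteness, here is a minimal sketch of this specification in Python with statsmodels, using simulated data and hypothetical column names; the replication scripts on the Harvard Dataverse are the authoritative version of the analysis.

```python
# Sketch of the main specification: linear mixed-effects model with fixed
# effects for condition and post, and random intercepts per respondent.
# Data and column names are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_resp, n_posts = 300, 6  # respondents x false posts, long format

conditions = rng.choice(["control", "correction", "correction_link"], size=n_resp)
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), n_posts),
    "post_id": np.tile(np.arange(n_posts), n_resp),
    "condition": np.repeat(conditions, n_posts),
    "accuracy": rng.integers(0, 4, n_resp * n_posts).astype(float),  # 0-3 scale
})

# Condition coefficients correspond to the reported b's (vs. the control group).
model = smf.mixedlm("accuracy ~ C(condition, Treatment('control')) + C(post_id)",
                    data=df, groups=df["respondent_id"])
print(model.fit().summary())
```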
Bibliography
Badrinathan, S., & Chauchard, S. (2024). “I don’t think that’s true, bro!” Social corrections of misinformation in India. The International Journal of Press/Politics, 29(2), 394–416. https://doi.org/10.1177/19401612231158770
Bak-Coleman, J. B., Kennedy, I., Wack, M., Beers, A., Schafer, J. S., Spiro, E. S., Starbird, K., & West, J. D. (2022). Combining interventions to reduce the spread of viral misinformation. Nature Human Behaviour, 6(10), 1372–1380. https://doi.org/10.1038/s41562-022-01388-6
Blair, R. A., Gottlieb, J., Nyhan, B., Paler, L., Argote, P., & Stainfield, C. J. (2024). Interventions to counter misinformation: Lessons from the Global North and applications to the Global South. Current Opinion in Psychology, 55, 101732. https://doi.org/10.1016/j.copsyc.2023.101732
Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33(9), 1131–1140. https://doi.org/10.1080/10410236.2017.1331312
Bode, L., Vraga, E. K., & Tang, R. (2024). User correction. Current Opinion in Psychology, 56, 101786. https://doi.org/10.1016/j.copsyc.2023.101786
Brashier, N. M. (2024). Fighting misinformation among the most vulnerable users. Current Opinion in Psychology, 57, 101813. https://doi.org/10.1016/j.copsyc.2024.101813
Brotherton, R., French, C. C., & Pickering, A. D. (2013). Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in Psychology, 4, 279. https://doi.org/10.3389/fpsyg.2013.00279
Budak, C., Nyhan, B., Rothschild, D. M., Thorson, E., & Watts, D. J. (2024). Misunderstanding the harms of online misinformation. Nature, 630(8015), 45–53. https://doi.org/10.1038/s41586-024-07417-w
Coppock, A. (2023). Persuasion in parallel: How information changes minds about politics. University of Chicago Press. https://doi.org/10.7208/chicago/9780226821832.001.0001
Drolsbach, C. P., Solovev, K., & Pröllochs, N. (2024). Community notes increase trust in fact-checking on social media. PNAS Nexus, 3(7), pgae217. https://doi.org/10.1093/pnasnexus/pgae217
Graham, M. H., & Porter, E. (2025). Increasing demand for fact-checking. Political Communication, 42(2), 325–348. https://doi.org/10.1080/10584609.2024.2395859
Guess, A., McGregor, S., Pennycook, G., & Rand, D. (2024). Unbundling digital media literacy tips: Results from two experiments. OSF. https://osf.io/u34fp/download
Guess, A., Nyhan, B., & Reifler, J. (2020). Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour, 4(5), 472–480. https://doi.org/10.1038/s41562-020-0833-x
Kaplan, J. (2025, January 7). More speech and fewer mistakes. Meta Newsroom. https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
Kirk, I. (2022). What conspiracy theories did people around the world believe in 2021? YouGov. https://yougov.co.uk/topics/international/articles-reports/2022/02/08/what-conspiracy-theories-did-people-around-world-b
Martel, C., Allen, J., Pennycook, G., & Rand, D. G. (2024). Crowds can effectively identify misinformation at scale. Perspectives on Psychological Science, 19(2), 477–488. https://doi.org/10.1177/17456916231190388
Martel, C., & Rand, D. G. (2024). Fact-checker warning labels are effective even for those who distrust fact checkers. Nature Human Behaviour, 8(10), 1957–1967. https://doi.org/10.1038/s41562-024-01973-x
McClain, C. (2021, May 4). 70% of U.S. social media users never or rarely post or share about political, social issues. Pew Research Center. https://www.pewresearch.org/fact-tank/2021/05/04/70-of-u-s-social-media-users-never-or-rarely-post-or-share-about-political-social-issues/
Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe. Princeton University Press.
Okong’o, J. (2025, April 9). Meta is failing to stop dangerous disinformation in the world’s most spoken languages. Poynter. https://www.poynter.org/fact-checking/2025/meta-disinformation-non-english-languages/
Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66(11), 4944–4957. https://doi.org/10.1287/mnsc.2019.3478
Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (pp. 123–205). Academic Press. https://doi.org/10.1016/S0065-2601(08)60214-2
Porter, E., & Wood, T. J. (2021). The global effectiveness of fact-checking: Evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proceedings of the National Academy of Sciences, 118(37), e2104235118. https://doi.org/10.1073/pnas.2104235118
Porter, E., & Wood, T. J. (2024). Factual corrections: Concerns and current evidence. Current Opinion in Psychology, 55, 101715. https://doi.org/10.1016/j.copsyc.2023.101715
Renault, T., Amariles, D. R., & Troussel, A. (2024). Collaboratively adding context to social media posts reduces the sharing of false news. arXiv. https://doi.org/10.48550/arXiv.2404.02803
Vraga, E. K., & Bode, L. (2021). Addressing COVID-19 misinformation on social media preemptively and responsively. Emerging Infectious Diseases, 27(2), 396–403. https://doi.org/10.3201/eid2702.203139
Walter, N., Brooks, J. J., Saucier, C. J., & Suresh, S. (2020). Evaluating the impact of attempts to correct health misinformation on social media: A meta-analysis. Health Communication, 36(13), 1776–1784. https://doi.org/10.1080/10410236.2020.1794553
Yang, W., Wang, S., Peng, Z., Shi, C., Ma, X., & Yang, D. (2022). Know it to defeat it: Exploring health rumor characteristics and debunking efforts on Chinese social media during COVID-19 crisis. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1157–1168. https://doi.org/10.1609/icwsm.v16i1.19366
Funding
This research was completed as part of the Oxford Martin Program on Misinformation, Science, and Media, funded by the Oxford Martin School and with further support from the BBC World Service as part of the Trusted News Initiative.
Competing Interests
The authors declare no competing interests.
Ethics
The research protocol was approved by the University of Oxford Central University Research Ethics Committee. Participants provided their informed consent.
Copyright
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.
Data Availability
All materials needed to replicate this study are available via the Harvard Dataverse: https://doi.org/10.7910/DVN/MGUSFA and https://osf.io/p6xjs/. The pre-registration is available at https://osf.io/mh5te.