Commentary

Towards the study of world misinformation

What if nearly everything we think we know about misinformation came from just a sliver of the world? When research leans heavily on online studies from a few wealthy nations, we risk drawing global conclusions from local noise. A WhatsApp group of fishermen, a displaced community in a refugee camp, or a bustling market in the Global South are not marginal examples of information environments; such contexts call for an evolution of how we study misinformation. In this commentary, I argue that progress in misinformation studies requires expanding methodological reach beyond convenience samples, critically reassessing causal assumptions, engaging in participatory intervention design, and incorporating insights from both encrypted and offline information networks to develop more contextually grounded and globally relevant strategies.


Introduction

The study of misinformation has become a prominent trend in social sciences (Cook & Lewandowsky, 2015). However, as the topic gains traction, a critical question arises: Do the theories and paradigms used truly represent the 8 billion people on our planet? In the case of misinformation, a critical analysis yields a resounding no, with recent reviews highlighting that more than 80% of the research is conducted in the Global North (Badrinathan & Chauchard, 2024; Blair et al., 2024).

Badrinathan and Chauchard (2024) show that the bulk of misinformation research is framed by theories and methods developed in high-income countries, leaving the Global South largely neglected. Similarly, Blair et al. (2024) warn that applying interventions from the Global North without proper adaptation risks deepening societal divides and misaligning interventions with local realities. Even the few studies that ventured beyond these regions often relied on methodologies originally designed in the Global North and subsequently applied to the Global South (Arechar et al., 2023; Guess et al., 2020), overlooking potential nuances that underlie the proliferation of misinformation across different contexts.

Recent large-scale syntheses and cross-country work signal both progress and limits in our understanding of misinformation. Meta-analyses and taxonomies map growing yet uneven evidence on what works (Kozyreva et al., 2024; Sultan et al., 2024), a cross-national experiment documents substantial variation in how people encounter and evaluate false information (Arechar et al., 2023), and interventions in the Global South show mixed effects—some promising (Ali & Qazi, 2022, 2023; Bowles et al., 2023), others null or context-dependent (Harjani et al., 2023).

To move forward, we must not only recognize who is missing from misinformation research, but also revisit and broaden how we study misinformation—reassessing the channels, social structures, and theoretical assumptions that shape its spread—and embrace locally grounded methods and ethical frameworks to design more inclusive and effective interventions.

Who is left out?

The overlooked and underrepresented extend beyond the confines of the Global South. Inequality operates subtly, with those offline leaving no digital traces yet remaining susceptible to the perils of scientific and political misinformation through other channels. While being offline might reduce exposure to false information, it simultaneously limits access to verification channels, allowing falsehoods to persist.

Across contexts, existing approaches have generated valuable insights on how to counter misinformation, yet many continue to rely on online platforms that primarily sample from narrow and digitally literate demographics. Platforms such as Amazon MTurk or Prolific have revolutionized large-scale experimentation but have also concentrated research on a limited subset of the population—typically young, educated, urban, and digitally literate individuals. Thus, it becomes imperative to bridge the gap and address the diverse array of challenges faced by those often left out of the study of misinformation. Consider, for instance, those residing in conflict zones, areas prone to natural disasters, or in refugee camps, each facing unique challenges in accessing and navigating information channels (Azoulay, 2024; Badji et al., 2024; Dierickx & Lindén, 2024). Importantly, we must distinguish between methodological challenges of inclusion in research (such as recruiting participants with low literacy or limited digital access) and challenges of implementation, which arise more acutely in fragile and authoritarian settings where fact checkers and participants may face coercion, surveillance, or threats to their safety.

Some environments primarily pose methodological challenges for researchers: Individuals with low levels of education and income remain particularly underrepresented in online research platforms, which typically draw from more digitally literate and economically stable populations (Paolacci & Chandler, 2014). Even when studies target low- and middle-income countries, many of the most disadvantaged groups are systematically excluded simply because they lack the internet access or literacy skills needed to be part of online subject pools (Arechar et al., 2023). Economic poverty, the most pervasive invisible barrier, affects even those with the ability to access free information. Those living under such conditions are absorbed into a constant survival struggle, diverting their attention to immediate needs (Burlacu et al., 2022; Burlacu et al., 2023; De Bruijn & Antonides, 2021) and leaving fewer resources for activities like information verification, a luxury often taken for granted in more affluent societies.

Other settings present intervention challenges. Conflict-affected contexts pose acute difficulties for both citizens navigating the infosphere and professionals engaged in fact-checking. In authoritarian environments, fact checkers face legal and political constraints that compromise their ability to challenge state narratives, often operating under threats to their safety and professional integrity (Azoulay, 2024). More broadly, such settings are marked by limited data access, high levels of fear, and volatile information flows that amplify misinformation and hinder timely verification (Badji et al., 2024). In a context like the Russian-Ukrainian war, fact checkers contend with complex, multi-platform propaganda campaigns and logistical barriers to verifying real-time information while also facing digital harassment and operational risks (Dierickx & Lindén, 2024). These constraints not only jeopardize the effectiveness of fact-checking efforts but also highlight the urgent need for context-sensitive methods and protective frameworks that support verification work under extreme conditions. Conflict can reshape how people engage with information. For example, an analysis of Ukrainian tweets before and after Russia’s invasion showed that rising conflict salience intensified the influence of social identity. Users were more likely to seek and share news that reinforced in-group narratives and aligned with their national or ethnic identities, while distancing themselves from content linked to opposing groups or external sources (Kyrychenko et al., 2024). Carlson et al. (2017) similarly explored how information dynamics influence asylum seekers’ access to legal protections. They argued that refugees often underutilize formal asylum channels due to limited and biased information, leading to distrust in authorities and increased reliance on smugglers.

In our drive to generate rapid, scalable evidence, research has understandably prioritized accessible, digitally connected populations, but in doing so, the research community often sidelined population groups that were harder to reach or work with. One example of an important but often understudied group is youth and children. Shtulman (2024) highlighted their particular vulnerability to misinformation and the promise of early educational interventions to foster critical thinking and digital literacy. However, despite some promising experimental work (Breakstone et al., 2021, 2022), designing effective, context-sensitive interventions for this group remains a significant challenge (Martini et al., 2025).

This lack of representation reinforces existing inequalities, as the most vulnerable and potentially the most exposed to misinformation are also the least likely to be included in intervention research. This issue has long been documented in behavioral sciences. Henrich et al. (2010) showed that Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations often differ systematically from the rest of the world across cognitive, social, and behavioral domains. Their findings underscore that expanding samples is an empirical, not merely an ethical, imperative. It is, therefore, reasonable to think that underexplored contexts are those that might most benefit from misinformation interventions. Underrepresented, vulnerable people are likely to be more exposed and less well-equipped with the tools to counter misinformation. For instance, populations in the Global South have, on average, lower levels of digital and media literacy (Ragnedda & Mutsvairo, 2019), which could make them more vulnerable to misinformation while simultaneously potentially increasing the impact of interventions to promote digital and media literacy. The limited availability of representative online panels in non-WEIRD contexts makes field experiments necessary to amplify the voices often unheard in laboratory and online studies.

A new research agenda

Addressing the global dimensions of misinformation requires more than expanding geographic reach: it demands rethinking how we select our study populations, engage with media channels, build theories, analyze data, validate interventions, collaborate across institutions, and uphold ethical standards. Table 1 outlines seven intersecting areas where adapting our research practices can make misinformation studies more inclusive, rigorous, and impactful.

Rethinking sampling

A critical starting point is sampling. Many studies still rely on online convenience samples that systematically exclude rural, offline, conflict-affected, and low-literacy populations (Badrinathan & Chauchard, 2024; Blair et al., 2024). Lab-in-the-field experiments, randomized trials conducted with community-based organizations, and natural experiments triggered by changes in media regulation can provide richer, more grounded insights into the lived realities of those most affected by misinformation but least represented in the literature.

Engaging local media channels

Equally important is devoting attention to the channels through which information spreads. In many contexts, misinformation circulates through closed messaging platforms, local radio, community gatherings, SMS chains, and religious networks (Badrinathan & Chauchard, 2024; Blair et al., 2024). Encrypted messaging apps like WhatsApp and Telegram are often seen as obstacles to empirical research, but they also offer unique opportunities. As shown by Pasquetto et al. (2022), WhatsApp’s flexible formats and the strong social ties within its user networks can be strategically leveraged to increase the reach and impact of debunking messages. In their study, audio-based corrections were found to generate more engagement than text or images, and users were more likely to re-share debunks received from close contacts or political in-group members. Such findings point to the importance of studying these alternative media environments through complementary methods like digital ethnography, multi-format content analysis, and peer-to-peer network mapping. Building on such insights, promising tools like chatbot-based recruitment (Bowles et al., 2025) and WhatsApp data donation initiatives (Garimella & Chauchard, 2025) provide new avenues for studying these environments.

Revisiting theoretical assumptions

Revisiting our theoretical assumptions is also essential. Prevailing models of misinformation diffusion often rest on the idea of viral contagion, where a single influencer drives large-scale spread (e.g., Shao et al., 2018). But complex contagion theory suggests that people may need multiple exposures from peers before accepting or sharing content (Centola, 2010; Centola & Macy, 2007). Whether misinformation spreads more virally or through social reinforcement can vary by setting. In rural communities, where interpersonal trust and close-knit ties dominate, these dynamics may differ sharply from those seen in urban or digital-first contexts. Local leaders, for example, may serve as both critical debunkers and powerful amplifiers of falsehoods, depending on the circumstances.
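The difference between the two diffusion models can be made concrete with a toy simulation. This is an illustrative sketch only: the clique-ring network, seed choices, and threshold value are assumptions for demonstration, not data from any study cited here.

```python
# Toy contrast of simple vs. complex contagion on the same network.
# All parameters (network shape, seeds, threshold) are illustrative
# assumptions, not drawn from the studies discussed in the text.

def spread(adj, seeds, threshold, max_rounds=20):
    """Threshold cascade: a node adopts once at least `threshold` of its
    neighbors have adopted. threshold=1 mimics simple (viral) contagion;
    threshold>=2 mimics complex contagion needing social reinforcement."""
    adopted = set(seeds)
    for _ in range(max_rounds):
        new = {n for n in adj if n not in adopted
               and sum(m in adopted for m in adj[n]) >= threshold}
        if not new:
            break
        adopted |= new
    return adopted

# Clique ring of 8 nodes: each node linked to its two nearest neighbors
# on each side, giving the local clustering complex contagion relies on.
adj = {i: {(i + d) % 8 for d in (-2, -1, 1, 2)} for i in range(8)}

simple = spread(adj, seeds={0}, threshold=1)          # one exposure converts
complex_one = spread(adj, seeds={0}, threshold=2)     # stalls: no reinforcement
complex_two = spread(adj, seeds={0, 1}, threshold=2)  # adjacent seeds reinforce

print(len(simple), len(complex_one), len(complex_two))  # 8 1 8
```

A single seed saturates the network under simple contagion but goes nowhere under a two-exposure threshold; two adjacent seeds restore the cascade, which is the reinforcement dynamic that close-knit rural networks may amplify or dampen.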

Advancing data analysis

Improved data analysis methods are helping researchers capture these nuances. Advances in network science and the growing accessibility of large language models (LLMs), artificial intelligence systems trained to generate and understand human-like text, allow us to trace multilingual content, identify influential nodes, and analyze emotional and narrative patterns in ways that were previously out of reach (Thapa et al., 2025). Such methods are particularly valuable for analyzing high-volume corpora from messaging apps. Combining these tools with geospatial analysis and mixed-methods approaches can uncover how misinformation clusters and spreads across specific populations and platforms.
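One common building block of such analyses, ranking candidate "superspreaders" in a forwarding network, can be sketched with a plain power-iteration PageRank. The graph below is hypothetical, invented purely for illustration; real studies would work from observed sharing or forwarding data.

```python
# Minimal sketch: ranking influential nodes in a hypothetical directed
# forwarding network with power-iteration PageRank (no external libraries).

def pagerank(edges, d=0.85, iters=50):
    """Return PageRank scores for a directed edge list [(u, v), ...]."""
    nodes = {n for e in edges for n in e}
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or list(nodes)  # dangling nodes spread evenly
            for v in targets:
                new[v] += d * rank[u] / len(targets)
        rank = new
    return rank

# Hypothetical forwarding graph: (u, v) means u forwarded content to v.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"),
         ("d", "c"), ("e", "c"), ("f", "c")]
ranks = pagerank(edges)
top = max(ranks, key=ranks.get)
print(top)  # "c": the hub most of the network forwards into
```

In practice the same idea scales to large messaging-app corpora via libraries such as NetworkX, with LLM-based tools layered on top for cross-lingual content classification.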

Designing scalable interventions

This more granular understanding is also key to designing scalable interventions. While experimental trials may succeed in a given context, it remains unclear how well they generalize across regions with different cultural norms or political constraints. Interventions must be modular (i.e., structured in self-contained, interchangeable units) and adaptable and, when feasible, co-designed with target beneficiaries. Modularity and contextualization are not contradictory but complementary: modular design provides external validity and adaptability across settings, while deep contextualization ensures that interventions remain appropriate and effective within local realities. Equally important is assessing the sustainability of interventions within the same context. For example, strategies such as nudging attention may work once or twice but risk diminishing returns if overused (Roozenbeek et al., 2024). Overarching studies, conducted across a range of settings and followed over time, can help assess both their effectiveness and resilience to local variations.

Fostering inclusive collaboration

Underpinning all of this is the need to rethink who shapes the research agenda. Inclusive collaboration is not just a matter of fairness; it directly improves research quality by ensuring ethical sensitivity, embedding local knowledge, and enhancing the cultural relevance of interventions. Scholars from the Global South should not be relegated to being local implementers but lead the way (Medie & Kang, 2018). Building a truly global misinformation field means investing in regional conferences, supporting fellowships for underrepresented scholars, and creating structural incentives for equitable co-authorship and cross-institutional collaboration.

Embedding ethics throughout

Ethics must be taken into account in every stage of the process. Moving into new contexts raises additional challenges that require tailored approaches. In vulnerable settings, even modest financial incentives can exert undue influence, and informed consent cannot be taken for granted (Largent & Lynch, 2017). Locally adapted review mechanisms, like the Amref Ethical and Scientific Review Committee (ESRC) in Kenya, can provide grounded, culturally sensitive oversight. Consent procedures must be iterative, inclusive of varying literacy levels, and followed by transparent feedback loops such as community debriefs (Cargo & Mercer, 2008). Interventions, too, must be co-developed with local actors to ensure they are not only effective but legitimate and sustainable.

Together, these proposed shifts do not reject prior advances; rather, they call for complementary work towards greater relevance, inclusiveness, and context sensitivity. The need for such complementarity is empirical: models and interventions developed on narrow populations can systematically underperform or produce harms when deployed in underrepresented groups—documented, for example, in algorithmic healthcare decision tools (Obermeyer et al., 2019), commercial AI systems (Buolamwini & Gebru, 2018), and predictive models with poor external validity (Wynants et al., 2020). Incorporating more representative samples and locally grounded designs is therefore not only a matter of inclusion but of scientific validity: it improves the likelihood that research will yield reliable, applicable solutions across diverse settings.

| Aspect | Research Targets | Promising Approaches |
| --- | --- | --- |
| Samples | Under-represented populations (e.g., rural, offline, conflict-affected groups, refugees, youth, those with low digital literacy) | Lab-in-the-field experiments; RCTs in partnership with community-based organizations; natural experiments (e.g., media disruptions, policy rollouts) |
| Media Channels | Locally relevant channels (e.g., WhatsApp, Telegram, local radio, community gatherings, SMS-based communication, religious networks) | Investigation of closed-platform dynamics; content analysis across text, audio, and video formats; network mapping of peer-to-peer communication |
| Theoretical Frameworks | Mechanisms of misinformation spread (e.g., simple vs. complex contagion); trust and social norms; offline–online dynamics in belief formation | Comparative theory-building across contexts; behavioral models of misinformation uptake and sharing; experimental validation of social learning mechanisms |
| Data Analysis | Multilingual misinformation content; tracing dissemination patterns; identifying superspreaders; narrative strategies and emotional appeals | Network analysis of information flows; large-language-model (LLM) tools for cross-lingual topic and sentiment modeling; mixed-methods triangulation; geospatial analysis |
| External Validity | Co-designing interventions; generalizability across regions and political environments; designing interventions adaptable to local norms and institutions | Modular, context-sensitive intervention design; cluster-randomized trials across diverse geographies; longitudinal tracking of outcomes; cross-site meta-analyses to assess broader patterns |
| Research Community | Empowering Global South scholars; fostering equitable collaboration; reducing epistemic inequalities in agenda-setting and publishing | Regional workshops and conferences; fellowships and funding for southern researchers; co-authorship incentives; partnerships with community-based organizations |
| Ethical Considerations | Protection of vulnerable populations; mitigating unintended consequences; enabling informed consent among diverse literacy levels | Localized and adaptive IRB protocols; inclusion of local stakeholders in ethics advisory boards; iterative, multimodal consent processes (e.g., visual or oral); post-study transparency mechanisms (e.g., community debriefs, audits) |
Table 1. Expanding the global research agenda on misinformation. Note: The table identifies key challenges and promising methodological approaches to improve the representativeness, validity, and ethical rigor of misinformation studies beyond WEIRD populations.

Existing evidence

In recent years, the field of misinformation studies has made commendable strides toward consolidating insights and expanding its empirical boundaries. Researchers have started undertaking field experiments targeting underrepresented populations, cross-national comparisons, meta-analyses, and efforts to systematically classify interventions, all of which are contributing to a broader and more global understanding of how misinformation operates.

Interventions to counter misinformation in the Global South have yielded mixed findings (Guess et al., 2020; Harjani et al., 2023; Badrinathan et al., 2025), underscoring the necessity of contextualizing misinformation interventions to local conditions. For example, Badrinathan (2021) conducted a randomized controlled trial in India testing fact-checking interventions, which failed to reduce belief in false claims, contrasting with findings from Guess et al. (2020) in the United States, where similar approaches yielded significant outcomes. These discrepancies indicate that methods developed in high-income contexts may not seamlessly translate to the Global South. Understanding these differences, and adapting interventions accordingly, requires robust field experiments, including ethnographic fieldwork, interviews with community leaders, and analyses of local communication channels, such as WhatsApp groups, churches, and farmers’ cooperatives.

Despite advances, certain interventions still face contextual challenges. Harjani et al. (2023) made an important contribution by testing inoculation theory and gamification in a rural Indian field trial. Their randomized trial found that, contrary to previous results in Western samples, the gamified inoculation intervention did not significantly reduce belief in or willingness to share misinformation among participants in rural India. However, as Panizza et al. (2025) convincingly argue, the trial’s null effects may reflect both participants’ limited familiarity with game-based digital interfaces and the possibility that inoculation interventions may not work as intended in contexts where metaphors and analogies are culturally uncommon. In regions where only one in four households is digitally literate (Mothkoor & Mumtaz, 2021), unfamiliarity with interactive apps can obscure the cognitive gains that inoculation games aim to deliver. Yet, attributing the shortfall solely to a lack of “internet culture” risks flattening the rich, context-specific ways communities engage online. Consider, for instance, a Maasai warrior in Kenya: he may defend his village with traditional spear and shield by day and then, by dusk, share his ceremonial dances on his WhatsApp status, demonstrating that connectivity and digital expression follow distinct cultural logics rather than a simple urban–rural dichotomy.

These complexities underscore why deep contextual understanding and co-design are not optional, but rather essential to creating interventions that resonate with local realities. A compelling example comes from Hopkins et al. (2023), who co-developed the “Cranky Uncle Vaccine” mobile game across Uganda, Kenya, and Rwanda. Working with UNICEF offices, researchers, and local organizations, they ran iterative workshops where community members tested early prototypes, critiqued character design, and offered feedback on gameplay and messaging. Insights from these sessions shaped everything from the character’s clothing to the tone of humor and corrective narratives, ensuring the final product aligned with regional norms. This model of participatory design illustrates how deeply engaging target audiences and local partners can produce culturally grounded interventions ready for rigorous evaluation.

Similarly, recent studies in Pakistan demonstrate promising results. Ali and Qazi’s work (2022, 2023) offers compelling evidence for the effectiveness of field-based experimental interventions in countering misinformation on social media. Their studies in Pakistan demonstrate that educational interventions and improvements in digital literacy can mitigate vulnerability to misinformation, as many new users, with limited digital literacy, are joining social media due to low-cost smartphones and expanded mobile Internet access.

Other field experiments in Africa offer further insights. Bowles et al. (2023) report results from a six-month WhatsApp-based field experiment in South Africa designed to test whether sustained exposure to fact-checks can improve misinformation discernment and influence related attitudes and behaviors. They found that while few participants engaged with fact-checks without incentives, even modest payments significantly increased uptake, with effects persisting after incentives ended. Fact-check exposure, especially when incentivized, improved skepticism toward both targeted and novel misinformation, with short text messages performing as well as longer or more entertaining formats. Although media consumption and verification habits remained largely unchanged, participants reported slightly greater compliance with COVID-19 measures and increased trust in the government. An earlier WhatsApp intervention in Zimbabwe demonstrated that simple push notifications could reduce belief in COVID‑19 myths in the short term (Bowles, Larreguy, & Liu, 2020), but without the sustained, incentivized framework of the South Africa trial.

Ultimately, meaningful progress in combating misinformation depends on embracing complexity and context, moving beyond one-size-fits-all solutions (Roozenbeek et al., 2024) toward approaches that are as diverse and dynamic as the communities they aim to serve.

Overarching studies

As misinformation research matures, overarching studies that cut across contexts are essential for moving the field beyond its fragmented, Western-leaning foundations, with their value hinging on the depth and diversity of the underlying evidence. Arechar et al. (2023), for instance, conducted an ambitious study across 16 countries on six continents to explore how people understand and respond to misinformation in diverse sociopolitical and cultural settings. This cross-country scope is a much-needed step forward, given the historically Western-centric orientation of the field. However, a closer look reveals the use of online convenience samples across these countries, often composed of individuals with relatively high levels of education and digital access. This may limit the generalizability of the findings. Such samples risk reproducing the very biases the study aims to correct, potentially offering insights that reflect lookalikes of WEIRD populations rather than capturing the true diversity of global experiences and vulnerabilities.

A similarly important contribution is offered by Kozyreva et al. (2024), who presented a comprehensive taxonomy of individual-level interventions aimed at combating misinformation online. This “toolbox” is an essential resource for consolidating the fragmented intervention literature, drawing on studies from around the world. These findings underscore the need for further research to develop, rather than merely adapt, interventions that are informed by the cultural, social, and informational realities of diverse settings. As more evidence from diverse contexts becomes available, this taxonomy can help researchers and policymakers better understand and target interventions across different settings.

Sultan et al. (2024) advanced the discussion by offering a meta-analysis that identifies key individual-level predictors of misinformation discernment. Their analysis finds that older age and stronger analytical reasoning skills are associated with a better ability to distinguish misinformation from factual content. Interestingly, they find no significant effect of education level, and due to data limitations, the study does not test for socioeconomic status. Moreover, the categorization of education across the 31 studies included in their meta-analysis, limited to Secondary, Undergraduate, and Graduate, may mask critical differences in education quality and access, especially in countries with more pronounced educational inequalities. In settings where a large share of the population has only partial or interrupted schooling, the role of education may differ substantially from that observed in high-income contexts. These omissions point to the need for future studies that explore how broader structural factors, such as the quality of education systems and disparities in socioeconomic status, interact with individual cognitive traits to shape susceptibility to misinformation.

Misinformation interventions targeting the most vulnerable populations may hold the greatest potential for impact, greater, perhaps, than interventions focused on WEIRD samples. Vulnerable groups often face higher risks from misinformation, with consequences that can compound existing inequalities. Although direct evidence on the cost-effectiveness of misinformation interventions among these populations is still lacking, insights from other domains suggest that targeting the vulnerable can yield exceptionally high returns. For instance, vaccination campaigns in low- and middle-income countries offer a powerful parallel: vaccinations delivered between 2001 and 2020 across 73 such countries are projected to avert over 20 million deaths, save 350 billion U.S. dollars (USD) in direct healthcare costs, and generate 820 billion USD in broader economic and social benefits (Ozawa et al., 2017). Moreover, a systematic review finds that in more than half of cost-effectiveness studies, vaccines cost no more than 100 USD per disability-adjusted life-year averted (Ozawa et al., 2012). These figures suggest that well-designed misinformation interventions targeted at high-risk, underserved groups could produce outsized social and economic returns.

Conclusions

Misinformation is a global problem, but our understanding of it is anything but global. Over 80% of what we know comes from studies rooted in the Global North. That means the models we build, the solutions we fund, and the theories we teach are shaped by a narrow slice of the world. They rarely reflect the multifaceted realities of life in underrepresented populations, where access to information is different, trust is shaped by history, and infrastructures follow entirely different rules.

The evidence is overwhelming. Metastudies, taxonomies, and cross-country surveys all point to a glaring lack of contextual diversity. And yet, we still export one-size-fits-all interventions, assuming they will work the same way everywhere. But what works in Silicon Valley may fall flat in rural Uganda. What makes sense in a lab setting may unravel on a busy road in South Asia. Although direct evidence within misinformation research is still limited, the few cross-context studies available show that interventions validated in WEIRD samples often attenuate or shift in effect when deployed elsewhere, underscoring the empirical necessity of expanding who and where we study.

Still, the field is not without direction. We have a clearer sense of the pathways forward: designing research with and for underrepresented populations, leveraging locally relevant media channels, and adapting theories and methods to reflect diverse cultural, political, and technological realities. This also means finding practical ways to reach offline and hard-to-access populations and ensuring that local researchers are central partners in shaping study design and interventions. Promising implications of this approach include the ability to design interventions that not only reach marginalized groups through trusted local networks but also anticipate misinformation surges during crises. Recent studies demonstrate that with context-sensitive approaches such as field experiments in the Global South, co-designed interventions, and tailored media strategies, it is possible to achieve meaningful impact.

This is more than refining methods or enriching datasets. It is about expanding how we approach misinformation research by centering diverse voices, embracing local knowledge, and building solutions that fit real-world complexities. Only by doing this can we create interventions that truly empower populations to navigate and resist misinformation on their own terms.

Cite this Essay

Ronzani, P. (2025). Towards the study of world misinformation. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-191

Bibliography

Ali, A., & Qazi, I. A. (2022). Digital literacy and vulnerability to misinformation: Evidence from Facebook users in Pakistan. Journal of Quantitative Description: Digital Media, 2. https://doi.org/10.51685/jqd.2022.025

Ali, A., & Qazi, I. A. (2023). Countering misinformation on social media through educational interventions: Evidence from a randomized experiment in Pakistan. Journal of Development Economics, 163, Article 103108. https://doi.org/10.1016/j.jdeveco.2023.103108

Arechar, A. A., Allen, J., Berinsky, A. J., Cole, R., Epstein, Z., Garimella, K., Gully, A., Lu, J. G., Ross, R. M., Stagnaro, M. N., Zhang, Y., Pennycook, G., & Rand, D. G. (2023). Understanding and combating misinformation across 16 countries on six continents. Nature Human Behaviour, 7(9), 1502–1513. https://doi.org/10.1038/s41562-023-01641-6

Azoulay, L. (2024). Truth in the crossfire: The case of Ethiopia and fact-checking in authoritarian contexts. Media and Communication, 12, Article 8785. https://doi.org/10.17645/mac.8785

Badji, S. D., Orgeret, K. S., & Mutsvairo, B. (2024). An exploratory study of fact-checking practices in conflict and authoritarian contexts. Media and Communication, 12, Article 8698. https://doi.org/10.17645/mac.8698

Badrinathan, S., & Chauchard, S. (2024). Researching and countering misinformation in the Global South. Current Opinion in Psychology, 55, Article 101733. https://doi.org/10.1016/j.copsyc.2023.101733

Badrinathan, S. (2021). Educative interventions to combat misinformation: Evidence from a field experiment in India. American Political Science Review, 115(4), 1325–1341. https://doi.org/10.1017/S0003055421000459

Badrinathan, S., Chauchard, S., & Siddiqui, N. (2025). Misinformation and support for vigilantism: An experiment in India and Pakistan. American Political Science Review, 119(2), 947–965. https://doi.org/10.1017/S0003055424000790

Blair, R. A., Gottlieb, J., Nyhan, B., Paler, L., Argote, P., & Stainfield, C. J. (2024). Interventions to counter misinformation: Lessons from the Global North and applications to the Global South. Current Opinion in Psychology, 55, Article 101732. https://doi.org/10.1016/j.copsyc.2023.101732

Bowles, J., Croke, K., Larreguy, H., Liu, S., & Marshall, J. (2023). Sustaining exposure to fact checks: Misinformation discernment, media consumption, and its political implications. American Political Science Review, 119(4), 1864–1887. https://doi.org/10.1017/S0003055424001394

Bowles, J., Larreguy, H., & Liu, S. (2020). Countering misinformation via WhatsApp: Preliminary evidence from the COVID-19 pandemic in Zimbabwe. PLOS ONE, 15(10), Article e0240005. https://doi.org/10.1371/journal.pone.0240005

Breakstone, J., Smith, M., Connors, P., Ortega, T., Kerr, D., & Wineburg, S. (2021). Lateral reading: College students learn to critically evaluate Internet sources in an online course. Harvard Kennedy School (HKS) Misinformation Review, 2(1). https://doi.org/10.37016/mr-2020-56

Breakstone, J., Smith, M., Ziv, N., & Wineburg, S. (2022). Civic preparation for the digital age: How college students evaluate online sources about social and political issues. The Journal of Higher Education, 93(7), 963–988. https://doi.org/10.1080/00221546.2022.2082783

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). PMLR. https://proceedings.mlr.press/v81/buolamwini18a.html

Burlacu, S., Kažemekaitytė, A., Ronzani, P., & Savadori, L. (2022). Blinded by worries: Sin taxes and demand for temptation under financial worries. Theory Decision, 92, 141–187. https://doi.org/10.1007/s11238-021-09820-5

Burlacu, S., Mani, A., Ronzani, P., & Savadori, L. (2023). The preoccupied parent: How financial concerns affect child investment choices. Journal of Behavioral and Experimental Economics, 105, Article 102030. https://doi.org/10.1016/j.socec.2023.102030

Carlson, M., Jakli, L., & Linos, K. (2017). Refugees misdirected: How information, misinformation, and rumors shape refugees’ access to fundamental rights. Virginia Journal of International Law, 57(3), 539–574. https://ssrn.com/abstract=3248122

Cargo, M., & Mercer, S. L. (2008). The value and challenges of participatory research: Strengthening its practice. Annual Review of Public Health, 29, 325–350. https://doi.org/10.1146/annurev.publhealth.29.091307.083824

Centola, D. (2010). The spread of behavior in an online social network experiment. Science, 329(5996), 1194–1197. https://doi.org/10.1126/science.1185231

Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702–734. https://doi.org/10.1086/521848

Cook, J., Ecker, U., & Lewandowsky, S. (2015). Misinformation and how to correct it. In R. A. Scott & S. M. Kosslyn (Eds.), Emerging trends in the social and behavioral sciences: An interdisciplinary, searchable, and linkable resource. Wiley. https://doi.org/10.1002/9781118900772.etrds0222

De Bruijn, E. J., & Antonides, G. (2022). Poverty and economic decision making: A review of scarcity theory. Theory and Decision, 92(1), 5–37. https://doi.org/10.1007/s11238-021-09802-7

Dierickx, L., & Lindén, C. G. (2024). Screens as battlefields: Fact checkers’ multidimensional challenges in debunking Russian-Ukrainian war propaganda. Media and Communication, 12. https://doi.org/10.17645/mac.8668

Garimella, K., & Chauchard, S. (2025). WhatsApp Explorer: A data donation tool to facilitate research on WhatsApp. Mobile Media & Communication, 13(3), 481–503. https://doi.org/10.1177/20501579251326809

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117

Harjani, T., Basol, M.-S., Roozenbeek, J., & van der Linden, S. (2023). Gamified inoculation against misinformation in India: A randomized control trial. Journal of Trial & Error, 3(1), Article e12. https://doi.org/10.36850/e12

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X

Hopkins, K. L., Lepage, C., Cook, W., Thomson, A., Abeyesekera, S., Knobler, S., Boehman, N., Thompson, B., Waiswa, P., Ssanyu, J. N., Kabwijamu, L., Wamalwa, B., Aura, C., Rukundo, J. C., & Cook, J. (2023). Co-designing a mobile-based game to improve misinformation resistance and vaccine knowledge in Uganda, Kenya, and Rwanda. Journal of Health Communication, 28(sup2), 49–60. https://doi.org/10.1080/10810730.2023.2231377

Kyrychenko, Y., Brik, T., van der Linden, S., & Roozenbeek, J. (2024). Social identity correlates of social media engagement before and after the 2022 Russian invasion of Ukraine. Nature Communications, 15(1), Article 8127. https://doi.org/10.1038/s41467-024-52179-8

Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U. K., Lewandowsky, S., Hertwig, R., Ali, A., Bak-Coleman, J., Barzilai, S., Basol, M., Berinsky, A. J., Betsch, C., Cook, J., Fazio, L. K., Geers, M., Guess, A. M., Huang, H., Larreguy, H., Maertens, R., … Wineburg, S. (2024). Toolbox of individual-level interventions against online misinformation. Nature Human Behaviour, 8(6), 1044–1052. https://doi.org/10.1038/s41562-024-01881-0

Largent, E. A., & Lynch, H. F. (2017). Paying research participants: The outsized influence of “undue influence.” IRB, 39(4), 1–9. https://pmc.ncbi.nlm.nih.gov/articles/PMC5640154/

Martini, C., Floris, M., Ronzani, P., Ausili, L., Pennacchioni, G., Adorno, G., & Panizza, F. (2025). The impact of interventions against science disinformation in high school students. Scientific Reports, 15(1), Article 34278. https://doi.org/10.1038/s41598-025-16565-6

Medie, P. A., & Kang, A. J. (2018). Power, knowledge and the politics of gender in the Global South. European Journal of Politics and Gender, 1(1–2), 37–54. https://doi.org/10.1332/251510818X15272520831157

Mothkoor, V., & Mumtaz, F. (2021). The digital dream: Upskilling India for the future. Ideas for India. https://www.ideasforindia.in/topics/governance/the-digital-dream-upskilling-india-for-the-future.html

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Ozawa, S., Clark, S., Portnoy, A., Grewal, S., Stack, M. L., Sinha, A., Mirelman, A., Franklin, H., Friberg, I. K., Tam, Y., Walker, N., Clark, A., Ferrari, M., Suraratdecha, C., Sweet, S., Goldie, S. J., Garske, T., Li, M., Hansen, P. M., … Walker, D. (2017). Estimated economic impact of vaccinations in 73 low- and middle-income countries, 2001–2020. Bulletin of the World Health Organization, 95(9), 629. https://doi.org/10.2471/BLT.16.178475

Ozawa, S., Mirelman, A., Stack, M. L., Walker, D. G., & Levine, O. S. (2012). Cost-effectiveness and economic benefits of vaccines in low- and middle-income countries: A systematic review. Vaccine, 31(1), 96–108. https://doi.org/10.1016/j.vaccine.2012.10.103

Panizza, F., Camminga, T., Gaillard, S., & Grüning, D. J. (2025). Adopt a cultural-historical perspective to adapt misinformation interventions: Reflecting on Harjani et al. Journal of Trial & Error. https://doi.org/10.36850/415c-479a

Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23(3), 184–188. https://doi.org/10.1177/0963721414531598

Pasquetto, I. V., Jahani, E., Atreja, S., & Baum, M. (2022). Social debunking of misinformation on WhatsApp: The case for strong and in-group ties. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1–35. https://doi.org/10.1145/3512964

Ragnedda, M., & Mutsvairo, B. (2019). Mapping the digital divide in Africa: A mediated analysis. Amsterdam University Press. https://doi.org/10.2307/j.ctvh4zj72

Roozenbeek, J., Remshard, M., & Kyrychenko, Y. (2024). Beyond the headlines: On the efficacy and effectiveness of misinformation interventions. Advances in Psychology, 2, Article e24569. https://doi.org/10.56296/aip00019

Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), Article 4787. https://doi.org/10.1038/s41467-018-06930-7

Shtulman, A. (2024). Children’s susceptibility to online misinformation. Current Opinion in Psychology, 55, Article 101753. https://doi.org/10.1016/j.copsyc.2023.101753

Sultan, M., Tump, A. N., Ehmann, N., Lorenz-Spreen, P., Hertwig, R., Gollwitzer, A., & Kurvers, R. H. (2024). Susceptibility to online misinformation: A systematic meta-analysis of demographic and psychological factors. Proceedings of the National Academy of Sciences, 121(47), Article e2409329121. https://doi.org/10.1073/pnas.2409329121

Thapa, S., Shiwakoti, S., Shah, S. B., Adhikari, S., Veeramani, H., Nasim, M., & Naseem, U. (2025). Large language models in computational social science: Prospects, current state, and challenges. Social Network Analysis and Mining, 15(4). https://doi.org/10.1007/s13278-025-01428-9

Wynants, L., Van Calster, B., Collins, G. S., Riley, R. D., Heinze, G., Schuit, E., Albu, E., Arshi, B., Bellou, V., Bonten, M. M. J., Dahly, D. L., Damen, J. A., Debray, T. P. A., de Jong, V. M. T., De Vos, M., Dhiman, P., Ensor, J., Gao, S., Haller, M. C., … Van Smeden, M. (2020). Prediction models for diagnosis and prognosis of covid-19: Systematic review and critical appraisal. BMJ, 369, Article m1328. https://doi.org/10.1136/bmj.m1328

Funding

No funding has been received to conduct this research.

Competing Interests

The author declares no competing interests.

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Acknowledgements

I am grateful to Sumitra Badrinathan, Lame Ungwang, Anastasia Kozyreva, Philipp Lorenz-Spreen, Mubashir Sultan, Maria Almudena Claassen, Austėja Kažemekaitytė, Sergiu Burlacu, Carlo Martini, and Folco Panizza for their feedback, insightful conversations, and guidance throughout the development of this work. I am grateful to colleagues at DENeB and at the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin for their valuable feedback.