Commentary
Disparities by design: Toward a research agenda that links science misinformation and socioeconomic marginalization in the age of AI
Misinformation research often draws optimistic conclusions; fact-checking, for example, has been established as an effective means of reducing false beliefs. Yet the field rarely considers the socioeconomic disparities that shape who is most vulnerable to science misinformation. Historical and systemic inequalities have fostered mistrust in institutions and limited access to credible information, as when Black patients distrust public health guidance because of past medical racism. Research nevertheless continues to treat information access as equal for all. This essay argues that recent technological disruptions provide an opportune moment for self-reflection, bringing AI, science misinformation, and social disparities together within one research agenda.

(Mis-)Information for everyone?
Science misinformation—the spread of false or misleading claims about scientific topics—persists despite years of research and intervention efforts (Swire-Thompson & Lazer, 2020; West & Bergstrom, 2021). In their 2024 report, the National Academies of Sciences, Engineering, and Medicine identified social disparities as a major factor shaping engagement with misinformation. Socioeconomic and cultural disparities—such as income, education, and language barriers—shape both who is exposed to misinformation and who is able to challenge it. Digital literacy gaps, systemic mistrust, and economic precarity more often affect marginalized communities due to long-standing structural inequalities (Pew Research Center, 2024; Viswanath et al., 2022). During the 2025 wildfires in the Los Angeles region, unverified claims about climate change circulated widely (Doan & Delzer, 2025). Black and Hispanic communities generally face higher wildfire vulnerability (Davies et al., 2018), report greater mistrust of institutions (Bogart et al., 2021), and have less access to digital information (Curtis et al., 2022). They are therefore more susceptible to misinformation through multiple self-reinforcing factors rooted in social disparities. For example, low-income Hispanics often relied on TV, texts, or personal conversations with doctors for wildfire updates, but that information was frequently unavailable in their preferred language or format, or did not arrive in time, leaving many less prepared (Jiao et al., 2025). These are not individual failings but structural conditions amplifying the impact of misinformation (Amazeen et al., 2024). While this section focuses on science misinformation, many of the underlying dynamics, such as digital exclusion, institutional mistrust, and socioeconomic inequality, cut across domains, shaping engagement with misinformation in areas ranging from health to politics.
This reality is further complicated by AI-generated content. While debate continues over AI’s impact on misinformation (De Nadal & Jančárik, 2024; Simon et al., 2023), its potential to spread inaccuracies or outright falsehoods raises concerns (Capraro et al., 2024; Peng & Wang, 2024). Simultaneously, AI offers opportunities to counter misinformation—for instance, by providing fact-checking during crises (Imran et al., 2020) or removing barriers through translation (Zaki & Ahmed, 2024). With platforms like Meta discontinuing third-party fact-checking (Roeder, 2025), equitable access to these tools becomes increasingly important. Without efforts to ensure accessibility across linguistic and digital literacy divides, such technologies risk reinforcing the very marginalization they could reduce.
Research on misinformation and social disparities: AI as the missing link
AI-driven information systems sit at the heart of today's misinformation landscape, offering powerful tools for detection and response while at the same time deepening the very problems they aim to solve. On one hand, AI helps flag false claims and streamline fact-checking (DeVerna et al., 2024). On the other, algorithms that prioritize clicks often favor sensational content over accuracy, reinforcing confirmation biases and limiting exposure to diverse views (Brossard & Scheufele, 2022). Those who engage with misleading content are more likely to encounter similar material again (Modgil et al., 2024). This effect is especially harmful for communities already facing information barriers, such as rural areas with poor broadband infrastructure or low-income groups with limited digital literacy (Philip et al., 2017). A history of misrepresentation in mainstream science and media can also reduce trust in official information, making misleading sources appear more credible. For example, communities that have experienced medical racism or environmental neglect may be less inclined to rely on government-backed health or climate messaging (Jaiswal et al., 2020). While concerns about AI-generated misinformation are growing (Capraro et al., 2024; Shin et al., 2024), we still know far too little about how these technologies may quietly be worsening existing inequalities.
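This feedback dynamic can be made concrete with a minimal sketch. The toy simulation below is our own illustration under assumed parameters (item pool, click probabilities, ranking rule), not a model of any specific platform: a ranker that scores items by past engagement with similar content steadily increases the share of misleading material shown to a user who tends to click it.

```python
import random

# Toy illustration (not from any cited study): an engagement-optimized ranker
# scores items by how often the user has clicked similar content, so a user
# who engages with misleading items is shown progressively more of them.
ITEMS = [{"id": i, "misleading": i % 4 == 0} for i in range(40)]  # 10 of 40 misleading

def rank(items, click_history):
    """Score each item by past clicks on content of the same kind (plus noise)."""
    def score(item):
        same_kind = sum(1 for c in click_history if c["misleading"] == item["misleading"])
        return same_kind + random.random()
    return sorted(items, key=score, reverse=True)

def simulate(n_sessions=20, feed_size=5, p_click_misleading=0.8, p_click_other=0.1):
    """User who mostly clicks misleading content; track its share of the feed."""
    history, shares = [], []
    for _ in range(n_sessions):
        feed = rank(ITEMS, history)[:feed_size]
        shares.append(sum(item["misleading"] for item in feed) / feed_size)
        for item in feed:
            p = p_click_misleading if item["misleading"] else p_click_other
            if random.random() < p:
                history.append(item)
    return shares

if __name__ == "__main__":
    exposure = simulate()
    print(f"misleading share, first session: {exposure[0]:.2f}")
    print(f"misleading share, last session:  {exposure[-1]:.2f}")
```

In most runs of this toy model, the misleading share of the feed climbs from roughly chance level toward saturation, which is the kind of dynamic that audits of real recommendation systems aim to measure.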
Even with the uncertainty surrounding AI developments, efforts to counter misinformation in science have led to promising interventions. Strategies like psychological inoculation—exposing people to weakened misinformation in advance—can build resistance to falsehoods, while fact-checking improves belief accuracy across political lines (Walter et al., 2020). However, a closer look at this literature reveals that while research on the intersection of science misinformation and socioeconomic status exists, it often treats these disparities as background variables rather than core drivers of vulnerability. Much of the work remains focused on individual-level susceptibility or cognitive interventions without adequately addressing how structural inequalities shape exposure, belief, and the capacity to resist falsehoods (Lin et al., 2022; Walter et al., 2020). At the same time, there is a growing body of work on the digital and AI divide and its links to social disparities (Li et al., 2024; Wang et al., 2024). This research, however, rarely addresses the specific role of science-related misinformation. A more structural lens is needed to bridge these areas and better understand how AI technologies may differentially impact communities along socioeconomic lines.
Access to reliable knowledge is as much about what information is available as about whether everyone can meaningfully engage with it. Socioeconomic disparities threaten the latter dimension in particular: Limited broadband access, language barriers, and digital literacy gaps make it more difficult to verify information or seek out alternative perspectives (Viswanath et al., 2022). At the same time, systemic mistrust—often rooted in historical injustices—shapes how misinformation spreads. For example, nearly 90% of Black Americans report encountering inaccurate media portrayals, which can erode trust in mainstream information sources and lead some to seek alternative outlets that may themselves perpetuate misinformation (Jaiswal et al., 2020; Pew Research Center, 2024). Economic precarity compounds this issue. People under financial stress may be more susceptible to misinformation, particularly when narratives align with their frustrations or anxieties. Studies show that conspiracy theories about economic systems or public health policies often resonate more strongly with those experiencing financial instability (Salvador Casara et al., 2022).
Despite these structural barriers, many marginalized communities actively work to counter misinformation. Latinx and Asian community organizations, for example, have launched fact-checking websites, mobilized volunteer teams, and led media literacy efforts tailored to their communities (Ozawa et al., 2024). These initiatives highlight the importance of community-based strategies and the need for research that takes such grassroots responses into account.
A research agenda connecting AI, misinformation, and socioeconomic disparities
In this state of uncertainty and uneven impact, there is great potential for research to examine how AI-driven misinformation intersects with existing social inequalities and develop approaches that respond to these realities. While methodological rigor and responsible media engagement have shown promise in countering misinformation (Chan et al., 2017; Goldstein et al., 2020), they must be adapted to account for the social contexts in which misinformation spreads. Moreover, since research is not immune to structural biases, there is a risk that findings may overgeneralize or fail to fully capture the experiences of marginalized communities. As AI increasingly shapes what information is shared, seen, and trusted, research must keep pace—not only by identifying technical solutions but by addressing the ways in which these systems affect vulnerable communities (Nieminen, 2024).
Even when setting aside the inherent limitations of available evidence, many common solutions still fail to reflect people’s lived experiences. Most of the suggested approaches—from simple fact-checking tools to AI-assisted interventions—are grounded in research that rarely considers social disparities (Walter et al., 2020). Much of this work is based on Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations (Henrich et al., 2010), leaving out those most affected by misinformation. The same blind spots are appearing in research on AI tools themselves. Without a broader, more inclusive lens, well-meaning interventions may unintentionally reinforce the very inequalities they seek to address. For example, fact-checking often presumes stable internet access, media literacy initiatives may not fully account for differences in educational background, and AI tools developed primarily in English can struggle to serve linguistically diverse communities. That is why this moment demands a shift in research priorities toward ecologically valid and, as a result, more inclusive approaches. To advance this agenda, we focus on the following key domains that emerge from the convergence of AI, misinformation, and social marginalization—an often-overlooked intersection we refer to as the social margin of AI and misinformation:
Trust in science and institutions. Understanding what builds or erodes trust in different communities is essential, particularly as AI-generated content can either deepen skepticism or foster transparency through fact-checking and accessibility tools (Jaiswal et al., 2020; Saeidnia et al., 2025; Soto-Vásquez, 2023). Future studies could examine how trust varies depending on whether scientific claims are framed through culturally resonant narratives—such as community care in Black churches, historical trauma among Indigenous populations, or distrust of pharmaceutical companies in low-income rural communities. Experimental designs might compare whether AI-generated fact checks that acknowledge historical injustices are more effective than neutral explanations. Longitudinal studies could track how exposure to culturally adapted content affects trust in public health over time.
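As a purely illustrative sketch of how such an experimental comparison could be analyzed, the snippet below simulates trust ratings for two conditions (a culturally adapted AI-generated fact check versus a neutral one) and tests the difference with a permutation test; all numbers are placeholders, not empirical results.

```python
import random
import statistics

# Hypothetical sketch: participants rate trust (1-7) after seeing either an
# AI-generated fact check that acknowledges historical injustices ("adapted")
# or a neutral one. The data are simulated placeholders.
random.seed(1)
adapted = [min(7, max(1, round(random.gauss(5.2, 1.1)))) for _ in range(200)]
neutral = [min(7, max(1, round(random.gauss(4.8, 1.1)))) for _ in range(200)]

def permutation_p(a, b, n_perm=5000):
    """Two-sided permutation test on the difference in means."""
    pooled, n_a = a + b, len(a)
    obs = abs(statistics.mean(a) - statistics.mean(b))
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= obs:
            hits += 1
    return hits / n_perm

print(f"mean trust (adapted): {statistics.mean(adapted):.2f}")
print(f"mean trust (neutral): {statistics.mean(neutral):.2f}")
diff = statistics.mean(adapted) - statistics.mean(neutral)
print(f"difference: {diff:.2f}, p = {permutation_p(adapted, neutral):.3f}")
```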
Intentional targeting and algorithmic amplification. While research has examined how marginalized groups are targeted (e.g., medical racism during COVID-19; Bogart et al., 2021), broader, comparative studies are needed to understand the role of AI-driven systems in shaping misinformation exposure. Algorithmic personalization and recommendation engines can subtly reinforce existing disparities by curating different informational environments based on user profiles. For example, simulating user profiles of Spanish-speaking immigrants, low-income white rural users, and urban youth of color could reveal how platform recommendation systems deliver differing volumes or types of scientific misinformation—ranging from anti-vaccine rhetoric to conspiracy theories about climate change or reproductive health. Paired with audit studies, these simulations could help identify how AI-powered systems pull certain communities into or protect them from echo chambers and misinformation feedback loops (Diamond et al., 2022).
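A simplified sketch of such an audit might look like the following. Everything here is an assumption for illustration: `fetch_feed` is a placeholder for whatever collection method a real audit would use (sock-puppet accounts, donated browsing data, or a platform research API), and the topic labels stand in for output from a separately validated misinformation classifier.

```python
import random
from collections import Counter
from typing import Callable

# Simulated audience profiles (attributes are illustrative assumptions).
PROFILES = {
    "spanish_speaking_immigrant": {"lang": "es", "income": "low", "region": "urban"},
    "low_income_rural_user": {"lang": "en", "income": "low", "region": "rural"},
    "urban_youth_of_color": {"lang": "en", "income": "mid", "region": "urban"},
}

MISINFO_TOPICS = {"anti_vaccine", "climate_conspiracy", "repro_health_myths"}

def audit(fetch_feed: Callable[[dict, int], list], n_items: int = 200) -> dict:
    """Measure, per profile, the share and mix of recommended items whose
    topic has been labeled as science misinformation."""
    results = {}
    for name, attrs in PROFILES.items():
        feed = fetch_feed(attrs, n_items)
        flagged = Counter(item["topic"] for item in feed if item["topic"] in MISINFO_TOPICS)
        results[name] = {
            "misinfo_share": round(sum(flagged.values()) / max(len(feed), 1), 3),
            "by_topic": dict(flagged),
        }
    return results

def mock_feed(attrs: dict, n: int) -> list:
    """Stand-in for real platform data; assumes, for illustration only, that
    Spanish-language profiles receive a larger share of flagged content."""
    misinfo_rate = 0.35 if attrs["lang"] == "es" else 0.20
    topics = sorted(MISINFO_TOPICS) + ["mainstream_science", "local_news", "entertainment"]
    weights = [misinfo_rate / 3] * 3 + [(1 - misinfo_rate) / 3] * 3
    return [{"topic": random.choices(topics, weights=weights)[0]} for _ in range(n)]

if __name__ == "__main__":
    for profile, stats in audit(mock_feed).items():
        print(profile, stats)
```

Comparisons of this kind would make disparities in exposure measurable across profiles rather than anecdotal, which is precisely what the audit designs cited above aim to achieve with real platform data.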
Diversifying data sources and methods. Combining web search data with online discourse or integrating large-scale AI analysis with in-depth qualitative research can help uncover context-specific misinformation patterns (Soto-Vásquez, 2023; Viswanath et al., 2024). This could involve combining search behavior data with content circulating in community-based messaging networks and integrating large-scale data analysis with qualitative methods such as interviews or ethnographic fieldwork. For example, researchers could examine the spread of narratives promoting alternative health remedies by mapping when and where such content emerges and then contextualizing those patterns through conversations with individuals in regions with limited healthcare access.
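One way to operationalize that pairing of data sources is sketched below. The regions, weeks, and counts are invented placeholders, and the simple ratio rule merely flags candidate region-weeks for the qualitative follow-up described above.

```python
from collections import defaultdict

# Minimal sketch of the mixed-methods pipeline: align weekly counts of an
# "alternative remedy" narrative from two assumed sources (aggregated search
# interest and messages collected, with consent, from community messaging
# networks), then flag region-weeks where messaging activity precedes search
# interest as candidates for follow-up interviews. All values are placeholders.
search_interest = {   # (region, week) -> normalized search interest
    ("coachella_valley", 1): 5, ("coachella_valley", 2): 12, ("coachella_valley", 3): 40,
}
message_volume = {    # (region, week) -> narrative mentions in community channels
    ("coachella_valley", 1): 30, ("coachella_valley", 2): 55, ("coachella_valley", 3): 60,
}

def emergence_candidates(messages, searches, ratio_threshold=2.0):
    """Return region-weeks where messaging mentions outpace search interest,
    suggesting the narrative is circulating before people look it up."""
    flagged = defaultdict(list)
    for (region, week), mentions in messages.items():
        interest = searches.get((region, week), 0)
        if interest == 0 or mentions / interest >= ratio_threshold:
            flagged[region].append(week)
    return dict(flagged)

print(emergence_candidates(message_volume, search_interest))
# -> {'coachella_valley': [1, 2]}: the weeks to contextualize through interviews.
```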
To ensure real-world impact, this research agenda should involve early collaboration with policymakers, technology companies, and affected communities. Researchers could, for example, co-design platform guidelines with tech developers, contribute to policy briefs on algorithmic transparency, or work with NGOs like Retraction Watch and philanthropic foundations such as the Robert Wood Johnson Foundation, which focuses on health equity and evidence-based policy, to build localized media literacy and public health campaigns. The COVID-19 pandemic offered a clear example: In many Black communities, medical racism led to a deep erosion of trust in health institutions, creating gaps that misinformation was quick to fill (Bogart et al., 2021). Researchers should involve these communities directly through interviews or participatory research and collaborate with trusted institutions like churches, clinics, and advocacy groups (Ozawa et al., 2024).
Beyond research: Bridging AI, misinformation, and social disparities
Misinformation in science is deeply intertwined with social disparities and the growing influence of AI. Yet, much of the current research focuses on individual-level solutions, such as improving digital literacy. While valuable, these approaches often assume homogeneity of access and skills, placing disproportionate responsibility on individuals while overlooking broader structural forces—economic disparities, cultural differences, and algorithmic biases—that shape misinformation exposure and engagement.
Our agenda challenges this narrow framing by supporting a research-based shift toward approaches that recognize and address these systemic factors. Tackling science misinformation requires collaboration across disciplines and institutions, supported by structural interventions. While some efforts focus on individual resilience—such as inoculation strategies that help users recognize manipulative tactics (Cook, 2017)—others target structural change. One such approach is technocognition, which integrates psychological and technological insights to redesign digital environments (Lewandowsky & van der Linden, 2021), for instance, by incorporating inclusive recommendation systems that reduce misinformation. Regulatory policy frameworks can further enforce algorithmic transparency (European Commission, 2022). Ultimately, confronting science misinformation requires a multi-layered, collaborative response: Research, education, technology, policy, and design must work together—each attuned to the triad of AI, misinformation, and social marginalization.
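As one concrete, hypothetical reading of what an "inclusive recommendation system" could mean at the ranking layer, the sketch below demotes items a separate classifier flags as likely misinformation while protecting content in the user's preferred language; the scoring rule and field names are our assumptions, not part of the cited technocognition framework.

```python
# Illustrative re-ranking step (assumed fields and weights, for demonstration only).
def rerank(candidates, user_lang, misinfo_penalty=0.5, lang_boost=0.2):
    """candidates: dicts with 'engagement_score' (0-1), 'misinfo_prob' (0-1,
    from a separate classifier), and 'lang'."""
    def adjusted(item):
        score = item["engagement_score"]
        score -= misinfo_penalty * item["misinfo_prob"]   # demote likely misinformation
        if item["lang"] == user_lang:
            score += lang_boost                            # keep accessible content visible
        return score
    return sorted(candidates, key=adjusted, reverse=True)

feed = rerank(
    [
        {"id": "a", "engagement_score": 0.9, "misinfo_prob": 0.8, "lang": "en"},
        {"id": "b", "engagement_score": 0.6, "misinfo_prob": 0.1, "lang": "es"},
        {"id": "c", "engagement_score": 0.7, "misinfo_prob": 0.2, "lang": "en"},
    ],
    user_lang="es",
)
print([item["id"] for item in feed])  # -> ['b', 'c', 'a']
```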
Moving forward, research must adopt a more structural and context-sensitive perspective to build inclusive knowledge infrastructures. AI does not merely accelerate the spread of misinformation; it shapes the social dynamics of visibility, trust, and access. Marginalized communities are disproportionately exposed to false or misleading narratives, reinforcing cycles of exclusion and institutional distrust (Jaiswal et al., 2020; Soto-Vásquez, 2023). This piece calls for a shift away from one-size-fits-all interventions toward approaches that account for the diverse ways in which misinformation and inequality intersect. Only by acknowledging and confronting these complexities can we develop solutions that work not just in theory but in the real, uneven landscapes where misinformation takes root.
Bibliography
Amazeen, M. A., Vasquez, R. A., Krishna, A., Ji, Y. G., Su, C. C., & Cummings, J. J. (2024). Missing voices: Examining how misinformation-susceptible individuals from underrepresented communities engage, perceive, and combat science misinformation. Science Communication, 46(1), 3–35. https://doi.org/10.1177/10755470231217536
Bogart, L. M., Ojikutu, B. O., Tyagi, K., Klein, D. J., Mutchler, M. G., Dong, L., Lawrence, S. J., Thomas, D. R., & Kellman, S. (2021). COVID-19 related medical mistrust, health impacts, and potential vaccine hesitancy among Black Americans living with HIV. JAIDS Journal of Acquired Immune Deficiency Syndromes, 86(2), 200–207. https://doi.org/10.1097/QAI.0000000000002570
Brossard, D., & Scheufele, D. A. (2022). The chronic growing pains of communicating science online. Science, 375(6581), 613–614. https://doi.org/10.1126/science.abo0668
Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., Bonnefon, J.-F., Brañas-Garza, P., Butera, L., Douglas, K. M., Everett, J. A. C., Gigerenzer, G., Greenhow, C., Hashimoto, D. A., Holt-Lunstad, J., Jetten, J., Johnson, S., Kunz, W. H., Longoni, C., … Viale, R. (2024). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus, 3(6), pgae191. https://doi.org/10.1093/pnasnexus/pgae191
Chan, M. S., Jones, C. R., Hall Jamieson, K., & Albarracín, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. https://doi.org/10.1177/0956797617714579
Cook, J. (2017). Understanding and countering climate science denial. Journal and Proceedings of the Royal Society of New South Wales, 150(465/466), 207–219. https://search.informit.org/doi/abs/10.3316/INFORMIT.388378410941383
Curtis, M. E., Clingan, S. E., Guo, H., Zhu, Y., Mooney, L. J., & Hser, Y. I. (2022). Disparities in digital access among American rural and urban households and implications for telemedicine‐based services. The Journal of Rural Health, 38(3), 512–518. https://doi.org/10.1111/jrh.12614
De Nadal, L., & Jančárik, P. (2024). Beyond the deepfake hype: AI, democracy, and “the Slovak case.” Harvard Kennedy School (HKS) Misinformation Review, 5(4). https://doi.org/10.37016/mr-2020-153
DeVerna, M. R., Yan, H. Y., Yang, K. C., & Menczer, F. (2024). Fact-checking information from large language models can decrease headline discernment. Proceedings of the National Academy of Sciences, 121(50), e2322823121. https://doi.org/10.1073/pnas.2322823121
Diamond, L. L., Batan, H., Anderson, J., & Palen, L. (2022). The polyvocality of online COVID-19 vaccine narratives that invoke medical racism. In S. Barbosa, C. Lampe, C. Appert, D. A. Shamma, S. Drucker, J. Williamson, & K. Yatani (Eds.), CHI’22: Proceedings of the 2022 CHI conference on human factors in computing systems (pp. 1–21). Association for Computing Machinery. https://doi.org/10.1145/3491102.3501892
Doan, L., & Delzer, E. (2025, January 16). Wildfire conspiracy theories are going viral again. Why? CBS News. https://www.cbsnews.com
European Commission. (2022). 2022 strengthened code of practice on disinformation. Publications Office of the European Union. https://data.europa.eu/doi/10.2759/895080
Goldstein, C. M., Murray, E. J., Beard, J., Schnoes, A. M., & Wang, M. L. (2020). Science communication in the age of misinformation. Annals of Behavioral Medicine, 54(12), 985–990. https://doi.org/10.1093/abm/kaaa088
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X
Imran, M., Ofli, F., Caragea, D., & Torralba, A. (2020). Using AI and social media multimodal content for disaster response and management: Opportunities, challenges, and future directions. Information Processing & Management, 57(5), 102261. https://doi.org/10.1016/j.ipm.2020.102261
Jaiswal, J., LoSchiavo, C., & Perlman, D. C. (2020). Disinformation, misinformation and inequality-driven mistrust in the time of COVID-19: Lessons unlearned from AIDS denialism. AIDS and Behavior, 24, 2776–2780. https://doi.org/10.1007/s10461-020-02925-y
Jiao, A., Vargas, A. L., Gluhova, Y. D., Headon, K., Rangel, L., Abdallah, S., Ramsey, E. C., Truong, K., Chal, A. M., Hopfer, S., & Wu, J. (2025). Wildfire risk perception and communication in disadvantaged communities: Insights from Eastern Coachella Valley in Southern California. International Journal of Disaster Risk Reduction, 117, 105186. https://doi.org/10.1016/j.ijdrr.2025.105186
Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384. https://doi.org/10.1080/10463283.2021.1876983
Li, W., Xu, S., Zheng, X., & Sun, R. (2024). Bridging the knowledge gap in artificial intelligence: The roles of social media exposure and information elaboration. Science Communication, 46(4), 399–430.
Lin, F., Chen, X., & Cheng, E. W. (2022). Contextualized impacts of an infodemic on vaccine hesitancy: The moderating role of socioeconomic and cultural factors. Information Processing & Management, 59(5), 103013. https://doi.org/10.1016/j.ipm.2022.103013
Modgil, S., Singh, R. K., Gupta, S., & Dennehy, D. (2024). A confirmation bias view on social media induced polarisation during Covid-19. Information Systems Frontiers, 26(2), 417–441. https://doi.org/10.1007/s10796-021-10222-9
National Academies of Sciences, Engineering, and Medicine. (2024). Understanding and addressing misinformation about science. The National Academies Press. https://doi.org/10.17226/27894
Nieminen, H. (2024). Why does disinformation spread in liberal democracies? The relationship between disinformation, inequality, and the media. Javnost-The Public, 31(1), 123–140. https://doi.org/10.1080/13183222.2024.2311019
Ozawa, J. V. S., Woolley, S., & Lukito, J. (2024). Taking the power back: How diaspora community organizations are fighting misinformation spread on encrypted messaging apps. Harvard Kennedy School (HKS) Misinformation Review, 5(3). https://doi.org/10.37016/mr-2020-146
Peng, L., & Wang, J. (2024). Algorithm as recommending source and persuasive health communication: Effects of source cues, language intensity, and perceived issue involvement. Health Communication, 39(4), 852–861. https://doi.org/10.1080/10410236.2023.2242087
Pew Research Center. (2024, June 15). Most Black Americans believe U.S. institutions were designed to hold Black people back. https://www.pewresearch.org/race-and-ethnicity/2024/06/15/most-black-americans-believe-u-s-institutions-were-designed-to-hold-black-people-back/
Philip, L., Cottrill, C., Farrington, J., Williams, F., & Ashmore, F. (2017). The digital divide: Patterns, policy and scenarios for connecting the ‘final few’ in rural communities across Great Britain. Journal of Rural Studies, 54, 386–398. https://doi.org/10.1016/j.jrurstud.2016.12.002
Roeder, A. (2025, January 4). Meta’s fact-checking changes raise concerns about spread of science misinformation. Harvard T. H. Chan School of Public Health. https://hsph.harvard.edu/news/metas-fact-checking-changes-raise-concerns-about-spread-of-science-misinformation/
Saeidnia, H. R., Hosseini, E., Lund, B., Tehrani, M. A., Zaker, S., & Molaei, S. (2025). Artificial intelligence in the battle against disinformation and misinformation: A systematic review of challenges and approaches. Knowledge and Information Systems, 67, 3139–3158. https://doi.org/10.1007/s10115-024-02337-7
Salvador Casara, B. G., Suitner, C., & Jetten, J. (2022). The impact of economic inequality on conspiracy beliefs. Journal of Experimental Social Psychology, 98, 104245. https://doi.org/10.1016/j.jesp.2021.104245
Shin, D., Koerber, A., & Lim, J. S. (2024). Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI. New Media & Society. https://doi.org/10.1177/14614448241234040
Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School (HKS) Misinformation Review, 4(5). https://doi.org/10.37016/mr-2020-127
Soto-Vásquez, A. D. (2023). A review of academic literature on U.S. Latinos and disinformation. Digital Democracy Institute of the Americas. https://ddia.org/en/review-of-literature-on-us-latinos-and-disinformation
Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41(1), 433–451. https://doi.org/10.1146/annurev-publhealth-040119-094127
Viswanath, K., Lee, E. J., & Dryer, E. (2024). Communication inequalities and incomplete data hinder understanding of how social media affect vaccine uptake. BMJ, 385, e076478. https://doi.org/10.1136/bmj-2023-076478
Viswanath, K., McCloud, R. F., & Bekalu, M. A. (2022). Communication, health, and equity: Structural influences. In T. L. Thompson & N. G. Harrington (Eds.), The Routledge handbook of health communication (3rd ed., pp. 426–440). Routledge/Taylor & Francis Group.
Wang, C., Boerman, S. C., Kroon, A. C., Möller, J., & de Vreese, C. H. (2024). The artificial intelligence divide: Who is the most vulnerable? New Media & Society, 14614448241232345. https://doi.org/10.1177/14614448241232345
Walter, N., Cohen, J., Holbert, R. L., & Morag, Y. (2020). Fact-checking: A meta-analysis of what works and for whom. Political Communication, 37(3), 350–375. https://doi.org/10.1080/10584609.2019.1668894
West, J. D., & Bergstrom, C. T. (2021). Misinformation in and about science. Proceedings of the National Academy of Sciences, 118(15), e1912444117. https://doi.org/10.1073/pnas.1912444117
Zaki, M. Z., & Ahmed, U. (2024). Bridging linguistic divides: The impact of AI-powered translation systems on communication equity and inclusion. Journal of Translation and Language Studies, 5(2), 20–30. https://doi.org/10.48185/jtls.v5i2.1065
Funding
This work was supported by the NSF Career Grant No. IIS-1943506 (E.-Á.H. and M.S.).
Competing Interests
The authors declare no competing interests.
Copyright
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.