Commentary
Misinformed about misinformation: On the polarizing discourse on misinformation and its consequences for the field
The field of misinformation is facing several challenges, from attacks on academic freedom to polarizing discourse about the nature and extent of the problem for elections and digital well-being. However, we see this as an inflection point and an opportunity to chart a more informed and contextual research practice. To foster credible research and informed public policy, we argue that research on misinformation should be locally focused, self-reflexive, and interdisciplinary, addressing critical questions about what counts as misinformation and why it does, the vulnerabilities of specific communities, and the sociotechnical and sociopolitical conditions that shape information interpretation. By concentrating on when and how misinformation affects society, instead of whether, the field can provide more precise insights and contribute to productive discussions.
Introduction
For almost a decade, the study of misinformation has taken priority among policy circles, political elites, academic institutions, non-profit organizations, and the media. Substantial resources have been dedicated to identifying its effects, how and why it spreads, and how to mitigate its harm. Yet, despite these efforts, it can sometimes feel as if the field is no closer to answering basic questions about misinformation’s real-world impacts, such as its effects on elections or links to extremism and radicalization.
Many of the conversations that we are having about the role of misinformation in society are incredibly polarizing (Bernstein, 2021). For example: Facebook significantly shaped the results of the 2016 elections vs. Facebook did not affect the outcome of the 2016 elections; algorithmic recommendations polarize social media users vs. they do not; deepfakes and other AI-generated content are a significant threat to elections vs. they are not. On more than one occasion, this zero-sum framing of “the misinformation threat” has led politicians and commentators to point at misinformation as either the origin of all evil in the world or as a rhetorical concept invented by (other) politicians and their allies.
Researching misinformation is challenging at the best of times. But to make things even more difficult, researchers, civil society organizations, journalists, and academic institutions are often under attack for covering these issues at all (Calo & Starbird, 2024). For example, some governments have passed “fake news” and misinformation laws that are used to silence journalists (Lim & Bradshaw, 2023), and since 2016, more than 120 journalists have been arrested or imprisoned under these laws (Committee to Protect Journalists, n.d.). At the same time, research centers at academic institutions in the United States have closed following lawsuits against universities and researchers for studying misinformation (Newton & Schiffer, 2024). Added to this, platforms are shutting down interventions that work and are no longer sharing data with researchers and civil society, particularly in the Global South, leaving researchers with limited opportunities to study these harms (de Vreese & Tromble, 2023).
For researchers and members of communities affected by misinformation, it is hard not to see the field as being in crisis. However, we see this as an inflection point and an opportunity to chart a more informed, community-oriented, and contextual research practice. By diversifying perspectives and grounding research in the experiences of those most affected, the field can move beyond the current polarization. In doing so, it can help ensure that policy decisions regarding misinformation are not only better informed and evidence-based, but also realistic about what regulation can and cannot do.
How did we get here?
Misinformation research is being conducted in a setting that is highly politicized. On the one hand, there is immense pressure to show definitive, generalizable proof—or disproof—of exactly how exposure to online misinformation influences opinion formation and human behavior, as both academics and policymakers are looking for answers in order to develop regulation (Gallo & Cho, 2021). On the other hand, some of the research findings can be politically inexpedient for certain parties, governments, and even the platforms because misinformation can be profitable, both politically and economically (Moran et al., 2024). In these cases, governments and platforms might have a vested interest in maintaining the status quo.
Another related challenge is access to data that could provide more generalizable evidence about the impact of social media on information consumption and its effects on political beliefs and behaviors. Social media platforms have been reluctant to share data with independent researchers, and the recent shutdown of tools like CrowdTangle has made it even more difficult for researchers to study these issues (Scire, 2024). While companies have valid concerns over user privacy, the lack of access to data has hampered efforts to build a robust, evidence-based understanding of misinformation.
Third, it is difficult to establish clear causal links between misinformation and real-world impacts because our world is so complex. For example, isolating the effects of social media on voting behavior is difficult because many different factors can shape how people think, feel, and behave politically, including traditional media coverage, party dynamics, underlying social and economic conditions, pre-existing partisan biases about parties and candidates, and other social cues and heuristics we receive from our broader communities (Popkin, 1991). The difficulty of disentangling these factors makes it hard for researchers to make definitive claims about the scale and significance of misinformation’s effects on society and can lead to overgeneralization.
Finally, misinformation research has tended to focus narrowly on the impacts on elections and voting behavior, often in Western, English-speaking contexts. However, the spread of misinformation can have profound consequences for other areas, such as public health, social cohesion, and the expression of human rights. There have been calls from researchers to expand the frameworks and approaches used to study misinformation to better understand its effects through multifaceted and localized viewpoints (Ong & Negra, 2020; Udupa et al., 2021). But it is often hard to access adequate resources, in particular funding, to properly study misinformation outside Western contexts.
Embracing complexity, localizing misinformation
As a field, we cannot address all the issues outlined above, particularly the systemic problems around the lack of access to data and funding or the political and economic incentives that prevent effective solutions from being implemented. However, addressing criticism related to the epistemological foundations of the field is within the control of the field itself. To move the field forward and address gaps in theories, definitions, and practices, these are some of the steps that researchers in the field can take:
- Clarify exactly what instantiation of misinformation has been studied in a given research project and the extent to which findings might—or might not—apply to different kinds of content and processes.
- Be specific about the local and context-dependent sociotechnical mechanisms that enable or constrain online misinformation in the given research context, to avoid technologically deterministic overgeneralizations.
- Be transparent about the epistemological assumptions that motivate researchers to define a specific type of content as misleading and a population as misled.
That misinformation research needs to be more specific about what constitutes “misinformation” in a given research project is a common criticism of the field (see Williams, 2024) and the easiest to address. While misinformation is commonly described as false or misleading information, this definition can refer to many different kinds of content, which helps explain the field’s often polarizing and contradictory findings. For example, Budak et al. (2024) found that the average user has minimal exposure to misinformation by measuring so-called “click-bait.” In that study, because misinformation is narrowly defined as false claims from unreliable click-bait news sources, its impact is mainly limited to a small group of people. However, if misinformation is instead defined as hyperpartisan content from mainstream as well as alternative media outlets, then the effects are much larger and more systemic and can lead to radicalization or extreme polarization (Benkler et al., 2018). Researchers should, therefore, be more explicit about what they are referring to as misinformation.
But, while necessary, being more specific about what is being researched does not address the tendency to overgeneralize findings on online misinformation and to wed them to technologically deterministic arguments. Technological determinism is the tendency to point to technology alone to explain patterns of human behavior. Emphasizing technology without considering the social contexts in which it operates can result in over- or underestimating the actual role of technological artifacts in our societies. Technologically deterministic arguments are present across many contemporary tech policy issues, including filter bubbles, echo chambers, algorithmic amplification, and rabbit holing, where technology is seen as the sole cause of these problems.
For example, research on the spread of misinformation on social media often attributes the phenomenon solely to platforms’ algorithmic recommendation systems. This technologically deterministic view overlooks the fact that users also play an active role in seeking out, engaging with, and sharing misinformation based on their own preferences and predispositions. While algorithms are certainly a (big) part of the problem, we need to pay attention to interactions between users and algorithms and keep in mind that misinformation can spread in spaces that are not algorithmically mediated, like encrypted chat applications.
Affordance theory can provide a more nuanced understanding of the relationship between technology and user behavior. Affordance theory is the idea that the impact of a given technology on a user’s behavior depends on 1) the design of the technology, as much as on 2) users’ predispositions, identities, and needs, as well as on 3) the temporal and spatial contexts in which the technology is used. Users cannot do an unlimited number of things with a given technology due to its design constraints, although they can sometimes do things with technology for which it was not specifically designed.
In the context of online misinformation, this translates to the fact that platform users can make use of information technologies—including platform algorithms—in ways that are crafty and unexpected. We know, for example, that young users are often well aware of how recommendation algorithms work and literally train them (or attempt to) to meet their needs. Not by chance, many in the field have focused on studying media manipulation tactics, a concept that refers exactly to this aspect of misinformation: On search engines and social media platforms, users actively work to game platform affordances, and their online behavior is not passively dictated by how the platform works (Bradshaw, 2019; Tripodi, 2022).
The other aspect to reflect upon when considering how people get misinformed online is that the internet can often be a highly participative environment, especially when it comes to collectively making sense of the world. Within communities that are organized around validating some form of knowledge (e.g., conspiracy theories but also, to stay close to academia, metascience discussions about p-hacking), what constitutes “valid truth” is not imposed top-down on users who passively believe and reshare specific pieces of content; rather, it is negotiated and constructed via collective efforts (Tripodi, 2018). People believe things together because they do things together.
It is then crucial that misinformation researchers spend time online embedded in the communities that they study in order to understand what values and principles motivate users to discern between acceptable and unacceptable claims about the world (Friedberg, 2020). Different communities are affected very differently by misinformation, and identifying which communities are at higher risk of being misinformed should be part of the job. For example, diaspora communities from non-Western countries living in Western countries are at high risk of being misinformed, as they are the direct target of influence operations coming from both their home and host countries (Al-Jizawi et al., 2022). Typically, within these communities, misinformation is not easily recognizable as it is embedded within complex narratives and mixed with verifiable information.
One of the most important points to be made is that, simply put, we make sense of the world via stories, not via single facts. This is something that misinformation researchers have only recently started to grapple with, such as in Kate Starbird’s (2023) work on frames and disinformation. Starbird (2023) proposes understanding misinformation as a collective sensemaking process in which individuals select what counts as evidence not (solely) based on the quality of such evidence, but also based on pre-existing narratives. Moreover, within a given narrative, individuals decide what counts as evidence while interacting with each other, not in solitude. This interpretative process happens both online and offline (e.g., talking with family, friends, etc.). Key questions to ask when researching online misinformation, then, are: What stories are shaping the interpretation of online content, and where do such narratives come from (Kuo & Marwick, 2021)? Which actors are trying to influence the narrative, online and offline? And how do we account for these frames in experimental settings? As Starbird (2023) notes, when frames are the key to understanding online misinformation, how do we produce and communicate “better frames” in addition to “better facts”?
Sometimes, explanatory frameworks are co-constructed by users in participatory ways online and only later amplified by the media (Angelini & Jones, 2024). Sometimes, stories pushed by the mainstream media or politicians (or both) create frames that are used to interpret online information. The relationship between interpretative frames and online false claims is not a given; it is in flux and highly context-dependent. Once again, it is the job of misinformation researchers to unpack existing narratives and understand their origins, which can be rooted in mainstream media, online environments, both, or elsewhere (e.g., political campaigns).
Thus, research on misinformation should focus on unpacking the sociotechnical mechanisms that enable or prevent information from functioning (or being perceived, if you will) as evidence in a given epistemic context. To be clear, we believe that this is what most research on misinformation is actually about, even though, we argue, we have not been explicit about this as a field.
Last but not least, researchers should be more self-reflexive and transparent about how we make decisions about what counts as misleading and how such decisions affect our analyses and explanations of findings. By not defining what counts as misleading and why, the field makes itself vulnerable to criticism about the intentions of those working in it (Bernstein, 2021). The very concept of misinformation presupposes that we are dealing with a contested epistemic situation. For this reason, openly acknowledging what we believe counts as truth and why (e.g., by adopting positionality statements) is critical for more productive discussions.
As researchers operating in mainstream academia, we are predisposed to believing in, and advocating for, certain forms of evidence and not others. We have opinions, and we worry for ourselves, for our families, for the environment. Our concerns regarding the harms and consequences of misleading content should be clearly articulated, not taken for granted. This does not mean that such opinions automatically translate into biases. But they might if we do not face them and reflect upon how they influence us. Acknowledging our concerns and related epistemological positions is one way to keep those biases from influencing our explanations of events and processes.
Moving forward: More interdisciplinary, contextualized research on misinformation
The pressure to find generalizable, omniscient knowledge on the role of misinformation in shaping our opinions and behavior is misplaced and, honestly, unnecessary. Policy does not need to wait for such knowledge: several states—including Utah, California, and Arkansas—have already introduced or passed bills designed to protect specific vulnerable populations from specific technological designs (in this case, teenagers and algorithmic recommendations, respectively) (Wu, 2023). Asking broad, catch-all questions such as “Are social media bad for democracy?” makes little sense and scares away both researchers and commentators. Instead, research design should be driven by questions such as:
- What counts as misinformation in the context under study, and why does it count?
- Which communities and individuals are at higher risk of being misinformed?
- Why are these communities and individuals at higher risk?
- What do we know of their internet habits, practices, and media diets?
- Who counts as an expert within such communities?
- What narratives guide interpretations of information in the context under study?
- Where are these narratives coming from? Are these top-down narratives? Are these narratives co-constructed in a participatory manner?
- How do these narratives shape collective interpretation of events, online and offline?
- What skills and tools do these communities and individuals have at their disposal to aid information search and interpretation?
- How can solutions—such as fact-checking, prebunking, media literacy, etc.—account for such complexities related to the locality of (mis)information practices?
When these questions are not taken into consideration, we run a higher risk of technological determinism and overgeneralization. Misinformation can surely be understood and measured, but, for the most part, this can only be done effectively at the local level and via self-reflexive research practices.
Investigating whether misinformation shapes public opinion is overly broad and sets up an impossible task for researchers. Trying to understand when and how misinformation shapes public opinion is a more achievable goal and provides more practical insights for researchers and policymakers. The field should invest in in-depth, interdisciplinary efforts aimed at unpacking how misinformation plays out among specific populations, in specific geographical locations, and under specific temporal conditions. As researchers, it is our responsibility to carefully lay out our scope conditions, contextualize our findings, and be humble about whether and when we can generalize our findings beyond a study’s sample or population.
Bibliography
Al-Jizawi, N., Anstis, S., Barnett, S., Chan, S., Leonard, N., Senft, A., & Deibert, R. (2022). Psychological and emotional war: Digital transnational repression in Canada (Citizen Lab Research Report No. 151). University of Toronto. https://hdl.handle.net/1807/120575
Angelini, G., & Jones, A. (Directors). (2024). The antisocial network [Documentary]. Netflix.
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press. https://doi.org/10.1093/oso/9780190923624.001.0001
Bernstein, J. (2021, September). Bad news: Selling the story of disinformation. Harper’s Magazine, 343(2056), 25–31. https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/
Bradshaw, S. (2019). Disinformation optimised: Gaming search engine algorithms to amplify junk news. Internet Policy Review, 8(4), 1–24. https://doi.org/10.14763/2019.4.1442
Budak, C., Nyhan, B., Rothschild, D. M., Thorson, E., & Watts, D. J. (2024). Misunderstanding the harms of online misinformation. Nature, 630(8015), 45–53. https://doi.org/10.1038/s41586-024-07417-w
Calo, R., & Starbird, K. (2024, July 4). American academic freedom is in peril. Science, 385(6704), 7. https://www.science.org/doi/10.1126/science.adr3820
Committee to Protect Journalists. (n.d.). CPJ’s database of attacks on the press. https://cpj.org/data/
de Vreese, C., & Tromble, R. (2023). The data abyss: How lack of data access leaves research and society in the dark. Political Communication, 40(3), 356–360. https://doi.org/10.1080/10584609.2023.2207488
Friedberg, B. (2020). Investigative digital ethnography: Methods for environmental modeling. The Media Manipulation Casebook. https://mediamanipulation.org/research/investigative-digital-ethnography-methods-environmental-modeling
Gallo, J. A., & Cho, C. Y. (2021, January 27). Social media: Misinformation and content moderation issues for Congress. (CRS report No. R46662). https://crsreports.congress.gov/product/details?prodcode=R46662
Kuo, R., & Marwick, A. (2021). Critical disinformation studies: History, power, and politics. Harvard Kennedy School (HKS) Misinformation Review, 2(4). https://doi.org/10.37016/mr-2020-76
Lim, G., & Bradshaw, S. (2023, July). Chilling legislation: Tracking the impact of “fake news” laws on press freedom internationally. Center for International Media Assistance. https://www.skeyesmedia.org/documents/bo_filemanager/CIMA-Chilling-Legislation_web_150ppi.pdf
Moran, R. E., Swan, A. L., & Agajanian, T. (2024). Vaccine misinformation for profit: Conspiratorial wellness influencers and the monetization of alternative health. International Journal of Communication, 18, 1202–1224. https://ijoc.org/index.php/ijoc/article/view/21128/4494
Newton, C., & Schiffer, Z. (2024, June 13). The Stanford Internet Observatory is being dismantled. Platformer. https://www.platformer.news/stanford-internet-observatory-shutdown-stamos-diresta-sio/
Ong, J. C., & Negra, D. (2020). The media (studies) of the pandemic moment: Introduction to the 20th anniversary special issue. Television & New Media, 21(6), 555–561. https://journals.sagepub.com/doi/10.1177/1527476420934127
Popkin, S. L. (1991). The reasoning voter: Communication and persuasion in presidential campaigns. University of Chicago Press.
Scire, S. (2024, March 14). A window into Facebook closes as Meta sets a date to shut down CrowdTangle. NiemanLab. https://www.niemanlab.org/2024/03/a-window-into-facebook-closes-as-meta-sets-a-date-to-shut-down-crowdtangle/
Starbird, K. (2023, December 6). Facts, frames, and (mis)interpretations: Understanding rumors as collective sensemaking. Center for an Informed Public. https://www.cip.uw.edu/2023/12/06/rumors-collective-sensemaking-kate-starbird/
Tripodi, F. (2018, May 16). Searching for alternative facts. Data & Society. https://datasociety.net/library/searching-for-alternative-facts/
Tripodi, F. (2022). The propagandists’ playbook: How conservative elites manipulate search and threaten democracy. Yale University Press.
Udupa, S., Hickok, E., Maronikolakis, A., Schuetze, H., Csuka, L., Wisiorek, A., & Nann, L. (2021). Artificial intelligence, extreme speech, and the challenges of online content moderation. AI4Dignity Project. https://epub.ub.uni-muenchen.de/76087/
Williams, D. (2024, June 15). Misinformation researchers are wrong: There can’t be a science of misleading content. Conspicuous Cognition. https://www.conspicuouscognition.com/p/misinformation-researchers-are-wrong
Wu, T. (2023, December 19). Courts are choosing TikTok over children. The Atlantic. https://www.theatlantic.com/ideas/archive/2023/12/netchoice-v-bonta-california-case-social-media-children/676351/
Funding
No funding has been received to conduct this research.
Competing Interests
The authors declare no competing interests.
Copyright
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.
Authorship
The first author drafted an initial version of this commentary. The second and third authors contributed to the draft with additional content, edited the draft, and left comments. The first author prepared a second version of the commentary based on the second and third authors’ feedback. Then, the three authors met during a series of co-editing sessions to finalize the commentary.