Editorial

All disinformation is local: A reflection on the need and possibility of measuring impact

Volume 1, Issue 6 Editorial


The title “All disinformation is local” was inspired by a tweet by Joan Donovan on September 29th, 2020.

On September 1st, the leadership of the HKS Misinformation Review officially passed to Dr. Natascha Chtena, a former journalist and magazine editor, whose research is in the areas of scholarly publishing, open education, and the economics of information. Dr. Chtena has already demonstrated great organizational and leadership skills, and we could not be happier to have her as our new Chief Editor.

This is obviously a bittersweet moment, as I hand over the reins of the journal to pursue new professional adventures. Bitter, because it will be impossible not to miss all the uniquely talented individuals I had the pleasure of working with, including our editorial board, authors, and peer reviewers. Sweet, because all new beginnings come with new possibilities, which I look forward to exploring as Assistant Professor at the University of Michigan School of Information.

While I was Chief Editor, my team and I worked on the design and production of the journal for over a year before officially launching it in January 2020. Since the launch, we have received over 300 submissions, peer-reviewed about a third of them, and published around 50 articles on a rolling basis. We published authors from many different regions and with different levels of expertise, including senior, mid-, and early-career scholars. Our peer reviewers and authors come from a diverse range of academic fields: media studies, communications, psychology, political science, sociology, history, economics, computational social science, computer science, and human-computer interaction, to mention just a few. The work published by our authors has been featured by national media outlets such as The Washington Post, The New York Times, CNN, and HBO, among others. Dr. Chtena is now working to take the journal to the next level of professionalization, introducing new standards and processes to ensure quality control, continuity, and further growth.

Since 2019, I have been breathing the fast successes, and great anxieties, of the emerging field of mis- and disinformation studies. While the successes are obvious and quite visible (very few other academic fields receive a similar amount of public attention and media coverage), the anxieties are more subtle and generally emerge when new funding needs to be allocated. I would like to use this last editorial to reflect on one such anxiety that plagues the field of mis- and disinformation studies.

A recurring, exhausting concern among commentators on the field has been the urge to measure the impact of online disinformation on opinion and behavior change. Journalists and funders ask this question all the time: What is the actual impact of online disinformation? Why does the question of impact keep coming up? This apprehension has two key origins: the first resides in the results of the 2016 US elections (and relates to the need/desire to understand the extent to which Americans have been manipulated online), and the other is intertwined with the race to regulate platforms (i.e., the need to put a number on the impact that platforms have on people’s behavior, to make the case for regulation). In both cases, it seems, people are looking for a black-or-white answer: “Yes, online disinformation is a problem” – or “No, online disinformation is not a problem, so we can stop worrying about it.”

I’m not a historian of media, but I’m quite convinced that we have been asking similar questions since the invention of print. The thing is that, if we asked the same question today about television or radio, I believe most of us would be quick to answer that, well, it depends. We easily and intuitively understand that the capacity of television or radio to directly impact someone’s behavior depends not only on the means through which the message is delivered (wink to McLuhan and tech affordances), but also on a variety of factors related to the message itself, such as who is performing and who is listening (Intro to Communications 101). Most importantly, we know that impact also depends on the broader structures of power in which the medium is embedded, in addition to historical, socio-geographic, and cultural factors. In quantitative terms, we would call these exogenous factors. In other words, today we understand that media such as television and radio do not exist in a vacuum and that measuring their overarching impact is an extremely complex, almost impossible task. Why should it not be the same for the Internet and online disinformation? I guess the provocation that I want to launch is: do we actually need to measure such impact? [By the way, before online disinformation, we were having the same exact debate about online activism.]

Instead of trying to isolate and measure the impact of online disinformation, it would make more sense to start our investigations from the recognition that online disinformation (and the Internet more broadly) operates as part of a larger media ecosystem, which – depending on the context – can be more or less impactful within specific narratives of disinformation. Obviously, online disinformation has specific affordances that television and radio do not have. Yet, whatever comes out of the Internet is nevertheless re-interpreted and made sense of within *the same* media ecosystem that television and radio are also embedded in. Some scholars have been saying this for a while now, but – still – the question of impact keeps coming up, over and over again.

I’m no expert in media effects theories either, but what I have in mind, and would like to propose, is a harm-based, community-driven approach as an alternative to measuring impact. Instead of trying to measure the impact of a modality of communication as a whole, or the impact of that modality on the general population, why don’t we start by identifying those spaces in which online disinformation has already created harm and reconstructing what went wrong and why, with the goal of understanding how we can make those spaces more just and equitable moving forward?

Qualitative researchers know how to do this kind of work, especially if they come from an interpretive tradition. Methodologies and processes for conducting mixed-methods and qualitative work on online disinformation are being validated and standardized. For quantitative and, especially, experimental researchers, a harm-based approach requires considering new starting points for designing experiments: studies grounded not only in the literature or in general trends in the field, but also in established connections with those affected by harmful practices of online disinformation. At the end of the day, why invest huge resources in investigating questions whose answers are extremely hard to manipulate and generalize, when we already know which communities suffer the most from online disinformation, and who it is that creates and spreads it? Quantitative work can also have a real impact when it is case-specific and community-driven, even if it is not generalizable outside such contexts.

It might be that, indeed, what we are seeing in the field of online disinformation studies is partially a reflection of changing paradigms within the social sciences at large. There was a time in which sociology was case-specific and had little desire to make generalizable claims. But then “big data” burst onto the scene, establishing computational social science as a major methodology of inquiry. The next challenge will be to reconcile new methodologies with the depth and specificity of early sociology.

When policy makers and journalists ask for mere numbers about the impact of online disinformation, the field should be prepared to lead the conversation by pointing at concrete examples. We should be able to be specific about who is harmed by disinformation and how – even if this means giving up easy answers and reassuring generalizations (and their very possibility).

Cite this Essay

Pasquetto, I. (2020). All disinformation is local: A reflection on the need and possibility of measuring impact. Harvard Kennedy School (HKS) Misinformation Review.