Connect with timely, peer-reviewed research about misinformation. Subscribe to the HKS Misinformation Review newsletter to receive our bi-monthly issues and other news from the community. The HKS Misinformation Review is an open access publication.

Commentary

Misinformed about misinformation: On the polarizing discourse on misinformation and its consequences for the field

Irene V. Pasquetto, Gabrielle Lim and Samantha Bradshaw

The field of misinformation is facing several challenges, from attacks on academic freedom to polarizing discourse about the nature and extent of the problem for elections and digital well-being. However, we see this as an inflection point and an opportunity to chart a more informed and contextual research practice.


The role of narrative in misinformation games

Nisha Devasia and Jin Ha Lee

Several existing media literacy games aim to increase resilience to misinformation, but they lack variety in their approaches. The vast majority focus on assessing information accuracy, with limited exploration of the socio-emotional influences on misinformation adoption. Misinformation correction and educational games have explored how narrative persuasion shapes personal beliefs, as identification with certain narratives can frame the interpretation of information.

Research Note

Trump, Twitter, and truth judgments: The effects of “disputed” tags and political knowledge on the judged truthfulness of election misinformation

John C. Blanchar and Catherine J. Norris

Misinformation has sown distrust in the legitimacy of American elections. Nowhere has this been more concerning than in the 2020 U.S. presidential election, in which Donald Trump falsely declared that the election was stolen through fraud. Although social media platforms attempted to dispute Trump's false claims by attaching soft moderation tags to his posts, little is known about the effectiveness of this strategy.

Research Note

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation

Jutta Haider, Kristofer Rolf Söderström, Björn Ekström and Malte Rödl

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research.


The algorithmic knowledge gap within and between countries: Implications for combatting misinformation

Myojung Chung and John Wihbey

While understanding how social media algorithms operate is essential to protect oneself from misinformation, such understanding is often unevenly distributed. This study explores the algorithmic knowledge gap both within and between countries, using national surveys in the United States (N = 1,415), the United Kingdom (N = 1,435), South Korea (N = 1,798), and Mexico (N = 784).


Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine

Mykola Makhortykh, Maryna Sydorova, Ani Baghumyan, Victoria Vziatysheva and Elizaveta Kuznetsova

Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine.
