
Does incentivization promote sharing “true” content online?
Hansika Kapoor, Sarah Rezaei, Swanaya Gurjar, Anirudh Tagat, Denny George, Yash Budhwar and Arathy Puthillam
In an online experiment in India, incentives for sharing factual posts increased sharing compared to no incentivization. However, the type of incentive (monetary or social) did not influence sharing behavior in a custom social media simulation. Curbing misinformation may not require substantial monetary resources; in fact, social media platforms can devise ways to socially incentivize their users to be responsible netizens who share true information.
A survey of expert views on misinformation: Definitions, determinants, solutions, and future of the field
Sacha Altay, Manon Berriche, Hendrik Heuer, Johan Farkas and Steven Rathje
We surveyed 150 academic experts on misinformation and identified areas of expert consensus. Experts defined misinformation as false and misleading information, though views diverged on the importance of intentionality and what exactly constitutes misinformation. The most popular reason why people believe and share misinformation was partisanship, while lack of education was one of the least popular reasons.
Older Americans are more vulnerable to prior exposure effects in news evaluation
Benjamin A. Lyons
Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations).

Support for “doing your own research” is associated with COVID-19 misperceptions and scientific mistrust
Sedona Chinn and Ariel Hasell
Amid concerns about misinformation online and bias in news, there are increasing calls on social media to “do your own research.” In an abundant information environment, critical media consumption and information validation are desirable. However, using panel survey data, we find that positive perceptions toward “doing your own research” are associated with holding more misperceptions about COVID-19 and less trust in science over time.
Less reliable media drive interest in anti-vaccine information
Samikshya Siwakoti, Jacob N. Shapiro and Nathan Evans
As progress on vaccine rollout in the United States slowed in Spring 2021, it became clear that anti-vaccine information posed a public health threat. Using text data from 5,613 distinct COVID misinformation stories and 70 anti-vaccination Facebook groups, we tracked highly salient keywords in anti-vaccine discourse across Twitter, thousands of news websites, and the Google and Bing search engines from May through June 2021, a key period when progress on vaccinations stalled.

Explaining beliefs in electoral misinformation in the 2022 Brazilian election: The role of ideology, political trust, social media, and messaging apps
Patrícia Rossini, Camila Mont’Alverne and Antonis Kalogeropoulos
The 2022 elections in Brazil demonstrated that disinformation can have violent consequences, particularly when it comes from the top, raising concerns about democratic backsliding. This study leverages a two-wave survey to investigate individual-level predictors of holding electoral misinformation beliefs and the role of trust and information habits during the 2022 Brazilian elections.
How effective are TikTok misinformation debunking videos?
Puneet Bhargava, Katie MacDonald, Christie Newton, Hause Lin and Gordon Pennycook
TikTok provides an opportunity for citizen-led debunking, where users correct other users’ misinformation. In the present study (N = 1,169), participants watched and rated the credibility of (1) a misinformation video, (2) a correction video, or (3) a misinformation video followed by a correction video (“debunking”).
Examining accuracy-prompt efficacy in combination with using colored borders to differentiate news and social content online
Venya Bhardwaj, Cameron Martel and David G. Rand
Recent evidence suggests that prompting users to consider the accuracy of online posts increases the quality of news they share on social media. Here we examine how accuracy prompts affect user behavior in a more realistic context, and whether their effect can be enhanced by using colored borders to differentiate news from social content.
Search engine manipulation to spread pro-Kremlin propaganda
Evan M. Williams and Kathleen M. Carley
The Kremlin’s use of bots and trolls to manipulate the recommendation algorithms of social media platforms is well-documented by journalists and researchers. However, pro-Kremlin manipulation of search engine algorithms has rarely been explored. We examine pro-Kremlin attempts to manipulate search engine results by comparing backlink and keyphrase networks of US, European, and Russian think tanks, as well as Kremlin-linked “pseudo” think tanks that target Western audiences.
Designing misinformation interventions for all: Perspectives from AAPI, Black, Latino, and Native American community leaders on misinformation educational efforts
Angela Y. Lee, Ryan C. Moore and Jeffrey T. Hancock
This paper examines strategies for making misinformation interventions responsive to four communities of color. Using qualitative focus groups with members of four non-profit organizations, we worked with community leaders to identify misinformation narratives, sources of exposure, and effective intervention strategies in the Asian American Pacific Islander (AAPI), Black, Latino, and Native American communities.