Peer Reviewed

Lateral reading: College students learn to critically evaluate internet sources in an online course

The COVID-19 pandemic has forced college students to spend more time online. Yet many studies show that college students struggle to discern fact from fiction on the Internet. A small body of research suggests that students in face-to-face settings can improve at judging the credibility of online sources. But what about asynchronous remote instruction? In an asynchronous college nutrition course at a large state university, we embedded modules that taught students how to vet websites using fact checkers’ strategies. Chief among these strategies was lateral reading, the act of leaving an unknown website to consult other sources to evaluate the original site. Students improved significantly from pretest to posttest, engaging in lateral reading more often post intervention. These findings inform efforts to scale this type of intervention in higher education.


Research Question

  • Can college students learn to effectively evaluate Internet sources in an asynchronous online course?

Essay Summary

  • The COVID-19 pandemic has forced college students to spend more time online. However, research has shown that students are ill-equipped to evaluate information they encounter there (Hargittai et al., 2010; Kirschner & van Merriënboer, 2013; McGrew et al., 2018; Shellenbarger, 2016). In the midst of the pandemic and increasing disinformation, this lack of preparation threatens civic and public health.
  • Prior interventions have shown that middle school, high school, and college students can become more skilled evaluators of digital content through in-person instruction when taught strategies used by professional fact checkers (Brodsky et al., 2019; Kohnen et al., 2020; McGrew, 2020; McGrew et al., 2019; Wineburg et al., 2019; Wineburg & McGrew, 2017, 2019). This study tested whether these strategies could be taught in an asynchronous college course.
  • Students (n = 87) completed 4 one-hour modules. These modules included instructional videos; exercises in which students evaluated online sources about nutrition; and screencasts that modeled how to evaluate the credibility of these sources. The modules also provided instruction that addressed common misconceptions (e.g., that a dot-org domain makes a site trustworthy or that links to authoritative sources, by themselves, confer credibility). A repeated-measures ANOVA showed significant improvements in student scores from pretest to posttest. 
  • At pretest, only 3 of 87 students engaged in lateral reading by leaving the original site and consulting at least one other source. At posttest, 67 of 87 did so.
  • Results suggest that students can learn to evaluate the credibility of online sources through asynchronous instruction embedded in regular course content. As higher education seeks to address digital illiteracy at scale, these findings can inform curricular revisions.

Implications 

Recent events underscore the threat that digital illiteracy poses to public health and democracy. In February 2020, the World Health Organization declared that the coronavirus pandemic had spawned an “infodemic” of dangerously inaccurate health information (Zarocostas, 2020). Despite this warning, disinformation has spread with stunning speed. Millions viewed social media posts about miracle cures, some of which urged the ingestion of chlorine dioxide, a chemical that can cause vomiting, diarrhea, and even death (Eaton et al., 2020; Merlan, 2020). Pernicious disinformation surged during racial justice protests. Facebook groups spread conspiracy theories that demonstrators protesting the killing of George Floyd were paid by George Soros, a spurious claim as well as an anti-Semitic dog whistle (Seitz, 2020). Coordinated disinformation campaigns sought to discourage Black voters from voting in the 2020 election cycle (Halper, 2020).

As COVID-19 threatened public health, much of higher education in the United States shifted to remote instruction. College students are being sent online to complete assignments and do research (“Colleges’ reopening models,” 2020). At the same time, many studies have shown that college students struggle to evaluate Internet sources (Hargittai et al., 2010; List et al., 2016; Lurie & Mustafaraj, 2018; Martzoukou et al., 2020; McGrew et al., 2018; Pan et al., 2007). Still, a misperception tenaciously holds that because young people grew up with digital devices, they know how to evaluate the information that flows across their screens (Prensky, 2001). Without offering evidence or citing research, a recent Politico article claimed that Gen Z’ers “aren’t falling for the same fake news stories that may have duped their parents in 2016” (Choi, 2020). Such claims persist despite a 2019 national survey of 3,446 high school students that revealed major deficiencies in evaluating the credibility of online sources (Breakstone et al., 2019; Mathews, 2019). Fifty-two percent said that a Facebook video claiming to show ballot stuffing during the 2016 Democratic primary elections (a video that came from Russia—a fact easily established by searching for “2016 voter fraud video”) constituted “strong evidence” of U.S. voter fraud. Nine of ten students were unable to come up with a cogent rationale for rejecting the video. Across the survey’s tasks, students overwhelmingly judged websites on the basis of surface-level features: their top-level domain (i.e., whether a site was a dot-com or a dot-org), appearance and design, links to other sites, and information on the About page. Rarely did students leave the original website to consult other sources. Students from all demographic groups fared poorly. The chasm between young people’s perceived competence and their demonstrated performance (Hargittai et al., 2010; Nygren & Guath, 2019; Porat et al., 2018) represents a growing threat when disinformation is ascendant and young adults spend more time on digital devices.

This worrisome mixture of digital illiteracy and misplaced confidence motivated the present study. We investigated whether a curricular intervention implemented asynchronously could improve college students’ ability to evaluate the credibility of online sources. We based our intervention on strategies culled from observations of professional fact checkers recruited from leading fact-checking organizations and prominent news outlets located in New York City and Washington, DC. Fact checkers were videotaped and their screens recorded as they evaluated unfamiliar websites. Fact checkers’ approaches were compared to those of undergraduates from an elite university and history professors from five different institutions (Wineburg & McGrew, 2019). When undergraduates and academics landed on an unfamiliar source, they tended to read it vertically, proceeding from the top of the screen to the bottom, examining the URL, mulling over the prose, clicking on internal links (such as the About page), but rarely leaving the target site. Fact checkers differed dramatically. Landing on an unfamiliar site, they left it almost immediately and opened new tabs across the horizontal axis of their browser, a practice we refer to as lateral reading. By briefly clicking away from an unfamiliar site to consult trusted sources from the broader Web, fact checkers answered a crucial question: Who’s behind the information? In contrast, many of the academics and college students remained glued to the original site, unaware of its real backers. To verify claims online, fact checkers’ judgments were also broadly guided by two other questions: (1) What’s the evidence? (2) What do other sources say? (McGrew et al., 2018). Lateral reading allowed fact checkers to evaluate the credibility of online content more quickly and accurately than either the academics or students. 

Fact checkers’ strategies are akin to the “fast and frugal” heuristics that have enhanced performance across a broad spectrum of fields (Gigerenzer & Gaissmaier, 2011). Their strategies guided our development of curriculum for evaluating online sources. To date, interventions based on fact checkers’ strategies have yielded promising results across a wide age span: middle school (Kohnen et al., 2020), high school (McGrew, 2020; Wineburg et al., 2019), and college (Brodsky et al., 2019; Fielding, 2019; McGrew et al., 2019; Supiano, 2019). 

Despite these encouraging findings, substantial barriers limit widespread adoption. In these interventions, lessons were add-ons to the regular curriculum and were not tailored to course content. Additionally, researchers either delivered instruction themselves or provided teachers with substantial support. Such intensive involvement is obviously impractical to scale. 

For the present study, we wove fact checkers’ strategies into an asynchronous nutrition class at a large state university. We used examples directly tied to the course’s focus on nutrition. Pretest and posttest data showed statistically significant growth in students’ ability to evaluate online sources. Posttest data showed that students engaged in the specific strategy of lateral reading far more often post intervention. These results indicate that students can become more skilled evaluators of digital content through asynchronous instruction embedded in regular course content. 

As higher education grapples with how to prepare students for civic life in an age of information overabundance, our findings suggest a practical way forward. Rather than design entirely new courses, these results suggest that curriculum developers could create subject-specific modules for instructors to integrate into existing curricula. The approach in this study could serve as a template in other disciplines. For example, in a history course, a module could be developed that shows how to debunk claims that thousands of Black Americans took up arms for the Confederacy—false claims that have proliferated on the unvetted Internet (Levin, 2019). Modules could be used in similar courses across institutions, and a national database of open educational resources could serve to disseminate this approach. Subject-matter experts could collaborate with curriculum developers to create modules for frequently taught courses (e.g., Biology 101). Once a bank of modules was developed, asynchronous delivery would allow them to be added to courses without the need for extensive faculty development about how to teach these strategies.

Findings 

Finding 1: Students’ evaluation of online sources improved significantly after a series of course-embedded activities. 

The pretest and posttest were parallel forms of the same assessment. They included the same questions with different online sources. (See Appendix A for parallel versions of a question.) Average scores improved from 3.95 points out of 13 at pretest to 7.08 at posttest, an average gain of 3.13 points. A repeated-measures ANOVA revealed that the gains from pretest to posttest were statistically significant, mean difference = 3.13; F(1, 85) = 136.03, p < .001. Scores improved significantly regardless of the order in which students took the two forms (see Figure 1). For Section 1, which took Form A at pretest and Form B at posttest, mean scores improved from 3.59 points (SE = .31, 95% CI 2.97 to 4.21) to 7.7 points (SE = .42, 95% CI 6.83 to 8.49). For Section 2, which took the forms in the opposite order, average scores improved from 4.3 points (SE = .32, 95% CI 3.68 to 4.93) to 6.49 points (SE = .42, 95% CI 5.65 to 7.33).

Figure 1. Mean pretest and posttest scores by test order.

Finding 2: Students employed the strategy of lateral reading more often on the posttest. 

Students’ written answers offer insights into how their thinking changed from the beginning of the course to the end. For example, Task 3 (Appendix A) presented websites that go against the scientific consensus on climate change (friendsofscience.org on Form A and co2science.org on Form B) and asked students to respond to this prompt: “Is this website a trustworthy source for learning about global warming?” (Although the instructional modules featured sources related to nutrition, the pretest and posttest included sources about social and political issues. This was done to gauge whether students could evaluate the credibility of Internet sources regardless of content.) The task directions informed students that they were free to “open a new tab and do an Internet search if that helps.” For both websites, lateral reading turns up multiple sources that reveal funding from fossil fuel companies with vested interests in climate change denial.  

At pretest, only 3 of 87 students engaged in lateral reading by leaving the original website and consulting at least one other source (Figure 2). Two of these students correctly questioned the site’s credibility. The third searched outside the site but did not locate information about its backers. The remaining 84 students focused exclusively on features that were either irrelevant or could be easily manipulated. Most importantly, these students never consulted the broader Web. They focused on the site’s top-level domain (.org), whether there were links to other sites, the layout and graphics, and information provided on the About page. Leaving the site to engage in lateral reading was the least employed strategy. In sum, students’ attention remained focused on the original website, which precluded them from finding information needed to judge its credibility. Typical was this student’s pretest evaluation of friendsofscience.org:

“First, it seems very disorganized. Way too many colors and boxes on the home page. I also see donation boxes as a red flag–even if it is a nonprofit. They claim validity with ‘professionals’ which kind of swayed me at first, but they fail to mention any of their names for a point of reference. I would like to see that on the about page with their top professionals.” 

This student never left the site. Moreover, the student relied on the site’s About page without considering how groups craft their About pages to reflect positively on their aims.

At posttest, 67 of 87 students engaged in lateral reading by leaving the target website (either friendsofscience.org or co2science.org, depending on the form) and consulting at least one other online source. Thirty-six students correctly raised questions about the website’s credibility (Figure 2). Thirty-one engaged in lateral reading but concluded the site was credible or rejected it for irrelevant reasons. The same student who focused on surface-level features at pretest used lateral reading at posttest and found damning information on Wikipedia that was linked to established news sources: “They are funded by Exxon so automatically that raises flags of the reliability of the info. [Source: en.wikipedia.org/wiki/Center_for_the_Study_of_Carbon_Dioxide_and_Global_Change].”

Post-intervention, lateral reading went from the least to the most used strategy. Students’ reliance on ineffective strategies declined. At pretest, 23 students believed that a dot-org domain conferred reliability, a misconception common not only among college students but among adults (Wineburg & Ziv, 2019). On the posttest, the number fell to seven, a decrease of 69%. At pretest, 21 students maintained that the mere presence of links increased a site’s credibility. At posttest, only seven students did. On the pretest, 14 students evaluated the site based on its appearance compared to four at posttest. Fourteen students relied on the site’s About page at pretest; only two did so at posttest. In designing the instructional modules, we had directly addressed why each of the above strategies could lead to erroneous conclusions.

Figure 2. Evaluation strategies used before and after students completed modules.

Methods 

Research question

Can college students learn to effectively evaluate Internet sources in an asynchronous online course?

Course context

Curriculum materials were integrated into two sections of an online nutrition course at a public research university in the southwestern United States. In Summer 2020, the course was offered during two 5-week sessions with the same instructor and content. All instruction was delivered asynchronously using an online learning management system. 

The class introduced students to the basics of human nutrition, including a review of nutrients and how food choices impact health and risk of chronic disease. The course sought to provide students with the tools to make informed decisions in the nutrition marketplace and was required for undergraduates majoring in human development and family science. For students in other majors, the course fulfilled a general education requirement. 

Intervention

The intervention included four modules. The first was a brief introduction to evaluating digital information and the problems of online disinformation. The next two focused on the strategy of lateral reading. The last one provided instruction on how to evaluate the quality of online evidence. Across the modules, we also addressed common misconceptions about assessing digital sources, such as trusting a site because it carried a dot-org top-level domain or accepting at face value information on a site’s About page. The decision to emphasize lateral reading was based on prior research (e.g., Hargittai et al., 2010; McGrew et al., 2018) that showed students’ tendency to evaluate a website by remaining on it, without ever turning to the open Web to vet it. 

Each module included three types of activities. First, videos provided direct instruction about evaluating the credibility of Internet sources. Developed as part of an earlier project, these videos were produced in collaboration with John Green and Crash Course, creators of popular educational YouTube series (Crash Course, 2019a). Videos broadly addressed how to judge the credibility of online sources and were not specific to nutrition. After viewing, students completed multiple-choice questions about their content. Next, students completed guided evaluations of online sources. Students answered questions about various sources and how to evaluate them. Finally, they watched screencasts created by the research team that demonstrated how to evaluate these same sources using fact checkers’ strategies. Screencasts were a form of cognitive modeling, an instructional approach that makes expert strategies visible to novice learners (Collins et al., 1989, 1991; De La Paz et al., 2016). 

For example, Module 2 introduced students to lateral reading, beginning with a Crash Course video (Crash Course, 2019b). After viewing, students answered multiple-choice questions and then evaluated an article from the American Council on Science and Health (ACSH), an organization that describes itself as a “pro-science consumer advocacy organization and a 501(c)(3) nonprofit” (American Council on Science and Health, 2020). ACSH receives funding from corporations that have vested interests in the debates ACSH seeks to influence, such as proposed taxes on sugary drinks or the requirement that restaurants post nutrition information. Students then watched a screencast of a member of the research team using lateral reading to evaluate the same article. Finally, students practiced reading laterally using an article on proposed soda taxes from The Odyssey Online, a crowd-sourced website known for producing clickbait (Porter, 2017). 

Each module took about an hour to complete. Students were assigned one module per week. The modules were required, and the instructor awarded points for completing them. However, the quality of students’ work on the modules did not affect course grades. 

Participants

Eighty-seven undergraduate students completed all parts of the study. Forty-four students were in one section of the course and 43 in the other. Table 1 provides an overview of participants’ race, gender, and ethnicity. Although the modules were a required part of the course, students’ participation in the pretest and posttest was voluntary. Students received a $5 gift card as a token of appreciation for completing the pretest and a second $5 card for completing the posttest. 

Table 1. Race, gender, and ethnicity of participants.

Outcome measures

Each assessment form included nine items that asked students to evaluate the credibility of different types of online sources. (See Appendix B for descriptions of the items.) As part of a prior project, the research team developed the questions through an iterative process of prototyping, expert review, piloting, and think-aloud interviews (McGrew et al., 2018). The items assessed a range of the approaches to evaluating the credibility of online sources taught in the instructional modules. Four items were constructed-response; five were multiple-choice. (See Appendix C for an example of a multiple-choice question.) Prompts were identical across forms, but the questions featured different online stimuli. (See Appendix A for an example of parallel versions of Task 3.) Distractors for the multiple-choice items reflected common errors observed when students in a prior study completed constructed-response versions of the same tasks (McGrew et al., 2018). 

Students could earn a total of 13 points on the assessment. Multiple-choice items were worth 1 point and constructed-response questions 2. Constructed responses were evaluated using a three-level rubric (Beginning – 0; Emerging – 1; Mastery – 2). In Mastery responses, students evaluated online content by investigating the source of information, interrogating the evidence presented, or seeking out information from other reliable sources. Emerging responses were on the right track but were partially incorrect or did not fully articulate sound reasoning. Beginning responses relied on incorrect or irrelevant strategies. (See Appendix D for a sample rubric.) 
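
To make the scoring scheme concrete, the sketch below shows how a 0–13 total could be assembled from the item types described above. This is a minimal illustration: the function and variable names are hypothetical, and the study’s actual scoring scripts are not reproduced in this essay.

```python
# Minimal sketch of the scoring scheme described above (hypothetical names;
# not the authors' code). Five multiple-choice items are worth 1 point each;
# four constructed-response items are scored with the three-level rubric
# (0 = Beginning, 1 = Emerging, 2 = Mastery), for a maximum of 13 points.
from typing import Dict

def total_score(mc_correct: Dict[str, bool], cr_levels: Dict[str, int]) -> int:
    """Combine multiple-choice correctness and rubric levels into a 0-13 score."""
    assert len(mc_correct) == 5 and len(cr_levels) == 4
    assert all(0 <= level <= 2 for level in cr_levels.values())
    mc_points = sum(1 for correct in mc_correct.values() if correct)   # 0-5
    cr_points = sum(cr_levels.values())                                # 0-8
    return mc_points + cr_points

# Example: 3 correct multiple-choice items plus rubric levels 2, 1, 1, 0
# yields 3 + 4 = 7 points.
print(total_score({"mc1": True, "mc2": True, "mc3": True, "mc4": False, "mc5": False},
                  {"cr1": 2, "cr2": 1, "cr3": 1, "cr4": 0}))
```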

Design and analysis

Students in one section took Form A as a pretest and Form B as a posttest. Students in the other section completed the forms in the opposite order. Counterbalancing reduced the risk that the findings would be affected by differences in the difficulty of the two forms. If students showed significant improvement from pretest to posttest in both sections, we could be confident that gains were not attributable to one form being more difficult than the other. 

Two raters independently scored student responses. Scores were identical for multiple-choice items, and weighted kappa was used to estimate inter-rater reliability for constructed-response scores on both forms: Form A, weighted κ = .956 (95% CI, .934 to .979), p < .001; Form B, weighted κ = .968 (95% CI, .948 to .988), p < .001.

Two independent raters also coded the strategies students used to evaluate the climate change denial websites (Task 3). Responses were coded for each strategy used, so a single response could receive multiple codes. Codes were applied regardless of how a response was scored. For example, at posttest, many responses received a lateral reading code but were not scored as Mastery. Intercoder reliability was high at both pretest and posttest: Cohen’s κ = .921 at pretest (95% CI, .881 to .961), p < .001, and κ = .949 at posttest (95% CI, .916 to .982), p < .001.
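
For readers who want to run a comparable reliability check, the sketch below shows one way to compute both coefficients with scikit-learn. The rater data are invented, and the essay does not specify the weighting scheme used for weighted kappa, so linear weights appear here purely as an illustration.

```python
# Illustrative inter-rater reliability check (invented data; not the authors' analysis).
from sklearn.metrics import cohen_kappa_score

# Constructed-response scores from two raters (0 = Beginning, 1 = Emerging, 2 = Mastery).
rater1_cr = [2, 1, 0, 2, 1, 2, 0, 1]
rater2_cr = [2, 1, 0, 2, 2, 2, 0, 1]
# The weighting scheme is an assumption; the essay reports only "weighted kappa."
weighted_kappa = cohen_kappa_score(rater1_cr, rater2_cr, weights="linear")

# Binary strategy codes for Task 3 (1 = lateral reading present, 0 = absent).
coder1 = [1, 0, 0, 1, 1, 0, 1, 0]
coder2 = [1, 0, 0, 1, 1, 0, 1, 1]
unweighted_kappa = cohen_kappa_score(coder1, coder2)

print(f"weighted kappa = {weighted_kappa:.3f}, Cohen's kappa = {unweighted_kappa:.3f}")
```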

A repeated-measures analysis of variance (ANOVA) was used to determine whether students showed significant improvement in evaluating the credibility of Internet sources. This analysis tested whether the average posttest score was significantly different from the average pretest score. The analysis also controlled for potential effects of the counterbalanced administration of the forms. Controlling for order effects yields a more accurate estimate of the size of the observed treatment effect.
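
The sketch below shows one plausible way to run such a mixed-design analysis in Python with the pingouin package, treating test occasion as the within-subject factor and counterbalanced section as the between-subjects factor. The data are simulated and the column names are assumptions; this is not the authors’ analysis script.

```python
# Simulated mixed-design (repeated-measures) ANOVA mirroring the study design:
# pretest vs. posttest within subjects, counterbalanced form order between subjects.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
records = []
for section, n_students in (("FormA_first", 44), ("FormB_first", 43)):
    for i in range(n_students):
        sid = f"{section}_{i}"
        pre = rng.normal(4.0, 1.5)           # pretest score (out of 13)
        post = pre + rng.normal(3.0, 1.5)    # simulated gain of roughly 3 points
        records.append({"student_id": sid, "section": section,
                        "time": "pretest", "score": pre})
        records.append({"student_id": sid, "section": section,
                        "time": "posttest", "score": post})
df = pd.DataFrame(records)

# The main effect of time tests the pretest-to-posttest gain; the time-by-section
# interaction checks whether gains depend on which form order a student received.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     between="section", subject="student_id")
print(aov[["Source", "F", "p-unc", "np2"]])
```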

Limitations and future directions

Further research is needed to investigate the efficacy of embedding web credibility modules across varied disciplines in the college curriculum. Additionally, it will be important to investigate the effect of including controversial subject matter in order to better understand how motivated reasoning influences student behavior. 

Although this study suggested that students could become more skilled evaluators of online sources, we don’t know the durability of these changes. Nor do we know whether students carry these evaluative strategies into their everyday lives. Additional research on both fronts would provide a more robust understanding of the efficacy of these types of interventions.   

Cite this Essay

Breakstone, J., Smith, M., Connors, P., Ortega, T., Kerr, D., & Wineburg, S. (2021). Lateral reading: College students learn to critically evaluate internet sources in an online course. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-56


Bibliography

American Council on Science and Health. (2020). About ACSH. https://www.acsh.org/about-acsh-0

Breakstone, J., Smith, M., Wineburg, S., Rapaport, A., Carle, J., Garland, M., & Saavedra, A. (2019). Students’ civic online reasoning: A national portrait. Stanford History Education Group. https://purl.stanford.edu/gf151tb4868

Brodsky, J., Brooks, P. J., Scimeca, D., Todorova, R., Galati, P., Batson, M., Grosso, R., Matthews, M., Miller, V., Tachiera, T., & Caulfield, M. (2019, October 3-5). Teaching college students the four moves of expert fact-checkers [Paper presentation]. Technology, Mind, & Society, Association for Psychological Science Conference, Washington, DC, United States.

Choi, M. (2020, October 11). When Gen Z is the source of the misinformation it consumes. Politico. https://www.politico.com/news/2020/10/11/gen-z-misinformation-politics-news-conspiracy-423913

Colleges’ reopening models. (2020, October 1). The Chronicle of Higher Education. https://www.chronicle.com/reopening

Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator, 15(3), 6-11, 38-46.

Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the craft of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Lawrence Erlbaum Associates.

Crash Course. (2019a). Navigating digital information. https://thecrashcourse.com/courses/navigatingdigitalinfo

Crash Course. (2019b). Check yourself with lateral reading: Navigating digital information #3 [Video]. YouTube. https://youtu.be/GoQG6Tin-1E

De La Paz, S., Monte-Sano, C., Felton, M., Croninger, R., Jackson, C., & Piantedosi, K. W. (2016). A historical writing apprenticeship for adolescents: Integrating disciplinary learning with cognitive strategies. Reading Research Quarterly, 52(1), 31-52. https://doi.org/10.1002/rrq.147

Eaton, M., King, A. B., Dalmayne, E., & Seigler, A. (2020, April 26). Trump suggested ‘injecting’ disinfectant to cure coronavirus? We’re not surprised. The New York Times. https://www.nytimes.com/2020/04/26/opinion/coronavirus-bleach-trump-autism.html

Fielding, J. A. (2019). Rethinking CRAAP: Getting students thinking like fact-checkers in evaluating web sources. College & Research Libraries News, 80(11), 620-622. https://doi.org/10.5860/crln.80.11.620

Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451-482. https://doi.org/10.1146/annurev-psych-120709-145346

Halper, E. (2020, August 6). A ‘war room’ that arms Black and Latino voters against disinformation. Los Angeles Times. https://www.latimes.com/politics/story/2020-08-06/war-room-arms-black-latino-voters-against-disinformation

Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young adults’ evaluation of web content. International Journal of Communication, 4, 468–494. https://ijoc.org/index.php/ijoc/article/view/636/423

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169-183. https://doi.org/10.1080/00461520.2013.804395

Kohnen, A. M., Mertens, G. E., & Boehm, S. M. (2020). Can middle schoolers learn to read the web like experts? Possibilities and limits of a strategy-based intervention. Journal of Media Literacy Education, 12(2), 64-79. https://doi.org/10.23860/JMLE-2020-12-2-6

Levin, K. (2019). Searching for Black Confederates: The Civil War’s most persistent myth. The University of North Carolina Press.

List, A., Grossnickle, E. M., & Alexander, P. A. (2016). Undergraduate students’ justifications for source selection in a digital academic context. Journal of Educational Computing Research, 54(1), 22–61. https://doi.org/10.1177/0735633115606659    

Lurie, E., & Mustafaraj, E. (2018). Investigating the effects of Google’s search engine result page in evaluating the credibility of online news sources. WebSci ’18: Proceedings of the 10th ACM Conference on Web Science, 107–116. https://doi.org/10.1145/3201064.3201095

Martzoukou, K., Fulton, C., Kostagiolas, P., & Lavranos, C. (2020). A study of higher education students’ self-perceived digital competences for learning and everyday life online participation. Journal of Documentation, 76(6), 1413-1458. https://doi.org/10.1108/JD-03-2020-0041

Mathews, J. (2019, November 17). You can’t believe everything you read online. Many students don’t seem to know that. The Washington Post. https://www.washingtonpost.com/local/education/you-cant-believe-everything-you-read-online-many-students-dont-seem-to-know-that/2019/11/17/06a171f2-0670-11ea-ac12-3325d49eacaa_story.html

McGrew, S. (2020). Learning to evaluate: An intervention in civic online reasoning. Computers & Education, 145, 1-13. https://doi.org/10.1016/j.compedu.2019.103711

McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory and Research in Social Education, 46(2), 165–193. https://doi.org/10.1080/00933104.2017.1416320

McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. (2019). Improving students’ web savvy: An intervention study. British Journal of Educational Psychology, 89(3), 485-500. https://doi.org/10.1111/bjep.12279   

Merlan, A. (2020, April 28). Bleach ingestion advocates are thrilled by Trump’s ‘disinfectant’ comments. Vice. https://www.vice.com/en_us/article/884wgv/bleach-ingestion-advocates-are-thrilled-by-trumps-disinfectant-comments

Nygren, T., & Guath, M. (2019). Swedish teenagers’ difficulties and abilities to determine digital news credibility. Nordicom Review, 40(1). https://doi.org/10.2478/nor-2019-0002

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users’ decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. https://doi.org/10.1111/j.1083-6101.2007.00351.x

Porat, E., Blau, I., & Barak, A. (2018). Measuring digital literacies: Junior high-school students’ perceived competencies versus actual performance. Computers & Education, 126, 23-36. https://doi.org/10.1016/j.compedu.2018.06.030

Porter, J. (2017, February 6). Thousands of college kids are behind a ‘clickbait’ publishing platform. CNBC. https://www.cnbc.com/2017/02/06/thousands-of-college-kids-are-behind-odyssey-clickbait-publishing-platform.html     

Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), 1–6. https://doi.org/10.1108/10748120110424816

Seitz, A. (2020, July 5). Facebook groups pivot to attacks on Black Lives Matter. AP News. https://apnews.com/article/ca8c15794c65b1ae8e176deb9be5d718

Shellenbarger, S. (2016, November 21). Most students don’t know when news is fake, Stanford study finds. The Wall Street Journal. https://www.wsj.com/articles/most-students-dont-know-when-news-is-fake-stanford-study-finds-1479752576

Supiano, B. (2019, April 25). Students fall for misinformation online: Is teaching them to read like fact checkers the solution? The Chronicle of Higher Education. https://www.chronicle.com/article/students-fall-for-misinformation-online-is-teaching-them-to-read-like-fact-checkers-the-solution/

Wineburg, S., Breakstone, J., Smith, M., McGrew, S., & Ortega, T. (2019). Civic Online Reasoning: Curriculum evaluation. Stanford History Education Group. https://purl.stanford.edu/xr124mv4805

Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and learning more when evaluating digital information. Stanford History Education Group. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3048994

Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11), 1-40.

Wineburg, S., & Ziv, N. (2019, December 5). The meaninglessness of the .org domain. The New York Times. https://www.nytimes.com/2019/12/05/opinion/dot-org-domain.html

Zarocostas, J. (2020, February 29). How to fight an infodemic. The Lancet, 395(10225), 676. https://doi.org/10.1016/S0140-6736(20)30461-X                         

Funding

This research was supported by the Spencer Foundation, Grant #201900060, Sam Wineburg, Principal Investigator. The content is solely the responsibility of the authors and does not necessarily represent the views of the Spencer Foundation.

Competing Interests

The authors have no potential conflicts of interest.

Ethics

The research was approved by an institutional review board, and human subjects provided informed consent.

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All materials needed to replicate this study are available via the Harvard Dataverse: https://doi.org/10.7910/DVN/N6HY37