key: cord-0849993-3hvgdthw
authors: Greenspan, Rachel Leigh; Loftus, Elizabeth F.
title: Pandemics and infodemics: Research on the effects of misinformation on memory
date: 2020-11-23
journal: Hum Behav Emerg Technol
DOI: 10.1002/hbe2.228
sha: b0eda7d509ca6a676eb7b4224849163cf46b7581
doc_id: 849993
cord_uid: 3hvgdthw

On social media and in everyday life, people are often exposed to misinformation. Decades of research have shown that exposure to misinformation can have significant impacts on people's thoughts, actions, and memories. During global pandemics like COVID-19, people are likely exposed to heightened quantities of misinformation as they search for and are exposed to copious amounts of information about the disease and its effects. This media environment, with an abundance of both accurate and inaccurate information, is often called an "infodemic." In the current essay, we discuss the consequences of exposure to misinformation during this infodemic, particularly in the domain of memory. We review existing research demonstrating how inaccurate, postevent information impacts a person's memory for a previously witnessed event. We discuss various factors that strengthen the impact of misinformation, including repetition and whether the misinformation is consistent with people's pre-existing attitudes or beliefs. We conclude by describing how social media companies and individual users can help prevent the spread of misinformation and the ways in which cognitive science research can inform these approaches.

keywords: continued influence effect, infodemic, memory, misinformation, misinformation effect, misinformation prevention, pandemic, social media, warnings

As the world copes with the ongoing spread of SARS-CoV-2, citizens are quite naturally consuming copious quantities of information about the disease's lethality, about preventing infection, and more. At the same time, concerns continue about people's exposure to misinformation (i.e., misleading or inaccurate information) about the virus from news outlets, social media, and person-to-person interaction. Exposure to misinformation can have serious consequences. Several 5G cell phone towers in Europe were destroyed after misinformation circulated that these towers could spread the coronavirus (Ahmed, Vidal-Alaball, Downing, & Lopez Segui, 2020). The World Health Organization (2020) has described the current media climate, with its abundance of both accurate and inaccurate information, as an "infodemic." This creates a unique circumstance: during the pandemic, people naturally search for accurate, trustworthy information, but due to the infodemic this information may be obscured as it is intermingled with misinformation. Indeed, the concept of an infodemic is not novel to the COVID-19 pandemic. The concept of "infodemiology" was first discussed by Gunther Eysenbach in 2002.
He originally defined an infodemic as "an excessive amount of unfiltered information concerning a problem such that the solution is made more difficult" (Eysenbach, 2009, p. 1). The ongoing discussion of infodemics is likely heightened by the profusion of information people can access online. Particularly in the domain of health-related behavior, a great deal of information-seeking occurs online. In 2012, over 70% of Americans reported searching for health information online, making it one of the most popular online activities, behind checking email and researching a product for purchase (Fox & Duggan, 2013; Fox & Fallows, 2003). During these searches, people are likely to encounter some form of misinformation, and exposure to misinformation can have serious negative consequences (Southwell et al., 2019). Nearly two-thirds of Americans report that fake news or misinformation has left them confused about basic facts (Barthel, Mitchell, & Holcomb, 2016). More Americans view made-up news as a bigger problem for the country than issues such as illegal immigration and violent crime (Mitchell, Gottfried, Stocking, Walker, & Fedeli, 2019).

Concerns about the prevalence and impact of misinformation mushroomed after the 2016 U.S. presidential election. Indeed, in the past several years, terms like fake news, post-truth, and misinformation have been featured as the "word of the year" by the Collins Dictionary, the Oxford Dictionary, and Dictionary.com (Brashier & Schacter, 2020). Correspondingly, there has been an increase in empirical research on the effects of misinformation on people's thoughts, beliefs, and behaviors (Wang, McKee, Torbica, & Stuckler, 2019). Although the study of the presence and consequences of health-related misinformation on social media is more recent, cognitive scientists have been studying misinformation in a different context for nearly 50 years. Specifically, they have examined how inaccurate, postevent misinformation affects people's memories (e.g., see work by scholars including Stephan Lewandowsky, Henry Otgaar, or reviews by Brainerd & Reyna, 2005; Loftus, 2005). This literature on the effect of misinformation on memory may help provide insight into the spread and effects of misinformation online during the COVID-19 pandemic. To further this aim, we discuss here the varying definitions of misinformation, research from cognitive scientists about whether and when people are susceptible to misinformation, and possible technology-based solutions to prevent misinformation spread.

Despite the increased relevance of misinformation in both everyday language and in the research literature, there is no single, universally used definition of misinformation. Misinformation can generally be defined as information that turns out to be inaccurate (Cook & Lewandowsky, 2011) or as "information that is contrary to the epistemic consensus of the scientific community regarding a phenomenon" (Swire-Thompson & Lazer, 2020). In the memory literature, decades of experimental research show what happens when people experience some event and are later exposed to misleading information about that event. In a typical study of this kind, people see some event (e.g., a simulated crime or accident) and are then deliberately exposed to misinformation about what they saw. Sometime after that, they are tested on their event memory, and many people will incorporate elements from the misleading material into their memory for the original event.
So people might have originally seen a video of a mock crime in which a thief steals a woman's wallet and hides it in his jacket pocket. But later, after being exposed to misinformation, they remember seeing the thief steal a cell phone rather than a wallet. This phenomenon now has a scientific name: "the misinformation effect" (see Loftus, 2005). Even people with extraordinary memory abilities, who can recall accurate details of their lives all the way back to childhood, remain susceptible to the misinformation effect (Patihis et al., 2013).

There are, however, some known individual differences and circumstances under which misinformation susceptibility is reduced or enhanced. One of these is age. The misinformation effect occurs more strongly in young children and older adults than in young adults (Sutherland & Hayne, 2001; Wylie et al., 2014). On social media, older adults are particularly likely to share fake news articles compared with younger age groups (Guess, Nagler, & Tucker, 2019). This may occur because older adults are less digitally literate than younger adults and so may struggle more to discern misinformation (Brashier & Schacter, 2020). The social climate in which older adults use social media also differs from that of other age groups: older adults often use social media for socialization rather than information-gathering, and so information accuracy may not be a salient goal (Brashier & Schacter, 2020).

Memory research has also shown that people are more susceptible to misinformation when it fits with their preexisting attitudes or beliefs. In one study, participants saw negative, doctored photographs of Democratic and Republican politicians (Frenda, Knowles, Saletan, & Loftus, 2013). Political affiliation significantly predicted whether people reported remembering these fictional events. Participants who identified as politically conservative were more likely to remember a fictional image of a liberal politician engaging in a negative action (e.g., President Obama shaking hands with the Iranian president), while those who self-identified as politically liberal were more likely to remember a false image of a conservative politician engaging in a negative action (e.g., President Bush vacationing with a baseball star during Hurricane Katrina). These findings demonstrate that misinformation is particularly powerful when it aligns with a person's preexisting beliefs.

Various aspects of the misinformation itself can affect the likelihood of its impact. Repetition is one influential factor in the persuasiveness of misinformation. Repeated exposure to information makes that information feel more familiar and thus more accurate (Foster, Huthwaite, Yesberg, Garry, & Loftus, 2012; Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). Even simply having people repeat a false rumor to themselves increases the sway of the rumor (Berinsky, 2017). Repetition of misinformation is particularly pernicious because people are more likely to feel comfortable sharing misinformation they have been exposed to several times (Effron & Raj, 2020). Moreover, misinformation that is repeated is more difficult to correct (Walter & Tukachinsky, 2020).

Importantly, this body of research about when and for whom misinformation is most influential has helped in developing ways to protect against its distorting effects (Lewandowsky et al., 2012). Unfortunately, effectively retracting misinformation can be quite difficult.
Once a person has been exposed to misinformation, it can be extremely challenging to fully eliminate its effects. Indeed, even when misinformation is withdrawn, it can still continue to influence people, a phenomenon known as the continued influence effect (Ecker, Lewandowsky, & Tang, 2010). Thus, approaches to combating misinformation that focus on preventing its spread are likely to be more effective than those that attempt to debunk misinformation after people have been exposed to it.

One commonly proposed solution is warnings: informing people, either before or after they read misinformation, that some of the information they read may be inaccurate. Many social media companies use warnings to prevent the spread of misinformation. On Twitter, if a tweet contains disputed coronavirus information (defined as "statements or assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown") that has a high potential for harm, a warning is applied to that content (Roth & Pickles, 2020). For instance, a text box may appear over the tweet informing readers that "some or all of the content shared in this Tweet conflicts with guidance from public health experts regarding COVID-19." Research has shown that warnings can be effective in reducing the misinformation effect if people are explicitly warned that the information they are about to read may be inaccurate (Lewandowsky et al., 2012). Generally, warnings are more effective when they are delivered before people are exposed to misleading information rather than after the fact. However, postwarnings can be effective, especially when they not only inform people of the presence of misinformation but also explain why the misinformation was present (Blank & Launay, 2014).

Despite this optimistic news, warnings may have downsides. As warnings become more common on social media platforms, users might develop an expectation that stories containing misinformation are detected and labeled. Stories containing misinformation that do not carry a warning may then be considered more accurate, an "implied truth effect" (Pennycook, Bear, Collins, & Rand, 2020).

In addition to warnings, Facebook uses a feature to address misinformation called "related articles." Instead of providing a warning that misinformation may be present in a news story, the related articles feature suggests additional, related reading in a person's News Feed before they click on a link to an article (Smith, Jackson, & Raj, 2017). Initial evidence suggests that related articles correcting misinformation in an initial posting do reduce reliance on that misinformation (Bode & Vraga, 2015). The related articles feature can also be implemented with accurate information (i.e., providing related articles that confirm the information in an initial post), thus avoiding potential negative connotations of fact-checking (Smith et al., 2017).

Beyond warnings, simply providing additional messaging may help prevent the spread of misinformation. Recently, Facebook implemented a feature providing additional context to articles before a user shares them (Hegeman, 2020). For instance, if a person were to read and attempt to share an article published many years ago, Facebook would provide a pop-up message informing the reader that the article is several years old. The reader can then acknowledge this message and choose whether to share the article. This kind of system could be expanded, as the sketch below illustrates.
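To make the mechanics concrete, consider a minimal Python sketch of how such share-time checks might be composed. It is illustrative only: the function name, claim-status labels, and age threshold are our assumptions, not Twitter's or Facebook's actual implementations. The sketch simply combines the old-article context notice described above (Hegeman, 2020) with a warning on disputed, potentially harmful claims of the kind Twitter describes (Roth & Pickles, 2020).

from datetime import date

# Hypothetical cutoff: treat articles older than this as "old news."
OLD_ARTICLE_YEARS = 2

def share_time_messages(article_date, claim_status, harm_potential, today=None):
    """Return context/warning messages to show before a user shares a link.

    All labels are illustrative assumptions:
    claim_status: "verified", "disputed", or "unknown"
    harm_potential: "low" or "high"
    """
    today = today or date.today()
    messages = []

    # Context notice: flag stale articles so old stories are not
    # mistaken for current news (cf. Hegeman, 2020).
    age_years = (today - article_date).days / 365.25
    if age_years >= OLD_ARTICLE_YEARS:
        messages.append(
            "This article is about {:.0f} years old; sharing old news "
            "can mislead readers.".format(age_years))

    # Warning label: disputed or unverified claims with a high potential
    # for harm get a warning before sharing (cf. Roth & Pickles, 2020).
    if claim_status in ("disputed", "unknown") and harm_potential == "high":
        messages.append(
            "Some or all of the content in this post conflicts with "
            "guidance from public health experts.")

    return messages

# Example: a 2017 article making a disputed, high-harm claim triggers
# both messages; the user can acknowledge them and still choose to share.
for message in share_time_messages(date(2017, 3, 1), "disputed", "high",
                                   today=date(2020, 11, 23)):
    print("-", message)

In a real deployment, the claim status and harm rating would come from fact-checking pipelines and policy review rather than hard-coded arguments, but the design point is the same: the system interposes a moment of friction and context before sharing, without forbidding the share outright.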
Social media algorithms could be modified to provide additional messages when a person tries to share misleading content. Such a notification could even appear when a person opens an article, rather than only when they choose to share it; for instance, a message could appear when a user clicks on a misleading article they have read in the past, reducing repeated exposure to the misinformation.

In addition to system-level changes implemented by social media companies, individuals can also engage in actions that reduce the spread of misinformation throughout their community. Recent research has shown that one reason people may inadvertently share misinformation on social media is that accuracy is not a salient goal (Pennycook, McPhetres, Zhang, Lu, & Rand, 2020). Thus, actions that cause people to pause and reflect before sharing can help reduce the likelihood of misinformation spread (Fazio, 2020). Social media and other online news sources can evoke this reflection process by asking users to acknowledge messages regarding the content they wish to share. But individual users can also develop these habits on their own. The goal would be to slow down the information superhighway, turning it into a two-lane dirt road, for a good cause. The act of pausing and reflecting before sharing information online can specifically reduce the likelihood that misinformation is shared (Bago, Rand, & Pennycook, 2020; Fazio, 2020).

In many ways, people's everyday lives during this pandemic mirror a misinformation experiment. Even with technological advances that attempt to stem the flow of misinformation, it abounds on social media and in day-to-day life. This problem may only grow in the future with more sophisticated types of misinformation, including doctored photographs and videos. Detecting this type of doctored information is particularly difficult, and thus it may be especially impactful. Decades of research have shown that misinformation, particularly health-related misinformation, can affect people's lives in myriad consequential ways. Misinformation can impact beliefs about a disease's impact, the effective preventive behaviors one can take, and even people's memories of their own past experiences. Thus, combating infodemics, now and in the future, is probably best accomplished with an interdisciplinary approach bringing together research and expertise from fields like technology, journalism, public policy, and cognitive science.

No data were created or analyzed for the current essay.

References:
Ahmed, Vidal-Alaball, Downing, & Lopez Segui (2020). COVID-19 and the 5G conspiracy theory: Social network analysis of Twitter data.
Bago, Rand, & Pennycook (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines.
Barthel, Mitchell, & Holcomb (2016). Many Americans believe fake news is sowing confusion.
Berinsky (2017). Rumors and health care reform: Experiments in political misinformation.
Blank & Launay (2014). How to protect eyewitness memory against the misinformation effect: A meta-analysis of post-warning studies.
Bode & Vraga (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media.
Brainerd & Reyna (2005). The science of false memory.
Brashier & Schacter (2020). Aging in an era of fake news. Current Directions in Psychological Science.
Cook & Lewandowsky (2011). The debunking handbook.
Ecker, Lewandowsky, & Tang (2010). Explicit warnings reduce but do not eliminate the continued influence of misinformation.
Effron & Raj (2020). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share.
Eysenbach (2002). Infodemiology: The epidemiology of (mis)information.
Eysenbach (2009). Infodemiology and infoveillance: Framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the internet.
Fazio (2020). Pausing to consider why a headline is true or false can help reduce the sharing of false news.
Foster, Huthwaite, Yesberg, Garry, & Loftus (2012). Repetition, not number of sources, increases both susceptibility to misinformation and confidence in the accuracy of eyewitnesses.
Fox & Duggan (2013). Health online 2013. Pew Internet & American Life Project.
Fox & Fallows (2003). Internet health resources.
Frenda, Knowles, Saletan, & Loftus (2013). False memories of fabricated political events.
Guess, Nagler, & Tucker (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook.
Hegeman (2020). Providing people with additional context about content they share.
Lewandowsky, Ecker, Seifert, Schwarz, & Cook (2012). Misinformation and its correction: Continued influence and successful debiasing.
Loftus (2005). Planting misinformation in the human mind: A 30-year investigation of the malleability of memory.
Mitchell, Gottfried, Stocking, Walker, & Fedeli (2019). Many Americans say made-up news is a critical problem that needs to be fixed.
Patihis et al. (2013). False memories in highly superior autobiographical memory individuals.
Pennycook, Bear, Collins, & Rand (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science.
Pennycook, McPhetres, Zhang, Lu, & Rand (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention.
Roth & Pickles (2020). Updating our approach to misleading information.
Smith, Jackson, & Raj (2017). Designing against misinformation.
Southwell et al. (2019). Misinformation as a misunderstood challenge to public health.
Sutherland & Hayne (2001). Age-related changes in the misinformation effect.
Swire-Thompson & Lazer (2020). Public health and online misinformation: Challenges and recommendations.
UNESCO (2018). Journalism, "fake news" & disinformation.
Walter & Tukachinsky (2020). A meta-analytic examination of the continued influence of misinformation in the face of correction: How powerful is it, why does it happen, and how to stop it?
Wang, McKee, Torbica, & Stuckler (2019). Systematic literature review on the spread of health-related misinformation on social media.
World Health Organization (2020). Novel Coronavirus (2019-nCoV): Situation report-13.
Wylie et al. (2014). Misinformation effect in older versus younger adults: A meta-analysis and review.