title: Who do you trust? The digital destruction of shared situational awareness and the COVID-19 infodemic
authors: Bunker, Deborah
date: 2020-08-04
journal: Int J Inf Manage
DOI: 10.1016/j.ijinfomgt.2020.102201

Developments in centrally managed communications platforms (e.g. Twitter, Facebook) and service platforms (e.g. Uber, airbnb), search engines and data aggregation (e.g. Google), as well as data analytics and artificial intelligence, have created an era of digital disruption during the last decade. Individual user profiles are produced by platform providers to make money from tracking, predicting, exploiting and influencing their users' decision preferences and behavior, while product and service providers transform their business models by targeting potential customers with more accuracy. There have been many social and economic benefits to this digital disruption, but it has also largely contributed to the digital destruction of mental model alignment and shared situational awareness through the propagation of misinformation, i.e. the reinforcement of dissonant mental models by recommender algorithms, bots and trusted individual platform users (influencers). To mitigate this process of digital destruction, new methods and approaches to the centralized management of these platforms are needed to build on and encourage trust in the actors that use them (and, by association, trust in their mental models). The global 'infodemic' resulting from the COVID-19 pandemic of 2020 highlights the current problem confronting the information systems discipline and the urgency of finding workable solutions.

Information system (IS) artifacts 1 have been developed and used throughout human history, and over time we have witnessed their disruption of business, industry and society. The current wave of digital disruption 2 has been caused by the development of mobile phones (that are really computers), resource-sharing services such as Uber and airbnb, and social media communications platforms that are centrally managed (but socially distributed) like Facebook, Instagram and Twitter. Unfortunately, our IS development trajectory also now sees us in a time of post-truth 3, fake news 4 and an 'infodemic' 5, which have wreaked digital destruction and havoc on our shared mental models 6 of what we understand to be real and true, i.e. shared situational awareness 7. It is becoming increasingly difficult to agree on what information represents the reality and truth about crisis events within the echo chamber of social media and the opaque algorithmic biases which underpin platform providers, search engines and data aggregators (Bunker, Stieglitz, Ehnis, & Sleigh, 2019; Himelboim, Smith, Rainie, Shneiderman, & Espina, 2017; Noble, 2018; Sismondo, 2017). Fig. 1 shows the current level of concern with fake news about the COVID-19 epidemic from a survey conducted in Canada, China, France, Germany, India, Japan, Mexico, Saudi Arabia, S. Korea, the U.K. and the U.S. (Edelman Trust Barometer, 2020). This survey highlights that 67% of respondents worry that there is a lot of fake news being spread about COVID-19, while 49% of respondents are having difficulty finding trustworthy and reliable information. If these levels of concern are reflected in the global population at large, then managing an adequate pandemic response through shared situational awareness becomes an impossible task.
While profiling is not a new approach to the treatment of data, communications and service platform providers and data aggregators have found new ways of combining the techniques of individual user profiling (IUP), data analytics (DA) and artificial intelligence (AI) to monetize 8 the vast amounts of data that are increasingly generated by their users (Liozu & Ulaga, 2018). IUP, DA and AI are applied to better understand, influence or manipulate an individual's opinions and their social, political and economic behavior through 'nudging' mechanisms (Lanzing, 2019). This approach to profiling is a powerful tool (Zuboff, 2019) which is used to exploit individuals, their decisions and their behavior for financial gain, but it does not effectively address critical and optimal decision making and behavior for societal and group benefit, e.g. pandemic management. This is because it creates mental model dissonance through the misinformation and rumors that it produces and propagates. For example, in the current COVID-19 pandemic, in order to stop the spread of the virus, health agencies across the globe are urging us to stay socially distant, wash our hands at every opportunity, wear masks (when necessary), and get tested if we develop symptoms. Unfortunately, rumors propagated on social media platforms quite often reinforce multiple and conflicting mental models of virus conspiracies, 'quack treatments' and inaccurate information regarding government motivations for lockdowns. This can severely hamper crisis management efforts. Some examples 9 of misinformation propagated during the current pandemic include:

Dissonant mental models are reinforced by recommender algorithms (Lanzing, 2019), bots (McKenna, 2020) and trusted individual platform users or influencers (Enke & Borchers, 2019), resulting in alarming levels of digital destruction which in turn undermines social cohesion and creates a barrier to shared situational awareness and effective crisis response. We therefore see a tension and conflict arising from: 1) the need for alignment of mental models and shared situational awareness to support effective crisis management; and 2) the developments of digital disruption and destruction and the facilitation and reinforcement of dissonant mental models through post-truth perspectives and conflicting situational awareness.

Shared situational awareness is developed through the alignment of our mental models to represent a shared version of truth and reality on which we can act. This is an important basis for effective information sharing and decision making in crisis response (Salas, Stout, & Cannon-Bowers, 1994). Aligned mental models help us to agree about the authenticity, accuracy, timeliness, relevance and importance of the information being communicated and give concurrence, weight and urgency to decisions and advice. Harrald and Jefferson (2007) highlight that shared situational awareness implies that "(1) technology can provide adequate information to enable decision makers in a geographically distributed environment to act as though they were receiving and perceiving the same information, (2) common methods are available to integrate, structure, and understand the information, and (3) critical decision nodes share institutional, cultural, and experiential bases for imputing meaning to this knowledge" (p. 3).
We know that most crisis management agencies have established, agreed, authenticated and qualified mental models on which they base their internal operational command and control systems. This gives them assurance and governance of the information they produce (Bunker, Levine, & Woody, 2015) and qualifies their decisions and recommended actions to manage crisis situations. It also engenders public trust in these agencies to provide relevant and critical crisis information and advice for public action. Fig. 2 highlights the current high and increasing levels of trust in government institutions during the COVID-19 pandemic (Edelman Trust Barometer, 2020). When digital destruction produces mental model dissonance, shared situational awareness between crisis management agencies and the general public becomes impossible to maintain and communicate (both to and from) due to inconsistencies in what constitutes reality and truth, making crisis response unmanageable. Centrally managed communications and service providers and data aggregators treat your personal data as a commodity/resource which they are generally entitled to use as they wish; however, they largely ignore the deeper understanding of the mental models on which data is produced by any one system (Bunker, 2001). "In the social sciences, in particular, big data can blend wide-scale and finer-grained analytic approaches by providing information about individual behaviour within and across contexts" (Tonidandel, King, & Cortina, 2018, p. 531).

Footnotes:
3. "…That truth has been individualized or that individuals have become, to borrow a turn of phrase from Foucault, the primary and principal points of the production, application, and adjudication of truth is one important point. That emotion and personal belief are able now to outflank even objective facts and scientific knowledge is another (the claim that literature, for example, has truths to tell has long fallen on deaf ears). Their articulation is decisive: with the regime's inflection, even inflation, of the indefinitely pluralized and individualized enunciative I who, by virtue of strong feeling, is able at any moment not only to recognize or know but, also, to tell or speak the truth, truth is privatized and immanitized, its universal and transcendental dimensions nullified altogether. Hence, what is true for any one person need not be true for everyone or anyone else; what is true for anyone now need not necessarily be true later" (Biesecker, 2018, pp. 331-332).
4. "We posit that fake news is, in essence, a two-dimensional phenomenon of public communication: there is the (1) fake news genre, describing the deliberate creation of pseudojournalistic disinformation, and there is the (2) fake news label, describing the political instrumentalization of the term to delegitimize news media" (Egelhofer & Lecheler, 2019, p. 97).
5. "an over-abundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it" (WHO, 2020).
6. "A concentrated, personally constructed, internal conception, of external phenomena (historical, existing or projected), or experience, that affects how a person acts" (Rook, 2013, p. 47).
7. "refers to the degree of accuracy by which one's perception of his current environment mirrors reality" (Naval Aviation Schools Command (032)).
Data science harnesses the belief that data created by an individual using different applications or platforms can be seamlessly combined and then analyzed using sophisticated data analytics and AI. In turn, this belief has fundamentally changed the way that individuals view and interact with data and information in their daily lives. For instance, our trust in the Google Maps application on our phone to tell us exactly where we are at any given moment extends to the belief that all information coming to us via our mobile phones, and the applications we choose to use, must be some version of reality or the truth. We self-select our filters (applications and services) for engagement with the wider world, and our reliance on mobile technology and applications to navigate the world is now at an all-time high, i.e. 3.5 billion people or 45.04% of the global population. 10

If we are social media platform users, however, we can be bombarded with paid advertising or 'nudged' by recommender algorithms to make contact with other platform users, information sites, and products and services that are deemed relevant to us and part of our 'shared reality' (Echterhoff, Higgins, & Levine, 2009). Nudging performs three functions: it meets platform users' 1) epistemic and 2) relational needs, and 3) it adds to the platform owner's profitability. The platform user is directed to people, products, information and communities of interest that help them to "achieve a valid and reliable understanding of the world" (Levine, 2018, p. 54); this then fulfills their "desire to affiliate with and feel connected to others" (Levine, 2018, p. 54); and platform owners and managers (and their influential users) make a great deal of money selling targeted advertising by directing platform users in this way.

This might be a desirable situation when sharing situational awareness within a social media platform community of interest where mental models align, but when combined with platform characteristics such as user anonymity and a lack of information assurance, treating a social media platform as a trusted information source for shared situational awareness becomes problematic. For example, social media platform users can be a valuable source of eyewitness information for crisis management agencies, enhancing the production of shared situational awareness for crisis decision making. Social media information generated in large volumes during a crisis, however, is difficult to process. The source of the information can take time to identify and authenticate, and the information provided can be a problem to verify, validate, analyze and systematize. This produces a general lack of trust, by crisis management agencies and other social media users, in the crisis information produced on social media platforms. This can have catastrophic consequences for shared situational awareness, through failure to detect and use important and relevant information or through belief in, and propagation of, misinformation produced on these platforms (Bui, 2019; Ehnis et al.), which can also undermine social benefit and cultural cohesion in times of crisis (Kopp, 2020). We are currently living in an era of digital disruption which provides many economic and social benefits, but we must also be able to support crisis management based on shared situational awareness.
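To make the nudging mechanism described above concrete, the sketch below shows how an engagement-driven recommender that weights topical interest, agreement with a user's existing stance and historical engagement will keep surfacing content that confirms what the user already believes. It is a minimal, hypothetical illustration: the item attributes, weights, thresholds and function names are assumptions made for this example only and do not describe any platform's actual algorithm.

# Minimal, hypothetical sketch of an engagement-driven recommender.
# Item attributes, weights and the scoring function are illustrative
# assumptions only; they do not represent any platform's real algorithm.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str              # e.g. "lockdowns", "treatments"
    stance: float           # -1.0 .. 1.0, the position the content takes
    engagement_rate: float  # historical clicks/shares per impression

def recommend(items, user_topic_interest, user_stance, k=3):
    """Rank items by predicted engagement for a single user.

    Items that match the user's interests AND agree with the user's existing
    stance score highest, so repeated exposure narrows, rather than broadens,
    what the user sees (reinforcing a dissonant mental model).
    """
    def score(item):
        interest = user_topic_interest.get(item.topic, 0.0)
        agreement = 1.0 - abs(user_stance.get(item.topic, 0.0) - item.stance) / 2.0
        return interest * agreement * item.engagement_rate

    return sorted(items, key=score, reverse=True)[:k]

if __name__ == "__main__":
    items = [
        Item("a", "lockdowns", stance=-0.9, engagement_rate=0.30),   # conspiracy framing
        Item("b", "lockdowns", stance=0.8, engagement_rate=0.10),    # official advice
        Item("c", "treatments", stance=-0.7, engagement_rate=0.25),  # 'quack' cure
    ]
    # A user already sceptical of lockdowns keeps being shown sceptical content.
    picks = recommend(items, {"lockdowns": 0.9, "treatments": 0.4}, {"lockdowns": -0.8})
    print([p.item_id for p in picks])

Because agreement with the user's existing stance multiplies the score in this toy model, dissenting but authoritative content (item "b") ranks last even though it is highly relevant, which is the reinforcement effect described in the preceding paragraphs.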
Post-truth perspectives, fake news and the resulting infodemic have caused wide-ranging digital destruction and have enabled and encouraged mental model dissonance. How can we best address this problem? Seppanen, Makela, Luokkala, and Virrantaus (2013) have outlined the connection characteristics of shared situational awareness in an actor network. Fig. 3 highlights the configuration of the connection, which includes three requirements: 1) information, to bridge the information gap through the identification of key information elements; 2) communication, to understand the fluency of how actors communicate by describing this communication in detail; and 3) trust, to analyse the role of trust in the quality and fluency of communication. They reason that "if trust could be increased the availability, reliability, and temporal accuracy of information could be improved".

Recent research on the use of social media platforms for crisis communication so far concludes that: 1) trusted agencies have an early-mover information advantage in crisis communication on social media platforms such as Twitter (Mirbabaie, Bunker, Stieglitz, Marx, & Ehnis, 2020); 2) information communicated by trusted agencies can be amplified and intensified by influential social media users and others to "communicate, self-organize, manage, and mitigate risks (crisis communications) but also to make sense of the event (commentary-related communications)", for example through retweets on Twitter (Stieglitz, Bunker, Mirbabaie, & Ehnis, 2018); 3) trusted agencies, and the information they supply, are influential in shaping the human response to crisis situations (Mirbabaie, Bunker, Stieglitz, & Deubel, 2019); 4) trusted agencies find processing the high volumes of information communicated through social media platforms problematic, due to the difficulty in authenticating the information source (user) and establishing the accuracy, timeliness and relevance of the information itself; and 5) a number of tensions emerge in the use of social media as a crisis communications channel between trusted agencies and the general public. These tensions occur in the areas of: information generation and use, i.e. managing the message; the emergence and management of digital and spontaneous volunteers; the management of community expectations; the mental models which underpin prevention, preparation, response and recovery (PPRR) protocols; and the management of the large-scale adoption of social media technologies for crisis communications (Elbanna, Bunker, Levine, & Sleigh, 2019).

This knowledge points us to a number of areas of research focus in IS for the future development of data analytics and artificial intelligence to more effectively align mental models for shared situational awareness. These should:
• Build on the trust in government and their crisis management agencies, as well as other influential actors in crisis management communications, to provide and amplify advice and information as early as possible in a crisis;
• Build frameworks that create algorithmic transparency, information governance and quality assurance for platform and service providers and data aggregators, to create and reinforce trust in them as information sources, i.e.
so they become trusted actors in the communications network;
• Address how platform and service developers and government communication system developers can share concepts and build systems that address crisis communication requirements, including those used in IUP, DA and AI; and
• Address government failures to provide robust IS services during the COVID-19 pandemic, and the subsequent impacts these failures have had on trust in government and in their systems for tracking and tracing infections (Chakravorti, 2020).

These areas of focus are important given the negative impacts that are already emerging from the use of AI during the COVID-19 pandemic, i.e. varying levels of data quality and comprehensiveness, the development of COVID-19 treatments based on this variable data, and the use of social control and surveillance methods to minimize virus spread (Smith & Rustagi, 2020; Naughton, 2020b). It is time to critically analyze and evaluate how centrally managed platforms, their data and their system algorithms are being used during the COVID-19 pandemic (and other crises) by the companies who own and run them. How are they using the information they collect, i.e. for development of services, influencing users, creation of profits etc.? (How) are they limiting the spread of post-truth arguments and fake news/information, and are they exacerbating or assisting with the management of crises? Some platform owners (YouTube, Twitter, WhatsApp) are currently making efforts to be more transparent in their platform operations, data governance and quality assurance (Hern, 2020; Naughton, 2020a). There have also been growing calls from critics for regulation of these companies and their business practices (Lewis, 2020). There is a long way to go, however, to address the problems, issues and barriers these companies create for the production of shared situational awareness to support crisis management.

For instance, the Latvian government wanted to access the Google/Apple contact tracing framework (a Bluetooth-enabled API) which "can later be translated into COVID-19 exposure notifications" that are sent to contacts of a COVID-19 positive person. Google and Apple set preconditions on access to this framework, however, by only allowing the registration of one government/health-authority-approved contact tracing app per country and by not allowing government health agencies access to the personal details of contacts, due to their evaluation of potential privacy issues (Ilves, 2020). Multi-jurisdictional legal definitions and treatments of privacy issues are also a complicating factor in this decision. This situation presents a problem for any country wishing to use the API, as contact tracers need to be able to: 1) assess a contact's level of potential exposure to the virus; and then 2) advise the contact on what action they should take. This could be anything from "get a test and self-isolate for 14 days" through to "take no action at all, socially distance from others, but watch for symptoms". As COVID-19 health advice can have critical health, economic and social consequences for an individual, the advice needs to be tailored to the individual and be as minimally impactful as possible. Merely sending an exposure notification to a contact of a COVID-19 infected person does not guarantee that any action, or the correct action, will be taken by that person.
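As an illustration of why a bare notification is not enough, the following sketch shows the kind of risk-tiered triage a contact tracer (or a health-authority app) might apply once exposure details are available. It is a hypothetical example only: the exposure attributes, risk thresholds and advice tiers are assumptions made for this sketch, and it is not the Google/Apple framework, which deliberately withholds contacts' personal details from health agencies.

# Illustrative sketch only: hypothetical exposure attributes, thresholds and
# advice tiers. This is not the Google/Apple exposure notification framework,
# which does not give health agencies access to contacts' personal details.
from dataclasses import dataclass

@dataclass
class Exposure:
    minutes_of_contact: float  # cumulative time spent near the infected person
    avg_distance_m: float      # estimated average proximity in metres
    days_since_exposure: int

def triage(exposure: Exposure) -> str:
    """Translate an exposure record into tailored public-health advice.

    A crude risk score combines duration and proximity; the tiers below are
    invented for illustration and would in practice be set by health authorities.
    """
    if exposure.days_since_exposure > 14:
        return "Take no action at all; the monitoring window has passed."
    risk = (exposure.minutes_of_contact / 15.0) * (1.5 / max(exposure.avg_distance_m, 0.5))
    if risk >= 2.0:
        return "Get a test and self-isolate for 14 days from the exposure date."
    if risk >= 1.0:
        return "Get a test, socially distance from others and watch for symptoms."
    return "Take no action at all, socially distance from others, but watch for symptoms."

if __name__ == "__main__":
    # 30 minutes at roughly 1 metre, three days ago: the highest tier of advice.
    print(triage(Exposure(minutes_of_contact=30, avg_distance_m=1.0, days_since_exposure=3)))

Because the framework only delivers an anonymous notification to the contact's own device, this kind of tailored, least-impactful advice has to be produced without the health agency ever seeing the exposure details, which is exactly the limitation described above.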
By prohibiting access to data and controlling their API in this way, Google and Apple are not sharing with government health agencies the available data that would allow them to perform effective contact tracing, which could save many lives while preventing large-scale economic hardship, and vice versa. This presents us with a difficult legal and ethical situation to ponder: does the individual requirement for data privacy outweigh the opportunity to save lives and livelihoods?

We are now in the midst of a pandemic and the 'infodemic' that has followed in its wake. To counter the effects of this overload of inaccurate information and misinformation, the WHO has collaborated with the providers of social media platforms (e.g. Facebook, Twitter, etc.) to mitigate the impact of false information on social media (WHO, 2020) in order to support shared situational awareness and effective crisis management. This is an unsustainable and unrealistic situation, however, due to the ongoing cost, level of resources and skills required for such an intervention. While many countries have been unable to deal adequately with the pandemic, there are many success stories of shared situational awareness supporting health agency and public responses for effective virus containment through day-to-day decisions and actions.

Australian governments (federal and state) have had mixed results in the containment and management of the COVID-19 pandemic. There have been communications and IS missteps along the way, e.g. the Ruby Princess cruise ship (McKinnell, 2020) and the Melbourne quarantine hotels (Kaine & Josserand, 2020), where quarantine protocols were breached, as well as the current technical problems with the collection and use of data from the Australian federal government's COVIDSafe tracking and tracing app (Taylor, 2020). We are also currently seeing the rapid development of a COVID-19 outbreak in metropolitan Melbourne (Victoria), where previously there had been a successful pandemic response. This has necessitated the reinstatement of lockdowns, strict social distancing enforcement and the closure of the NSW/Victorian border. This outbreak has been exacerbated by misinformation circulating on social media targeting specific cultural groups, which has caused general confusion (especially in non-English speaking communities) as well as the promotion of racist tropes and hate speech (Bosley, 2020).

While managing a pandemic is a complex and complicated process with many stakeholders, there are benefits to be obtained for crisis management from shared situational awareness through the alignment of mental models that represent a more broadly acceptable situational reality and truth. This alignment would further support our trust in government as well as develop trust in other organizational and individual actors in the communications network. Individual differences in political, social and cultural contexts also add a layer of complexity to the alignment of mental models for shared situational awareness. Government agencies should refine their mental models of situational awareness to accommodate those variations in factors of significance which impact alignment, e.g. cultural behavioral practices, housing conditions, working environments and practices, access to services, regard for community leadership, digital literacy, access to technology etc.,
or they risk the ongoing development and reinforcement of dissonant and alternative mental models and the erosion of their trusted status (Mirbabaie et al., 2020).

Both government and platform providers have public interest and safety information communication requirements to satisfy in both the short and long term, which directly impact our ability to manage pandemics and other types of crises and disasters effectively. There must be collaboration and cooperation (either legislated or voluntary) to build on the trust in government to provide information as early and as often as possible in a crisis (and to enable the amplification of, and action taken from, this advice), as well as to ensure algorithmic transparency, information governance and quality assurance for robust and trusted communication and information services overall. Remaining successful in managing the pandemic, however, requires long-term vigilance and effort by pandemic managers and the public alike. As we can see from the current COVID-19 outbreak in Victoria, mental model alignment (and realignment) during a crisis is a continual process which requires constant attention, effort and resources.

"A critique of how science is produced is very different from the post-truth argument that there are alternative truths that you can choose from. Post-truth is a defensive posture. If you have to defend yourself against climate change, economic change, coronavirus change, then you grab at any alternative. If those alternatives are fed to you by thousands of fake news farms in Siberia, they are hard to resist, especially if they look vaguely empirical. If you have enough of them and they are contradictory enough, they allow you to stick to your old beliefs." Bruno Latour, interview (Watts, 2020).

References
Guest editor's introduction: Toward an archaeogenealogy of post-truth.
Reports Melbourne coronavirus cluster originated at Eid party could stoke Islamophobia, Muslim leaders say. The Guardian (online) 25.
Social media, rumors, and hurricane warning systems in Puerto Rico.
A philosophy of information technology and systems (IT&S) as tools: Tool development context, associated skills and the global technology transfer (GTT) process.
Repertoires of collaboration for common operating pictures of disasters and extreme events.
Bright ICT: Social media analytics for society and crisis management.
Digital contact tracing's mixed record abroad spells trouble for US efforts to rein in COVID-19. The Conversation (online), July.
Shared reality: Experiencing commonality with others' inner states about the world.
Spring update: Trust and the Covid-19 pandemic.
Fake news as a two-dimensional phenomenon: A framework and research agenda.
Repertoires of collaboration: Incorporation of social media help requests into the common operating picture.
Emergency management in the changing world of social media: Framing the research agenda with the stakeholders through engaged scholarship.
Social media influencers in strategic communication: A conceptual framework for strategic social media influencer communication.
Shared situational awareness in emergency management mitigation and response.
YouTube bans David Duke and other US far-right users. The Guardian (online).
Classifying Twitter topic-networks using social network analysis.
Why are Google and Apple dictating how European democracies fight coronavirus? The Guardian (online) 16.
Melbourne's hotel quarantine bungle is disappointing but not surprising.
It was overseen by a flawed security industry. The Conversation (online) 8.
Fake news: The other pandemic that can prove deadly.
"Strongly recommended": Revisiting decisional privacy to judge hypernudging in self-tracking technologies.
Going back to basics in design science: From the information technology artifact to the information systems artifact.
Socially-shared cognition and consensus in small groups.
Facebook and Google must move away from the zero-sum game. The Sydney Morning Herald, opinion (online).
Monetizing data: A practical roadmap for framing, pricing & selling your B2B digital offers.
Queensland researchers analysing coronavirus conspiracy theories warn of social media danger.
Ruby Princess passengers disembarked before coronavirus test results to get flights, inquiry hears.
Social media in times of crisis: Learning from Hurricane Harvey for the coronavirus disease 2019 pandemic response.
Who sets the tone? Determining the impact of convergence behaviour archetypes in social media crisis communication.
Twitter taking on Trump's lies? About time too. The Guardian (online) 31.
Silicon Valley has admitted facial recognition technology is toxic. About time. The Guardian (online) 14.
Algorithms of oppression: How search engines reinforce racism.
Defining and measuring shared situational awareness.
A Kuhnian analysis of revolutionary digital disruptions.
Mental models: A robust definition. The Learning Organization.
The role of shared mental models in developing shared situational awareness.
Developing shared situational awareness for emergency management.
Editorial: Post-truth?
The problem with COVID-19 artificial intelligence solutions and how to fix them. Stanford Social Innovation Review.
Sense-making in social media during extreme events.
Australia's Covidsafe coronavirus tracing app works as few as one in four times for some devices. The Guardian (online) 17.
Big data methods: Leveraging modern data analytic techniques to build organizational science.
'This is a global catastrophe that has come from within': Interview with Bruno Latour. The Guardian (online) 7.
Novel coronavirus (2019-nCoV) situation report.
The age of surveillance capitalism: The fight for a human future at the new frontier of power.

Acknowledgements
I would like to thank the Communications and Technology for Society Research Group at the University of Sydney Business School, and the Marie Bashir Institute for Infectious Diseases and Biosecurity for their continued support for this work, and Adjunct Associate Professor Anthony Sleigh and Dr Christian Ehnis for their encouragement and invaluable feedback on this paper.