key: cord-0692502-duf5tr7p
authors: SKORBURG, JOSHUA AUGUST; FRIESEN, PHOEBE
title: Mind the Gaps: Ethical and Epistemic Issues in the Digital Mental Health Response to Covid-19
date: 2021-09-13
journal: Hastings Cent Rep
DOI: 10.1002/hast.1292
sha: b99231c0f32c3e112560186a45f8acf70dd8a160
doc_id: 692502
cord_uid: duf5tr7p

Well before the Covid-19 pandemic, proponents of digital psychiatry were touting the promise of various digital tools and techniques to revolutionize mental health care. As social distancing and its knock-on effects have strained existing mental health infrastructures, calls have grown louder for implementing various digital mental health solutions at scale. Decisions made today will shape mental health care for the foreseeable future. Here, in hopes of countering the hype, we examine four ethical and epistemic gaps surrounding the growth of digital mental health: the evidence gap, the inequality gap, the prediction-intervention gap, and the safety gap. We argue that these gaps ought to be considered by policy-makers before society commits to a digital psychiatric future.

Well before the Covid-19 pandemic, proponents of digital mental health were touting the promise of various tools and techniques, from mHealth to digital phenotyping, that could revolutionize mental health care. As social distancing and its knock-on effects (economic hardship, increased stress, decreased community support) have strained existing mental health infrastructures, calls have grown louder for implementing various digital mental health solutions. Commentaries have urged mental health professionals to "turn the crisis into an opportunity" by widely deploying digital mental health tools.1 John Torous and colleagues argue that we need to "accelerate and bend the curve on digital health."2 Dror Ben-Zeev contends that "the digital mental health genie is out of the bottle."3 And, in fact, there have been record levels of investment in various digital mental health initiatives. One recent estimate suggests that digital behavioral health start-ups raised $588 million in the first half of 2020 alone.4

At the outset of the pandemic, decisions about the rapid and widespread adoption of various digital health initiatives were necessarily made quickly, under conditions of uncertainty and stress. Medicare rapidly modified its policies to allow clinicians to use (and bill for) telehealth by FaceTime and Skype. Similarly, some Health Insurance Portability and Accountability Act (HIPAA) rules were initially relaxed. Many states also waived requirements for psychiatrists to provide services only to patients in states in which they are licensed. But as the pandemic drags on, policy-makers are faced with difficult choices about these emergency measures. Should they stay in place? If so, for how long?

Decisions made in crisis contexts often have a way of gaining a slow and steady momentum and then appearing inevitable in hindsight. Philosophers of science and technology have helpfully described these phenomena in terms of "path dependencies" leading to "lock-in." The case of surveillance technologies following the September 11, 2001, terrorist attacks is perhaps the clearest example. And indeed, an April 2020 headline asked, "After 9/11, we gave up privacy for security. Will we make the same trade-off after Covid-19?"5
In much the same way that the 9/11 crisis accelerated and locked in surveillance technologies in the name of national security, so, too, might the Covid-19 crisis accelerate and lock in various digital technologies in the name of health security.

Medicine exhibits many path dependencies of this sort. In the United States, hospitals were established as a decentralized and highly competitive system, a structure that, over time, has become widespread and deeply ingrained.6 This has created a fixed system in which reforms oriented toward collaboration and universal coverage are incredibly difficult to achieve, in part because they require not just doing but also an immense amount of undoing. Similarly, path dependencies in the Diagnostic and Statistical Manual of Mental Disorders (the DSM) lead to lock-ins that make substantive revisions nearly impossible, despite widespread dissatisfaction with the categorical classification system. The costs of taking a new path (such as adopting dimensional classifications for personality disorders) are perceived to be too high because they would complicate medical record keeping, create administrative and clinical barriers, require massive retraining efforts, and disrupt longitudinal data collection and meta-analyses.7 It is easy to imagine a not-too-distant future in which this very same logic is applied to various digital health tools, first implemented as emergency measures, then rationalized as the "new normal."

We are thus at a turning point, where the urgency of the pandemic has us rushing headlong toward various digital health "solutions." But decisions made today will put us on paths that shape the future of mental health care for a long time to come. Will the easing of lockdowns have come at the cost of technological lock-ins? As Mélanie Terrasse, Moti Gorin, and Dominic Sisti argued in 2019 in this journal, bioethicists have a crucial role to play in examining how health research and services are being transformed in digital spaces.8 We agree, and we see bioethicists as uniquely positioned to cut through the hype surrounding digital mental health, which can obscure crucial ethical and epistemic gaps that ought to be considered by policy-makers before society commits to a digital health future. Here, we describe four such gaps.

The Evidence Gap

While there is a substantial body of evidence supporting the efficacy of telehealth by video conference or phone, many newer digital mental health tools are also gaining traction, including smartphone-delivered therapy, artificial intelligence chatbots, and symptom monitoring via smartwatches and smart rings. It is precisely the scalability of these tools that makes them attractive solutions to the mental health fallout from the pandemic. However, the majority of commercially available mental health apps are not supported by robust empirical evidence. In one study, researchers found that while seventy-three of the most downloaded mental health apps in the iTunes and Google Play stores claim to be effective at improving symptoms, only one of them included a citation to a published study.9 Yet downloads of these apps have been surging since the start of the pandemic. The best available evidence suggests that smartphone apps, chatbots, and the like may be effective as adjuncts to traditional forms of psychotherapy but, at best, fail to offer significant benefit on their own. Some even lead to worse outcomes.10 This does not fit neatly with the arguments for the scalability of these digital tools.
When the weak evidence base for newer digital mental health tools is weighed against other important ethical considerations, such as data privacy, potential data misuses, and threats to autonomy, many of these digital tools seem inadequate. Thus, before limited health care dollars are allocated, it will be important to ensure that proposed digital mental health solutions demonstrate evidence of directly improving mental health outcomes.

The Inequality Gap

Proponents of digital mental health regularly tout the power of these tools to reach underserved populations, such as refugees and veterans.11 However, there is a substantial risk that these technologies will perpetuate existing social biases and inequalities.12 The Covid-19 pandemic has brought these inequalities into sharp relief, and it is already clear that the mental health fallout will be most significant for those with overlapping vulnerabilities. For example, not only are the elderly more likely to become seriously ill or die from Covid-19, but they are also more likely to be lonely and depressed, experiences that have been exacerbated by the isolation brought on by the pandemic.13

Principles of justice dictate that we ought to help the least well-off among us. Nevertheless, many of the digital mental health tools in the headlines today seem the least likely to benefit those most in need, who often lack digital literacy or reliable access to high-speed internet. The latest data from the Pew Research Center show that, in early 2021, only 64 percent of Americans sixty-five years of age and older had home broadband. Among Americans making less than $30,000 per year, only 57 percent had access.14

Even if the "evidence gap" is closed, issues with inequality will persist. In the short term, many members of the most vulnerable populations may not be able to reliably access evidence-based forms of remote care. Similarly, low-income families are less likely to have a room where a patient can be alone with the door closed, a privacy requirement for teletherapy. To the extent that we are forging new path dependencies for digital health, then, the inequality gap may widen even further over the long term. Investing in digital mental health technologies may mean that those with fewer digital resources will be excluded from care, making it less likely that their mental health issues will be addressed, which could lead to further disadvantages with regard to resources and literacy, and so on. There is a moral imperative for policy-makers to ensure that proposed digital mental health solutions do not widen the gap between the digital haves and have-nots.

The Prediction-Intervention Gap

One of the most rapidly growing areas of digital mental health is predictive analytics, which is often depicted as revolutionizing clinical practice in psychiatry.15 But it is far from clear that this claim will be borne out. Predictive analytic tools find patterns in multimodal data by examining features such as how individuals interact with their cell phones (scrolling or tapping, for example), how they speak (pitch, intonation), or how they write (pronouns, keywords). These features can be highly predictive. For example, people experiencing depression use first-person singular pronouns more often than others.16 However, while these tools can accurately predict who is likely to experience a mental health crisis, they are unlikely to lead to better interventions. This is because they contribute to predictions, but not explanations, of mental disorders.
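To make the prediction-intervention distinction concrete, consider a minimal sketch of the kind of linguistic feature just described. This is an illustrative toy, not the method of any cited study: the pronoun list, the threshold, and the "elevated risk" flag are all hypothetical assumptions.

```python
# Illustrative sketch only: a toy version of the kind of linguistic feature
# the text describes. The pronoun set, threshold, and "risk" flag below are
# hypothetical, not drawn from any cited study or deployed system.
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def fps_pronoun_rate(text: str) -> float:
    """Return the fraction of tokens that are first-person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(token in FIRST_PERSON_SINGULAR for token in tokens) / len(tokens)

def flag_elevated_risk(text: str, threshold: float = 0.08) -> bool:
    """Flag text whose pronoun rate exceeds an arbitrary, illustrative cutoff."""
    return fps_pronoun_rate(text) > threshold

print(fps_pronoun_rate("I feel like my efforts never matter to me"))  # ~0.33
print(flag_elevated_risk("The weather was pleasant and calm"))        # False
```

Note what such a sketch cannot do: even if the flag were highly accurate, nothing in it identifies a cause of depression or a lever for treatment. The feature is an indicator, which is precisely the asymmetry developed below.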
In philosophy of science, the asymmetrical relationship between predictions and explanations has long been recognized. While a good scientific explanation can help to make accurate predictions, a good prediction does not always lead to an explanation. Barometers are good predictors of storms, but they don't explain the arrival of a storm; this is because the change in pressure that they measure is an indicator, not a cause.17 So too with linguistic features, and many others, used to make predictions about mental health conditions within a population. While pronoun use can be highly predictive, such a feature doesn't point toward novel or effective interventions for treating depression. Teaching someone to use fewer first-person singular pronouns won't make them less depressed.

This gap between prediction and intervention is hardly acknowledged in the scientific literature, however, where an announcement of a new predictive technology is often followed by a promise related to improving well-being. To take just one of many possible examples, in reporting on a new automated linguistic analysis that predicts the onset of psychosis in high-risk youth, Corcoran and colleagues suggest that their model can help to "identify linguistic targets for remediation and preventive intervention."18 Given that the predictions are based on decreased semantic coherence and possessive pronoun usage, it seems that the authors may be suggesting an intervention that teaches youth to speak more coherently and use more possessive pronouns. Such an intervention is unlikely to be successful, because it aims to treat an indicator, not a cause; it is like adjusting one's barometer to stop an incoming storm.

In response to such hype, it is essential to keep in mind that predictions do not necessarily lead to interventions and that this worry is especially salient in the digital realm. While predictive technologies may support the identification and diagnosis of people who are suffering, they are unlikely to contribute to the development of tools and interventions that can reduce that suffering. Further development and deployment of predictive analytic technologies related to mental health may lead to a situation in which more and more people are identified as in need of support, but we lack the tools and resources to offer that support. This knowledge of where need is greatest might contribute to decisions about resource allocation, but it is crucial to keep in mind the limitations of these technologies. Medical researchers and data scientists should not oversell the ability of predictive digital mental health technologies to directly improve mental health outcomes.

The Safety Gap

Calls for an increased reliance on digital mental health tools are taking place amidst a global reckoning with anti-Black racism. It is essential to consider how digital responses to Covid-19 might disproportionately impact individuals and communities of color, who have long experienced the epidemic of systemic racism and are now, as a direct result, being hit hardest by the pandemic.19 In some cases, digital mental health services are used not only to detect the presence of risk or suffering or to offer support to those seeking care but also to determine when police officers should be dispatched to perform a wellness check.

For example, Facebook's suicide prevention program was developed as a last-ditch response for those in crisis. Although there is little public transparency about how this program operates and how decisions are made, a brief sketch can be offered. In essence, Facebook's algorithms constantly scan public and private messages for content that may suggest suicidal intent. If a post or message is flagged as high risk by an algorithm (because of keywords that have been associated with suicidal behavior), it is sent to a (human) moderator for assessment. If the moderator decides that a response is warranted, then local police are alerted and dispatched to intervene.20
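The staged pipeline just described (automated flagging, human review, escalation to police dispatch) can be laid out schematically. Facebook's actual system is not public, so everything below, from the keyword list to the escalation rule, is a hypothetical illustration of the stages named above, not the company's implementation.

```python
# Hypothetical sketch of a flag -> human review -> dispatch pipeline.
# The keyword list, data class, and escalation logic are invented for
# illustration; Facebook's real system is not publicly documented.
from dataclasses import dataclass

# Placeholder lexicon; real systems reportedly use learned models,
# not a fixed keyword list.
RISK_KEYWORDS = {"suicide", "end it", "can't go on"}

@dataclass
class Post:
    author_id: str
    text: str

def algorithmic_flag(post: Post) -> bool:
    """Stage 1: an automated scan flags potentially high-risk content."""
    text = post.text.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

def moderator_confirms(post: Post) -> bool:
    """Stage 2: a human moderator reviews the flagged post.
    Stubbed here; in practice this is a human judgment call."""
    return True  # placeholder decision

def dispatch_wellness_check(author_id: str) -> None:
    """Stage 3: alert local police to perform a wellness check."""
    print(f"Dispatching wellness check for user {author_id}")

def process(post: Post) -> None:
    if algorithmic_flag(post) and moderator_confirms(post):
        dispatch_wellness_check(post.author_id)

process(Post(author_id="u123", text="I want to end it all"))
```

Laying the stages out this way makes the design choice visible: the final step hard-codes policing as the intervention, and nothing in the pipeline weighs whether dispatch is safe for the person flagged.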
While this may seem like a positive contribution to public health on Facebook's part, it is becoming increasingly clear that police wellness checks can do more harm than good. Between 2015 and August 5, 2020, 1,362 people who were experiencing mental health issues were killed by police in the United States, a remarkable number that constitutes 23 percent of all fatalities at the hands of police in that time.21 The March 2020 case of Daniel Prude highlighted for many what has long been understood within racialized and marginalized communities: dispatching the police, particularly in some communities, can be fatal. The United States is not alone in the proliferation of these tragedies; police killings during wellness checks are generating substantial concern across the border in Canada as well.22 This means that digital technologies meant to increase access to mental health care may also lead to increased policing in already-overpoliced neighborhoods. Before digital mental health solutions are funded by taxpayers, they must be proven to be, in fact, solutions and not themselves part of the problem.

Technological responses to the pandemic are ubiquitous. From digital contact tracing and public health surveillance to symptom-monitoring apps and smartphone-delivered doctor's appointments, the new normal is likely to be increasingly digital. Mental health care is no exception, and recent calls for the widespread adoption of digital mental health technologies as a response to the pandemic suggest that this new normal may have already arrived. However, as we have argued, there are good reasons to pause before digital mental health tools are adopted too widely or too permanently. Many epistemic and ethical gaps are yet to be filled in, and the space within them is worrying. Not only is there a lack of evidence for the health benefits to be gained from most novel digital mental health tools, but these tools may also serve to exacerbate existing inequalities, they may overpromise innovative treatments when they merely succeed in identifying risk, and they may strain overburdened and inappropriate emergency response systems, potentially ending in more lives lost.

References

Turning the Crisis into an Opportunity: Digital Health Strategies Deployed during the COVID-19 Outbreak
Digital Mental Health and COVID-19: Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow
The Digital Mental Health Genie Is Out of the Bottle
Digital Behavioral Health Startups Scored $588M in Funding amid COVID-19 Pandemic
After 9/11, We Gave Up Privacy for Security. Will We Make the Same Trade-off after Covid-19?
Path Dependency, or Why History Makes It Difficult but Not Impossible to Reform Health Care Systems in a Big Way
Why Is the Diagnostic and Statistical Manual of Mental Disorders So Hard to Revise? Path-Dependence and 'Lock-In' in Classification
Social Media, E-Health, and Medical Ethics
Using Science to Sell Apps: Evaluation of Mental Health App Store Quality Claims
Standalone Smartphone Apps for Mental Health: A Systematic Review and Meta-analysis
Is Big Data the New Stethoscope? Perils of Digital Phenotyping to Address Mental Illness
COVID-19 and the Consequences of Isolating the Elderly
Predictive Analytics in Mental Health: Applications, Guidelines, Challenges and Perspectives
Psychological Aspects of Natural Language Use: Our Words, Our Selves
Explanation and Prediction in Evolutionary Theory
Prediction of Psychosis across Protocols and Risk Cohorts Using Automated Language Analysis
Artificial Intelligence-Based Suicide Prediction
Recent Deaths Prompt Questions about Police Wellness Checks