Five Ps: Leverage Zones Towards Responsible AI
Ehsan Nabavi, Chris Browne
2022-04-20

Abstract: There is a growing debate amongst academics and practitioners on whether the interventions made thus far towards Responsible AI have been enough to engage with the root causes of AI problems. Failure to effect meaningful change in this system could see these initiatives fail to reach their potential, and the concept could become another buzzword for companies to use in their marketing campaigns. We propose that there is an opportunity to improve the extent to which interventions are understood to be effective in their contribution to the change required for Responsible AI. Using the notion of leverage zones adapted from the Systems Thinking literature, we suggest a novel approach to evaluating the effectiveness of interventions, focusing on those that may bring about the real change that is needed. In this paper we argue that insights from this perspective demonstrate that the majority of current initiatives taken by various actors in the field focus on low-leverage interventions, such as short-term fixes, tweaking algorithms, and updating parameters, while neglecting high-leverage interventions, such as redefining the system's foundational structures that govern those parameters, or challenging the underlying purpose upon which those structures are built and developed in the first place. This paper presents a conceptual framework called the Five Ps to identify interventions towards Responsible AI and provides a scaffold for transdisciplinary question-asking to improve outcomes towards Responsible AI.
People today are increasingly aware of how ingrained Artificial Intelligence (AI) already is in their daily lives, whether it determines what appears in a playlist or suggests potential partners to date, rather than in some distant future. While these seemingly low-risk examples can feel like magic to the user, many more technological advances are underway that delegate more significant control over decision-making to AI systems, such as in driving 1, education 2, judicial applications 3, and health care 4. However, the outputs from these systems can inadvertently erode the shared values of society, such as fairness, justice, safety, security, and accountability. The problems that AI is employed to solve can often exacerbate other societal problems, such as loss of privacy through increased surveillance 5, and policy decisions that increase social and economic inequality 6-8. Recent examples of AI failures, and the lack of transparency and traceability of these systems, have raised disconcerting questions about the 'dark side' 9 of AI use and the way these systems are developed and deployed 10. Advances in digital technology, along with debates about biased algorithms and the ethical and regulatory challenges of autonomous systems, underscore the fact that managing AI is more a social and political issue than an engineering challenge 11. This realization has caused research and industry actors to take non-technical aspects of AI into account. The conversation has grown beyond the ethical challenges surrounding AI development and use into a broader discussion of 'Responsible AI', encompassing related topics such as ethical AI, lawful AI, explainable AI (XAI), trustworthy AI, and accountable AI. Although there is no consensus on the meaning and implications of the notion of 'responsibility' when applied to AI systems 12, there is growing interest in exploring the development and use of AI systems through the lens of Responsible AI.
Applications range across fields such as health 13, 14, finance 15, urban studies 16, conservation science 17, marketing 18, and military affairs 19, to more specific cases such as COVID-19 20. The notion of Responsible AI is not limited to research. As of January 2021, the OECD AI Policy Observatory tracks more than 300 AI policy initiatives around the globe in the Responsible AI landscape 21. Major AI companies have launched self-regulatory Responsible AI programs, building tools and software that translate high-level principles such as fairness, explainability, and accountability into practice across engineering groups and client work 22. Standardization bodies such as ISO, IEEE, and NIST also offer guidance by publishing standards and frameworks to support the responsible development of AI. Although transdisciplinary approaches can help us understand and navigate this socio-technical challenge, the dominant discourses address AI problems from disciplinary perspectives, predominantly those of computer science and engineering. Even within Responsible AI, researchers and practitioners tend to approach the topic from a narrow disciplinary perspective and develop solutions based on their own epistemological strategies. For AI systems perceived as irresponsible, the focus is often on addressing visible gaps and tangible problems with quick fixes and technical improvements (particularly in areas such as robustness, privacy, and fairness, where technical fixes seem feasible) 23, 24, rather than on examining the drivers of design, development, and deployment. This is the gap that needs to be addressed if we are to ensure the transformation of AI systems towards responsibility. Further, companies seeking to improve responsibility in AI systems are otherwise limited in their capacity to realize meaningful and necessary technical, social, and environmental change.
Previous attempts under the banner of Responsible AI have been described as 'ethical washing' or 'ethics theatre' 25-27, intended to (1) show customers that companies are doing their best to behave ethically and, more importantly, (2) minimize regulation. Although it can be argued that efforts thus far are 'good first steps' towards Responsible AI 22, these efforts can distract from taking a broader view of the problems inherent to Responsible AI. For the purposes of this paper, we take a holistic, systems view of Responsible AI 28, encompassing broadly the notions of responsible, transparent, and trustworthy AI described above. We argue that although these initiatives seek to ensure the more responsible building and application of AI, they often fail to engage developers, who frequently rush to tweak and update existing systems with new software libraries 29, with the root causes and unintended consequences of those systems. By focusing on the engineering solution, they do little to encourage AI developers and users to question underlying assumptions about the vision and purpose of an AI system. For the practitioner and policymaker alike, the current body of literature on Responsible AI lacks adequate definitions and characterization of how different interactions with AI systems will lead to achieving Responsible AI. Meadows 30 proposes the framework of leverage points, which summarizes the relative power of various policies to enact change in a system, from incremental to transformational. In the following sections we adapt Meadows' framework to the current debates in AI to help realize the transformational change needed towards Responsible AI. We conclude with a short discussion of the advantages of our proposed approach for advancing theoretical and practical discussions on responsible approaches to AI.
Meadows identifies twelve leverage points that have been adapted into research and practical work in various disciplines concerned with complex socio-technical systems, from food and energy systems 31, to climate change 32, and health 33. These leverage points represent, at an abstract level, common places to intervene within a system to effect change. Here we adapt the leverage point framework, categorized around two domains and four zones. Figure 1 shows a graphical depiction of the Five Ps framework. The two domains, Problem and Response, are represented by a triangle divided in two, with the Problem Domain on the left and the Response Domain on the right. The horizontal axis represents the relative magnitude of 'effort' and reward for intervening in each of the four zones, shown on the vertical axis in increasing order of 'leverage', from top to bottom: Parameter, Process, Pathway, Purpose. We describe this framework as the 'Five Ps': situating the Problem at the right level, and then considering places to intervene in the system in each of the four zones. The Five Ps is a method for considering and analyzing what the response to a given intervention might be. In this case, we are considering how different initiatives in Responsible AI are conceived in the Problem Domain and then enacted in the four zones (i.e. Parameter, Process, Pathway, Purpose). This framework recognizes that a problem can be attributed to various parts of a system and that, depending on how the problem is framed, different interventions will be chosen, each leading to a different response. To illustrate, a misclassification problem that arises in an AI model could be seen as a Parameter problem that can be resolved by improving parameters within the algorithm. However, the problem could also be seen as a Purpose problem, which would bring into question the paradigm and assumptions from which the algorithm was created in the first place.
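To make the Parameter-zone reading of this example concrete, the following minimal sketch (illustrative only; the scores, labels, and function names are invented for this paper's hypothetical) shows what a parameter-level fix to a misclassification problem typically looks like: tuning a decision threshold to reduce error, while leaving the model's training data, incentives, and purpose untouched.

```python
# Illustrative Parameter-zone intervention: tune a classifier's decision
# threshold to reduce misclassification. Note that nothing deeper in the
# system (Process, Pathway, Purpose) is questioned by this fix.

def misclassification_rate(scores, labels, threshold):
    """Fraction of examples the thresholded classifier gets wrong."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def tune_threshold(scores, labels, candidates):
    """Pick the candidate threshold with the lowest error: a classic
    low-leverage 'tweak the parameters' fix."""
    return min(candidates, key=lambda t: misclassification_rate(scores, labels, t))

# Hypothetical model scores and ground-truth labels.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.55]
labels = [0,   0,   1,    1,   1,    0,   1,   0]

best = tune_threshold(scores, labels, [i / 10 for i in range(1, 10)])
```

The same misclassification problem, framed in the Purpose zone, would instead ask whether the quantity being thresholded should be optimized at all.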
[Figure 1: The Five Ps framework, mapping Meadows' twelve leverage points onto the four leverage zones: (12) constants, parameters, numbers; (11) the sizes of buffers relative to their flows; (10) the structure of material stocks and flows; (9) the lengths of delays relative to the rate of system change; (8) the strength of negative feedback loops; (7) the gain around driving positive feedback loops; (6) the structure of information flows (access to information); (5) the rules of the system (such as incentives, punishments, constraints); (4) the power to add, change, evolve, or self-organize system structure; (3) the goals of the system; (2) the mindset or paradigm out of which the system, its goals, power structure, rules, and culture arise; (1) the power to transcend paradigms. The deepest of these sit in the Purpose zone (changing intent, mental models, and paradigms).]
To illustrate the domains and zones within the Five Ps, we describe each briefly in relation to Responsible AI. Problems identified in the Parameter zone are tractable (modifiable, mechanistic) characteristics of an AI system that are commonly targeted by AI developers to improve the responsibility of AI. They are typically smaller, visible flaws that are usually addressed through engineering solutions such as tweaking algorithms and parameters. The effort to fix these is small, and changes in this zone are incremental and may have a negligible effect on the problem's underlying structure or dynamics. They are important markers of the problem, but they are often symptomatic rather than root causes. Problems identified in the Process zone concern the wide range of interactions between the feedback elements of an AI system that drive its internal dynamics, including the social and technical processes associated with how the AI is designed, built, and deployed. This might include activities that speed up development times, or that actively respond to emerging trends in the data.
Changes in this zone are likely to resolve issues as they emerge, or to amplify the effect of assumptions. Problems identified in the Pathway zone concern the ways in which information flows, rules are set, and power is organized. Examples include improving the transparency of how algorithms are employed, the governance or legislation of their use, or putting the ownership of data back into consumers' hands. These changes are structural to the system in which the AI operates, and result in establishing new patterns of behavior and agency. Issues identified in the Purpose zone have the most potential to effect change in a system. These relate to the norms, values, goals, and worldviews of AI developers that are embodied in the system. The zone includes the underpinning paradigms upon which the system is imagined, and the ability to transform entirely and imagine new paradigms. Framing perceived problems in this zone acts as a compass, guiding developers to align with the fundamental purpose of the system. The Five Ps, Problem, Parameter, Process, Pathway, and Purpose, characterize five ways we can begin to conceptualize our journey towards Responsible AI. Zones within the Five Ps are interrelated, and scale and reach also play a role in the extent to which the system's behavior changes. These Five Ps are not part of a fixed hierarchy of change but serve as a conceptual tool to categorize strategies to effect change in a system. In the following sections, we consider the Five Ps as an analytical tool and examine how it could be used as a planning tool to assess current interventions in Responsible AI. Reviewing the ongoing attempts to address Responsible AI, it is common to see activities conceptualized, defined, and implemented in the Parameter and Process zones.
Many initiatives frame the challenge of Responsible AI as a problem of technical and design flaws requiring engineering fixes or a better design process 29, 34. The rationale is that complex concepts and high-level principles need to be simplified so as to be tangible and computable. The result has been studies that examine isolated factors related to the principles, such as improving model explainability 35 and reducing biases 36. Change at these levels typically results in incremental improvement and allows business as usual. There are opportunities to radically shift the movement towards Responsible AI by effecting changes in the Pathway zone, such as high-level design and structures, and by asking challenging questions about the underlying assumptions, visions, and foundational purpose of the system, as in the Purpose zone. This particularly happens when Responsible AI is understood as a microcosm of the cultural and political challenges faced in society 25, beyond technical and design issues. To illustrate, consider an AI system used by a social media company that is causing misinformation and extremism, similar to the one Facebook is currently experiencing. In a move towards Responsible AI, the company views the problem in the Parameter zone, and creates tools and tweaks algorithms to analyze and address biases in order to fix the models that come out of them. Another response from the company could be creating software tools to translate principles of Responsible AI, such as fairness, explainability, and accountability, into improvements to the models 22. By taking these measures, the company seeks to control misinformation in its content-moderation models across the platform, which potentially leads to an improved user experience. These interventions could be described as 'technological solutionism', built on the premise that the challenge of responsibility is a challenge of fixing a design flaw in the algorithms 25, 37-39.
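The kind of principle-to-software translation described above can be made concrete with a small sketch (the data, group labels, and function names are hypothetical, not drawn from any company's tooling): computing a demographic-parity gap, a common parameter-level fairness check that audits one number in a model's outputs without touching the incentives that shape the system.

```python
# Illustrative fairness audit in the Parameter zone: the demographic-parity
# gap is the difference in positive-prediction rates between two groups.
# A small gap says nothing about the system's purpose or business model;
# it only checks one measurable symptom.

def positive_rate(predictions):
    """Share of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_a, preds_b):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Hypothetical moderation outcomes (1 = content promoted) for two user groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # promoted 5/8
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # promoted 3/8

gap = demographic_parity_gap(group_a, group_b)
```

A developer could then tweak thresholds until the gap shrinks, which is precisely the low-leverage response the text describes: the content-recommendation paradigm that generated the disparity remains unchanged.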
In this view, efforts at quantifying, computing, or mathematizing responsibility are perceived as an apparatus for creating technocratic rather than democratic solutions, and fixes tend to be short-term and could be described as reactive 'tweaks'. In our example, although visible content moderation could improve, the paradigm under which the platform operates remains unchanged. If the company's business model is concerned only with maximizing engagement, tweaking algorithms will have no direct impact on misinformation circulation, because the AI models that govern the interactions will continue to reward inflammatory content (e.g., controversy, misinformation, and extremism) and operate on structures that systematically reduce the diversity of viewpoints that users are exposed to. Further, changes that undermine the company's paradigms are unlikely to be supported; for example, a for-profit company is unlikely to support initiatives that have the potential to reduce revenue streams 40, 41. The same company could consider the problem in the Process zone by intentionally promoting diversity and inclusion in development teams, publishing new professional guidelines, and promoting training opportunities. As more diverse views are involved in the development of the model, assumptions are questioned and resolved during the development cycle. This would likely produce first-order change, adjusting and adapting practices to changes in the operating environment (see Figure 2). Extending this, the company could initiate reform in the Pathway zone to achieve second-order change through 'restructures' and 'redesigns'. For example, this could include initiating governance structures within the firm for Responsible AI, such as ethical review boards, or introducing new roles and responsibilities for assuring that AI products and processes are ethical and aligned with the AI principles the company abides by.
Collective partnerships can also focus discussion on the development of design principles, guidelines, and best practices for AI 42. However, no strong, unified regulation yet exists that can establish fiduciary duties to the public, which implies that societies can only hope that reputational risks, or a company's own values and standards, may create more responsible approaches towards AI development and use 43. Further, partnerships thus far have produced "vague, high-level principles and value statements which promise to be action-guiding, but in practice provide few specific recommendations and fail to address fundamental normative and political tensions embedded in key concepts for example, in fairness and privacy" 25. Finally, in the Purpose zone, the same company could deploy resources to move to third-order, transformative change by 'reconsidering' or 'redefining' the purpose of its system. In the example of the social media company, this could be a change in purpose from maximizing engagement to activities such as truth-seeking or social cohesion. There are, for example, several experimental products, such as the platform Polis, that highlight diverse views and work towards maximizing 'consensus' rather than engagement, thereby fundamentally changing the goal of the system. This demonstrates that there are often multiple interactions between leverage zones which can be studied when evaluating an intervention's effectiveness. These zones are not discrete, and for effective implementation of change, we should consider the interactions required across an entire system for change in the deeper leverage zones.
Despite efforts to move towards Responsible AI, many of these initiatives, particularly those conducted at the corporate level, have been characterized by critics as 'ethics washing', whereby industry adopts 'appearances of ethical behavior' for self-serving purposes (for example, to reduce regulatory requirements or maintain self-regulation) 23, 26, 44, 45. On the other hand, it is argued that steps in the right direction, however small, are welcome, and that the concrete plans and actions of major AI companies help the industry move towards Responsible AI 22. As an analytic tool, the Five Ps can be used to view the relative strength of interventions towards Responsible AI. In the following section, we look at how the Five Ps can also be used as a planning tool by those seeking to deliver Responsible AI. Efforts towards Responsible AI thus far have predominantly been narrow in focus, with the chosen leverage zones failing to address 'deeper' questions about the governing rules, structure, business model, and purpose. To move towards Responsible AI, we argue that interventions should be seen and studied in a holistic manner, not in isolation, so as to avoid missing linkages between the leverage zones, to prioritize competing efforts, to consider both narrow and broad consequences, and to plan for the short and long term. As a planning tool, the Five Ps can be used to prompt consideration of a given problem at multiple levels to achieve the desired level of 'response'. In Table 1, we provide a set of questions for each leverage zone that could be asked when assessing a potential intervention. These questions should be seen as a general set of considerations; they are not exhaustive, and should be tailored to the situation at hand.
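As one illustration of how such a question scaffold could be operationalized in a planning workflow, a review process might encode the zones and prompts as a simple structure walked from the deepest leverage zone to the shallowest. The questions below are paraphrased from this paper's zone descriptions, not taken from Table 1, and the names are hypothetical.

```python
# Illustrative scaffold for Five Ps question-asking. The example questions
# are paraphrased from the zone descriptions in the text; a real checklist
# should be tailored to the situation at hand.
FIVE_PS_QUESTIONS = {
    "Purpose": [
        "What is the fundamental purpose of this system, and whose values does it embody?",
        "What paradigms and assumptions underpin how the system is imagined?",
    ],
    "Pathway": [
        "How do information flows, rules, and power structures shape the system?",
        "What governance or ownership changes would establish new patterns of behavior?",
    ],
    "Process": [
        "Which social and technical processes drive how the AI is designed, built, and deployed?",
        "Where do feedback loops resolve, or amplify, the effect of assumptions?",
    ],
    "Parameter": [
        "Which visible flaws can be addressed by tweaking algorithms and parameters?",
        "Are these fixes symptomatic, or do they reach a root cause?",
    ],
}

def review_order():
    """Walk zones from deepest leverage (Purpose) to shallowest (Parameter),
    so shallow fixes are considered only after deeper framings."""
    return list(FIVE_PS_QUESTIONS.keys())
```

Ordering the walk from Purpose down to Parameter reflects the argument that deeper zones shape and limit the interventions available in shallower ones.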
By proactively considering questions that address systems-level concerns within each of the leverage zones, the appropriate leverage zone for the problem can be properly assessed, and possible synergies and contradictions that might arise can be considered. By exploring these questions, the Five Ps approach allows decision-makers to better understand the scope of the change they are seeking and to avoid engaging with the system only in shallow leverage zones, such as focusing on AI principles alone or developing tools and practices for explainable models. It recognizes and promotes the importance of 'question-asking' and how it can influence the shape of the pathway towards Responsible AI. Second, it shows how focusing interventions within discrete leverage zones can precipitate change in others, across various depths. The interdependencies between different leverage zones need to be recognized and studied: working from the deeper leverage zones shapes and limits the types of interventions available in shallower leverage zones 46. Third, it provides an aid for maintaining a holistic view of the challenges associated with Responsible AI, avoiding 'atomized' and 'siloed' conceptualizations in which the social, technical, and governance aspects of AI systems are addressed separately rather than as elements that interact tightly together. The alternative is that we remain in the existing paradigm, which mostly overlooks the structures, norms, values, and goals underpinning, at deeper levels, the complex problems Responsible AI is facing. Nevertheless, given the scale of the social and ethical problems that have emerged in relation to AI use, there is a strong incentive for major AI companies to adopt new tools and frameworks in order to prevent the development of technologies that have the potential to cause harm 47. And lastly, it provides a transdisciplinary context for a conversation about Responsible AI.
Since AI developers come from varied disciplines, each with its own epistemic culture and ethical standards, speaking about Responsible AI requires frameworks that can engage all stakeholders in meaningful discussion. This is particularly important as we can expect experts interested in the human and environmental aspects of AI-powered technologies to increasingly join the conversation 11, 48. The Five Ps framework provides a new communication tool for a wide range of stakeholders to discuss their ideas and priorities for the future of AI and to collaborate using qualitative and quantitative methods. However, along with these advantages, there are challenges that must be addressed carefully when using this approach. The concepts of leverage zones and places to intervene in a system are in their infancy in the field of Responsible AI, and will benefit from further discussion and research to inform where and how they can be used. The terminology has yet to be further developed and established, so that methods can be devised to identify or validate leverage zones at different scales (temporal, institutional, network, and management factors) and societal reaches (global, national, and local levels) towards Responsible AI. Responsible AI needs to engage with the deep questions in order to find solutions that address the root causes of negative outcomes in AI products and processes. As such, we need to reflect constantly on whether planned initiatives can realize the system shift required to create an environment conducive to Responsible AI. To this end, we propose the Five Ps framework as a useful tool to frame a conversation around the notion of 'leverage zones' as the industry takes action towards Responsible AI.
The key advances that the Five Ps framework presents are in developing a shared understanding of: the likely long-term effectiveness of proposed initiatives; the interdependencies between initiatives required for long-lasting change; frames of question-asking when considering initiatives; removing barriers around silos of activity; considering the broader implications of initiatives; and providing a transdisciplinary context for the conversation. Further work is required to study the long-term effects of decisions arising from the use of the Five Ps zones as a planning tool. However, as has been noted in other domains, it is highly likely that any effort to understand interventions towards Responsible AI at a more holistic, systems level will see benefits over taking a fragmented, siloed approach.

References
Systematic review of research on artificial intelligence applications in higher education: where are the educators?
Artificial intelligence and judicial modernization
Artificial intelligence and the future of global health
China's Surveillance State Should Scare Everyone
Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this
Algorithms are making economic inequality worse
Social and juristic challenges of artificial intelligence
Thinking responsibly about responsible AI and 'the dark side' of AI
Revealing Ways AIs Fail: Neural Networks can be Disastrously Brittle, Forgetful, and Surprisingly Bad at Math
Why the huge growth in AI spells a big opportunity for transdisciplinary researchers
Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context
Role of Risks in the Development of Responsible Artificial Intelligence in the Digital Healthcare Domain
Responsible AI for Digital Health: a Synthesis and a Research Agenda
2020 IEEE Symposium Series on Computational Intelligence (SSCI)
Responsible Urban Innovation with Local Government Artificial Intelligence (AI): A Conceptual Framework and Research Agenda
Responsible AI for conservation
The Application of the Principles of Responsible AI on Social Media Marketing for Digital Health
Tackling COVID-19 through responsible AI innovation: Five steps in the right direction
Responsible AI Programs To Follow And Implement: Breakout Year 2021
Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?
Proceedings of the 52nd Hawaii International Conference on System Sciences
The ethics of AI ethics: An evaluation of guidelines. Minds and Machines
Principles alone cannot guarantee ethical AI
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
AI Ethics doesn't exist
Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities
Tools and Practices for Responsible AI Engineering
Leverage points: Places to intervene in a system
Leverage points for sustainability transformation: a review on interventions in food and energy systems
Identifying leverage points for strengthening adaptive capacity to climate change
Leverage points to improve smoking cessation treatment in a large tertiary care hospital: a systems-based mixed methods study
Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
Proceedings of the AAAI Conference on Artificial Intelligence
Community-in-the-loop: towards pluralistic value creation in AI, or: why AI needs business ethics
Data science as political action: grounding data science in a politics of justice
The Seductive Diversion of 'Solving' Bias in Artificial Intelligence
Facebook's ethical failures are not accidental; they are part of the business model
He got Facebook hooked on AI. Now he can't fix its misinformation addiction
The global landscape of AI ethics guidelines
Bad apples, bad cases, and bad barrels: meta-analytic evidence about sources of unethical decisions at work
In Being Profiled: Cogitas Ergo Sum. 10 Years of Profiling the European Citizen
Constitutional democracy and technology in the age of artificial intelligence
Leverage points for sustainability transformation
An embedded ethics approach for AI development
Artificial Intelligence: For Better or Worse