title: Designer-User Communication for XAI: An epistemological approach to discuss XAI design
authors: Ferreira, Juliana Jansen; Monteiro, Mateus
date: 2021-05-17

Artificial Intelligence is becoming part of almost any technology we use nowadays. When AI informs people's decisions, explanation of the AI's outcomes, results, and behavior becomes a necessary capability. However, discussing XAI features with the various stakeholders is not a trivial task. Most of the available frameworks and methods for XAI focus on data scientists and ML developers as users. Our research is about XAI for the end-users of AI systems. We argue that XAI needs to be discussed early in the AI-system design process and with all stakeholders. In this work, we investigate how to operationalize the discussion of XAI scenarios and opportunities among designers and developers of AI and its end-users. We take the SigniFYIng Message as our conceptual tool to structure and discuss XAI scenarios, and we experiment with its use in the discussion of a healthcare AI-System.

Explainable AI (XAI) is becoming an important feature to consider for any AI technology. It is when AI takes part in a high-stakes decision that XAI becomes necessary to enable the human-AI partnership to make that decision. The human in the loop of this human-AI partnership cannot be left out of the context if research about the impacts of AI on real-world problems is to advance [3][10][20]. While Machine Learning (ML) techniques and methods are resourceful in dealing with large amounts of data, humans' input adds meaning and purpose to that data [3][11]. XAI design is the bridge that provides people with an understanding of the AI's outcomes, results, and even behavior, enabling them to use what the AI provides to make informed and conscious decisions.

End-users, in collaboration with designers and developers, must perform a hard abstraction exercise to consider how the AI-System will be part of their practices and how it can impact and participate in their decisions. Explanations about the AI should be part of this exercise's results. There is a gap between the explanations of AI outputs and the explanations people need to make sense of what the AI-System did and how it can impact their actions [4][18][22]. Moreover, the societal, moral, and legal expectations of AI explanations should be discussed considering all stakeholders [7]. We are aware that explainable AI must meet the users' needs [21][22]. However, how can users define and frame their own needs regarding AI explanations? Some frameworks and tools aim to enable the definition of XAI features, but a large part of them focuses on data scientists and ML developers as their end-users [2][8][16]. Some approaches, like Google PAIR [17] and the IBM Team Essentials for AI framework [9], focus on supporting the discussion of XAI dimensions with end-users from different domains who are not particularly knowledgeable about AI and its concepts. They provide guidelines to consider in XAI design, but the challenge of operationalizing the discussion about XAI is still present in those approaches.

The explanation of an AI's outcomes, results, and behavior has several dimensions that we should consider for XAI design. 'Who' and 'why' are two of those dimensions and have been the focus of previous research [1][15].
'Who' refers to all the people interested in AI explanations, such as end-users, decision-makers, affected users, regulatory bodies, and AI-system builders. 'Why' relates to the motivations and expectations each of those interested people has for explanations. We argue that other dimensions should be considered in the XAI discussion [11][12] that are not covered by the available frameworks and methods [17].

We took Semiotic Engineering as the theoretical lens for this research [6]. We selected its conceptual tool called the SigniFYIng Message [5] to structure the different dimensions that we believe should be considered in the discussion of XAI scenarios. We performed an initial experiment using the SigniFYIng Message to structure XAI scenarios for a healthcare AI-System and to support the discussion between the AI-System's designer and an end-user's advocate. In this experiment, we propose adding the SigniFYIng Message to operationalize the discussion of XAI between the AI's designers and its end-users. We present a case with the IBM Team Essentials for AI framework, adding the SigniFYIng Message to aid the discussion in the Knowledge activity of the AI design process. What we learned during this experiment informs a future investigation with healthcare end-users, whom we could not reach at this moment due to the Covid-19 pandemic.

We took Semiotic Engineering [6], a comprehensive semiotic theory of Human-Computer Interaction, as the theoretical lens for this research due to its view of HCI as a particular case of computer-mediated human communication between designers and users at interaction time (Figure 1). The content of the message refers to how, when, where, and why the users can or should, themselves, communicate with the system in order to achieve goals and effects that are consistent with the designers'/developers' vision. With Semiotic Engineering as our theory, we focus on two roles each time we structure an XAI scenario for discussion: the AI-System's designer (representing the AI-System's development team) and the end-user of the scenario.

We selected the SigniFYIng Message [5] as our epistemological tool to structure the discussion of XAI scenarios. It is a conceptual structure for framing the content of exchanged messages so as not to lose track of what matters and why. The SigniFYIng Message is usually used to inspect software artifacts as part of other methods [5]. However, due to its epistemic nature, we believe it can be a valuable resource to structure XAI scenarios along the different dimensions and help operationalize the discussion about XAI. The SigniFYIng Message considers the five dimensions described in Table 1.

Our case scenario involves older adults with mobility difficulties who need a multidisciplinary health team, through a home care service, to handle chronic diseases in Brazil. It was described in a previous study [13] in which health professionals co-designed a chatbot named MarIANA to support caregivers of older adults with hypertension. We use the health professionals' context and real cases provided there as the basis for this initial experiment. The event that starts our experiment scenario is a notification from MarIANA to a healthcare professional. During his night shift, he receives a message from MarIANA: "I am 80% confident that you need to contact João's caregiver. His BP is exponentially high."
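As an illustration only, the sketch below shows one way such an XAI scenario could be captured along SigniFYIng Message dimensions. This is not the authors' actual artifact: the dimension names (Who, What, Why, How, When and Where) are assumed from the paper's figure captions and Table 1, and the class name, fields, and example values are hypothetical renderings of the MarIANA scenario described above.

```python
# A minimal sketch, assuming the five SigniFYIng Message dimensions suggested by the
# paper's figures (Who, What, Why, How, When and Where). Field names and example
# values are hypothetical illustrations, not the authors' filled SigniFYIng Message.
from dataclasses import dataclass
from typing import List


@dataclass
class SignifyingMessageScenario:
    """One XAI scenario framed along SigniFYIng Message dimensions."""
    who: List[str]        # roles involved in or affected by the explanation
    what: str             # the AI outcome/behavior that needs explaining
    why: str              # motivation and expectation behind the explanation
    how: str              # how the explanation is communicated at interaction time
    when_and_where: str   # moment and place in the practice where it appears


# Illustrative instance for the MarIANA notification event (hypothetical content).
mariana_scenario = SignifyingMessageScenario(
    who=["healthcare professional on night shift", "patient's caregiver", "AI-System designer"],
    what="MarIANA recommends contacting João's caregiver with 80% confidence (high BP).",
    why="The professional must decide whether to contact the patient's home immediately.",
    how="A dashboard reasoning statement showing the evidence behind the 80% confidence.",
    when_and_where="At the moment the notification arrives, on the MarIANA dashboard.",
)

if __name__ == "__main__":
    print(mariana_scenario)
```

Structuring the scenario as one record per dimension keeps the designer and the end-user's advocate looking at the same set of questions, which is the role the SigniFYIng Message plays in the discussion reported next.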
MarIANA sent a recommendation to the healthcare professional, who needs more information to decide whether he must contact the patient's home immediately and, if so, what the best orientation for the patient is. For this experiment, we built a reasoning statement (Figure 2) for the MarIANA dashboard. This statement is the input for the last activity of the IBM Team Essentials for AI framework, Knowledge, which we considered an interesting point at which to use the SigniFYIng Message to operationalize the discussion. Considering that reasoning statement, the AI-System's designer filled in the following SigniFYIng Message (Figures 4-8) to discuss with the end-user's advocate.

Even without the end-users themselves involved in the discussion (the experiment involved an end-user's advocate), the AI's designer perceived that he looked for a broader meaning of how the system could impact the end-user's decision-making cycle [11] in that scenario, considering the whole context for the XAI discussion.

[Figure 8: SigniFYIng Message - When and Where dimension]

Figure 3 illustrates part of the prototype generated after the discussion with the SigniFYIng Message. To tackle the XAI Mediation Challenges [19] and bring together technical and social meanings of AI applications, we need to structure and operationalize the understanding of all roles in the AI-System. The roles that are usually represented in design discussions should have frameworks and tools that keep them aware of the "big picture" of the AI-System under development and of all parties affected. What we learned during this experiment, including the filled SigniFYIng Message, will inform a future investigation with healthcare end-users, whom we could not reach at this moment due to the Covid-19 pandemic. However, the experiment's discussion already motivated MarIANA's designer to reassess decisions and make changes to the previous design.

References
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
Amazon: Amazon SageMaker Clarify - Detect bias in ML models and understand model predictions
Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
Explaining explanations in AI
The SigniFYI Suite. In: Software Developers as Users
The semiotic engineering of human-computer interaction
Accountability of AI Under the Law: The Role of Explanation
International Business Machines Corporation: IBM Fairness 360
International Business Machines Corporation: Team Essentials for AI Course
AI and HCI: Two fields divided by a common focus
Do ML Experts Discuss Explainability for AI Systems? A discussion case in the industry for a domain-specific solution
The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions
Co-designing Strategies to Provide Telecare Through an Intelligent Assistant for Caregivers of Elderly Individuals
TED: Teaching AI to explain its decisions. In: AAAI/ACM Conference on AI, Ethics, and Society
Explainability + Trust: Chapter Worksheet
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications
The New 42? In: Machine Learning and Knowledge Extraction
Explainable AI: Beware of Inmates Running the Asylum. Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
Explanation in artificial intelligence: Insights from the social sciences