title: Adopting Automated Bug Assignment in Practice -- A Registered Report of an Industrial Case Study
authors: Borg, Markus; Jonsson, Leif; Engström, Emelie; Bartalos, Béla; Szabó, Attila
date: 2021-09-28

[Background/Context] The continuous inflow of bug reports is a considerable challenge in large development projects. Inspired by contemporary work on mining software repositories, we designed a prototype bug assignment solution based on machine learning in 2011-2016. The prototype evolved into an internal Ericsson product, TRR, in 2017-2018. TRR's first bug assignment without human intervention happened in 2019. [Objective/Aim] Our exploratory study will evaluate the adoption of TRR within its industrial context at Ericsson. We seek to understand 1) how TRR performs in the field, 2) what value TRR provides to Ericsson, and 3) how TRR has influenced the ways of working. In addition, we will provide lessons learned related to the productization of a research prototype within a company. [Method] We design an industrial case study combining interviews with TRR developers and users with analysis of data extracted from the bug tracking system at Ericsson. Furthermore, we will analyze sprint planning meetings recorded during the productization. Our data analysis will include thematic analysis, descriptive statistics, and Bayesian causal analysis.

In large development projects, the continuous inflow of bug reports is a considerable challenge [4, 18]. The Bug Tracking System (BTS) is a central repository in contemporary software development organizations. There are two archetypal bug assignment processes, i.e., approaches to distributing bug reports to developers. First, as is common in Open-Source Software (OSS) communities, individual developers can select bug reports to resolve in a pull-based process. Second, a push-based process can be used in which a change control board or product manager assigns bug reports to either development teams or individual developers. In our research, we focus on the latter, i.e., push-based bug assignment to development teams.

Push-based bug assignment is normally done manually. However, several studies report that manual bug assignment is labor-intensive and error-prone [3, 15], resulting in "bug tossing" [1, 5, 17] and potentially slower bug resolution. Several researchers have proposed mitigating these challenges by automating bug assignment. The most common automation approach uses supervised Machine Learning (ML), i.e., a classifier is trained to find patterns in historical bug reports to make recommendations for new bugs.

Early research on automated bug assignment focused on OSS development communities, especially the Eclipse and Mozilla projects. However, the OSS context differs from proprietary development in several aspects, e.g., organizational structures and developer incentives. A recent study at LGE Brazil constitutes a rare example of an empirical study in a large company [20]. In 2016, we presented a controlled experiment on ML-based bug assignment using five datasets from two companies in telecommunications and process automation [16]. This study was the first step in an incremental design science research process [11]. Our findings in this controlled setting were positive and led to internal productization of a simplified version of the solution within Ericsson.
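To make the supervised ML approach concrete, the sketch below trains a text classifier that maps bug report descriptions to team labels. It is a minimal illustration in Python with scikit-learn and hypothetical data; TRR itself builds on ensemble learning on top of Weka [13, 16], so this is not the deployed implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: resolved bug reports and the teams that fixed them.
reports = [
    "Null pointer exception in session handover module",
    "GUI freezes when opening the alarm list view",
    "Throughput drops after upgrading baseband firmware",
]
teams = ["TeamRadio", "TeamUI", "TeamBaseband"]

# TF-IDF features over the report text, fed into a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reports, teams)

new_report = ["Handover fails intermittently under high load"]
print(model.predict(new_report))        # most likely team
print(model.predict_proba(new_report))  # confidence per team
```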
Since 2017, a team in Hungary has owned and maintained the solution, referred to as Trouble Report Routing (TRR). To align the terminology, we refer to bug reports as Trouble Reports (TRs) in the remainder of this report. We have previously reported lessons learned from deploying TRR in an anecdotal manner [8]. Furthermore, we conducted a quantitative analysis of the prediction accuracy of TRR's assignments [27]. In the latter paper, we concluded that the results were promising, but not yet accurate enough for everyday use in its intended target environment. We continued improving and customizing TRR and activated the solution in 2019 - the very first TR assignment without human intervention happened on April 10, 2019. Since then, TRR has been in continuous operation and has automatically routed roughly 30% of the incoming TRs.

We are now designing an industrial case study to evaluate the adoption of TRR within its industrial context. Our study will present new perspectives on automated bug assignment in proprietary contexts by moving beyond the prediction accuracy that has been the focus of previous work [20, 27]. Our study aims to provide insights regarding direct as well as indirect effects of deploying this research-based intervention in an operational setting. Thus, we will add empirical support for (as well as refinements of) the previously proposed technological rule [25]: To achieve more effective assignment of bugs to teams in large-scale industrial contexts, use ensemble-based machine learning to automate bug assignment. The initial proposal specifically recommended ensemble-based ML approaches. However, the deployed version needed adaptation to the context, and thus our starting point in this study is the more general technological rule: To achieve more efficient and effective assignment of bug reports to teams in large-scale industrial contexts, use machine learning to automate bug assignment.

This registered report constitutes our case study protocol, developed in line with guidelines by Runeson et al. [26]. A summary of the elements of the research design is presented below, which also reflects the structure of this report.

• Rationale: Evaluate the adoption of TRR within its industrial context. (Section 2.1)
• Purpose: Provide evidence on the industrial applicability of ML-based bug assignment. (Section 2.1)
• The case: Automated bug assignment using TRR at Ericsson. (Section 2.3)
• Units of analysis: The team maintaining TRR, engineers conducting TR assignments, and two teams using TRR with different levels of automation. (Section 2.3)
• Theory: The design science paradigm [25] constitutes the frame for our research, where the general technological rule is a proposition for efficient and effective bug assignment [16]. In this study, we add design knowledge related to the industrial adoption of this proposition. Our evaluation is guided by models related to quality [24], automation [21], and technology acceptance [10]. (Section 2.4)
• Research questions: Four RQs targeting different aspects of the adoption of TRR: 1) evolution from prototype, 2) prediction accuracy, 3) added value, and 4) direct and indirect effects. (Section 2.5)
• Data collection: Quantitative data from the BTS and TRR. Qualitative data from interviews, recorded sprint meetings, and internal documentation. (Section 3.1)
• Data analysis: Descriptive statistics complemented by causal Bayesian analysis [14]. Thematic analysis [9] to interpret the qualitative data. (Section 3.2)
• Quality assurance: Prolonged industry-academia collaboration to ensure relevance [12]. Rigor assured by method and researcher triangulation with member checking. (Section 4)

Figure 1: The context, case, and units of analysis.

We conduct interpretivist research, as the methods of natural science are insufficient for understanding the case in its social reality context [2]. Figure 1 illustrates the context, the case under study, and the units of analysis. As defined by Runeson et al. [26], "case study in software engineering is an empirical enquiry that draws on multiple sources of evidence to investigate one instance (or a small number of instances) of a contemporary software engineering phenomenon within its real-life context, especially when the boundary between phenomenon and context cannot be clearly specified." Since the adoption of TRR cannot be isolated from the development context at Ericsson, we design an industrial case study. Our study relies on a flexible design, i.e., the sampling, the data collection, and the data analysis involve components that rely on our evolving knowledge about the phenomenon. This registered report presents how we design for flexibility while maintaining rigor.

Our overall goal is to evaluate the adoption of TRR within its industrial context at Ericsson (cf. Figure 4). Several aspects motivate us to pursue this goal. First, we want to follow up on research that was initiated 10 years ago. How does the automated bug assignment solution actually perform in the field? Are the assignments provided by TRR sufficiently accurate to provide value in the industrial context? Do engineers at Ericsson appreciate the support provided by TRR? How has the introduction of TRR influenced the ways of working? Are there any surprising indirect effects that should be reported? As discussed in Section 1, there is a lack of industrial case studies sharing these types of insights. Second, we seek to provide insights regarding the industrial adoption of a research prototype. By conducting this study, we will highlight an example of industry-academia collaboration and technology transfer. The study will contain a retrospective analysis of the evolution from prototype to internal product. We will explore obstacles experienced in the productization and share lessons learned on how they were tackled in the industrial context. We expect our findings to be highly relevant for other software engineering researchers proposing new tools for use in proprietary contexts.

As illustrated in Figure 1, the context is software and systems engineering at Ericsson. Ericsson is a global actor in telecommunications. We characterize the context inspired by the facets proposed by Petersen et al. [23], focusing on the factors that we believe are the most relevant for our study.

Product: The products in the analysis consist of two large systems in the Information and Communications Technology (ICT) domain. Various programming languages are used in the products, but the majority of the code is developed in C++ and Java. Other languages, such as hardware description languages and tailored domain-specific languages, are also used. The two systems are mature with old code bases.

Processes: The project model used to develop both systems is an adapted agile development process. Development in the ICT domain is heavily standardized and adheres to standards by regulatory bodies such as 3GPP, 3GPP2, ETSI, IEEE, IETF, ITU, and OMA. Moreover, Ericsson is ISO 9001 and TL 9000 certified.
Practices and Techniques: The development projects use agile practices that have been customized for the organization, e.g., sprint planning meetings, retrospectives, self-organization, and test automation. The development projects are organized into two-week sprints followed by releases.

People: Staff turnover is very low in the development organization. Many of the engineers are senior developers who have been working on the same, or similar, products for many years.

Organization: Several hundred engineers distributed over several countries, e.g., Sweden, Hungary, China, and Canada. In total, Ericsson has 100,000 employees globally. The BTS is the central point for organizing the bug handling process. Tracking of analysis, implementation proposals, testing, and verification is all coordinated through the BTS.

Market: Both systems are deployed at customer sites worldwide in the ICT market. The telecommunications market is currently in a transition from the last generation of 4G networks to 5G. Software-oriented technology improvements increasingly enable flexible high-speed connectivity at ultra-low latency.

The case under study is automated bug assignment using TRR in its industrial context. Figure 2 shows how TRR has been integrated into the BTS at the company. Different organizational units submit TRs to the BTS (A). TRR, operating as a BTS plug-in, predicts which development team would be the most likely to resolve the bug and appends this information to the TR. If the prediction has a high confidence value, i.e., above a configurable threshold, the TR is automatically assigned to the corresponding team (B). If the confidence value is lower than the threshold, the assignment process relies on the normal manual approach by one of the TR coordinators (C). The manual approach encompasses a TR coordinator pulling a TR from the BTS, analyzing it (possibly guided by TRR's recommendation [6]), and pushing the TR to one of the development teams. Bug tossing entails reassignment of a TR to another team (D). Note that the phenomenon of bug tossing is not necessarily caused by an incorrect initial team assignment. On the contrary, it can be a required step when resolving complex bugs that necessitate changes by multiple teams.

TRR is currently in operation in a BTS used for the development of two major systems consisting of 19 subsystems. The subsystems are developed by corresponding virtual organizational units - in this study, we simply refer to them as "teams" for brevity. Nine of the 19 teams have opted in to receive TRs automatically assigned by TRR. For the remaining 10 teams, TRR only attaches its prediction to the TRs as a recommendation. According to Parasuraman et al.'s model of automation [21], automatic and recommended TR assignment correspond to automation level 8 and level 4, respectively. We plan to use the different levels of TRR automation for comparisons and refer to them as "HighAuto" and "LowAuto".

TRR has been maintained since 2017 by a team in Hungary, see "TRR Team" listed as a unit of analysis in Figure 1. Furthermore, we define three additional units of analysis. First, a development team that opted in as early adopters of HighAuto TRR, heavily involved in the transition from research prototype to operational tool (cf. "HighAuto Team"). Second, a development team that opted out from automatic routing, i.e., representing LowAuto TRR. Third, engineers who act as TR coordinators for teams using HighAuto TRR, using LowAuto TRR, or not using TRR at all (cf. "TR Coords."). TR coordinators have different roles within Ericsson, but perform TR assignment as part of their routine work.
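As a minimal sketch of the threshold-based routing decision described above (the field names and threshold value are assumptions for illustration; the actual TRR plug-in logic and configuration are Ericsson-internal):

```python
from dataclasses import dataclass

AUTO_THRESHOLD = 0.8  # assumed value; the real threshold is configurable in TRR

@dataclass
class Prediction:
    team: str
    confidence: float

def route_tr(tr_id: str, prediction: Prediction, team_opted_in: bool) -> str:
    """Decide between automatic assignment (automation level 8) and
    recommendation-only (automation level 4)."""
    if team_opted_in and prediction.confidence >= AUTO_THRESHOLD:
        # HighAuto: assign the TR to the predicted team without human intervention (B).
        return f"assign {tr_id} to {prediction.team}"
    # LowAuto or low confidence: attach the prediction as a recommendation
    # and leave the assignment to a TR coordinator (C).
    return f"recommend {prediction.team} for {tr_id}; route to TR coordinator"

print(route_tr("TR-1234", Prediction("TeamRadio", 0.91), team_opted_in=True))
print(route_tr("TR-1235", Prediction("TeamUI", 0.55), team_opted_in=True))
```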
The general problem of inefficient and ineffective bug assignment was observed in the literature [4, 18, 20] as well as in the specific industrial contexts where this research was conducted [17, 27]. With the solution in mind (to use ML techniques to assign TRs to teams), the characteristics of the targeted problem instance were identified, i.e., we explored the nature of the TRs, the BTS, and the organizational context within a subset of the development at Ericsson. Related work on bug classification as well as on ML techniques was identified [16], which underpinned the design decisions for the proposed solution. The ML solutions were implemented and trained using the Weka framework [13]. Several alternative solution instances were validated on real data (50,000 TRs) from five projects across two companies/domains. For the specific companies, a design artifact was produced, namely a prototype ensemble-based bug assignment tool built on top of Weka.

In our 2016 paper, we stated that the translation from TRR's prediction accuracy to the practical value of the solution might not be linear. Furthermore, we discussed this aspect in terms of the QUPER model [24], a theoretical construct describing the perceived benefits of different degrees of quality as continuous and non-linear. Figure 3 shows the three quality breakpoints proposed by the QUPER model for TRR:

• Utility: Engineers start considering TRR a useful addition to manual bug assignment.
• Differentiation: Engineers recognize that TRR provides a competitive advantage compared to fully manual work.
• Saturation: Increasing the quality of TRR beyond this point adds no practical value.

We will base our evaluation of the adoption of TRR on three theoretical models. First, we will revisit the QUPER model to assess where TRR belongs on the sliding quality scale. Second, as we also did in the original paper, we will discuss the increased level of automation using the model by Parasuraman et al. [21]. The latter model opens up for an analysis of both direct and indirect effects of increased automation. Third, we will study the Ericsson engineers' impressions of working with TRR from the perspective of the established Technology Acceptance Model (TAM) [10].

As visualized in Figure 4, the aim of the study is to evaluate the adoption of TRR within its industrial context. We have defined four main research questions, which may all be answered by applying both qualitative and quantitative methods. The lower part of the figure presents data sources and metrics, where the latter are indicated in bold font.

RQ1 How did TRR evolve from prototype to deployed tool?
RQ2 How accurate are the TRR assignments?
RQ3 How much value does TRR provide to the organization?
RQ4 How has the adoption of TRR influenced the ways of working?

RQ1 will be answered by studying the design decisions Ericsson engineers made along the way. How and why were potential adaptations to the original solution made? What were the major challenges during the tool introduction, including processes, technology, organizational issues, and human factors? The TRR team, one of four units of analysis, will share a collection of recorded virtual sprint meetings and internal documentation. Furthermore, we will conduct interviews to collect lessons learned.

RQ2 involves a quantitative analysis of TRR's prediction accuracy in light of our previous work [27].
Previously, we studied the accuracy relying on a set of roughly 10,000 TRs. Relying on easily accessible textual and categorical features, we obtained precision and recall values around 80%. As this was reported as insufficiently accurate for regular use, we proposed to only assign TRs for which the ML classifier was confident. We will now revisit the accuracy RQ to evaluate how TRR has performed in the field using historical data since the deployment in April 2019, including the fraction of TRs resolved by the first assigned team and the length of bug tossing chains.

RQ3 targets the utility of TRR and its added value in the organization. We will complement the insights provided by RQ2 with an analysis of TRR utilization, i.e., whether it has been available (uptime) and sufficiently confident to be effective (fraction of automatic TR assignments and distribution of confidence levels). Moreover, we will complement the analysis with qualitative insights from interviews with members of the HighAuto and LowAuto Teams and a sample of TR coordinators (cf. Figure 1). Section 3.1.2 presents how we will design interviews supported by theoretical models [10, 21, 24]. Analyzing differences between HighAuto TRR and LowAuto TRR will enable comparisons.

RQ4 explores the direct and indirect effects of introducing TRR in the organization. A tool never exists in isolation, i.e., the introduction of tool-oriented interventions ought to be studied from a holistic perspective. Among other things, we seek to understand what made certain teams opt in to HighAuto TRR whereas others preferred LowAuto TRR. Analogous to RQ3, RQ4 will be answered using a combination of quantitative metrics and rich information from interviews.

Figure 5 shows an overview of the execution plan. Section 3.1 describes how we will proceed with data collection during Q3-Q4 2021. Subsequently, Section 3.2 presents our approach to data analysis. Interviews will be conducted in Q3-Q4 2021 and the corresponding analysis will be concluded during Q1 2022. Quantitative analysis will be done during Q3-Q4 2021. Research synthesis will be initiated in Q1 2022 and the reporting will be concluded in Q2 2022.

The study relies on non-probability sampling [2], i.e., there will be no element of randomness when selecting items in the sampling frame. Instead, we will use a combination of purposive and referral-chain sampling to select interviewees. To mitigate selection bias, our initial set of interviewees will include engineers from different levels of the organization as well as with varying levels of adoption of TRR. Furthermore, we will add questions to the interview guide with the purpose of identifying people with complementary insights. For our artifact analyses, i.e., document analysis and mining software repositories, we will use whole-frame sampling.

The BTS is an important source of data that constitutes a valuable target for mining software repositories [6]. The BTS data contain details of TRs, e.g., assignments, submitters, severity levels, and time stamps. We plan to collect all data related to the development of the 19 teams that use either HighAuto TRR or LowAuto TRR (cf. A) in Figure 5). We will collect 2.5 years' worth of data, i.e., 2019-04-10 to 2021-10-10. Thus, we apply whole-frame sampling by selecting all items in the sampling frame [2]. TRR logs all its actions and output in the BTS. The logs primarily show the TRR predictions, i.e., the bug assignment output provided by the tool. Furthermore, the logs contain confidence levels accompanying the predictions. Finally, all TRR actions have individual time stamps.
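A minimal sketch of the whole-frame extraction and two of the planned measurements, assuming a tabular export of BTS data joined with TRR logs; all column names are hypothetical, as the actual schema is Ericsson-internal:

```python
import pandas as pd

# Hypothetical columns: tr_id, submitted, assigned_team, resolving_team,
# trr_confidence, auto_routed (0/1 flag), toss_count
bts = pd.read_csv("bts_export.csv", parse_dates=["submitted"])

# Whole-frame sampling: every TR submitted in the 2.5-year study window.
frame = bts[(bts.submitted >= "2019-04-10") & (bts.submitted <= "2021-10-10")]

# Fraction of TRs automatically routed by TRR (reported in relative numbers only).
auto_fraction = frame.auto_routed.mean()

# Fraction of TRs resolved by the first assigned team, and average tossing chain length.
first_hit = (frame.assigned_team == frame.resolving_team).mean()
avg_toss = frame.toss_count.mean()

print(f"auto-routed: {auto_fraction:.1%}, first-team resolved: {first_hit:.1%}, "
      f"avg tossing chain: {avg_toss:.2f}")
```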
3.1.2 Qualitative Data. We will select interviewees from the four units of analysis based on purposive sampling. Our goal is to identify the candidate interviewees who can provide the richest information, while also complementing the perspectives of the previous interviewees from a heterogeneity perspective, e.g., roles, background, site, age, and gender. As case study research allows a flexible design, we will complement the interviewee selection with referral-chain sampling. In practice, each interview session will conclude by asking the interviewee to refer other members of the population whom they believe would provide valuable perspectives on the adoption of TRR. The end of sampling will be determined by the level of saturation reached with respect to the initial coding in the thematic analysis.

We will develop an interview guide with some variation points for the four units of analysis. The interview sessions with the TRR Team, HighAuto Team, and LowAuto Team will focus on challenges, solutions, and opportunities related to the evolution of TRR from a research prototype to an internal tool at Ericsson. On the other hand, the interview sessions with the TR coordinators will primarily focus on the user experience and perceived value of TRR (corresponding to perceived ease of use and usefulness in TAM [10]). However, we anticipate that the interview questions will be intermixed in several interview sessions, i.e., we will perform semi-structured interviews. An initial overview of the interview guide is presented below:

(1) A formal introduction including the overall purpose, non-disclosure agreements, integrity, security, and research ethics.
(2) A brief description of the interviewee's current role and engineering background.
(3) (If applicable) Lessons learned related to evolving TRR into an internal tool.
(4) (If applicable) Perceived TRR value and ease of use (guided by TAM, as done for testing tools by Mezhuyev et al. [19]).
(5) (If applicable) Perceived TRR value in relation to its prediction accuracy (supported by descriptive statistics and the QUPER model [24], in line with our previous work on change impact analysis [7]).
(6) Reflections on direct and indirect effects of increasing the level of automation in bug assignment (guided by Parasuraman et al. [21]).
(7) Final comments and suggestions for additional interviewees.

Interview sessions are expected to last 30-60 min and will be conducted by at least two interviewers. All sessions will be held remotely using MS Teams, as Ericsson engineers will be working from home during 2021 due to the Covid-19 pandemic. The three arrows from C) to D) in Figure 5 illustrate how data collected from the four units of analysis enter the thematic analysis.

This subsection describes how we will analyze the BTS/TRR data and our approach to qualitative analysis.

3.2.1 Quantitative Data. We open the discussion on quantitative data analysis with an important disclaimer. Bug data is highly sensitive to any development organization. As a result, we will never be able to report any absolute numbers related to TRs. Instead, all bug counts will most likely be presented in relative numbers.

First, we will use the extracted data to calculate simple descriptive statistics for both HighAuto TRR and LowAuto TRR (cf. the leftmost arrow from A) in Figure 5). The descriptive statistics will be used as input to the interview sessions (cf. the arrow from B) to C) in Figure 5).
Second, we will iteratively extract data and conduct the corresponding analysis. The cycle between A) and B) in Figure 5 highlights that this activity is partly exploratory, i.e., we expect to find new research angles as we become more familiar with the data. Our initial list of metrics, also presented in Figure 4, is presented below:

(1) Uptime will be estimated by calculating the fraction of TRs with missing TRR predictions. TRR is deployed as a BTS plug-in rather than a separate web service; thus, we will estimate TRR service outages through missing output.
(2) Fraction automatically routed will be calculated from the TRR logs. Ericsson estimates that a manual TR assignment takes 2 min on average, i.e., we will naively report TR coordinators' potential time savings.
(3) Distribution of confidence levels for the TRR predictions will be collected from the TRR logs. The confidence level is fundamental, as it must surpass a certain threshold to allow automated assignments.
(4) Fraction of TRs resolved by the assigned team will be calculated by combining BTS data and TRR logs. This represents the ideal case, i.e., the team assigned the TR also resolved it.
(5) Average length of bug tossing chains shows the number of TR reassignments. This measure is commonly reported in studies on automated bug assignment [15, 28].
(6) Average time to assign TRs will be calculated from the BTS data, i.e., the average time between TR submission and assignment.
(7) Full days saved is an in-house metric used by Ericsson for initial evaluations of TRR. TR coordination meetings, i.e., manual TR assignment, are scheduled Mon-Fri in the morning hours CET. If a new TR is submitted shortly after this meeting, it would not be assigned until the meeting on the next weekday - TRR could then potentially save a full day.

While we intend to compare the metrics for HighAuto TRR and LowAuto TRR, we will be conservative about making causal claims. There are two approaches to drawing causal conclusions: Randomized Controlled Trials (RCTs) and Bayesian Causal Analysis (BCA) [14]. These are mathematically proven to be equivalent [22]. Unfortunately, an RCT is not a viable option in the organization. To compensate, we will perform a BCA to try to detect a causal effect of opting in to HighAuto TRR vs. opting out (LowAuto TRR). However, we are aware that our qualitative analysis might reveal unmeasurable confounding factors that invalidate HighAuto vs. LowAuto comparisons in our case under study. We will quantify these in a graphical model and measure the sensitivity to model noise and model misclassification as part of the BCA workflow (a sketch of the adjustment involved is given below). Moreover, the comparative descriptive statistics will stimulate discussions during the interview sessions in relation to the qualitative analysis of RQ3, i.e., the value of TRR within Ericsson.
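As an illustration of the adjustment underlying such a BCA, assume a graphical model in which the opt-in decision $X$ (HighAuto vs. LowAuto), an outcome $Y$ (e.g., time to assign a TR), and a set of measured confounders $Z$ (e.g., product area and team seniority; these example variables are our assumptions, not established results) satisfy the backdoor criterion [22]. The causal effect is then estimated as:

```latex
\begin{align}
P\bigl(Y \mid \mathit{do}(X = \text{HighAuto})\bigr)
  &= \sum_{z} P\bigl(Y \mid X = \text{HighAuto},\, Z = z\bigr)\, P(Z = z)\\
\text{ATE} &= \mathbb{E}\bigl[Y \mid \mathit{do}(X = \text{HighAuto})\bigr]
            - \mathbb{E}\bigl[Y \mid \mathit{do}(X = \text{LowAuto})\bigr]
\end{align}
```

If an unmeasured confounder invalidates the backdoor criterion, this estimate is biased, which is why the protocol includes sensitivity analysis to model noise and misclassification.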
3.2.2 Qualitative Data. To answer RQ1, RQ3, and RQ4, we will analyze documentation, recorded sprint meetings, and interview transcriptions. We will iterate over the five steps of thematic analysis as described by Cruzes and Dybå [9]: 1) extract relevant data, 2) code the extracted data, 3) translate codes into themes, 4) create a model based on the themes, and 5) validate the synthesis. Since the qualitative analysis is exploratory, we do not expect this procedure to be a strict waterfall process; steps may be iterated and sometimes merged. An initial coding could, for example, be done while extracting the relevant data (merging steps one and two) and may need iteration if new codes emerge during the step. For RQ1, our starting point is to identify and code information regarding design decisions made when implementing and deploying TRR, while for RQ3 and RQ4 our starting point will be to code effects (direct and indirect) of adopting TRR.

For RQ1, which is both descriptive and prescriptive, we expect codes and themes to, first, bring insights into which refinements of the general technological rule may be relevant for a practitioner selecting an automation strategy and for a researcher interested in investigating the topic further, and, second, to guide the positioning of technological rules (prescriptions). Refinements will be expressed in terms of extended taxonomies for the three facets of the technological rule: context, scope, and intervention. These taxonomies will then provide a basis for proposing refined technological rules. In addition, codes and themes that are more context-specific (and thus not good candidates for a general theory) will be used to describe the problem instance (our case under study) to support analytical generalization and assessment of the empirical support this case brings to the proposed technological rules. For RQ3 and RQ4, which are more descriptive, we expect a more complex model of various effects and their internal relationships. Finally, we will assess our interpretations by testing the taxonomies, the technological rules, and the effect model on the study participants.

The value of design science research may be assessed from three different perspectives [25], i.e., its relevance, its novelty, and its rigor. The design knowledge gained from this research is relevant for practitioners facing the challenge of manually assigning bugs to teams, and for researchers studying industrial adoption of ML approaches for automated bug assignment. Relevance is a subjective value [12], and to support its assessment we will identify and report the context factors that affect the applicability and observed effects of the proposed intervention. Furthermore, the design knowledge is novel in terms of the increased maturity of the general technological rule and in proposing refined rules with respect to the scope of validity and the effects of adoption. Rigor will be achieved by following this preregistered case study protocol and by transparently reporting all steps of interpretation in the qualitative analysis. Rigor may in turn be assessed in terms of construct validity, internal validity, and reliability. As we design a single case study, pure statistical generalization will not be possible. External validity is instead covered by the discussion on relevance above.

Construct Validity. Since we will conduct an exploratory study, not all constructs will be known upfront. Our high-level constructs, such as "value" and "ways of working", will be refined in the qualitative analysis. The metrics proposed in Figure 4 represent our initial assumptions of how to measure these aspects. We expect the qualitative analysis to reveal additional metrics. To increase the final construct validity, study participants will be asked to assess our interpretations. Finally, we acknowledge that the constructs of TAM have been criticized for being too trivial to result in practical research results. To mitigate this, we will complement TAM's analysis of usefulness with the quality levels provided by QUPER.
Internal Validity. As discussed in Section 3.2.1, we will not be able to perform a randomized controlled trial to prove causal relationships within Ericsson - we cannot disable HighAuto TRR for a random subset of teams. As we have to deal with the complexity of in vivo research, we aim to conduct a causal Bayesian analysis instead. Still, we will be careful when proposing any causal relationships. To increase the validity of the propositions, confounding factors need to be identified and reported. Some confounders are already known, e.g., product details, organizational structure, and process adaptations, whereas others will emerge from the qualitative analysis.

Reliability. This aspect of rigor concerns to what extent the analysis depends on the specific researchers. We will mitigate threats to reliability through researcher and method triangulation [26]. Additional measures include prolonged involvement, i.e., the long-term relations evolving during the study will support reliable interpretations, and member checking, i.e., participants of the study will validate both data collection and analysis.

References

[1] Reducing the effort of bug report triage: Recommenders for development-oriented decisions.
[2] Sampling in software engineering research: A critical review and guidelines.
[3] A bug you like: A framework for automated assignment of bugs.
[4] Duplicate bug reports considered harmful... really?
[5] Automated, highly-accurate, bug assignment using machine learning and tossing graphs.
[6] Changes, evolution, and bugs.
[7] Supporting change impact analysis using a recommendation system: An industrial case study in a safety-critical context.
[8] Industry-academia collaboration in software engineering.
[9] Recommended steps for thematic synthesis in software engineering.
[10] Perceived usefulness, perceived ease of use, and user acceptance of information technology.
[11] How software engineering research aligns with design science: A review.
[12] Practical relevance of software engineering research: Synthesizing the community's voice.
[13] The WEKA data mining software: An update.
[14] Causal inference: What if.
[15] Improving bug triage with bug tossing graphs.
[16] Automated bug assignment: Ensemble-based machine learning in large scale industrial contexts.
[17] Towards automated anomaly report assignment in large complex systems using stacked generalization.
[18] Towards the next generation of bug tracking systems.
[19] The acceptance of search-based software engineering techniques: An empirical evaluation using the technology acceptance model.
[20] Issue auto-assignment in software projects with machine learning techniques.
[21] A model for types and levels of human interaction with automation. Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans.
[22] Causality: Models, reasoning, and inference.
[23] Context in industrial software engineering research.
[24] Supporting roadmapping of quality requirements.
[25] The design science paradigm as a frame for empirical software engineering.
[26] Case study research in software engineering: Guidelines and examples.
[27] Improving bug triaging with high confidence predictions at Ericsson.
[28] Empirical study on developer factors affecting tossing path length of bug reports.