authors: McCoy, Allison B; Russo, Elise M; Johnson, Kevin B; Addison, Bobby; Patel, Neal; Wanderer, Jonathan P; Mize, Dara E; Jackson, Jon G; Reese, Thomas J; Littlejohn, SyLinda; Patterson, Lorraine; French, Tina; Preston, Debbie; Rosenbury, Audra; Valdez, Charlie; Nelson, Scott D; Aher, Chetan V; Alrifai, Mhd Wael; Andrews, Jennifer; Cobb, Cheryl; Horst, Sara N; Johnson, David P; Knake, Lindsey A; Lewis, Adam A; Parks, Laura; Parr, Sharidan K; Patel, Pratik; Patterson, Barron L; Smith, Christine M; Suszter, Krystle D; Turer, Robert W; Wilcox, Lyndy J; Wright, Aileen P; Wright, Adam
title: Clinician collaboration to improve clinical decision support: the Clickbusters initiative
date: 2022-03-04
journal: J Am Med Inform Assoc
DOI: 10.1093/jamia/ocac027

OBJECTIVE: We describe the Clickbusters initiative implemented at Vanderbilt University Medical Center (VUMC), which was designed to improve safety and quality and reduce burnout through the optimization of clinical decision support (CDS) alerts.

MATERIALS AND METHODS: We developed a 10-step Clickbusting process and implemented a program that included a curriculum, CDS alert inventory, oversight process, and gamification. We carried out two 3-month rounds of the Clickbusters program at VUMC. We completed descriptive analyses of the changes made to alerts during the process, and of alert firing rates before and after the program.

RESULTS: Prior to Clickbusters, VUMC had 419 CDS alerts in production, with 488 425 firings (42 982 interruptive) each week. After 2 rounds, the Clickbusters program resulted in detailed, comprehensive reviews of 84 CDS alerts and reduced the number of weekly alert firings by more than 70 000 (15.43%). In addition to the direct improvements in CDS, the initiative also increased user engagement and involvement in CDS.

CONCLUSIONS: At VUMC, the Clickbusters program was successful in optimizing CDS alerts by reducing alert firings and resulting clicks. The program also involved more users in the process of evaluating and improving CDS and helped build a culture of continuous evaluation and improvement of clinical content in the electronic health record.

Early reports of adverse drug events among hospitalized patients have resulted in substantial research into the use of clinical decision support (CDS) to prevent patient harm and reduce costs. 1-3 Further, federal regulations required institutions to enable drug-drug and allergy interaction checks, implement high-priority condition rules, and track CDS compliance. 4, 5 Alerts, a commonly used approach to CDS, can notify clinicians of interactions, changing lab values, or other information. 6, 7 CDS alerts are included in all major commercial electronic health records (EHRs); a large 2015 survey found that 95% of respondents had implemented drug-allergy, drug-drug, or drug-laboratory interaction alerts. 8

Despite initial promise of success and widespread CDS implementations, evaluations of CDS have not consistently demonstrated improved patient outcomes. 9-12 Alert nonadherence, or clinician overrides, is one key barrier that occurs for 49%-96% of alerts. 13-23 Overrides may result from an excess of alerts that are repeated or deemed irrelevant (ie, a high alert burden), causing alert fatigue and reducing the efficacy of CDS. 13, 24
Prior research has found that there are opportunities for improving or turning off CDS alerts to improve patient outcomes. 25-28 Similarly, to facilitate alert evaluations, institutions have implemented various dashboards that organize and present information from the EHR in a way that is easy to interpret. 29-34 Commercial products also exist to provide tools for evaluating CDS alerts. 35-37 Although some organizations have successfully implemented these tools and reduced the number of alerts and overrides, 38 other reports have found that many organizations are not following these recommendations, in part due to a lack of consensus about whether to turn off or modify alerts, or how to modify them. 27, 28, 39

Key challenges with clinical decision support include knowledge management, as guidelines and best practices change over time, 40-42 and governance. 43-45 Although both knowledge management and governance practices are essential, they can, at times, become unwieldy and slow progress, so balancing them with the need to innovate and improve is critical.

The Vanderbilt Clinical Informatics Center (VCLIC) established the Clickbusters initiative to improve safety and quality and reduce burnout through optimization of Vanderbilt's Epic EHR. The first project Clickbusters focused on was improving CDS alerts, including turning unnecessary alerts off, fixing errors, targeting alerts more precisely, and even creating new alerts that were effective and well-targeted.

Like many organizations, Vanderbilt University Medical Center (VUMC) previously established a "Physician Builder" program, where interested clinicians can get training on EHR customization and build content in their area of expertise. 46 Despite the name, the program is also open to other appropriately trained professionals, including nurses, pharmacists, physician assistants, nurse practitioners, therapists, and clinical informaticians. VUMC has a large and effective Physician Builder program with 70 participants. These builders were trained and certified to build and maintain clinical content and functions in our Epic EHR. While they had developed documentation tools, order sets, reports, and CDS alerts, they did not focus on optimizing existing alerts.

Clickbusters started with VUMC's existing Physician Builder program and built on this foundation by establishing a 10-step Clickbusting process and creating the Clickbusters program. The program included a curriculum with videos and knowledge base articles, a management process and database to support tracking of participant progress, support for participants, and gamification. We also created an inventory of our CDS alerts and prioritized them for review based on firing rate, acceptance rate, and complexity.

A core aim of the Clickbusting process is to understand how and whether individual CDS alerts have improved. The process has iterative cycles and multiple steps that align with the Plan-Do-Study-Act (PDSA) model for quality improvement research. 47-49 The iterative Clickbusting process with 10 discrete steps is depicted in Figure 1. We review these steps within the PDSA model in detail below.

Step 1: Review the current alert logic and function

The first step in analyzing an alert is reviewing what the current alert looks like, when and why it fires, and what actions or acknowledgments are implemented.
Some individuals, such as health information technology (HIT) analysts or builders, may be able to view this information directly in the production or build environments of the EHR. Others may request this information from HIT analysts.

Step 2: Review the alert firing and acceptance data

After reviewing the alert logic and function, it is helpful to review the alert's performance. When available, analytics dashboards can facilitate this process. At VUMC, a Tableau (Seattle, WA) dashboard allows individuals to visualize firing and acceptance patterns for a specific alert by department, specialty, user type, and other variables (Figure 2). 50 HIT analysts can also help extract this data from EHR utilization logs.

Step 3: Review issue/project tracking software for history of the alert

To learn what work has been done on the alert previously, and who has been involved, it can be helpful to review records in knowledge management, issue, or project tracking software, if available. At VUMC, the CDS team uses Atlassian Jira (Sydney, Australia) to track the history of alerts and identify team members involved in the creation or maintenance of an alert. Other organizations might use Microsoft SharePoint or another tool to track this data, and they may also keep metadata directly in the EHR. Reviewing this data can help clarify design decisions, rationale, and past changes, and also find people who were previously involved in the alert's design and maintenance for input.

Step 4: Look at the alert through the eyes of the user

Because alerts affect users, it is critical to look at the alert through the eyes of a user. One way to approach this is to review a random sample of alert firings, guided by the CDS Five Rights 51 and human-centered design principles, especially cases where the alert was not accepted. Another approach is to review comments left by users during overrides. 52 This information about why the user felt that the alert did not apply can be used to form a hypothesis for how the alert could be improved. If available, these comments can be viewed using an analytics tool, within the patient's chart, or with the help of an informatics team member. Finally, it can be helpful to contact top recipients of an alert. A sample message that can be sent directly to frequent recipients to ask for feedback is included in the Supplementary material. Contacting users by e-mail individually and customizing the text, instead of sending a generic message or using a survey tool, is more likely to yield helpful feedback.

Step 5: Review the clinical evidence

Another key step in improving alerts is reviewing the clinical evidence. Since alerts are general tools, usually a general review of evidence will suffice. Resources such as UpToDate (Wolters Kluwer) 53 can be a starting point, providing relevant summaries and guidelines for many alerts. Directly reviewing clinical guidelines, related quality measures, and recent literature relevant to the alert topic, or utilizing institutional library sources, is also an option. Finally, it is often useful to consult local experts on the topic; however, it is important to balance this input with input from users who will receive the alert. For example, a radiologist, breast surgeon, or oncologist's advice about breast cancer screening should be paired with input from users (most often primary care providers) who are usually the ones to order screening.
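Before moving on to possible improvements, the data review described in Steps 2 and 4 can be made concrete with a short Python sketch. It computes per-alert firing counts, interruptive counts, acceptance rates, and a sample of override comments from a flat extract of alert firing logs; the extract format, field names, and example rows are illustrative assumptions rather than VUMC's actual log schema or the Epic data model.

    import random
    from collections import defaultdict

    # Hypothetical flat extract of alert firing logs; field names are assumptions.
    firings = [
        {"alert_id": "FLU_VACCINE", "interruptive": True, "accepted": False,
         "override_comment": "Patient vaccinated elsewhere", "user_id": "u1"},
        {"alert_id": "FLU_VACCINE", "interruptive": True, "accepted": True,
         "override_comment": "", "user_id": "u2"},
        {"alert_id": "FLU_VACCINE", "interruptive": False, "accepted": False,
         "override_comment": "", "user_id": "u3"},
    ]

    # Aggregate firing, interruptive, and acceptance counts per alert (Step 2).
    stats = defaultdict(lambda: {"total": 0, "interruptive": 0, "accepted": 0, "comments": []})
    for firing in firings:
        entry = stats[firing["alert_id"]]
        entry["total"] += 1
        entry["interruptive"] += int(firing["interruptive"])
        entry["accepted"] += int(firing["accepted"])
        if firing["override_comment"]:
            entry["comments"].append(firing["override_comment"])

    # Report acceptance rates and sample override comments for review (Step 4).
    for alert_id, entry in stats.items():
        rate = entry["accepted"] / entry["interruptive"] if entry["interruptive"] else 0.0
        sample = random.sample(entry["comments"], k=min(5, len(entry["comments"])))
        print(alert_id, entry["total"], entry["interruptive"], f"{rate:.1%}", sample)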
Step 6: Identify possible improvements to the alert

Before trying to improve the alert, it is first necessary to determine whether the alert should stay in the system at all. It is important to consider whether the alert is about an important topic (especially if it was built some time ago), whether there is a better way to achieve the goal, and whether there is evidence that the alert is prompting the intended outcome.

Often, evaluations of alerts identify defects or areas for optimization, and this information can be used to alter the alert logic. Options for doing so can include correcting logic or build errors; changing the trigger, timing, or lockout period of the alert; and suppressing the alert for patient-specific, user-specific, or situational factors that might make the alert inappropriate. It might also be possible to alter the alert to get users to act on it more often. Options for doing this include adhering to good human-centered design principles, such as clarifying the text of the alert to specify why it fired, the expected action, and relevant clinical data; making the alert actionable; and firing at different times in the workflow or for different users.

Step 7: Discuss possible improvements with stakeholders

Once opportunities to improve or turn off the alert have been identified, it is important to discuss these potential changes with key leaders and stakeholders in both clinical and informatics roles. Identifying clinical champions in the clinical area the alert affects can be useful. When reaching out to these individuals, it can first be useful to share data showing that the alert is not being accepted, is overly burdensome, is firing in inappropriate situations, is not having the intended effect on clinical outcomes, or is otherwise not working as designed, as this information can help make the case for the necessary changes. Finally, it is important to remember that the status quo is not necessarily safe. There are tools and processes that facilitate rapid improvement of alerts, monitoring of the effects, and reverting of changes, so it may be possible to pilot a change or new version of the alert for a short time or with a pilot group of users while monitoring for unexpected effects. This can also be a good time to obtain feedback from end users about the design of the alert. After this process, it should be possible to arrive at a decision about the changes to be implemented.

Step 8: Make changes to the alert in the development environment

After determining which, if any, changes need to be made to the alert, the next step is to make the changes. Most often, changes are made in a development or build environment. When necessary, HIT analysts can facilitate or help with this process. It is good practice to make one or a few changes at a time, with multiple releases if necessary, and to use available versioning tools to preserve history in the event that a rollback is necessary.

Step 9: Test and release the alert into the production environment

It is important to test alerts after making changes in the build or test environment, prior to releasing the alert into production. 54 At this phase, testing should focus on whether the alert handles expected special cases correctly. In general, the goals of testing are to ensure that the alert shows up when it should, does not show up when it should not, looks as expected, and offers appropriate actions that perform as expected. It can also be helpful to have peers review the modifications and provide feedback.
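As one illustration of this kind of testing, the short Python sketch below checks a simplified, hypothetical stand-in for an alert rule against a handful of "should fire" and "should not fire" cases. Real BPA criteria live in the EHR build and are far more complex; the rule, patient fields, and test cases here are assumptions for demonstration only.

    # Hypothetical stand-in for an alert rule; not the actual VUMC BPA logic.
    def bronchiolitis_alert_fires(patient):
        """Fire for children under 2 years old with a bronchiolitis diagnosis
        and a pending chest x-ray order (illustrative criteria only)."""
        return (patient["age_years"] < 2
                and "bronchiolitis" in patient["diagnoses"]
                and "chest_xray" in patient["pending_orders"])

    test_cases = [
        ("should fire: infant with bronchiolitis and chest x-ray ordered",
         {"age_years": 1, "diagnoses": {"bronchiolitis"}, "pending_orders": {"chest_xray"}}, True),
        ("should not fire: older child",
         {"age_years": 6, "diagnoses": {"bronchiolitis"}, "pending_orders": {"chest_xray"}}, False),
        ("should not fire: no imaging order pending",
         {"age_years": 1, "diagnoses": {"bronchiolitis"}, "pending_orders": set()}, False),
    ]

    for description, patient, expected in test_cases:
        result = "PASS" if bronchiolitis_alert_fires(patient) == expected else "FAIL"
        print(result, "-", description)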
Monitoring should also be carried out after release into production, providing additional opportunities to identify problems. Once the alert has been tested, it can be moved to the production environment, usually with the help of a HIT analyst.

Step 10: Evaluate and share the results

The final step is evaluating the results of the change and sharing the results. On the day the alert is released, first ensure that the change really made it into production. It is then important to monitor the alert's performance; big changes may have an immediate effect, but smaller changes might not be noticeable right away. It can also be useful to review override comments entered by users and to reach out to users who provide feedback. Repeating surveys and talking to users who receive the alert to see if there is a noticeable difference can also be valuable. It is also important to monitor for effects on related clinical quality measures. Finally, partnering with an academic informatics or quality improvement department to more fully evaluate and disseminate the results can be beneficial. When an alert is improved, sharing this through academic channels is helpful, as it is likely that other organizations have a similar alert and could benefit from the experience.

Developing the curriculum

We first assessed the current knowledge of our Physician Builders, who would be the first group of Clickbusters participants. We learned that while they had all received some training from Epic on alert build, most had not used the related tools since their training, and they felt that they needed a refresher. To help with this, we developed a series of videos (one for each step of the Clickbusting process), which demonstrated how to use the tools necessary to carry out each step, and we identified links and references to additional sources of help. We also created a collaborative wiki for Clickbusting documentation using the Confluence platform, and we prepared articles that would help Clickbusters complete their work, such as tutorials on using groupers or value sets and creating test patients.

After developing the curriculum, we retrieved data about all alerts in production to create an inventory of alerts to be targeted for improvement; alerts were excluded if they were not displayed to users or if they were already targeted for changes by the operational CDS team. We created groups for related alerts, such as all alerts targeting influenza vaccination or pediatric hypertension in the ambulatory setting. We retrieved firing data about all alerts, including the total number of firings, interruptive firings, and acceptances, as well as the specialties, provider types, and individuals who received the alerts most often. We also retrieved metadata about all alerts, including building specialty, responsible HealthIT team, builder, and subject matter expert.

To help participants select alerts to target, and to facilitate scoring, we created 2 scores: burden and complexity. To calculate an alert's burden, we summed the total number of firings and 10 times the number of interruptive overrides, then assigned a rank between 1 and 10 according to the total score. For example, the group of alerts targeting influenza vaccinations had 39 740 total firings, 7394 interruptive firings, and 6681 interruptive overrides for a total score of 106 550, the 11th highest out of 141, and a burden rank of 10.
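In code form, the burden calculation just described amounts to the short sketch below. The formula reproduces the figures reported for the influenza vaccination group; how total scores were mapped to 1-10 ranks is not specified in the text, so the sketch stops at the score itself.

    # Burden score as described above: total firings plus 10 times the number
    # of interruptive overrides.
    def burden_score(total_firings, interruptive_overrides):
        return total_firings + 10 * interruptive_overrides

    # Influenza vaccination group figures from the text: 39 740 + 10 * 6 681.
    print(burden_score(39_740, 6_681))  # 106550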
The alert targeting pediatric bronchiolitis had 518 total firings, 518 interruptive firings, and 388 interruptive overrides for a total score of 4398, the 58th highest out of 141, and a burden rank of 6.

To calculate an alert's complexity, we summed the number of alerts in the group, the number of logic statements in the build, the number of available actions, and other build restrictions, and again assigned a rank between 1 and 10 according to the total score. For example, the group of alerts targeting influenza vaccinations contained 6 alerts, which included dynamic display text (ie, an Epic SmartLink instead of static display text), a provider feedback link, 9 linked criteria records with 1 additional exclusion filter and 14 inclusion filters, 9 acknowledgment options, and 1 follow-up order, for a total score of 216, the 4th highest out of 141, and a complexity rank of 10. The alert targeting pediatric bronchiolitis included dynamic display text, a provider feedback link, 3 linked criteria records with 5 additional exclusion filters and no inclusion filters, and no acknowledgment options or follow-up orders, for a total score of 10, the 72nd highest out of 141, and a complexity rank of 5.

We created wiki pages for each alert, as well as a separate wiki page that listed all of the alerts and relevant information to track participation. Figure 3 depicts the project tracking wiki page, and Figure 4 depicts an individual alert wiki page. A table of all alert groupings with metadata, as well as burden and complexity scores and ranks, is included in the Supplementary material.

Recruiting participants is a critical component of the Clickbusters program. Participants most likely to succeed will have an understanding of CDS alerts in general, although a more thorough understanding of specific alert build requirements and processes can also be helpful. Knowledge of the relevant clinical workflow and targeted improvements is also valuable. Individuals who participate in clinician builder or similar programs can be ideal participants. Clinician champions, clinicians with training or certification in informatics, or even clinicians with an interest in improving the EHR may also be successful in reviewing and improving alerts. HIT analysts and informaticists play an important role working with participants to review or complete builds when necessary, and they can also individually participate in the program.

To recruit participants, we e-mailed relevant groups (eg, the e-mail list of Physician Builders and members of VCLIC) and shared announcements in newsletters (eg, the VCLIC and HealthIT newsletters). We also delivered presentations to these groups about the program, the effort involved, and incentives for participation. Finally, we sent tailored e-mails to specific individuals who were more likely to successfully participate given prior involvement in similar initiatives. After individuals agreed to participate, they chose an alert to bust from the list of targeted alerts on the wiki. We found that it was helpful to suggest alerts that might be relevant, to make the selection process less overwhelming for some participants; for example, we frequently suggested alerts that fired most often for the participant's specialty or role.

Oversight

VCLIC, including our project manager and informatics faculty, provided as much support as participants desired as they moved through the Clickbusting process.
Following recruitment and selection of an alert or set of alerts to analyze, our staff reached out via e-mail, assigning the participant to the appropriate wiki page with details regarding the alert. Next, the participants received additional alert firing data and access to the Tableau dashboard for improved visualization of this data, and they were connected with the alert builder and clinical subject matter expert. Finally, staff provided participants with a blank Clickbuster Worksheet to fill out, pointed them to the Clickbusting procedure and additional help articles on the wiki, and set up a meeting a few weeks into the process to help answer questions and ensure they were proceeding appropriately. Our staff also created project tracking documentation for the analysis of the alert, whether changes were made or not, and noted how often we got in touch with participants regarding their progress. We generally reached out to individual participants every other week to make sure they had everything they needed and were working through the process without too many challenges.

To incentivize participation, we gamified the process, including establishing a point system and leaderboard and awarding prizes. Participants received up to 15 points for each alert improved through Clickbusting: 4 points for analysis, 2 points for design, 4 points for build, and 5 points for evaluation. This score was then multiplied by the sum of the burden and complexity determined during the alert identification process to yield the final number of points. For example, if a participant completed the analysis, design, and build, but no evaluation, of an alert with a burden score of 10 and a complexity score of 8, the score would be 180 (see equation below).

points = (4 + 2 + 4 + 0) × (10 + 8) = 10 × 18 = 180

The 3 individuals with the most points received a custom-designed "golden mouse" trophy. The first-place winner also received a $250 Amazon gift card, the second-place winner received a $150 Amazon gift card, and the third-place winner received a $100 Amazon gift card. We also selected winners to receive a ribbon and a $50 Amazon gift card for the most innovative improvement, the most clicks reduced, and judge's choice. Individuals were not eligible to receive awards in consecutive rounds of the program. All participants received framed certificates and the option of having Clickbusters leadership send a letter of commendation to their department chair.

Study setting and methods

VUMC is an academic, tertiary care medical center in Nashville, TN, with a 1000-bed general medical and surgical facility and nearly 2 000 000 patient visits annually at 120 clinic and outpatient sites. Since November 1, 2017, providers at VUMC have used the Epic EHR (Verona, WI). The implementation included multiple alert types, including "BestPractice Advisories" (BPAs) provided by Epic and VUMC. VUMC has a long history of developing custom CDS, and efforts were made prior to go-live to include VUMC-specific alerts and to curate and customize Epic content.

We conducted 2 rounds of Clickbusters, which lasted about 3 months each. For each round, we invited Physician Builders, clinical directors, clinician champions, and other individuals interested in clinical informatics by e-mail to participate in the program. Participants selected one or more alerts and followed the 10-step Clickbusting process described previously. We performed descriptive analyses using logs of all alerts displayed to EHR users to assess the burden of the alerts on the users.
We reviewed the total number of alerts displayed, the number of interruptive alerts, and the number of alert acceptances before Clickbusters and after each round. To control for confounding changes to the alerts (eg, seasonal variation, unrelated alert modifications, and COVID-19 pandemic-related changes), we determined the proportional reduction in firings after each round for alerts not modified during Clickbusters and credited only the corresponding remainder of the reduction for modified alerts (eg, if unmodified alerts fired 20% less often after Clickbusters, we counted the number of clicks actually busted as 80% of the absolute difference in firings for modified alerts).

Prior to beginning the Clickbusters initiative, VUMC had 419 BPA alerts in production, with 488 425 firings (42 982 interruptive) each week. Among these, 1.5% of total alerts and 9.1% of interruptive alerts were accepted. The alerts were placed into 141 logical groups (eg, alerts related to suicide screening were grouped together). For the rankings, the number of total and interruptive firings contributed most to the burden, and the number of alerts in the group contributed most to the complexity (Supplementary material).

We conducted 2 rounds of the Clickbusters program: one from March through May 2020, and the other from June through September 2020. In the first round, 8 participants selected 18 alert groups (29 total alerts) for Clickbusting. This round resulted in 13 alerts modified and 4 alerts turned off, with 49 026 weekly clicks busted (10.38%). In the second round, 20 participants selected 24 alert groups (55 total alerts) for Clickbusting, resulting in 29 alerts modified, 6 alerts turned off, and 22 201 weekly clicks busted (5.05%) (Table 1).

The first-, second-, and third-place award recipients earned 969, 382, and 220 points in the first round and 458, 140, and 120 points in the second round, respectively. The recipients of the most clicks reduced awards in Rounds 1 and 2 busted nearly 8000 and 700 daily clicks, respectively. The most innovative awards were given for narrowing the scope and improving the actionability of a pediatric bronchiolitis alert and for replacing an interruptive alert for pediatric ambulatory hypertension with an indicator by the patient's blood pressure reading in the sidebar of the patient's chart. Recipients of the judge's choice awards made 26 improvements to an admission medication reconciliation alert and improved the readability and accuracy of an alert for live viruses in immunocompromised patients.

After 2 rounds, the Clickbusters program resulted in detailed, comprehensive reviews of 84 CDS alerts, which comprised 20% of the rule-based alerts implemented at VUMC, and reduced the number of weekly clicks by more than 70 000 (15.43%). While modest, these results occurred in a short period and involved motivated users who had not previously participated in CDS review. Although more participants reviewed more alerts in the second round, the total number of clicks reduced was lower, in large part due to a single unnecessary alert for chronic care management turned off in the first round that accounted for 40 813 weekly clicks reduced (83% of the total for that round). In addition to the direct improvements in CDS, the initiative also increased user engagement and involvement in CDS.
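To illustrate the click-reduction adjustment described in the methods above, the sketch below credits only the portion of the raw firing reduction for modified alerts that remains after accounting for the background change observed for unmodified alerts. The counts are invented for illustration and are not the study's data.

    # Adjustment for background drift: if unmodified alerts fired 20% less often
    # after a round, only 80% of the raw reduction for modified alerts is
    # counted as clicks busted. All counts below are illustrative only.
    def clicks_busted(modified_before, modified_after, unmodified_before, unmodified_after):
        background_reduction = (unmodified_before - unmodified_after) / unmodified_before
        raw_reduction = modified_before - modified_after
        return raw_reduction * (1 - background_reduction)

    print(clicks_busted(modified_before=100_000, modified_after=40_000,
                        unmodified_before=300_000, unmodified_after=240_000))  # 48000.0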
After Clickbusters, the VUMC CDS team benefited from a new corps of users with increased interest, engagement, and knowledge of CDS, who could also serve as liaisons to clinical departments as diverse as pediatrics, general medicine, oncology, cardiology, surgery, nursing, and pharmacy. This program helped build a culture of continuous evaluation and improvement of clinical content in the EHR.

A key component of the Clickbusters program is partnership with the operational HealthIT department. Clickbusters participants followed standard processes for documenting the changes they made and worked within existing governance frameworks. Because changes made by Clickbusters participants were generally well-designed and followed a standardized analysis process, they could be reviewed efficiently and were typically approved quickly without many required modifications. Further, our informatics team reviews each BPA on an annual or semiannual basis, and we determined that a Clickbusters review satisfied this requirement in many cases. Some HealthIT staff also chose to participate in the Clickbusters program in addition to their regular job responsibilities. This partnership helped the Clickbusters program to succeed, and it also created new relationships between Clickbusters participants and HealthIT staff that have led to subsequent collaborative efforts.

The Clickbusters program represents a novel approach to evaluating and optimizing CDS alerts. One of the approaches most frequently described in prior literature has involved individuals or committees reviewing EHR utilization logs or dashboards, which are very effective at identifying individual alerts with high override rates, as well as alert trends across departments or provider types. 27, 28, 34, 38 However, the effort required to develop reports or dashboards can be high, and the personnel effort required to review the findings is substantial. This effort usually falls to the HealthIT or operational informatics teams, but Clickbusters allowed us to bring in new participants who were highly invested in the review process, both as end users receiving the alerts and as Clickbusters participants motivated by the program's gamification.

Additional efforts have focused on identifying opportunities for improvement by obtaining feedback from clinicians receiving the alerts, such as sentiment analysis of comments entered by clinicians when overriding alerts 52 and feedback links within displayed alerts. Such feedback approaches can successfully identify individual alerts with optimization opportunities, as well as alerts with potentially suboptimal designs or that are no longer functioning as designed. These approaches have a somewhat lower barrier in terms of build and personnel effort, as they crowdsource the information and only require personnel to monitor the feedback received; however, they are limited to evaluating alerts that clinicians see and to which they respond.

Another approach uses anomaly detection to identify alerts that are no longer performing as intended. 55, 56 Like utilization log review, this approach can successfully identify alerts that are no longer functioning as designed, but with less personnel effort due to the machine learning approach; however, it does not identify specific opportunities for improving alerts that were suboptimally designed at their onset.
Compared to other approaches, Clickbusters has moderate technical or build requirements and high personnel effort requirements. However, the personnel effort varies and can be shared across a larger group of individuals by engaging clinical builders and other participants beyond operational informatics teams. Clickbusters leadership (ABM, EMR, AW) devoted approximately 80 hours over 4 weeks to developing the curriculum and 40 hours over 2 weeks to retrieving alert data, ranking alerts, and creating the wiki pages. Because this information can be shared and has been made available, subsequent implementations of the program would require significantly less effort. Discussion with Clickbusters participants after completion of the program revealed that some participants spent as little as 2 hours completing their work, while others devoted 20 or more hours. Reasons for this wide variation included the number of alerts selected for Clickbusting and the number of Clickbusting steps completed; participants who selected more than one alert and made changes to the alerts spent more time than those who selected a single alert and only reviewed it. The Clickbusters approach is also advantageous in that it can incorporate the previously described approaches by providing that information (eg, utilization logs, dashboards, and feedback) to the Clickbusters participants to facilitate their evaluation.

A critical limitation of the Clickbusters approach is that it requires an active Physician Builder program or another core group of nonoperational informatics team members who are able to select alerts and carry out the Clickbusting process. In some settings, it may be more effective to engage a smaller group of clinical content builders or operational analysts who are knowledgeable about the alert build process and can engage the appropriate clinician subject matter experts. Sites considering implementation of a Clickbusters program must evaluate whether the safety and efficiency improvements for end users justify the use of costly resources like a Physician Builder program. It may still be possible to achieve some success with the Clickbusters program in settings where these groups are not present, such as nonacademic medical centers, though the process may take more time and operational effort. Many of our Clickbusters participants (5 of the 24) were not certified Physician or Clinical Content Builders, but they were still able to complete reviews of the alerts and make suggestions for improvement based on information we provided; the single alert turned off in the first round that resulted in more than 40 000 fewer weekly clicks was busted by a participant who was a primary care physician but not a certified Physician Builder. Members of VCLIC or the operational informatics team subsequently made the changes suggested by nonbuilder participants.

We developed the Clickbusters program at VUMC. The program succeeded in reducing alert firings and resulting clicks. The program also brought more users into the process of evaluating and improving CDS and helped build a culture of continuous evaluation and improvement of clinical content in the EHR. The process could be readily replicated at other clinical sites and applied to other functions of the EHR, such as order sets, clinical documentation tools, and information displays.

Author contributions: ABM and AW designed the research, performed the data analysis, and interpreted the results.
All authors participated in the research and contributed to the writing and final review of the manuscript.

Supplementary material is available at Journal of the American Medical Informatics Association online.

Conflict of interest statement: None declared.

Data availability: The data underlying this article are available in the article and in its Supplementary material.

References

Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group.
Effect of clinical decision-support systems: a systematic review.
Brigham and Women's Hospital CPOE Working Group. Return on investment for a computerized physician order entry system.
The "meaningful use" regulation for electronic health records.
Medication-related clinical decision support in computerized provider order entry systems: a review.
Development and evaluation of a comprehensive clinical decision support taxonomy: comparison of front-end tools in commercial and internally developed electronic health record systems.
Clinical decision support capabilities of commercially-available clinical information systems.
Effect of computerized provider order entry with clinical decision support on adverse drug events in the long-term care setting.
Randomized clinical trial of a customized electronic alert requiring an affirmative response compared to a control group receiving a commercial passive CPOE alert: NSAID-warfarin co-prescribing as a test case.
Increasing the detection and response to adherence problems with cardiovascular medication in primary care through computerized drug management systems: a randomized controlled trial.
Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction: a randomized controlled trial.
Overriding of drug safety alerts in computerized physician order entry.
Physicians' decisions to override computerized drug alerts in primary care.
Characteristics and consequences of drug allergy alert overrides in a computerized physician order entry system.
Frequency of inappropriate medical exceptions to quality measures.
Reasons provided by prescribers when overriding drug-drug interaction alerts.
Drug safety alert generation and overriding in a large Dutch university medical centre.
Practitioners' views on computerized drug-drug interaction alerts in the VA system.
Overrides of medication alerts in ambulatory care.
Improving acceptance of computerized prescribing alerts in ambulatory care.
A study of the frequency and rationale for overriding allergy warnings in a computerized prescriber order entry system.
Physicians' reasons for failing to comply with computerized preventive care guidelines.
American Medical Informatics Association. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA.
High-priority drug-drug interactions for use in electronic health records.
Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records.
Turning off frequently overridden drug alerts: limited opportunities for doing it safely.
What, if all alerts were specific - estimating the potential impact on drug interaction alert burden.
A review of analytics and clinical informatics in health care.
A dashboard model for monitoring alert effectiveness and bandwidth.
Alerting strategies in computerized physician order entry: a novel use of a dashboard-style analytics tool in a children's hospital.
Real-time pharmacy surveillance and clinical decision support to reduce adverse drug events in acute kidney injury: a randomized, controlled trial.
Adopting real-time surveillance dashboards as a component of an enterprise-wide medication safety strategy.
Clinical decision support alert appropriateness: a review and proposal for improvement.
Stanson Health: clinical decision support designed to reduce costs.
Home Page.
Optimization of drug-drug interaction alert rules in a pediatric hospital's electronic health record system using a visual analytics dashboard.
Variation in high-priority drug-drug interaction alerts across institutions and electronic health records.
Identifying best practices for clinical decision support and knowledge management in the field. Stud Health Technol Inform.
Grand challenges in clinical decision support.
The state of the art in clinical knowledge management: an inventory of tools and techniques.
Governance for clinical decision support: case studies and recommended practices from leading institutions.
Recommended practices for computerized clinical decision support and knowledge management in community settings: a qualitative study.
Standard practices for computerized clinical decision support in community hospitals: a national survey.
Clinical informatics training during emergency medicine residency: the University of Michigan experience.
A primer on leading the improvement of systems.
A primer on PDSA: executing plan-do-study-act cycles in practice, not just in name.
Study designs for PDSA quality improvement research. Qual Manag Health Care.
Data-driven approaches for improving clinical decision support across multiple healthcare organizations.
Improving Outcomes with Clinical Decision Support: An Implementer's Guide, 2nd edn.
Cranky comments: detecting clinical decision support malfunctions through free-text override reasons.
Smarter Decisions. Better Care.
Best practices for preventing malfunctions in rule-based clinical decision support alerts and reminders: results of a Delphi study.
Using statistical anomaly detection models to find clinical decision support malfunctions.
Analysis of clinical decision support system malfunctions: a case series and survey.