key: cord-1035358-6kp64vze
title: AI-RADS: An Artificial Intelligence Curriculum for Residents
authors: Lindqwister, Alexander L.; Hassanpour, Saeed; Lewis, Petra J.; Sin, Jessica M.
date: 2020-10-16
journal: Acad Radiol
DOI: 10.1016/j.acra.2020.09.017
sha: 9822a9968ede85a5e3588f629a2aa0ea7c480eec
doc_id: 1035358
cord_uid: 6kp64vze

RATIONALE AND OBJECTIVES: Artificial intelligence (AI) has rapidly emerged as a field poised to affect nearly every aspect of medicine, especially radiology. A PubMed search for the terms "artificial intelligence radiology" demonstrates an exponential increase in publications on this topic in recent years. Despite these impending changes, medical education designed for future radiologists has only recently begun. We present our institution's efforts to address this problem as a model for a successful introductory curriculum in artificial intelligence in radiology, titled AI-RADS. MATERIALS AND METHODS: The course was based on a sequence of foundational algorithms in AI; these algorithms were presented as logical extensions of each other and were introduced through familiar examples (spam filters, movie recommendations, etc.). Since most trainees enter residency without computational backgrounds, secondary lessons, such as pixel mathematics, were integrated into this progression. Didactic sessions were reinforced with a concurrent journal club highlighting the algorithm discussed in the previous lecture. To circumvent often intimidating technical descriptions, study guides for these papers were produced. Questionnaires were administered before and after each lecture to assess confidence in the material. Surveys were also administered at each journal club assessing learner preparedness and the appropriateness of the article. RESULTS: The course received a 9.8/10 rating from residents for overall satisfaction.
With the exception of the final lecture, there were significant increases in learner confidence in reading journal articles on AI after each lecture. Residents demonstrated significant increases in perceived understanding of foundational concepts in artificial intelligence across all mastery questions for every lecture. CONCLUSION: The success of our institution's pilot AI-RADS course demonstrates a workable model for including AI in resident education.

Artificial intelligence has rapidly emerged as a field poised to affect nearly every aspect of medicine, especially radiology [1–3]. A PubMed search for the terms "artificial intelligence radiology" demonstrates an exponential increase in publications on this topic in recent years. Additionally, radiologists, radiology residents, and medical students have increasingly recognized the need for a basic understanding of artificial intelligence [4–6]. Despite these impending changes, medical education designed for future radiologists has only recently begun [7–9]. A number of resources have emerged attempting to specifically address this concern for radiology professionals at varying levels of ability [10–16]. While these resources are tailored to imaging applications, as of this writing we have found few examples of formal integration into residency training. We present our institution's efforts to address this problem as a model for a successful introductory curriculum in artificial intelligence in radiology, titled AI-RADS. Our pilot course was created with the goal of imparting an intuitive understanding of the strengths and limitations of machine learning techniques, along with an intellectual framework for critically evaluating scientific literature on the subject. This integrated artificial intelligence curriculum (AI-RADS) was designed for learners at any level of computational proficiency, assuming only an understanding of basic statistics.
Each lecture consisted of an algorithm in artificial intelligence along with supporting fundamental concepts in computer science. Algorithms were introduced as a string of observations surrounding common problems modern computing has attempted to solve, and were initially anchored in familiar examples such as spam filters, movie recommendations, and fraud detection (Fig. 1). The lectures built on each other's concepts sequentially and were presented as logical extensions of one another (Fig. 2). Algorithms were selected based on how commonly they are employed or as introductions to more complex models, acting as an intellectual foundation for data representation. No prework was expected for didactic sessions. Didactic sessions were reinforced with active discussion in a concurrent journal club highlighting the algorithm discussed in the previous lecture (Appendix A). To circumvent the often verbose and intimidating technical descriptions, study guides for these papers were produced to define unfamiliar terms and distill complicated mathematical expressions into simple terms. Learners were expected to have familiarized themselves with the paper and to have read the study guide prior to journal club. The pilot AI-RADS course was incorporated as a regular part of resident didactic sessions following faculty approval from the resident education committee. Lectures were integrated into the preexisting schedule whenever a 1-hour timeslot was available. Lectures 5, 6, and 7 were held at the end of the workday, with residents excused from clinical duties 1 hour early to attend. Lectures were held once per month for a total of 7 months; each 2-hour journal club was held 2 weeks after its corresponding lecture. By scheduling the course this way, residents had substantial artificial intelligence exposure every other week. This pacing was designed to promote retention while not overwhelming the learners.
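To make the anchoring-example approach concrete: a spam filter, one of the familiar examples named above, is classically a naïve Bayes classifier over word counts. The sketch below is not from the course materials; it is a minimal illustration on invented toy messages, using Laplace smoothing so that words unseen in training do not zero out a class score.

```python
from collections import Counter
import math

def train(spam_docs, ham_docs):
    """Count how often each word appears in spam vs. non-spam training messages."""
    spam_counts = Counter(w for doc in spam_docs for w in doc.split())
    ham_counts = Counter(w for doc in ham_docs for w in doc.split())
    return spam_counts, ham_counts

def is_spam(message, spam_counts, ham_counts, prior_spam=0.5):
    """Naive Bayes vote: is the log-posterior of 'spam' higher than that of 'ham'?"""
    vocab = set(spam_counts) | set(ham_counts)
    n_spam = sum(spam_counts.values())
    n_ham = sum(ham_counts.values())
    log_spam = math.log(prior_spam)
    log_ham = math.log(1 - prior_spam)
    for w in message.split():
        # Laplace (add-one) smoothing keeps unseen words from zeroing a class
        log_spam += math.log((spam_counts[w] + 1) / (n_spam + len(vocab)))
        log_ham += math.log((ham_counts[w] + 1) / (n_ham + len(vocab)))
    return log_spam > log_ham

# Invented training data for illustration only
spam_counts, ham_counts = train(
    ["win money now", "free money"],
    ["meeting at noon", "lunch at noon"],
)
```

The same class-conditional word-probability idea generalizes from spam filtering to the naïve Bayesian diagnostic model cited in the reference list, with demographic and radiographic features in place of words.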
After AI-RADS, learners should be able to:
1) Describe foundational algorithms in artificial intelligence, their intellectual underpinnings, and their applications to clinical radiology.
2) Proficiently read journal articles on artificial intelligence in radiology.
3) Identify potential weaknesses in artificial intelligence algorithmic design, database features, and performance reports.
4) Identify areas where artificial intelligence techniques can be used to address problems.
5) Describe different ways information can be abstractly represented and exploited.
6) Demonstrate fluency in common "buzzwords" in artificial intelligence.
Surveys were administered before and after lectures to assess both quality and learner confidence in the key concepts at hand. Four content questions were asked in each survey (see Appendix B for the question list), in which attendees rated their ability to describe each topic on a scale from 1 to 5. In both pre- and postlecture surveys, they were also asked to rate their degree of comfort in reading medical literature centered on the algorithm. Concurrent journal clubs were held within 2 weeks of each didactic session; participants completed questionnaires following each discussion, rating their perceived understanding of the paper as well as their confidence in reading a different paper that utilizes the same computational technique. All lectures and journal club study guides were written and delivered by the medical student fellow in radiology. Course demonstrations, along with all attached figures, were rendered using the Python 3 online shell, Jupyter. Content was reviewed by author SH, a professor of computer science who specializes in artificial intelligence. Survey information was analyzed using the statistical analysis package SciPy, version 1.2.3. Wilcoxon signed-rank tests were used to compare pre- and postsurvey results. The results are provided in Figures 3 and 4 and Tables 1 and 2.
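The paper reports analyzing the surveys with Wilcoxon signed-rank tests in SciPy. On invented pre-/postlecture Likert ratings (the real survey data are not reproduced here), that analysis would look like the following sketch:

```python
from scipy.stats import wilcoxon

# Invented pre-/postlecture confidence ratings (1-5 Likert scale) for one
# survey question; the paper's raw survey responses are not reproduced here.
pre  = [2, 1, 3, 2, 2, 1, 3, 2, 2, 1]
post = [4, 3, 4, 4, 3, 3, 5, 4, 3, 3]

# Wilcoxon signed-rank test: a paired, non-parametric alternative to the
# paired t-test, better suited to small samples of ordinal ratings.
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")
```

Because the test ranks the paired differences rather than assuming normality, it is the conservative choice for the small per-lecture samples described in the results.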
AI-RADS was well received among trainees at our institution. On our metrics of quality, trainees overwhelmingly felt that the content depth of the AI-RADS lecture series was ideal and that the examples used were helpful vehicles for understanding key concepts in artificial intelligence. Exit surveys demonstrated a high degree of learner satisfaction, with an aggregate rating of 9.8/10. Resident interest in artificial intelligence remained consistently high, suggesting that this course has not deterred learners from the field. With the exception of Lecture 7, resident confidence in their ability to read an artificial intelligence-related journal article in radiology increased significantly after each lecture (Fig. 3). Only five residents were able to attend Lecture 7, likely contributing to its borderline results.

[Figure 4. Pre- and postlecture content mastery questions. Each pre- and postlecture survey contains four questions highlighting key lecture concepts that were mapped to learning objectives. Learners rated their confidence in their ability to describe these concepts on a scale from 1 to 5. There is a statistically significant difference between all pre- and postlecture question results (p < 0.04) by Wilcoxon signed-rank test.]

Parametric tests, such as Student's t-test, yielded significant p-values for all lectures; however, given the low sample size, the Wilcoxon signed-rank test was felt to be more accurate, though less statistically powerful. Anecdotally, discussions during these sessions have been robust, and the questions residents ask suggest a deeper understanding of the underlying computational methods. On the content-related questions, learners' perceived ability to explain these concepts showed a marked increase from before to after each didactic session (Fig. 4). In the absence of formal assessment, these results suggest an increased understanding of core principles of artificial intelligence.
This, in conjunction with perceived confidence in reading new articles related to the algorithm discussed, strongly implies an increased sense of comfort when dealing with techniques in artificial intelligence. Future iterations of this course may entail anonymous content-specific multiple-choice quizzes to further evaluate concept mastery. The journal club was initially successful but demonstrated a progressive decline in learner preparation, which was evident in later discussions. Residents indicated that added clinical responsibilities and lack of free time were the main factors contributing to the lack of engagement. In written commentary, residents stated that the journal club was very helpful in solidifying the concepts presented in lecture and that, despite limited preparation, they found the conversations illuminating. Many proposals have been made regarding how the journal club will be managed when this course is redeployed, including resident-led paper presentations with faculty support, take-home assignments focusing on specific articles, and integration of paper discussion into the didactic session. Limitations of the course include heterogeneity of learner attendance, usually a result of the clinical duties of certain rotations, scheduling, and resident burnout. It is important to note the likely contribution of lecture scheduling to resident attendance. For the final three lectures, there was no availability in the normal didactic timeslot, requiring residents to be dismissed early from clinical duties to attend an end-of-day session. This was suboptimal and likely contributed to the relatively lower turnout. Additionally, it may have influenced overall comprehension, as the last three lectures are conceptually highly interrelated. Transitioning to an online platform would ideally circumvent these issues, as learners would be able to complete modules at their own pace. However, this may come at the cost of large-group discussion opportunities.
An interactive learn-to-code session was proposed, with preexisting datasets and skeleton code available so that learners could try to implement the algorithms they learned. While surveyed resident interest was high, many voiced concerns over time constraints given an already heavy schedule. As such, these sessions were felt to be more appropriate for future iterations, once the basic curriculum has been fully incorporated. Lastly, while analysis by trainee demographics (such as sex, PGY, etc.) would add increased resolution regarding specific learner needs and satisfaction, the size of our program would effectively eliminate feedback anonymity for some learners.

[Figure: satisfaction measures. Learners were asked to report their interest in AI, the content depth of the lecture (3 = just right), and the quality of the examples used. The number of responses corresponds to the number of residents who arrived on time to receive the survey link.]

For this reason, demographic information was not obtained, to encourage trainee candor in responses. To ensure the longevity and sustainability of this course as a unique hallmark of our institution's residency training program, the department is working on establishing online infrastructure to permanently house this resource. Future plans entail publishing all materials and freely sharing this educational series with all interested learners. These videos will be uploaded to the following YouTube channel as they become available: https://rb.gy/ychu2k

As artificial intelligence continues to reshape the world of medicine, it will become imperative that physicians be familiar with fundamental algorithms and techniques in artificial intelligence. This will become an essential skill for interpreting medical literature, assessing potential clinical software augmentations, formulating research questions, and purchasing equipment.
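The skeleton code for the proposed learn-to-code sessions is not included in the paper. For one of the algorithms covered in the journal-club articles (a k-nearest-neighbors classifier), a handout might resemble the completed sketch below, where the function body is the part learners would be asked to fill in; the data are invented 2-D toy points, not clinical features.

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    In a workshop handout, this body is what learners would write themselves."""
    # 1) Euclidean distance from the query to every training point
    dists = [math.dist(point, query) for point in train_points]
    # 2) Indices of the k closest training points
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    # 3) Majority vote over those neighbors' labels
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Invented toy data: two well-separated clusters of 2-D feature vectors
points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
labels = ["benign", "benign", "benign", "malignant", "malignant", "malignant"]
```

A query near the first cluster, e.g. `knn_predict(points, labels, (2, 2))`, is voted into the "benign" class by its three nearest neighbors.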
By having an intuitive foundation in machine learning built around fundamental algorithms, learners will likely be better equipped to understand the strengths and weaknesses of various techniques and be empowered to make more informed decisions. In summary, residency programs are only beginning to incorporate basic computing concepts into their training, a skill that will become essential for the radiologists of tomorrow; proficiency in artificial intelligence will be a required skill in the near future of imaging services. We present our institution's efforts to address this problem as a model of a successful introductory curriculum in the applications of artificial intelligence to radiology.

REFERENCES
The radiologist's conundrum: benefits and costs of increasing CT capacity and utilization
Error in radiology
Artificial intelligence in radiology
Attitudes toward artificial intelligence in radiology with learner needs assessment within radiology residency programmes: a national multi-programme survey
Medical students' attitude towards artificial intelligence: a multicentre survey
Impact of the rise of artificial intelligence in radiology: what do radiologists think?
Machine learning and medical education
The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload
Introducing artificial intelligence training in medical education
Data science in radiology: a path forward
Deep learning: a primer for radiologists
Artificial intelligence for precision education in radiology
Artificial intelligence in medical imaging: threat or opportunity?
Radiologists again at the forefront of innovation in medicine
Noninterpretive uses of artificial intelligence in radiology
Artificial intelligence and medical imaging 2018: French radiology community white paper
Canadian Association of Radiologists white paper on artificial intelligence in radiology
Bone tumor diagnosis using a naïve Bayesian model of demographic and radiographic features
A generic support vector machine model for preoperative glioma survival associations
Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier
Utility of the k-means clustering algorithm in differentiating apparent diffusion coefficient values of benign and malignant neck pathologies
Machine learning–assisted system for thyroid nodule diagnosis

Note: Question 3 is repeated in Lectures 3 and 4, and Question 1 is repeated in Lectures 5 and 6, because these concepts were reintroduced to better explain the next algorithm in the series. These topics were among the major themes of AI-RADS and are typically difficult for learners to understand, hence their reintroduction and expansion.