title: Comparing the Learning Effectiveness of Three Virtual Simulation Tools with Nursing Students During the COVID-19 Pandemic
authors: Schiavenato, Martin; Edwards, Susan; Tiedt, Jane; Owens, Joan
date: 2022-03-14
journal: Clin Simul Nurs
DOI: 10.1016/j.ecns.2022.03.003

Background: We explored the learning effectiveness of three virtual simulation tools used in the COVID-19 pandemic environment. Sample: Study participants consisted of students from two nursing classes, a junior and a senior class. Method: A mixed-methods approach compared the three tools' performance across five learning domains. Descriptive statistics and analysis of variance (ANOVA) compared mean ratings for the learning domains. Open-ended questions were included for qualitative evaluation. Results: Thirty-six respondents rated the Resource Simulation Center (RSC), based on the observation of videos of students undergoing simulation exercises, as superior to the other two tools. There were no differences between the other two tools. Qualitative findings echoed students' preference for the RSC. Conclusion: The RSC was preferred over a commercial product based on computer-generated graphics and a free online product based on clinical scenarios acted out in short videos. Differences in debriefing practices may have influenced the results, thereby emphasizing the role of debriefing with virtual simulation tools.

We implemented three virtual simulation initiatives, each addressing the specific didactic needs of two classes, to offset the loss of opportunities for clinical experience. Besides clinical opportunities, public health regulations on social distancing also precluded face-to-face, manikin-based simulation methods, which are standard and routine in nursing curricula. We therefore turned to virtual, computer-based simulations that could be delivered online and within public health rules. Specifically, we used three different screen-based simulations, defined as "A simulation presented on a computer screen using graphical images and text, similar to popular gaming format, where the operator interacts with the interface using keyboard, mouse, joystick, or other input device" (Lioce et al., 2020). The preponderance of evidence supports that these types of virtual simulations are effective in achieving learning outcomes (Foronda et al., 2020). The objective of this study was to compare student assessment of learning effectiveness across the three simulation tools employed.

A convenience sample of students from two Bachelor of Science in Nursing (BSN) classes (one at the junior level and one at the senior level) comprised our study participants. We employed an online survey to evaluate respondent views on the learning effectiveness of the three virtual simulation tools. This was a mixed-methods approach composed of Likert-scale questions and open-ended questions covering various domains of nursing simulation education based on the Simulation Learning Effectiveness Inventory (SLEI). The SLEI identifies seven reliable and valid domains for measuring student perception of learning effectiveness in simulation (Chen et al., 2015). The original content validation of the SLEI was performed on the Chinese version of the tool. The English version was evaluated by Bates and Clark (2019), who examined its psychometric properties using a sample of 132 undergraduate nursing students and demonstrated validity and reliability.
This study compared the three simulation tools' performance across five of the SLEI's seven domains. Two SLEI domains were considered inapplicable to our objectives and were not explored: Appropriateness of Course Arrangement and Availability of Learning Resources. The remaining five SLEI domains were used to compare tool performance, focusing on respondent perception of how well each tool performed in: Facilitate Debriefing, Improve Clinical Ability, Help Problem Solving, Help Confidence, and Improve Collaboration (see Figure 1). The electronic survey was distributed once, after participants had completed all simulation modalities. Simulation case studies differed between the two classes, as content was chosen to meet specific course learning objectives. All students received an introduction and orientation to each tool, and all simulations included debriefing activities. The tools evaluated and their debriefing methods were as follows.

I-Human Patients (Kaplan, n.d.) is a cloud-based simulation product that uses interactive, case-based clinical scenarios. I-Human uses a serious-game approach employing a virtual computer-graphic patient, physical assessment through a mouse-driven interface, and a simulated electronic health record (EHR). This is a commercially available product with a wide variety and number of case scenarios across the clinical spectrum. Debriefing occurred during a weekly scheduled clinical post-conference via videotelephony.

The Virtual Healthcare Experience (Ryerson, n.d.) is a cloud-based simulation using interactive, case-based scenarios and a simulated EHR through a video-based interface. Rather than computer graphics, the videos use actors to portray clinician-patient interactions. The simulations are housed within a virtual hospital that includes simulation content across the clinical spectrum. The Virtual Healthcare Experience portal was developed as a collaboration among Ryerson University, Centennial College, and George Brown College faculty. The tool is a freely available resource hosted under a Creative Commons license by Ryerson University. Debriefing occurred during a weekly scheduled clinical post-conference via videotelephony.

The Resource Simulation Center (RSC) is a computer-based video simulation of our own design in which students observe prerecorded videos of students from previous semesters performing manikin-based simulations from our usual curriculum. The experience includes review and analysis of a simulated EHR, followed by observation of the recordings with interval pauses to answer questions related to clinical action (e.g., assessment, interventions). Debriefing occurred over the following 24 hours via the learning management system (LMS) online discussion board, with a concluding presentation highlighting the main takeaway points from the experience.

An online survey was distributed to students in both classes. Respondents were asked to rate each tool using the same set of questions covering the five domains. Students were asked about their experience, e.g., did it facilitate effective debriefing; did it improve your clinical ability; did it help your problem-solving skills; did it help your confidence; and did it improve your ability to collaborate? Responses consisted of five categories ranging from not at all, to slightly, somewhat, fairly well, and very well. Descriptive statistics were used, as well as an analysis of variance (ANOVA), to compare mean ratings for each question among tools.
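To illustrate this analysis, the sketch below shows how a per-domain, one-way ANOVA across the three tools might be run, together with a cumulative score obtained by summing the domain means per tool (the Cumulative Performance Score described in the next paragraph). The ratings, sample size, and variable names are hypothetical placeholders and do not represent the study's actual data or code.

```python
# Minimal sketch: per-domain one-way ANOVA across three simulation tools,
# plus a cumulative score summing the five domain means per tool.
# All data below are simulated placeholders (hypothetical 1-5 Likert ratings).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
domains = ["debriefing", "clinical_ability", "problem_solving",
           "confidence", "collaboration"]
tools = ["RSC", "iHuman", "Ryerson"]

# Hypothetical ratings: one array of 1-5 responses per tool per domain.
ratings = {t: {d: rng.integers(1, 6, size=36) for d in domains} for t in tools}

cumulative = {t: 0.0 for t in tools}
for d in domains:
    groups = [ratings[t][d] for t in tools]
    f, p = stats.f_oneway(*groups)  # one-way ANOVA comparing the three tools
    print(f"{d}: F = {f:.2f}, p = {p:.3f}")
    for t in tools:
        cumulative[t] += ratings[t][d].mean()  # cumulative score = sum of domain means

print({t: round(score, 2) for t, score in cumulative.items()})
```

In practice, a post hoc pairwise comparison would follow any significant omnibus result to identify which tool pairs differ, as reported in the Results below.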
A Cumulative Performance Score (CPS) was also computed by adding the mean scores across all five domains for each simulation tool. This variable was meant to capture each tool's overall performance in learning effectiveness. Qualitative analysis was based on the question, "Do you have any other comments related to your [virtual tool] simulation experience this semester?" Institutional Review Board (IRB) approval was received for this study.

The response rate for seniors was 50% (17/34), and that for juniors was 39% (19/49), for a combined student sample of 36. Scores across the five domains were consistent between the two classes. When comparing simulation tools by domain, statistically significant differences were found in Effective Debriefing between RSC (mean 3.88) and Ryerson (mean 2.88), p < 0.05; in Help with Confidence between RSC (mean 2.81) and iHuman (mean 1.69), p < 0.05; and in Ability to Collaborate between RSC (mean 3.03) and iHuman (mean 2.11), and between RSC (mean 3.03) and Ryerson (mean 2.19), p < 0.05. No statistically significant differences were found among the simulation tools for Help with Problem Solving and Improve Clinical Ability (Table 1). When comparing the Cumulative Performance Score (CPS) by tool, RSC (mean 16.06) was statistically significantly higher than both iHuman (mean 12.57) and Ryerson (mean 12.69), p < 0.05; no statistically significant difference in CPS was found between iHuman and Ryerson (Table 2). Considering all three tools, the Facilitate Debriefing domain had the most favorable score (3.4/5; 68%), while Help your Confidence had the least favorable score (2.25/5; 45%).

Thematic analyses of student perceptions of the three tools were completed. Students felt that iHuman was too basic for their educational level; the overall student perspective was that iHuman would be better suited to first-semester students who need to focus more on assessment skills. The product helped with head-to-toe assessment, prioritization, reviewing lab values, and talking to each patient. As one student described, "It helped to perform each step of the process." Students were more positive about their experiences with the Ryerson simulations; according to the thematic analysis, the Ryerson simulation system was more realistic and a good learning resource. Students provided the most positive comments about the RSC simulation experience. They commented that they liked watching other students' performance in the simulation because it helped them see the teamwork and group problem solving. Another positive noted in the thematic analyses was that the RSC simulations were augmented by additional teaching resources, including an online presentation, discussion-board questions, and a critical-thinking assignment.

When comparing these three tools under novel circumstances, RSC, or the observation of previously recorded simulation scenarios, performed best overall in terms of learning effectiveness compared with the other two virtual simulation tools. This seems consistent with previous findings that participant reaction is similar between those performing simulation and those observing simulation (Delisle et al., 2019). Screen-based simulation, as seen with two of the tools used in this study, scored lowest in a study by Leighton et al. (2021) that compared clinical learning in undergraduate nursing students; the results of this study are consistent with those findings.
Although virtual, the three tools differed in terms of technologies, graphics, and content. On their own, the screen-based tools performed lower in terms of learning effectiveness but may offer an opportunity for improved clinical learning if combined in future use with virtual reality components. Brown et al. (2021) reported that students' perceived experience with virtual reality simulation was similar to that of face-to-face simulation. In addition, in this study the debriefing format varied between the RSC and the iHuman and Ryerson simulations. This variation may have influenced student learning, as characteristics of debriefing such as timing and structure are known to affect learning (Kim & Yoo, 2020). The International Nursing Association for Clinical Simulation and Learning (INACSL) best practices for debriefing also require all simulation-based experiences to include a planned debriefing session aimed at improving future performance (INACSL Standards, 2016).

Specific SLEI domains in which students reported the RSC tool to be significantly better included Effective Debriefing, Help with Confidence, and Ability to Collaborate, which may be related to the screen-based delivery. The tools did not differ in learning effectiveness across the domains of Help with Problem Solving and Improve Clinical Ability, which suggests that students may have found engaging in the problem-solving portions of the simulations helpful for improving their ability to care for patients. The simulation tools' effect was highest on Facilitate Debriefing and lowest on Help your Confidence. There were no differences in learning effectiveness as reported by participants between the iHuman and Ryerson simulation tools, although participants reported a preference for Ryerson.

Limitations of this work included a small sample size and the heterogeneity of conditions under study. For example, although our findings support that observing simulation is an effective way to learn, it is impossible to say which aspects of the debriefing activity contributed to its effectiveness. Future work could aim to better understand these dynamics through a more controlled design. Further, although we attempted to obtain instructor feedback in this study, a small response rate precluded analysis, and it is therefore not reported. It would be interesting to see how instructors' perspectives compare with students' experiences.

The COVID-19 pandemic presents nursing education with unique forces that challenge the way we teach and expose students to the clinical experience. Nevertheless, in the context of limited resources, the role of simulation and the effectiveness of various techniques and products remain pressing and continuing questions for the pandemic and beyond. It is encouraging to see that, given specific course learning objectives, home-grown or school-based simulation efforts based on observing prerecorded simulations of peers can be as effective as or more effective than available resources, including commercial ones. Further research exploring best practices in this fast-evolving area must be a priority.

Table 1. Comparison of the three simulation tools by domain, comparing the primary simulation instrument (RSC) with the other two tools; RSC was noted to be statistically significantly higher in three of the five domains. *The mean difference is significant at the 0.05 level.
Table 2. A cumulative score compared overall performance in learning effectiveness across all five domains for each instrument; the mean difference for the RSC instrument was statistically significant (p = 0.02 to 0.03). *The mean difference is significant at the 0.05 level.

INACSL Standards Committee (2016, December)
Bates, T., & Clark, C. (2019). Reliability and validity of the simulation learning effectiveness inventory
Integrating virtual simulation into nursing education: A roadmap
Chen, S.-L., et al. (2015). Development and validation of the simulation learning effectiveness inventory
Delisle, M., et al. (2019). Comparing the learning effectiveness of healthcare simulation in the observer versus active role: Systematic review and meta-analysis
Foronda, C., et al. (2020). Virtual simulation in nursing education: A systematic review spanning
First case of 2019 novel coronavirus in the United States