title: Video-Based Remote Administration of Cognitive Assessments and Interventions: a Comparison with In-Lab Administration
authors: Collins, Cindy L.; Pina, Amahyrani; Carrillo, Audrey; Ghil, Eunice; Smith-Peirce, Rachel N.; Gomez, Morgan; Okolo, Patrick; Chen, Yvette; Pahor, Anja; Jaeggi, Susanne M.; Seitz, Aaron R.
date: 2022-03-03
journal: J Cogn Enhanc
DOI: 10.1007/s41465-022-00240-z

While remote data collection is not a new concept, the quality and psychometric properties of data collected remotely often remain unclear. Most remote data collection is done via online survey tools or web-conferencing applications (e.g., Skype or Zoom) and largely involves questionnaires, interviews, or other self-report data. Little research has been done on the collection of cognitive assessments and interventions via web-conferencing that require multiple sessions, with or without the assistance of an experimenter. The present paper discusses limitations and challenges of studies administered remotely and outlines methods used to overcome such challenges while effectively collecting cognitive performance data remotely via Zoom. We further compare recruitment, retention rates, compliance, and performance between in-lab and remotely administered cognitive assessment and intervention studies, and discuss limitations of remote data collection. We found that while it was necessary to recruit more participants in remote studies to reach enrollment goals, compliance and performance were largely comparable between in-lab and remotely administered studies, illustrating the opportunities of conducting this type of experimental research remotely with adequate fidelity.

The COVID-19 pandemic led to a complete standstill of research activities when stay-at-home orders resulted in campus closures around the world. Campus closures halted in-person data collection, leaving many labs scrambling to find alternative methods to collect data or terminating data collection altogether. Despite the ubiquity of remote data acquisition outlets that had proliferated even before COVID-19 (e.g., via crowdsourcing websites such as MTurk or Prolific), little research has been published on the process of remote data collection (Granello & Wheaton, 2004), and the reliability and validity of the data collected are largely unclear (Al-Salom & Miller, 2017). To provide some guidelines and resources for psychological scientists who are faced with the challenges of online data collection, Sydney Wood (2021) recently summarized the types of research most suitable for online data collection and outlined some advantages and disadvantages associated with remote testing.

Most literature on remote data collection to date has focused on data collected from interviews, surveys, and questionnaires (Solís-Cordero et al., 2021). Collecting these types of data remotely is relatively straightforward, as they do not require interaction between the experimenter and participant. Data collected online have generally been found to be comparable to data from in-lab studies (Brock et al., 2012; Gosling et al., 2004; Weigold et al., 2013; Germine et al., 2012), with some exceptions (Buchanan et al., 2005; Dandurand et al., 2008). However, the administration of cognitive assessments or cognitive intervention studies, particularly those that require substantial supervision, is more difficult to conduct remotely.
Little is known about the quality of such data acquired remotely as compared to data acquired in more controlled lab environments. With advances in technology, experimenters have been able to successfully continue data collection activities remotely through the use of video-conferencing technology, such as Microsoft Teams (2017), Skype (2003), and Zoom (2011). This growing list of video-conferencing tools offers video calls, instant messaging, and the ability to host meetings and webinars with both screen and document sharing (Zoom Video Communications Inc., 2016; Sipes et al., 2019). These features allow experimenters to collect data with remote participants much as in in-person sessions: experimenter and participant can interact frequently (Quartiroli et al., 2017), participants can be walked through each step of the study, and participant confidentiality and anonymity can be maintained, which has been a key issue in telemedicine (Calton et al., 2020).

Remote data collection is not a new phenomenon. Market research has relied on remotely acquired data for years through telephone and mail-in surveys. With increased accessibility of both the Internet and personal computers, remote data acquisition has allowed the recruitment of larger and more diverse populations. Though it has been argued that phone interviews should not take the place of in-person interviews (Sturges & Hanrahan, 2004), Schillewaert and Meulemeester (2005) discussed the many benefits of phone interviews, such as the reduction of experimenter influence on participants through the removal of visual cues (e.g., facial expressions) that can lead participants to answer in a particular way. Similarly, video-conferencing technology, such as Zoom, Skype, and Microsoft Teams, provides experimenters with the flexibility to choose which features to use as needed. For example, if a task requires more interaction and rapport between the experimenter and participant, the video-conferencing option can be used. In contrast, if a session requires a less hands-on approach and/or a minimization of experimenter influence, phone or chat features can be used.

Other advantages of remote data collection include convenience and cost-effectiveness. Participants can take part from the comfort of their own home, which reduces the amount of time it takes to participate in a study by removing travel time to and from the lab. This could result in increased participant retention due to the increased convenience of participating remotely. The remote approach can also benefit recruitment efforts, as data collection is not limited to areas local to the lab: labs can recruit participants from other cities, states, and even countries, allowing for a more diverse population sample. Reducing time constraints and increasing participant recruitment can make remote data collection more cost-effective, as more participants can be enrolled at a lower cost (Schillewaert & Meulemeester, 2005).

Disadvantages of remote data collection include variation in device specifications and compatibility compared to traditional data collection methods, difficulty in explaining complex tasks to participants, and constraints on participant recruitment. Differences in testing environments, distractions, and internet connection strength can also disrupt data collection. Experimenters must rely on the adequacy of the participants' devices, including their computers, mobile devices, and internet connection, to meet study goals.
This may be less of an issue with survey data, which can be completed on various types of devices and does not require specific hardware or software. However, for studies that require the implementation of psychometrically sophisticated assessments (e.g., assessments used in visual psychophysics, psychoacoustics, measurements of fine motor skills, or adaptive cognitive testing/training), participant recruitment is limited to those whose devices meet the required specifications (or to those to whom equipment can be loaned), which can also negatively impact access to the target population. Further, research that requires precisely calibrated sensory stimuli or sophisticated equipment, such as eye-tracking or measurement of brain activity, is particularly challenging to conduct remotely without losing data quality, although this issue is already in the process of being addressed as technology advances (Semmelmann & Weigelt, 2017).

These obstacles make remote data collection difficult, but there are ways to circumvent many of these challenges. The purpose of this paper is to discuss methods that have been successfully applied in moving from in-lab data collection to remote data collection using video-conferencing software during the COVID-19 pandemic. We provide an overview of methods that have been applied to various types of research studies that involve digital cognitive assessments and cognitive training interventions, and that are representative of methods applied across basic research and clinical settings. We present examples from our groups' approaches to validating new cognitive assessments and to testing cognitive training interventions; in both settings, procedures were adjusted to include video and audio conferencing software as the primary mode of interaction with participants. Most of our remote studies have been phone- or tablet-based, requiring participants to download various applications onto their devices. Some studies also incorporated other commonly used research software, such as Inquisit (2004) or Qualtrics (Qualtrics XM Platform, 2002).

We thoroughly reviewed several web-conferencing software packages, such as Skype, Microsoft Teams, and Zoom, to determine which would work best for our lab and our study types. We compared features, costs, and ease of use for both researchers and participants for each package. While all packages included video, phone, chat, and other similar features, we chose Zoom because it was already licensed by our institutions; hence, we were able to use all the available features at no additional cost. We note that other groups may similarly benefit from choosing whichever video platforms are adopted by their institutions, and, critically, many of these systems are converging on a set of functionalities similar to those we found beneficial in Zoom. Overall, when conducting our research remotely, we found that a good understanding and implementation of the features that are already part of Zoom, and of how they applied to our lab, was an important step towards the successful administration of our studies online. The following is an overview of these features and how they were integrated into our lab procedures to aid our data collection process.

The host is a crucial role that controls all aspects of the Zoom rooms, such as managing the waiting room, creating breakout rooms, and assigning experimenters (i.e., research assistants) and participants to the breakout rooms.
Because the host has full control of the room and its associated responsibilities, this role is assigned to designated senior experimenters. The host must be attentive to research assistants and participants joining the waiting room and capable of moving both parties to the correct breakout rooms. The host role can be passed between experimenters depending on the situation. The host can assign a co-host role to specific people in the main meeting room, which allows more people to share control. The co-host can manage participants: at the time of this writing, co-hosts are able to admit people into the main Zoom room but lack the ability to create breakout rooms or to move participants in and out of them. Future Zoom updates may grant co-hosts the same permissions and functionality as hosts. Research assistants are trained experimenters who stay in the main Zoom room on standby to run participants as they arrive for their scheduled session. The host is in constant communication with the experimenters in order to run scheduled participants efficiently and to maintain organization and team management among the experimenters.

Preserving participant anonymity and confidentiality is a top priority. In a regular lab setting, this would include placing participants in separate rooms as they complete an assessment or replacing their names with subject IDs to dissociate performance data from participant identities. Zoom offers several functions (i.e., renaming participants, waiting rooms, and breakout rooms) that are particularly helpful in ensuring that participants remain anonymous and that confidentiality is maintained. When participants first join, they are placed in a waiting room that advises them that experimenters will be able to assist them within a few minutes. The waiting room feature allows multiple participants to join the meeting simultaneously without seeing the other participants. To be admitted into the main meeting room, participants must be approved by the host or co-host(s). Prior to joining the Zoom meeting, participants are instructed to log in using their assigned subject ID. In the event they fail to do so, the host can rename them to the appropriate subject ID after they have been admitted into the main meeting room. Once they are in the main meeting room, the host moves them into a breakout room with a research assistant, where their session is conducted. When the session is complete, the research assistant instructs the participant to leave the meeting rather than return to the main meeting room, so that they do not see other participants there. Overall, these procedures maintain participant anonymity, ensuring the privacy of the participant and the integrity of the data collected.

In the main meeting room, participants are identifiable only by their subject ID. Once the participant is placed into a breakout room, only their experimenter(s) joins them. This maintains anonymity and confidentiality by blocking any interruptions from other participants and lab members. If issues or questions arise, the host places themselves or another experimenter in the room to provide help. To host a session effectively, the host admits participants only one at a time from the waiting room; a participant is not admitted into the main session until the previous participant has successfully joined a breakout room with their assigned experimenter. While the participants are in the waiting room, the host can send them a message.
This allows the host to communicate with the participant if there are any issues in the main session room that concern the lab or another participant, which preserves anonymity and privacy and prevents potential confusion.

A typical session begins with a participant logging onto Zoom with a provided meeting link. Once they log in, they are prompted to enter a participant ID and are placed in the waiting room. When a participant enters the waiting room, the host is notified and given the option to either admit or remove the participant. After the participant is admitted into the main meeting room, they are greeted by the host and advised that they will be moved into a breakout room with a designated experimenter who will guide them through their session. While in the breakout room, the experimenter keeps both their camera and microphone on while communicating with the participant. The participant is encouraged, but not required, to turn on their camera for the duration of the session; however, they are required to keep their microphone on throughout in order to communicate with the experimenter. At the end of the session, they are instructed to turn on their camera briefly to show the screen of their device and confirm that they have completed the session.

To conduct a session, several functions of Zoom are used (e.g., the chat and screen-share functions) to ensure that participants receive the proper forms, install and set up software correctly, and understand complex instructions. The chat function is used by the experimenter to send links to consent forms, surveys, and assessments to the participant. The screen-share function is used by both the participant and experimenter as needed. It can be used by the participant to allow the research assistant to confirm that they are entering critical information accurately or to help troubleshoot issues encountered with assessments or software installations. It can also be used by research assistants to share detailed instructions with participants. When participants are completing tasks, experimenters turn off both their camera and microphone in an effort to reduce distractions and potential observer effects on the participant's performance (McCambridge et al., 2014). Once the participant has completed the required tasks, the experimenter turns their video and microphone back on to communicate with the participant and provide any end-of-session instructions. When the session is complete, participants are instructed to leave the meeting. The sessions conducted via Zoom were structurally similar to those conducted in person.

In the following, we illustrate the procedures of our remote studies and compare participant recruitment and retention, participant compliance, and data quality for our two different study types: training and validation. For our cognitive training studies, which require multiple sessions, the screening and assessment sessions that serve as the outcomes of interest are supervised over Zoom by an experimenter to help guide participants through the various procedures, answer questions, and troubleshoot software issues. Detailed instructions guide participants through the training sessions, with multiple check-in sessions conducted via Zoom scheduled throughout. To maximize data quality, each participant completes their first training session with an experimenter on Zoom to ensure that the instructions are well understood.
Participants are encouraged to reach out whenever new questions arise during unsupervised sessions. For the validation studies, due to the low number of sessions required, all sessions were conducted on Zoom with a researcher. We examined several metrics, reported below, to evaluate the effectiveness of our remote administration of studies compared to our in-lab studies, which consisted of similar assessments and training and required approximately the same overall time commitment. These metrics include participant recruitment, retention, compliance, and performance comparisons between in-lab and remotely administered studies.

Given the requirement to administer multiple sessions, conducting cognitive training studies remotely presents unique challenges in participant recruitment, participant retention, and coordinating research assistant and participant availability. Remote participant recruitment for our studies was limited to mass email blasts to professors and the student body, social media posts, newsletter articles, and the participant research system (SONA), a website that students use to sign up for on-campus research projects for class credit. The percentage of potential participants who signed up for a study was substantially lower in remote studies than in in-lab studies (22% vs. 47%, respectively). Recruitment for our in-lab training studies prior to COVID-19 included the same methods; however, it also included in-class presentations and flyers posted around campus, which offered a second opportunity or a reminder to students who may have ignored and/or deleted the mass emails without reading them. The reduction in recruitment methods in the remote format may have influenced the sign-up and compliance/retention rates of participants.

Our training studies consisted of 14-19 sessions per participant, and we found that we needed to contact more participants to meet enrollment goals than for the previous in-lab studies. However, for participants who signed up for the remotely administered studies, completion rates were comparable to those of studies run in the lab. Specifically, for the in-lab training studies, which consisted of 14 in-lab visits, we emailed study details and links to the consent form and demographic survey to 489 potential participants. Of those emailed, 47% signed up for the studies and completed both the consent form and demographic survey. Of those who signed up to participate, 70% completed the study. For the remote training studies, study details were emailed to 1737 potential participants. Of those emailed, 22% signed up, and 60% of them completed the study.
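To make the difference in recruitment effort concrete, the overall yield from initial contact to study completion can be worked out from the counts and percentages reported above. The short Python sketch below is only a back-of-the-envelope illustration of that arithmetic; the derived numbers of sign-ups and completers are approximations, not reported figures.

```python
# Back-of-the-envelope recruitment funnel using the counts reported above.
def funnel(emailed, signup_rate, completion_rate):
    signed_up = emailed * signup_rate              # approximate number of sign-ups
    completed = signed_up * completion_rate        # approximate number of completers
    overall_yield = signup_rate * completion_rate  # contact-to-completion yield
    return signed_up, completed, overall_yield

for label, emailed, signup, completion in [("in-lab", 489, 0.47, 0.70),
                                           ("remote", 1737, 0.22, 0.60)]:
    signed_up, completed, overall_yield = funnel(emailed, signup, completion)
    print(f"{label}: ~{signed_up:.0f} sign-ups, ~{completed:.0f} completers, "
          f"overall yield ~{overall_yield:.0%} "
          f"(~{1 / overall_yield:.0f} contacts per completer)")
```

Under these figures, roughly 33% of contacted students completed an in-lab training study, compared to roughly 13% for the remote studies, which is why substantially more participants had to be contacted remotely to reach the same enrollment goals.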
The retention rate was 10 percentage points lower in the remote studies compared to those in the lab (see Figure 1), which could be due to the slightly different time commitment as perceived by participants. Remote participants' training was broken up into individual 20-min sessions, whereas in-lab participants completed two 20-min sessions combined into a single visit. Note that despite those slight differences, the overall structure and length of the assessment sessions in these studies remained the same for in-lab and remote participants. Thus, while both the in-lab and online studies took the same amount of time to complete the tasks overall, participants may have perceived the online studies as a greater time commitment due to the increased number of training sessions (14 sessions for in-lab vs. 24 sessions for remote).

Furthermore, compensation may have been another factor leading to the differences between settings. Participants who completed our in-lab training studies were compensated $125. In comparison, compensation was reduced to $80 for remote participants, given that they did not need to travel to the lab and could instead complete the sessions at their convenience in any location. Finally, outside stressors due to COVID-19 may have influenced retention rates in our training studies. Specifically, previous research has found that COVID-19 negatively impacted subjective well-being and cognitive functioning (Zacher & Rudolph, 2021; Fellman et al., 2020). Participants may have been less motivated to participate in research studies given that COVID-19 required classes and extracurricular activities to be held completely online as well. "Zoom fatigue," defined as fatigue associated with extensive screen use, might also have deterred people from participating in our remote studies.

Another type of study that we conducted in both remote and in-person settings required less participant investment and consisted of two to four 60-min sessions designed to validate novel assessment materials. In-lab and remote participants in these validation studies were recruited through the research participation system (SONA), which allows students to participate in research for class credit. In contrast with the training studies described earlier, the completion rates between in-lab and remote settings were comparable: the in-lab validation studies had 300 participants sign up with an 87% retention rate, compared to the remote validation studies, which enrolled 55 participants with a 91% retention rate (see Figure 1).

Fig. 1 A comparison of in-lab and remote participants' retention rates between the two study types: training and validation. Error bars represent standard errors of the mean.

A two-way ANOVA was conducted to investigate whether retention rate differed as a function of setting (remote vs. in-lab) and study type (training vs. validation). There was no significant difference in retention between in-lab and remote groups (p = .65), nor was there a significant interaction between setting and study type (p = .21). As expected, there was a statistically significant difference in retention between training and validation studies (F(1, 13) = 20.67, p < 0.001, η² = 0.621). As can be seen in Figure 1, retention was higher for validation studies, which consist of fewer test sessions than training studies.
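The paper reports these ANOVA statistics without specifying the analysis software. As a hedged illustration only, the sketch below shows how such a setting-by-study-type ANOVA could be run in Python with statsmodels; the per-study retention rates in it are made up for demonstration and are not the study data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative placeholder data (not the study data): one retention rate per study,
# coded by setting (in-lab vs. remote) and study type (training vs. validation).
df = pd.DataFrame({
    "retention":  [0.72, 0.68, 0.70, 0.61, 0.58, 0.62, 0.88, 0.86, 0.90, 0.92],
    "setting":    ["in-lab"] * 3 + ["remote"] * 3 + ["in-lab"] * 2 + ["remote"] * 2,
    "study_type": ["training"] * 6 + ["validation"] * 4,
})

# Two-way ANOVA: main effects of setting and study type plus their interaction.
model = ols("retention ~ C(setting) * C(study_type)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

With the actual data, the unit of analysis appears to be the individual study, consistent with the degrees of freedom reported above.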
We also attempted to understand the extent to which participants complied with the assigned schedule in the cognitive training studies, and whether there were any differences in compliance between in-lab and remote groups of participants. To address this, we analyzed adherence to the training schedule in participants who completed all training sessions (n = 229). To calculate the compliance rate, we divided the number of days participants took to complete all training sessions by the number of days they were instructed to take. "Perfect" compliance corresponds to a compliance rate of 1. A compliance rate lower than 1 means that participants completed training sessions earlier than instructed (i.e., they completed more sessions per day than the required 2 sessions, or trained on more days per week than the required 5 days); a compliance rate greater than 1 means that participants completed training sessions later than instructed (i.e., they completed fewer sessions per day than the required 2 sessions, or trained on fewer days per week than the required 5 days).

The compliance rate of in-lab participants (n = 94) averaged 1.21 (SD = 0.25), compared to 1.52 (SD = 0.54) for remote participants (n = 135), indicating that remote participants took significantly longer to complete the study (t(227) = 5.13, p < .001). We illustrate the compliance data in an overlaid histogram (Figure 2) to better display the differences between the in-lab and remote study settings. The compliance rates of both in-lab and remote participants were greater than 1, indicating that both groups took longer to complete the training sessions than instructed. In-lab participants were more compliant in following the assigned training schedule than remote participants. In particular, we noticed a much wider tail for the remotely administered group, which may suggest that in-person interactions and visits to the lab provide an easier-to-follow routine than when studies are conducted remotely. To fully address this issue, a randomized controlled study should be conducted to systematically compare compliance rates in remote vs. in-lab training settings.

Fig. 2 Compliance is illustrated as the ratio of the number of days participants took to complete all training sessions to the number of days they were instructed to take.

We note that participant contact was similar between the in-lab and remotely administered studies. Participants in both settings received regular reminders of their scheduled sessions and were reminded of their next session at the end of their previous session. If in-lab participants missed 2 consecutive sessions, they were emailed a reminder of their schedule along with the expectation that they would be dropped if they missed 3 consecutive sessions or 5 sessions total over the duration of the study. They also received a reminder call 10-15 min before each session. Remote participants were sent email reminders the evening before their scheduled Zoom sessions. During the training sessions that were completed independently, remote participants were emailed reminders to complete 2 sessions a day when, upon data verification, it was apparent that they were completing sessions inconsistently or not at all. If they failed to complete sessions for 2-3 consecutive training days, they were reminded that they would be dropped from the study for non-compliance if they did not resume sessions consistently as per the study requirements.
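As a concrete illustration of this compliance metric and the group comparison, the sketch below computes the ratio for hypothetical participants and runs a pooled-variance t-test with SciPy. The data are placeholders rather than the study data, and the paper does not specify the software actually used for this analysis.

```python
import numpy as np
from scipy import stats

def compliance_rate(days_taken, days_instructed):
    """Ratio of the days a participant actually took to the days they were
    instructed to take; 1.0 is perfect compliance, > 1 means training ran late."""
    return np.asarray(days_taken, dtype=float) / np.asarray(days_instructed, dtype=float)

# Illustrative placeholder data (not the study data): days each completer took,
# against an instructed schedule of 10 training days.
in_lab_rates = compliance_rate([10, 11, 12, 10, 13, 11], 10)
remote_rates = compliance_rate([12, 15, 11, 18, 14, 16], 10)

# Pooled-variance (Student's) t-test; with the real samples (94 in-lab and 135
# remote completers) this comparison yields the df of 227 reported above.
t, p = stats.ttest_ind(remote_rates, in_lab_rates, equal_var=True)
print(f"in-lab M = {in_lab_rates.mean():.2f}, remote M = {remote_rates.mean():.2f}")
print(f"t({len(in_lab_rates) + len(remote_rates) - 2}) = {t:.2f}, p = {p:.3f}")
```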
We next compared participants' performance between in-lab and remotely administered studies. For the training studies, participants in both groups showed very similar training progress (Figure 3), suggesting that the data quality in the remote setting was adequate and comparable to the in-person setting.

In our assessment validation studies, we examined two test types. Countermanding assesses processing speed and executive functioning (Ramani et al., 2019); the participant must tap on the opposite side of the screen when presented with an incongruent stimulus. Cancellation assesses selective attention and inhibitory control; it requires participants to tap on certain dogs and monkeys displayed in a row among distractor stimuli within a set amount of time, and performance is measured as the sum of all hits minus any false alarms (Pahor et al., 2021). Data are shown in Figure 4, where independent-samples t-tests revealed no significant difference between the in-lab and remote groups for the two measures of executive function: countermanding (t(51) = 1.193, p = 0.236, Cohen's d = 0.236) and cancellation (t(51) = 1.056, p = 0.294). We note that, if anything, the remote group performed slightly better than the in-lab group on these tasks.

Fig. 3 The y-axis shows the average performance level achieved per day, weighted by the number of trials. Shaded areas represent standard error of the mean.

Fig. 4 Comparison of in-lab (N = 51) and remote (N = 51) performance on countermanding (left) and cancellation (right) tasks obtained from assessment validation studies. Error bars represent standard error of the mean. The y-axis for countermanding shows mean reaction time for correct responses on incongruent trials, an index of inhibitory control. The y-axis for cancellation shows the concentration performance score, calculated as ∑Hits − ∑False alarms.

Another assessment validation study compared various measures of hearing (Lelo de Larrea-Mancera et al., 2020); performance between in-lab and remote participants was also very similar. Figure 5 illustrates the test-retest reliability between the two settings. Of note, however, the remote group performed worse than the in-lab group by about half a standard deviation on average. While we at first thought that this was likely due to more ambient distractions and uncalibrated devices in the remote group, in a subsequently conducted study we had participants use calibrated devices at home and their own devices in the lab. In this study, we also included surveys about ambient auditory and visual distractors and participant focus. We found no relationship between distractions and performance (note that there were relatively few distractions, as we asked participants to conduct the study in a quiet setting) and no differences between the calibrated and participant-owned devices; however, the decrement in performance compared to the pre-pandemic dataset remained. As such, we suspect that the drop in performance may be a cohort effect, perhaps related to stress from the COVID-19 pandemic.

Fig. 5 Test-retest reliability for composite scores of auditory assessments. The x-axis represents performance in session 1, and the y-axis represents performance in session 2. Light grey markers indicate data from the in-lab study, open circles the remote study. R values are presented for both studies (gray for the in-lab dataset) and are highly comparable. Note that poorer values are shown to the top and right to reflect that they correspond to higher thresholds.

Overall, our data suggest that in the case of the cognitive assessments, particularly executive function, performance is highly comparable between in-lab and remote participants. For auditory tasks, although performance was poorer in the remote sample, it is important to note that the inter-session reliability was comparable across settings, and that a similar decrement in performance was found in a later-acquired in-lab dataset also collected during the pandemic.
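For readers wishing to compute the same summary measures on their own data, the sketch below illustrates the cancellation concentration score (sum of hits minus false alarms) and a session 1 vs. session 2 Pearson correlation of the kind plotted in Figure 5. All values are placeholders, not the study data.

```python
import numpy as np

def concentration_score(hits, false_alarms):
    """Cancellation concentration performance score: sum of hits minus false alarms."""
    return int(np.sum(hits) - np.sum(false_alarms))

# Illustrative placeholder data (not the study data).
print("concentration score:", concentration_score(hits=[8, 7, 9, 6],
                                                   false_alarms=[1, 0, 2, 1]))

# Test-retest reliability: Pearson correlation between session 1 and session 2
# composite scores (one pair of values per participant).
session1 = np.array([1.2, 0.9, 1.5, 1.1, 0.8, 1.3])
session2 = np.array([1.1, 1.0, 1.4, 1.2, 0.9, 1.2])
r = np.corrcoef(session1, session2)[0, 1]
print(f"test-retest r = {r:.2f}")
```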
One of the most common issues that makes remote experiments challenging is internet connectivity, on either the participants' or the experimenters' part. In our labs, experimenters are instructed to immediately let other experimenters know if they are having trouble getting onto the Zoom meeting so that another experimenter can take over the session. If this happens to the participant, it is usually resolved by the participant re-entering the Zoom meeting and quickly being placed back into their breakout room. In situations where the participant cannot re-enter, they are rescheduled via email.

Other issues we have commonly encountered are with the Zoom platform itself. One of these concerns links sent through Zoom's chat function. Links for Qualtrics surveys or other assessments are primarily sent through the chat but, on occasion, some participants cannot click or copy the links. The inability to click links can have a number of causes, including the privacy settings on individual computers or an outdated Zoom client. In these cases, an email containing the same link is sent to the participant. This approach is also used when the participant joins the meeting from a device other than a computer but the assessments must be completed on a computer, or when the surveys are much easier to complete on a computer. In these situations, however, the participant often decides to exit the meeting and rejoin from their computer.

Remotely conducted studies have also raised issues in participant management. Being in a breakout room with a participant requires the presence of an experimenter for the whole duration of the session, which limits the number of participants that can be scheduled at the same time. When conducting studies in person, it is easier to schedule more participants than there are experimenters, provided the experimenters are not required to sit with the participants for the entire session, which is the case for most training studies. To maximize the number of participants throughout the course of the study, we have tested running multiple participants with only one experimenter while maintaining participant anonymity. The experimenter admits participants from the waiting room one at a time, and each participant is transferred to their own breakout room. The experimenter then joins the breakout room and explains the procedure to the participant. Once the participant understands the tasks and begins the study procedures, the experimenter can leave the breakout room to join a second breakout room with another participant to get them started on their tasks. The experimenter can then stay in one of the breakout rooms or in the main room. If a participant needs assistance or has completed the session, they can use the "Ask for Help" function to notify the experimenter, who can then join the breakout room to assist them. This procedure has its limitations, as it requires the sessions to be mostly independent, with little guidance from the experimenter.

Limitations remain when conducting research studies completely online. Explaining cognitive tasks remotely on Zoom can be challenging, as it is more difficult to clarify task instructions when participants do not understand the task from the original explanation.
In the lab, experimenters can provide a quiet space for participants to complete the study, but online Zoom sessions might not offer the same distraction-free environment. While most participants have access to a room where they can complete the study without interruptions or distractions, some participants are surrounded by people and experience distractions during the study on Zoom.

Nonetheless, Zoom has made several aspects of data collection easier. Overall, online research can very closely mimic in-lab research with the functions available on Zoom and other telecommunication software. For instance, participants can complete cognitive tasks while sharing their screen, allowing experimenters to closely monitor the participant's progress. In other cases, the screen-share function might not be necessary, and participants can complete tasks on a personal device and show a confirmation screen to the experimenter. The remote desktop control feature, which allows another person in a Zoom meeting to control a shared screen, can be used to run cognitive tasks without requiring the participant to download software onto their personal device. Online research sessions might also be more accessible to participants who would not typically spend much time on campus, specifically students commuting to and from campus. More broadly, Zoom could potentially connect more people across the country to research being conducted at various universities, leading to greater access to populations that are difficult to recruit for research studies (e.g., older adults and children).

We believe that this type of remote data collection could extend to other experimental studies that take less than 2 h per session and where the task can either be performed online or the software to administer the task can be downloaded onto the participant's mobile device or computer. The latter can be aided by software that allows for remote administration of experiments, such as Inquisit (2004), PsychoPy (Peirce, 2002), E-Prime (Whitfield, 2020), or SuperLab (SuperLab 6, 2022), to name a few, some of which are available for free (e.g., PsychoPy).
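As one hedged example of this kind of downloadable task software, the snippet below sketches a single reaction-time trial in PsychoPy that a participant could run locally on their own computer. It is a minimal illustration, not one of the tasks used in these studies, and it assumes a working local PsychoPy installation.

```python
from psychopy import visual, core, event

# Minimal single-trial reaction-time demo that a participant could run locally.
win = visual.Window(size=(800, 600), color="black", units="pix")
stim = visual.TextStim(win, text="Press SPACE as fast as you can", color="white")

clock = core.Clock()
stim.draw()
win.flip()
clock.reset()                                   # start timing at stimulus onset
keys = event.waitKeys(keyList=["space"], timeStamped=clock)
key, rt = keys[0]
print(f"response: {key}, RT = {rt:.3f} s")      # a real task would write this to a data file

win.close()
core.quit()
```

In practice, a full task script would loop over trials, counterbalance conditions, and save timestamped responses to a data file that the participant returns to the experimenter or that is uploaded automatically.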
We demonstrated that even working memory training studies can be conducted remotely and that performance is similar to that obtained in the lab; hence, these findings could be extended to other remote computerized intervention studies, such as those targeting attention, inhibitory control, or, more generally, learning or mental health and well-being (PsyberGuide, 2021). Experiments that require group performance and interaction can also be conducted via videoconferencing; however, it is important to consider how introducing technology can confound the results (Credé & Sniezek, 2003).

We note further that, with advancing technologies, many techniques that were originally limited to laboratory settings can now be conducted in remote settings. Even approaches involving eye-tracking, electrophysiological recording (via skin or scalp), optical recording, motion-tracking, and electrical stimulation can increasingly be delivered remotely using either consumer-grade or research-grade systems that can be loaned to participants (e.g., Gough et al., 2020). Likewise, precisely controlled visual and auditory stimuli can be delivered given the focus of phone and tablet manufacturers on providing high-fidelity video and audio experiences to consumers; further, there are increasingly available options to remotely deliver haptic and olfactory stimulation as well.

While there are certainly challenges to obtaining quality as precise as that found in highly controlled laboratory or clinical settings, oftentimes the benefit of increased access and inclusion of historically underserved and sometimes difficult-to-reach populations can outweigh the costs of somewhat less precise systems. Furthermore, as technologies progress, solutions to potential data quality problems are rapidly emerging.

Due to COVID-19, research labs around the world have adapted to conducting research completely online. In this paper, we outlined procedures that enabled us to effectively conduct large-scale training interventions as well as other studies remotely. We found that messaging links through the chat, managing breakout rooms, and assigning co-hosts made these studies feasible. However, while the use of tools like Zoom provided participants with more access to research opportunities because of the flexibility of where their session takes place, technology in general can be unreliable. Remotely administered studies rely heavily on the participant's device, internet connection, and other programs. Furthermore, while we find that remotely administered studies can be made relatively comparable to in-lab studies, there are still notable differences in the number of participants that we needed to contact to meet enrollment goals, in compliance with the requested schedules, and, in the case of auditory measures, possible systematic offsets in test values. This means that while a participant can complete behavioral testing from virtually anywhere, there are also issues unique to remote data collection. Overall, while we find that remote administration is an effective route for research, there are still some challenges to be overcome. This paper aims to address some of these challenges by providing practical tips on how to successfully conduct such studies.

References

The problem with online data collection: Predicting invalid responding in undergraduate samples
Internet administration of paper-and-pencil questionnaires used in couple research: Assessing psychometric equivalence
Nonequivalence of on-line and paper-and-pencil psychological tests: The case of the prospective memory questionnaire
Telemedicine in the time of coronavirus
Group judgment processes and outcomes in video-conferencing versus face-to-face groups
Comparing online and lab methods in a problem-solving experiment
Beginning of the pandemic: COVID-19-elicited anxiety as a predictor of working memory performance
Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments
Should we trust Web-based studies? A comparative analysis of six preconceptions about Internet questionnaires
Feasibility of remotely supervised transcranial direct current stimulation and cognitive remediation: A systematic review
Online data collection: Strategies for research
Portable Automated Rapid Testing (PART) for auditory assessment: Validation in a young adult normal-hearing population
Portable Automated Rapid Testing (PART) of auditory processing abilities in young normally-hearing listeners: A remotely administered replication with participant-owned devices
Systematic review of the Hawthorne effect: New concepts are needed to study research participation effects
Video conferencing software
UCancellation: A new mobile measure of selective attention and concentration
Running and sharing studies online - PsychoPy v2022.1.0. PsychoPy
Using Skype to facilitate team-based qualitative research, including the process of data analysis
Racing dragons and remembering aliens: Benefits of playing number and working memory games on kindergartners' numerical knowledge
Comparing response distributions of offline and online
Online webcam-based eye tracking in cognitive science: A first look
Voice-only Skype for use in researching sensitive topics: A research note. Qualitative Research in Psychology
Overcoming methodological challenges due to COVID-19 pandemic in a non-pharmacological caregiver-child randomly controlled trial
Comparing telephone and face-to-face qualitative interviewing: A research note
Cedrus
Examination of the equivalence of self-report survey-based paper-and-pencil and Internet data collection methods
Using E-prime for remote data collection | Psychology Software Tools
American Psychological Association
Big Five traits as predictors of perceived stressfulness of the COVID-19 pandemic
Security guide. Zoom Video Communications Inc.