key: cord-0783615-965ul6uh
title: Pilot Evaluations of Two Bluetooth Contact Tracing Approaches on a University Campus: Mixed Methods Study
authors: Shelby, Tyler; Caruthers, Tyler; Kanner, Oren Y; Schneider, Rebecca; Lipnickas, Dana; Grau, Lauretta E; Manohar, Rajit; Niccolai, Linda
date: 2021-10-28
journal: JMIR Form Res
DOI: 10.2196/31086
sha: 47c1b935e4979cabc599745b69b85b3846d9d5d4
doc_id: 783615
cord_uid: 965ul6uh

BACKGROUND: Many have proposed the use of Bluetooth technology to help scale up contact tracing for COVID-19. However, much remains unknown about the accuracy of this technology in real-world settings, the attitudes of potential users, and the differences between delivery formats (mobile app vs carriable or wearable devices).

OBJECTIVE: We pilot tested 2 separate Bluetooth contact tracing technologies on a university campus to evaluate their sensitivity and specificity, and to learn from the experiences of the participants.

METHODS: We used a convergent mixed methods study design, and participants included graduate students and researchers working on a university campus during June and July 2020. We conducted separate 2-week pilot studies for each Bluetooth technology. The first was for a mobile phone app ("app pilot"), and the second was for a small electronic "tag" ("tag pilot"). Participants validated a list of Bluetooth-identified contacts daily and reported additional close contacts not identified by Bluetooth. We used these data to estimate sensitivity and specificity. Participants completed a postparticipation survey regarding appropriateness, usability, acceptability, and adherence, and provided additional feedback via free text. We used tests of proportions to evaluate differences in survey responses between participants from each pilot, paired t tests to measure differences between comparable survey questions, and qualitative analysis to evaluate the survey's free-text responses.
RESULTS: Among 25 participants in the app pilot, 53 contact interactions were identified by Bluetooth and an additional 61 by self-report. Among 17 participants in the tag pilot, 171 contact interactions were identified by Bluetooth and an additional 4 by self-report. The tag had significantly higher sensitivity compared with the app (46/49, 94% vs 35/61, 57%; P<.001), as well as higher specificity (120/126, 95% vs 123/141, 87%; P=.02). Most participants felt that Bluetooth contact tracing was appropriate on campus (26/32, 81%), while significantly fewer felt that using other technologies, such as GPS or Wi-Fi, was appropriate (17/31, 55%; P=.02). Most participants preferred technology developed and managed by the university rather than a third party (27/32, 84%) and preferred not to have tracing apps on their personal phones (21/32, 66%) due to "concerns with privacy." There were no significant differences in self-reported adherence rates across pilots.

CONCLUSIONS: Convenient and carriable Bluetooth technology may improve tracing efficiency while alleviating privacy concerns by shifting data collection away from personal devices. With accuracy comparable, and in this case superior, to that of mobile phone apps, such approaches may be suitable for workplace or school settings with the ability to purchase and maintain physical devices.

Background

Following its identification in Wuhan, China, in December 2019, SARS-CoV-2 rapidly spread across the globe, resulting in millions of infections and deaths due to COVID-19 [1]. As health organizations throughout the world worked to develop adequate pharmaceutical therapies and vaccines, many public health agencies relied on nonpharmaceutical interventions to reduce community transmission of SARS-CoV-2. In particular, the world relied on mass screening [2], lockdowns [2], physical distancing [3], mask wearing [4], and contact tracing [5].
While large-scale lockdowns and comprehensive masking are relatively novel public health interventions, contact tracing is a traditional intervention that has proven effective in many other contexts [6-8]. However, the implementation of contact tracing for SARS-CoV-2 has faced many challenges due to high incidence rates, including among asymptomatic individuals [9], presymptomatic transmission [10], and, in many places, a lack of staffing and infrastructure [11]. These challenges made it difficult in many settings to achieve the yield (proportion of cases and contacts interviewed, isolated, and/or quarantined) and timeliness (time from symptom onset or testing to isolation for cases, and time from exposure to quarantine for contacts) thought to be required for effectiveness [12,13]. These challenges shifted the focus of many health agencies to mitigation (rather than containment) and led many to propose contact tracing innovations designed to make tracing more feasible [14]. While traditional contact tracing relies on interviewing cases and contacts in person or by telephone, several countries augmented data collection using individual-level GPS data [15], Bluetooth technology [16], and other personalized data sources [17]. One technology in particular, Bluetooth, gained widespread attention in both the press [18] and the scientific literature [19]. Despite the theoretical benefits of Bluetooth-assisted contact tracing and its implementation in various countries [16], the public health and lay communities are far from reaching consensus regarding the appropriateness [20] and effectiveness [21,22] of this innovation, largely for 2 reasons. First, many have raised concerns about the loss of individual privacy associated with automated data collection methods such as Bluetooth-assisted tracing [23,24].
In many countries, mandating participation in Bluetooth-assisted contact tracing is not feasible, and the effectiveness of this approach relies on high user uptake among the population [22]. Implementation of Bluetooth-assisted tracing apps in nonmandated settings has so far been met with low uptake [25,26], and therefore, a better understanding of potential users' perceptions and privacy concerns is needed. Second, while research in other contexts has found various technologies, including radio frequency detectors, Wi-Fi, and Bluetooth, to be helpful in the detection of contact interactions [27-29], there are few studies evaluating the overall impact and effectiveness of Bluetooth-assisted tracing in the context of COVID-19 [30,31]. Although it seems intuitive that Bluetooth-assisted data collection may increase the total number of identified COVID-19 "close contacts" (defined by the Centers for Disease Control and Prevention [CDC] as in-person interactions within 6 feet for at least 15 minutes) and speed the identification of these individuals, there are few real-world data to directly verify this or to evaluate the accuracy of Bluetooth data [21,30]. Together, doubts about the appropriateness and acceptability of Bluetooth-assisted contact tracing and about the accuracy and reliability of the data pose challenges to implementation and adoption. Because vaccine uptake remains low [32,33] and variant strains cause breakthrough transmission [34], overcoming these challenges is critical: contact tracing will remain a core part of the public health response to COVID-19, even in the postvaccine phase of the pandemic. To address these knowledge gaps, we pilot tested 2 different Bluetooth-assisted tracing technologies on a university campus, one that collected Bluetooth data using a mobile phone app and another that used a separate carriable device ("tag") with Bluetooth functionality.
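The CDC close-contact threshold can be made concrete with a short sketch. This is illustrative only: the record fields and function names below are hypothetical, not taken from either pilot's actual data schema.

```python
from dataclasses import dataclass

# Hypothetical record of a Bluetooth-detected interaction; the fields are
# illustrative and not drawn from either pilot's actual data schema.
@dataclass
class Interaction:
    peer_id: str
    est_distance_ft: float  # distance estimated from signal strength
    duration_min: float     # cumulative interaction time

def is_close_contact(ix: Interaction,
                     max_distance_ft: float = 6.0,
                     min_duration_min: float = 15.0) -> bool:
    """Apply the CDC close-contact threshold: within 6 feet for >= 15 minutes."""
    return ix.est_distance_ft <= max_distance_ft and ix.duration_min >= min_duration_min

interactions = [
    Interaction("A", 4.5, 22.0),  # near and long enough: close contact
    Interaction("B", 9.0, 40.0),  # too far away
    Interaction("C", 3.0, 8.0),   # too brief
]
close = [ix.peer_id for ix in interactions if is_close_contact(ix)]
print(close)  # ['A']
```

Because the thresholds are parameters, the same sketch also illustrates how detection criteria could be adjusted as knowledge of transmission dynamics evolves.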
Using a convergent mixed methods design, we measured the sensitivity and specificity of each Bluetooth technology and assessed participant perceptions regarding appropriateness, usability, acceptability, and adherence, using a quantitative survey and qualitative free-text analysis.

Methods

We conducted 2 separate pilot studies in June to July 2020 at a medium-sized private university in the US Northeast. During this time, only essential personnel and select individuals were allowed on campus with prior approval. Campus-wide precautions included mask wearing, physical distancing, daily symptom assessments, and testing. Study participants included graduate students and researchers working during this period; graduate students or researchers working from home were ineligible for participation. We recruited participants by emailing faculty members and lab supervisors, who subsequently forwarded our recruitment emails to their students and research staff. We then selected labs with the highest acceptance rates. We also prioritized enrollment from labs that shared workspaces with other recruited labs. Due to the focused nature of the pilots, we did not collect demographic data from participants. Each of the sequential pilots lasted 2 weeks (14 days) starting on a Monday, and different labs participated in the separate pilots. Sample size was determined by the availability of required study devices. The collected data were stored on secure university servers throughout the study and analysis period.

In the first pilot (hereafter referred to as the "app pilot"), we evaluated a mobile phone app developed by the university's information technology services staff (Multimedia Appendix 1). It functioned by detecting Bluetooth signals emitted by other phones that had the same app downloaded and activated. The app estimated the distance between mobile phones based on signal strength while recording the duration of the interaction.
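The paper does not specify how the app converts signal strength to distance; a common approach for Bluetooth received signal strength (RSSI) is the log-distance path-loss model sketched below, where the 1-meter reference power and path-loss exponent are assumed calibration constants, not parameters of the pilot app.

```python
def rssi_to_distance(rssi_dbm: float,
                     tx_power_dbm: float = -59.0,
                     path_loss_exponent: float = 2.0) -> float:
    """Estimate distance (meters) from RSSI via a log-distance path-loss model.

    tx_power_dbm is the expected RSSI at 1 m, and path_loss_exponent is ~2 in
    free space (higher indoors); both are assumed calibration values here.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(rssi_to_distance(-59.0))  # 1.0 (at the 1-m calibration power)
print(rssi_to_distance(-79.0))  # 10.0 (20 dB weaker, exponent 2)
```

Indoor multipath makes such estimates noisy, which is one reason real deployments tend to calibrate detection thresholds empirically rather than trusting a single propagation model.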
The app also had functionality for users to enter a date of symptom onset or positive test; however, this function was not used during the pilot. Data were automatically sent to a centralized server. The university provided Android phones to participants for the duration of the study, so that they did not have to download the app on their personal devices. All app pilot participants were provided with written instructions describing how to install and use the mobile app and how to validate and report new contact interactions, as well as contact information for technical support if needed. Participants were asked to carry the study phone while on campus. At the end of each day, participants reviewed an online spreadsheet of their Bluetooth-identified close contacts and confirmed or denied each interaction. We also asked participants to identify additional contacts that were not detected by Bluetooth, and we subsequently removed any self-reported contacts who were not study participants. Participants were asked to use their best judgment when estimating the length of each interaction.

In the second pilot (hereafter referred to as the "tag pilot"), we evaluated a carriable device ("tag") equipped with Bluetooth functionality, designed by the author RM (Multimedia Appendix 2 and Multimedia Appendix 3). The tags recorded Bluetooth signals emitted from other tags, using signal strength to determine distance while recording the duration of interactions. Data were stored locally on the tags and routinely synced to a central server by study participants using a mobile app that paired with the participant's tag. The app only used Bluetooth to communicate with the tag while syncing and otherwise did not collect any additional data or use Bluetooth to communicate with any nonpaired tags or other devices. The tag software additionally allows for contact interactions to be encrypted when recorded and stored in the central server, thereby anonymizing the data.
When this feature is active, decrypting the data requires the user to provide permission by submitting a decryption token through the app. However, this feature was not enabled during the study, so that we could determine all contact records for the purpose of evaluating the system's efficacy. Additional details regarding the tag's development can be found elsewhere [35] . The university provided participants with Android phones for the duration of the pilot to facilitate syncing of tag data. Participants were asked to use their best judgment when estimating the length of each interaction. All tag pilot participants were provided with written instructions describing how to install and use the mobile syncing app, how to pair it with their Bluetooth tag, and how to validate and report new contact interactions, as well as contact information for technical support if needed. Participants were asked to carry the tag while on campus and to sync their Bluetooth data after each shift. At the end of each day, participants reviewed a list of their Bluetooth-identified close contacts and confirmed or denied each interaction using an online web interface. We also presented participants with the estimated duration of each recorded interaction and asked participants to report if the duration was underestimated or overestimated. Similar to the app pilot, we asked participants to identify additional contacts not detected by Bluetooth and subsequently removed those who were not study participants. Following each pilot, we sent a survey to participants focusing on their experiences using the pilot technology, as well as their perceptions regarding the appropriateness of technology-assisted tracing on campus (see Table 1 for survey domains). We adapted this survey from a previously validated mHealth usability questionnaire [36] . Most questions used a 7-point Likert scale ranging from strong agreement to strong disagreement, including a neutral response option. 
The survey also contained a free-text question asking participants to provide any additional comments about their experience or suggestions about the technology. We used Cronbach alpha to measure the reliability of our adapted scale after aligning the directionality of question responses. We excluded from the reliability measurement the free-text response and 2 other scale items that asked participants to select the various ways in which they carried the devices or the reasons why they were not carried.

Table 1. Survey domains and their purposes.
Appropriateness: to measure participant perceptions about the appropriateness of Bluetooth contact tracing and the use of certain types of data (Bluetooth, GPS, Wi-Fi, etc)
Ease of use: to measure the ease with which participants install, learn to use, and use the apps
Interface and satisfaction: to measure participant experiences and satisfaction with the design and interface of the app
Usefulness: to evaluate participant beliefs surrounding the usefulness of the tracing technology
Coherence: to evaluate participants' understanding of how data are collected and protected by the technology
Social influence: to measure the presence of social influence from peers or supervisors regarding uptake of technology-assisted tracing
Setting: to measure perceptions about available assistance for the use of the apps and/or devices and individual agency in uptake
Adherence: to measure adherence and participant preferences with regard to carrying the study devices

We used participants' daily contact validation responses to estimate the sensitivity and specificity of the 2 technologies (see Table 2 for outcome and measure definitions) and used 2-tailed tests of proportions to compare these values between pilots. We also described the postparticipation survey by presenting the proportions of participants agreeing with each Likert question or selecting responses from other categorical questions, as well as means for responses to continuous questions.
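The scale-reliability step described above can be sketched in a few lines. The function below implements the standard Cronbach alpha formula; the score matrix is toy data for illustration only, not study data.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (each a list of k item scores).

    Assumes response directionality has already been aligned so that higher
    scores consistently indicate agreement, as described in the text.
    """
    k = len(items[0])
    item_vars = [variance(col) for col in zip(*items)]  # sample variance per item
    total_var = variance([sum(row) for row in items])   # variance of summed scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy 7-point Likert responses (4 respondents x 3 items); illustrative only.
scores = [
    [7, 6, 7],
    [5, 5, 6],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(scores), 2))  # 0.97
```

Values near 1 indicate high internal consistency, as with the α=.90 reported for the adapted scale in the Results.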
We measured differences in survey responses between participants from different pilot groups using 2-tailed tests of proportions for Likert agreement and categorical questions, and unpaired 2-tailed t tests for continuous questions. Additionally, we used paired tests of proportions to measure differences between agreement with several comparable survey questions, including (1) appropriateness of Bluetooth vs location data (GPS and/or Wi-Fi) for contact tracing, (2) peer vs supervisor vocal support of study technology, and (3) peer vs supervisor vocal concern about the study technology.

Table 2. Outcome measure definitions.
Sensitivity: true positives / (true positives + false negatives)
Specificity: true negatives / (true negatives + false positives)
Note: 15 minutes of interaction within 6 feet was required to meet the definition of "close contact."

In addition to confirming or denying each close contact interaction, participants from the tag pilot were asked to comment on underestimation or overestimation of the recorded contact duration. We allowed a 5-minute window of error, within which a contact's classification could be altered. For example, a contact detected for 15-19 minutes would be designated a false positive if the study participant noted that the interaction length was overestimated, while a contact detected for 10-14 minutes would be designated a false negative if the participant noted that the interaction length was underestimated.

The coding team (TS and LG) used a codebook deductively based on the survey topics. TS coded the free-text responses, and the coding team met regularly to review the coded text and reach agreement on all coding decisions. The coding team also refined code definitions and generated new codes when applicable throughout the coding process. "RADaR," a rapid qualitative analysis approach [37], was used, in which coding and analysis were done in Microsoft Excel (Microsoft Corp) rather than in traditional qualitative analysis software.
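The accuracy definitions in Table 2 and the 5-minute reclassification window can be expressed as a short sketch. The function and label names are ours, not the study's; the counts in the example are the tag-pilot values reported in the Results.

```python
def sens_spec(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def reclassify(duration_min, flag):
    """Apply the 5-minute error window used for tag-pilot duration feedback.

    A detection of 15-19 min flagged as overestimated becomes a false positive;
    a detection of 10-14 min flagged as underestimated becomes a false negative.
    Otherwise the detection keeps its nominal status relative to the 15-minute
    close-contact threshold.
    """
    if 15 <= duration_min < 20 and flag == "overestimated":
        return "false positive"
    if 10 <= duration_min < 15 and flag == "underestimated":
        return "false negative"
    return "positive" if duration_min >= 15 else "negative"

# Tag-pilot counts reported in the Results: sensitivity 46/49, specificity 120/126
sens, spec = sens_spec(tp=46, fp=6, tn=120, fn=3)
print(f"{sens:.2f} {spec:.2f}")  # 0.94 0.95
```

The same two functions reproduce the app-pilot estimates (35/61 sensitivity, 123/141 specificity) when given the corresponding counts.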
We synthesized the qualitative and quantitative aspects as part of the mixed methods analysis [38,39] by identifying quotes that provided greater context or deeper understanding for the findings from the quantitative survey analyses. Selected quotes are presented alongside the quantitative findings within the relevant survey domains.

This study was approved by the Yale Human Subjects Committee, and written consent was obtained from participants prior to enrollment. We did not offer incentives for participation.

Results

We invited 33 participants from 7 labs for the app pilot, of whom 30 agreed to participate and 25 completed the 2-week period of follow-up. Overall, 53 contact interactions were identified via Bluetooth, and an additional 61 were reported by participant recall. We invited 24 participants from 2 labs for the tag pilot, of whom 17 agreed to participate, and all completed the 2-week period of follow-up. A defect identified in the tag cases at the end of the first week of data collection rendered that week's data unusable. The cases were then replaced, and only the data from the second study week were further analyzed. In the second week of data collection, 171 contact interactions were identified by Bluetooth and an additional 4 were reported by participant recall.

We present estimates of sensitivity and specificity, and counts of true/false positives and negatives, in Table 3, stratified by pilot. The tag pilot had significantly higher sensitivity compared to the app pilot (46/49, 94% vs 35/61, 57%; P<.001), as well as higher specificity (120/126, 95% vs 123/141, 87%; P=.02). Of note, 3 participants in the tag pilot reported leaving their tags on their desks during days on which they were not on campus, resulting in false recordings of contact interactions. When these interactions were removed from the data set, sensitivity and specificity became 93% (43/46) and 100% (111/111), respectively.
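A 2-tailed test of proportions like the one used for these between-pilot comparisons can be reproduced with a pooled two-sample z-test. The implementation below uses only the standard library (a library routine such as statsmodels' proportions_ztest would serve equally well); the counts are those from the sensitivity and specificity comparisons above.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test for equality of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Sensitivity: tag 46/49 vs app 35/61 -> P < .001
z_sens, p_sens = two_proportion_ztest(46, 49, 35, 61)
# Specificity: tag 120/126 vs app 123/141 -> P = .02
z_spec, p_spec = two_proportion_ztest(120, 126, 123, 141)
print(p_sens < 0.001, round(p_spec, 2))  # True 0.02
```

These match the P values reported above, illustrating that the comparisons follow directly from the counts in Table 3.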
Twenty participants from the app pilot and 12 participants from the tag pilot completed the postparticipation survey (Cronbach α=.90). Below, we present the quantitative results from each section alongside qualitative findings when applicable.

Overall, there were no differences in perceived appropriateness of technology-assisted tracing between pilot groups (Table 4). Most participants felt that contact tracing via Bluetooth was appropriate but felt that the use of additional location data such as GPS or Wi-Fi was less appropriate (26/32, 81% approval for Bluetooth vs 17/31, 55% approval for GPS/Wi-Fi; P=.02). Most participants also preferred technology developed and managed by the university rather than a third party (27/32, 84%) and preferred not to download apps on their personal devices (21/32, 66%). Regardless of the approach, most participants (24/32, 75%), though not all, reported concerns about how their privacy would be protected, and these concerns were expanded upon in the free-text data.

There were no observed differences between pilot groups regarding app usability (Table 5), and most participants from both pilots felt their respective apps were easy to install (25/31, 81%) and use (31/32, 97%). They also reported moderate levels of satisfaction with the app interfaces (21/32, 66%) and feedback from the apps (18/31, 58%). The amount of time required to use the apps was acceptable to most (29/32, 91%), and overall satisfaction was high (26/32, 81%). However, several participants from both pilots described difficulties downloading and installing the apps, syncing tags to mobile devices for uploading data, discerning how the app was responding to the user due to unclear feedback from the app, or experiencing other technological glitches.
When I first obtained the phone, there was no contact tracing app on it, and I could not find a way to download it…When I tried syncing the tag to the phone, there was never a message telling me that the tag was synced, only "connecting" and "communicating." [Tag pilot, Participant #19]

Most participants felt that their respective app or tag would be useful for contact tracing (25/31, 81%), though a lack of consistency between recalled interactions and Bluetooth data diminished some participants' confidence in the technology (Table 6).

With regard to social influence and study setting, there were no significant differences between pilot environments. Across both pilots, participants more frequently reported vocal support for the technology from supervisors than from peers (21/26, 81% from supervisors vs 10/27, 37% from peers; P=.001). The opposite was true regarding vocal concern, with participants more frequently reporting vocal concern from peers compared to supervisors (13/29, 45% from peers vs 2/25, 8% from supervisors; P=.003). Within the study environment, most participants felt that adequate technical assistance was available when needed (20/28, 71%), and also felt that, should the university adopt such technology, they would maintain individual agency over whether or not they used the devices (26/31, 84%).

There was no difference between pilots in overall adherence rates based on self-reported percentages of shifts during which the study device was carried (mean 87%) (Table 7), although participants in the tag pilot more commonly reported that their study device was convenient to carry than did participants from the app pilot (tag pilot: 11/12, 92% vs app pilot: 11/20, 55%; P=.03). While some participants from the app pilot reported leaving the device at home (2/13, 15%), participants from both pilots reported that the most common reason for not carrying the devices was forgetting them at a workstation (17/23, 74%).
App pilot participants also reported being unable to carry the study device into certain lab environments (app pilot: 5/13, 38% vs tag pilot: 0/10, 0%; P=.03), while tag pilot participants reported that charging the device interfered with adherence (tag pilot: 3/10, 30% vs app pilot: 0/13, 0%; P=.03). Many participants from the app pilot used the free-text response to note the inconvenience of carrying an additional phone and suggested that a smaller device be used. A minority suggested that they be allowed to download the tracing app directly onto their personal phones. Gender-specific difficulties in carrying the app pilot study phone were also noted by 1 participant, while a separate participant from the tag pilot noted the relative ease of carrying the tag. The vast majority of participants from the app pilot reported that they would be more likely to carry a Bluetooth device if it were smaller than a phone (19/20, 95%), while no participants from the tag pilot (0/12, 0%) agreed that increasing the size of the tag would increase adherence (P<.001), indicating an overall preference for smaller devices.

Discussion

Incomplete vaccine uptake [32,33] and the potential for breakthrough transmission due to new variants [34] suggest that contact tracing will remain an important tool in the ongoing response to COVID-19. However, its use thus far in the pandemic has revealed many challenges to scaling up traditional contact tracing [40-43] and identified a need to improve upon existing methods. Digital contact tracing tools offer many opportunities to improve the impact of contact tracing [44], and increasing our understanding of how different technologies may be applied for this purpose is critical.
In our dual-pilot evaluation of 2 novel contact tracing technologies, we found that Bluetooth contact tracing was perceived as appropriate by the majority of study participants, adherence to device carrying was high, and participants were largely satisfied with their experiences. However, most participants still reported concerns about privacy, and both technologies encountered occasional technical glitches. Importantly, we also found that the tag-based device was easier to carry and had superior sensitivity and specificity. This performance advantage may have been due to differences between the Bluetooth signal strength settings of the technologies or in how participants carried the different study devices, as reflected in the postparticipation survey. Our findings are similar to those of a recent study [45] that compared a Bluetooth mobile app to a wearable, radio frequency-based, real-time locator device within a health care setting. The researchers found the wearable device to be superior to Singapore's "TraceTogether" app with regard to sensitivity and specificity, and also found that the app's performance was worse on iPhones than on Android devices. In a similar study, a wearable device was compared to electronic medical record-assisted tracing and was again found to be superior [46]. Our study builds upon these findings by evaluating similar app-based technology in a new university setting, while also comparing it directly to a novel Bluetooth tag device, rather than a radio frequency-based device. Although most proximity-based contact tracing technologies offer similar benefits, such as the ability to identify unknown contacts or customize detection thresholds based on evolving knowledge of transmission dynamics [47], different approaches (eg, app vs carriable device) offer certain additional benefits and drawbacks. Below, we discuss key differences while paying heed to the importance of context.
While traditional contact tracing focuses on community and population transmission, COVID-19 has led many closed-door environments, such as workplaces, schools, universities, and hospitals, to conduct contact tracing independently from, or in partnership with, local public health systems [48, 49] . The differences between community tracing and closed-door tracing are important when comparing app-based and tag-based systems, as different contexts are often coupled with different funding capacities, thresholds for acceptable uptake of tracing technology, and user privacy concerns. Deploying Bluetooth tracing technology to communities or populations at large is likely only feasible using an app-based system. App-based tracing technologies, such as those developed by Apple and Google, have already been deployed throughout the globe [16] , including in many US states [26] , with relatively little cost to distribution beyond social marketing. Meanwhile, it would not be logistically or financially feasible to deploy a similar number of tag devices throughout the population, as each tag costs approximately US $10. Furthermore, while updating apps is relatively seamless, updating hardware poses a greater challenge, as we encountered in this study when we discovered a defect in our tag cases. Despite these potential drawbacks, tags and similar approaches may be more feasible in closed-door environments that have available funds to spend on the protection of a much smaller population. Acceptable thresholds for uptake may also differ between environments, making the logistical concerns noted above more or less important across different settings. Public health officials in many countries are often hesitant or unable to mandate participation in health interventions, as demonstrated with mask policies in response to COVID-19 [50] . Public health programs also frequently lack funding to properly incentivize participation. 
As a result, population-wide uptake of app-based technology for tracing will likely always be limited. Closed-door environments, on the other hand, may face greater pressure to standardize and ensure the safety of all staff, students, or workers, and therefore may prioritize, or mandate, comprehensive uptake, as demonstrated by many universities requiring vaccination for all students [51] . However, reaching such a high uptake of digital contact tracing without diminishing individual agency or ignoring privacy concerns poses a challenge. Privacy concerns are often related to the types of information collected as well as the organization or government collecting the data [23, 24] , and may be heightened in the context of a pandemic [52] . Notably, our study participants felt that using Bluetooth data for tracing was more appropriate than GPS or Wi-Fi data. While technologies, such as blockchain, may increase the security of app-based approaches [53] and further reduce the risks of data leakage, effectively communicating such methods and establishing trust with potential users may remain difficult as long as data collection relies on personal devices, as reflected by our participants' preferences against using apps on their phones. This provides several arguments for shifting data collection away from personal devices and onto organization-owned tracing tags when possible. First, the tag-based system offers users in closed-door environments the opportunity to participate in contact tracing without requiring data collection on their phones. While our study still relied on an app to sync the tag's data, the provision of "syncing stations" throughout closed-door environments could eliminate the need for an app entirely and further reduce concerns about leakage of personal phone data. 
Second, the use of organization-owned tags addresses concerns about governments or third-party companies accessing personal data [23,52], which was reflected in our participants' preferences against third-party apps. Ultimately, these features offer the potential to reduce privacy concerns and increase uptake within closed-door environments.

There are several key strengths to this study. First, it used and evaluated novel technologies developed directly in response to the COVID-19 pandemic. Second, the setting in which the study was conducted is typical of other environments, in particular schools and universities, that have struggled to perform contact tracing throughout the pandemic, making this study particularly relevant to public health practitioners and researchers operating in similar environments. Lastly, the use of mixed methods, including sensitivity and specificity estimation, survey analyses, and qualitative analysis, allowed us to triangulate our findings and present a layered evaluation of the technologies' performance metrics as well as the users' experiences.

There are also several important limitations to this study. First, the sample size was relatively small, increasing the risk of type II errors. Second, the recruitment of different labs and participants for each pilot created some uncertainty about the mechanisms driving the observed differences in Bluetooth performance metrics and user experiences or perceptions. However, the lack of significant differences in survey responses regarding setting and social influences, and the baseline similarities in the lab environments selected for the study, minimize this risk. Third, the lack of a true "gold standard" measurement for close contact interactions introduces the potential for bias in the estimations of sensitivity and specificity.
In particular, recall bias may have led to misreporting of self-reported contacts, and the lack of precise measurements of the length of self-reported interactions between participants may have introduced additional uncertainty. However, participants' daily review and validation of contact interactions likely minimized the potential for recall bias, which would have been more severe had the data been collected less frequently. Furthermore, these potential biases likely affected each pilot similarly, which lessens the degree to which they may have affected the comparisons between pilots. Fourth, given the participant-initiated method of qualitative data collection (an optional free-text box rather than traditional interview queries), it is doubtful that meaning saturation [54] was achieved; themes would likely have been better explicated, and perhaps more abundant, had a traditional approach to qualitative interviewing been used. Nonetheless, the study provides preliminary evidence about the relative merits of the 2 technologies that can inform larger studies in the future. Fifth, demographic data were not collected from participants at the time of recruitment, limiting our ability to evaluate differences across participant characteristics. Given the small sample size and short timeframe of the pilots, we lacked statistical power to evaluate such differences and therefore did not include this as a study goal. Last, the relative homogeneity of the study sample may limit the generalizability of our findings to nonuniversity contexts, which may feature differences in behavior, familiarity with technology, and/or attitudes [55].

As vaccine uptake remains incomplete and new variants appear, contact tracing will remain a pillar of the public health response to COVID-19.
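The tag-versus-app comparisons of sensitivity and specificity reported in the Results can be reproduced from the published counts. The sketch below assumes a pooled-variance two-proportion z-test, one standard form of the "tests of proportions" named in the Methods; the authors' exact procedure is not specified in this section.

```python
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided test of H0: p1 == p2, using the pooled-variance z statistic."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided P value from the standard normal tail: 2 * (1 - Phi(|z|))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Sensitivity: tag 46/49 (94%) vs app 35/61 (57%); reported P < .001
z_sens, p_sens = two_proportion_z_test(46, 49, 35, 61)

# Specificity: tag 120/126 (95%) vs app 123/141 (87%); reported P = .02
z_spec, p_spec = two_proportion_z_test(120, 126, 123, 141)

print(f"sensitivity: z = {z_sens:.2f}, P = {p_sens:.1e}")
print(f"specificity: z = {z_spec:.2f}, P = {p_spec:.3f}")
```

Running this recovers P values consistent with those reported in the abstract (P < .001 for sensitivity and approximately .02 for specificity), though small-sample corrections such as continuity adjustment or Fisher exact tests would shift the values slightly.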
Increasing the efficiency of contact tracing through adoption of technologies, such as those evaluated here, may improve its impact and ability to prevent or control outbreaks. This is among the first studies to directly evaluate the performance metrics of novel Bluetooth technologies when used for COVID-19 contact tracing in conjunction with evaluations of user experiences. Our participants found Bluetooth-assisted tracing to be appropriate, and we noted several key differences between app-based and tag-based approaches. The benefits of the app-based system include its low cost and theoretical ease of mass distribution, and the drawbacks include increased privacy concerns of users. The benefits of the tag system include its superior sensitivity and specificity, the ease of carrying the tag, and the potential to alleviate user privacy concerns, and the drawbacks include its reliance on hardware that may be less feasible to deploy in certain settings.

References

1. COVID-19: Emergence, Spread, Possible Treatments, and Global Burden. Front Public Health.
2. Mass screening vs lockdown vs combination of both to control COVID-19: A systematic review.
3. COVID-19 Systematic Urgent Review Group Effort (SURGE) study authors. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis.
4. Trends in County-Level COVID-19 Incidence in Counties With and Without a Mask Mandate - Kansas.
5. Cross-country evidence on the association between contact tracing and COVID-19 case fatality rates. Sci Rep.
6. Use of quarantine in the control of SARS in Singapore.
7. Sustained high HIV case-finding through index testing and partner notification services: experiences from three provinces in Zimbabwe.
8. Contact investigation for tuberculosis: a systematic review and meta-analysis.
9. The incidence of the novel coronavirus SARS-CoV-2 among asymptomatic patients: A systematic review.
10. Epidemiology and transmission of COVID-19 in 391 cases and 1286 of their close contacts in Shenzhen, China: a retrospective cohort study.
11. Why Contact Tracing Efforts Have Failed to Curb Coronavirus Disease 2019 (COVID-19) Transmission in Much of the United States.
12. Prioritizing COVID-19 Contact Tracing Mathematical Modeling Methods and Findings. CDC.
13. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing.
14. Information technology solutions, challenges, and suggestions for tackling the COVID-19 pandemic.
15. Contact Transmission of COVID-19 in South Korea: Novel Investigation Techniques for Tracing Contacts.
16. Digital contact tracing for COVID-19.
17. Contact tracing with digital assistance in Taiwan's COVID-19 outbreak response.
18. A covid-fighting tool is buried in your phone. Turn it on.
19. COVID-19 Contact Tracing and Data Protection Can Go Together.
20. Do we need a contact tracing app? Comput Commun.
21. The past, present and future of digital contact tracing.
22. Exploring the effectiveness of a COVID-19 contact tracing app using an agent-based model.
23. Acceptability of App-Based Contact Tracing for COVID-19: Cross-Country Survey Study.
24. The acceptability and uptake of smartphone tracking for COVID-19 in Australia.
25. Contact Tracing Apps Were Big Tech's Best Idea for Fighting COVID-19. Why Haven't They Helped? Time.
26. State Approaches to Contact Tracing during the COVID-19 Pandemic. The National Academy for State Health Policy.
27. How should social mixing be measured: comparing web-based survey and sensor-based methods.
28. Inferring friendship network structure by using mobile phone data.
29. High-resolution measurements of face-to-face contact patterns in a primary school.
30. Automated and partly automated contact tracing: a systematic review to inform the control of COVID-19.
31. Digital Contact Tracing Tools. CDC.
32. Vaccine confidence in the time of COVID-19.
33. See How Vaccinations Are Going in Your County and State. The New York Times.
34. COVID-19 Vaccines vs Variants - Determining How Much Immunity Is Enough.
35. HABIT: Hardware-Assisted Bluetooth-based Infection Tracking.
36. The mHealth App Usability Questionnaire (MAUQ): Development and Validation Study.
37. Rapid and Rigorous Qualitative Data Analysis.
38. Designing and Conducting Mixed Methods Research.
39. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays.
40. Contact Tracing Assessment Team, et al. COVID-19 Contact Tracing in Two Counties - North Carolina.
41. Outcomes of Contact Tracing in San Francisco, California - Test and Trace During Shelter-in-Place.
42. COVID-19 Contact Tracing Assessment Team. COVID-19 Case Investigation and Contact Tracing in the US.
43. Why many countries failed at COVID contact-tracing - but some got it right.
44. The Use of Digital Tools to Mitigate the COVID-19 Pandemic: Comparative Retrospective Study of Six Countries. JMIR Public Health Surveill.
45. Performance of Digital Contact Tracing Tools for COVID-19 Response in Singapore: Cross-Sectional Study.
46. Use of a Real-Time Locating System for Contact Tracing of Health Care Workers During the COVID-19 Pandemic at an Infectious Disease Center in Singapore: Validation Study.
47. A guideline to limit indoor airborne transmission of COVID-19.
48. Case Investigation and Contact Tracing in Non-healthcare Workplaces: Information for Employers.
49. Considerations for Case Investigation and Contact Tracing in K-12 Schools and Institutions of Higher Education (IHEs). CDC.
50. SHEA Board of Trustees. Local, state and federal face mask mandates during the COVID-19 pandemic.
51. 100 U.S. colleges will require vaccinations to attend in-person classes in the fall. The New York Times.
52. Privacy concerns can explain unwillingness to download and use contact tracing apps when COVID-19 concerns are high.
53. Blockchain-Based Digital Contact Tracing Apps for COVID-19 Pandemic Management: Issues, Challenges, Solutions, and Future Directions.
54. Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough? Qual Health Res.
55. The Dutch COVID-19 Contact Tracing App (the CoronaMelder): Usability Study. JMIR Form Res.

Acknowledgments

We would like to acknowledge the university administration for facilitating these pilots, as well as the graduate laboratory supervisors, staff, and students for their participation. This study was funded by Yale University.

Authors' Contributions

TS, OK, DL, RM, and LN contributed to the study design. OK, RS, DL, and RM contributed to the development of study devices. TS, TC, OK, RS, DL, and RM contributed to data collection. TS, TC, RM, LG, and LN contributed to study analyses. TS and LN contributed to initial manuscript drafting. All other authors contributed to manuscript editing and revision. LN provided study oversight.

Conflicts of Interest

The authors disclose that Yale University, University of California, Los Angeles, and Carnegie Mellon University have a patent pending for the Bluetooth tag device, and the author RM has a personal financial interest through the standard patent policy at Yale University.
The authors also disclose that LN is a member of the Scientific Advisory Board for Moderna, and TS is part of a COVID-19 support contract between the State of Connecticut Department of Public Health and Yale School of Public Health.

Screenshot of the app pilot mobile app.