key: cord-0797290-35c3ie0g
authors: Nordhoff, Sina; Stapel, Jork; He, Xiaolin; Gentner, Alexandre; Happee, Riender
title: Perceived safety and trust in SAE Level 2 partially automated cars: Results from an online questionnaire
date: 2021-12-21
journal: PLoS One
DOI: 10.1371/journal.pone.0260953
sha: 1fdc23948805495fa9ba30e5f67d76ce9d2eab79
doc_id: 797290
cord_uid: 35c3ie0g

The present online study surveyed drivers of SAE Level 2 partially automated cars on automation use and attitudes towards automation. Respondents reported high levels of trust in their partially automated cars to maintain speed and distance to the car ahead (M = 4.41) and reported feeling safe most of the time (M = 4.22), on a scale from 1 to 5. Respondents indicated that they always know when the car is in partially automated driving mode (M = 4.42), and that they monitor the performance of their car most of the time (M = 4.34). A low rating was obtained for engaging in other activities while driving the partially automated car (M = 2.27). Partial automation did, however, increase reported engagement in secondary tasks that are already performed during manual driving (i.e., the proportion of respondents reporting to observe the landscape, use the phone for texting, navigation, music selection and calls, and eat during partially automated driving was higher in comparison to manual driving). Unsafe behaviour was rare, with 1% of respondents indicating that they rarely monitor the road, and another 1% that they sleep during partially automated driving. Structural equation modeling revealed a strong, positive relationship between perceived safety and trust (β = 0.69, p = 0.001). Performance expectancy had the strongest effect on automation use, followed by driver engagement, trust, and non-driving related task engagement. Perceived safety affected automation use indirectly, through trust. We recommend future research to evaluate the development of perceived safety and trust over time, and to revisit the influence of driver engagement and non-driving related task engagement, which emerged as new constructs related to trust in partial automation.

Since the early 20th century, we trust cars driving at high speeds in complex traffic. However, vehicle automation technology has not yet reached a level of maturity comparable to the steering systems, brakes and powertrains underlying our trust in manually driven cars. A range of today's passenger cars provides SAE Level 2 automation through the combination of Adaptive Cruise Control (ACC) and lane keeping assistance. To support appropriate reliance, it has been recommended that such systems provide information to drivers about the status of the system (i.e., clearly indicating whether the system is engaged or disengaged) using visual, audible, or haptic signals or a combination of these; monitor the driver to determine the level of engagement; and collaborate with drivers by giving them the option of full control and working with and not against their intention (i.e., the system should stay engaged but should always be overridable) [36].

Perceived safety is related to trust and has been identified as a basic human need and key predictor of automated vehicle acceptance [37, 38]. Similar to trust, perceived safety has received numerous definitions. Perceived safety was contrasted with actual safety: It is the "feeling of security because people will leave their welfare directly to a technical machine that is not working transparent" [39, p. 20]. In other studies, perceived safety was defined as "the degree to which an individual believes that using automated vehicles will affect his or her wellbeing" [40, p. 55],
55], as "a climate in which drivers and passengers can feel relaxed, safe and comfortable, while driving" [38, p. 323], or as the condition of being secure from accidental harm, distinguishing it from intentional harm [41] . The perception of safety varies strongly between individuals and depends on context and past experience [23, 42, 43] . Perceived safety is closely related to objective safety [44, 45] , but also depends on motives [46] , and is thus inherently subjective. It can be influenced by the vehicle's "personality" [47] , aesthetics of an interface [48] , information [21] , and sound related to perceived safety in public spaces (i.e., sound affected perceived safety in public places) [49] . Several studies have modelled the relationship between trust and perceived safety in automated driving. Trust was modelled as predictor of perceived safety, while in other studies perceived safety was modelled as predictor of trust. In the study of [50] , trust was modelled as a function of perceived safety and privacy risk, perceived ease of use and usefulness. Furthermore, trust influenced the attitude towards using automated cars. In the model of [38] , trust served as predictor of perceived usefulness, ease of use, perceived safety, behavioral intention, and willingness to re-ride automated cars. Perceived usefulness, trust, and perceived safety predicted the intention to use automated cars. The perceived benefits of automated cars (commonly measured by the technology acceptance constructs perceived usefulness / performance expectancy) influenced acceptance [38, 50, 51] . Perceived safety was modelled as direct predictor of trust, while trust served as direct predictor of acceptance [52] . We model trust as a function of perceived safety. As the literature has revealed effects of both trust and perceived safety on acceptance, we hypothesize that perceived safety and trust are both positive predictors of acceptance. Furthermore, we posit that driver engagement, non-driving related task engagement and the perceived benefits of partially automated driving influence acceptance. The present study addresses the following research questions, using a new online survey targeting drivers of partially automated cars. Research question 1: What are the activities that drivers of partially automated cars engage in during manual and partially automated driving? The recruitment targeted current users of partially automated cars. We distributed the survey at Tesla's supercharging stations near Utrecht, Dordrecht and Amsterdam in the Netherlands in the form of a QR code. The link was further distributed among members of Tesla Owners clubs [53] and Tesla Owners forums [54] . In order to target drivers of partially automated cars of other brands, the survey was distributed in car-and mobility-related forums and groups of Reddit and Facebook, respectively. The authors of the present study further shared the link to the questionnaire on LinkedIn. Further, an anonymous link to access the questionnaire was sent to employees of Toyota Motor Europe using internal communication mailing. https://www.tesla.com/de_DE/support/ownersclubhttps://www.tff-ev.de/ An online questionnaire was created on Qualtrics.com [55] . Instructions informed the respondents that it would take around 20 minutes to complete the questionnaire and that the study is organized by Delft University of Technology in the Netherlands. 
In order to safeguard data quality, Qualtrics applied a number of technologies that ensured that respondents did not take the survey more than once, that suspicious, non-human (i.e., bot) responses were detected, and that search engines were prevented from indexing the survey.

Prior to participation in the questionnaire, respondents received a description of the functionality of partially automated cars in order to ensure that respondents had a sufficient understanding of partially automated cars. The description read: "Have you heard of partly automated cars? With this questionnaire, we would like to get your opinion on partly automated cars which are already commercially available. Partly automated cars automate the acceleration, braking, and/or steering of the car. This implies that they control the speed and distance to the car in front and/or the steering, keeping the car in the lane. They have gas and brake pedals and a steering wheel. When the car is driving in partly automated mode, you as driver have to supervise the performance of the car in order to continue manual driving. Your hands have to remain on the steering wheel, or alternatively, you have to periodically touch the steering wheel. Your eyes remain on the road."

After the respondents received the instructions, they were asked to provide their written consent to participate in the study. They were asked to declare that they had been informed in a clear manner about the nature and method of the research as described in the instructions at the beginning of the questionnaire. They were further asked to agree, fully and voluntarily, to participate in this study. They were also informed that they retain the right to withdraw their consent and that they can stop participation in the study at any time. Finally, they were informed that their data will be treated anonymously in scientific publications, and will not be passed to third parties without their permission.

After providing their written consent to participate in the study, respondents were asked to provide information about their sociodemographics (i.e., age, gender, education), personality, driving behavior and frequency of use of their partially automated cars (e.g., access to a valid driver license, age, brand, and model of car, effect of COVID-19 on mileage, accident involvement). Respondents were asked to indicate their access to Lane Departure Warning (LDW), Lane Keeping Assist (LKA), and Adaptive Cruise Control (ACC) in their cars, and how often they activate those systems. Only respondents who indicated that they had access to all three systems (i.e., LDW, LKA, and ACC) or a combination of two of the three systems (i.e., LDW and LKA / ACC) were navigated to the questions that asked them to rate their attitudes towards and experiences with their partially automated cars. If they did not fulfil this condition, they were directed to the final questionnaire section on the evaluation of six Human Machine Interfaces adopted in today's commercially available passenger cars (Cadillac SuperCruise, Toyota Safety Sense 2.0, Tesla Autopilot).

Respondents were asked to indicate on a Likert scale from strongly disagree (1) to strongly agree (5) which types of behaviors they experienced with their partially automated cars.
They were further asked to indicate on a scale from strongly disagree (1) to strongly agree (5) to what extent they trust their cars to perform partially automated driving manoeuvres (e.g., keeping the car centered in the lane, maintaining speed and distance to the car ahead). Further questions pertained to the behaviour of drivers in partially automated cars, such as whether respondents felt hesitant to activate the partially automated driving mode from time to time, and whether they engaged in secondary activities. Respondents were also asked to indicate to what extent their partially automated car keeps them engaged in the driving task, and the frequency with which they engage in certain types of activities during manual and partially automated driving. Respondents were further asked for their motives to use their partially automated car, and the reasons for deactivating the system. Furthermore, respondents had to indicate to what extent they feel safe as a driver in their partially automated cars. The order of these attitudinal questions was randomized in order to rule out order effects.

First, descriptive statistics (i.e., means, standard deviations, and frequencies) were calculated for the questionnaire items. Mean ratings were compared in order to identify the highest, moderate, and lowest mean ratings. Second, a confirmatory factor analysis was performed to confirm the latent structure in the dataset. The output of the confirmatory factor analysis is the measurement model, which assesses the measurement relationships between the latent variables (i.e., unobserved / hypothetical components or factors) and the observed variables (i.e., questionnaire items). The psychometric properties of the measurement model were assessed by its indicator reliability, internal consistency reliability, convergent validity and discriminant validity. Convergent validity was assessed by four criteria: 1) all scale items should be significant and have loadings exceeding 0.60 on their respective scales, 2) the average variance extracted (AVE) should exceed 0.50, and 3) construct reliability (CR) and 4) Cronbach's alpha values should exceed 0.70 [56, 57]. Discriminant validity of our data was examined with the test of squared correlations by [56], which implies that the correlation coefficient between two latent variables should be smaller than the square root of the average variance extracted (AVE) of each latent variable. The third step of the analysis involved testing the structural model. Maximum-likelihood (ML) estimation, which has proven robust to violations of the normality assumption [58], was used to estimate the measurement and structural models. The confirmatory factor analysis and structural equation modeling were performed with the R package lavaan [59].

In total, 1,557 questionnaires were completed. The data were collected between November 24, 2020 and January 30, 2021. On average, respondents needed 78.78 minutes to complete the survey (note that a response was recorded one week after the respondent's last activity). In order to enhance data quality, we applied a strict data screening: Respondents were excluded if they were identified as bots (n = 46), if they did not agree to participate in the study (n = 10), if they took an unreasonable amount of time to complete the survey (i.e., less than 2 or more than 9,551 minutes) (n = 311), or if they did not report having access to a valid driver license (n = 14). "I prefer not to respond" and "Not applicable to me" responses were defined as missing values and excluded from the analysis. After screening, 1,137 responses remained for the analysis.
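As a concrete illustration of this screening step, the R sketch below applies the exclusion rules described above; the data frame responses and its column names (bot_flag, consent, duration_min, valid_license) are hypothetical placeholders rather than the actual fields of the Qualtrics export.

library(dplyr)

# Minimal sketch of the data screening described above (assumed column names).
screened <- responses %>%
  filter(bot_flag == FALSE) %>%                        # drop suspected bot responses
  filter(consent == "yes") %>%                         # keep only respondents who agreed to participate
  filter(duration_min >= 2, duration_min <= 9551) %>%  # drop implausibly short or long completion times
  filter(valid_license == "yes")                       # keep only respondents with a valid driver license

# Treat "I prefer not to respond" and "Not applicable to me" as missing values.
screened <- screened %>%
  mutate(across(where(is.character),
                ~ na_if(na_if(.x, "I prefer not to respond"), "Not applicable to me")))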
"I prefer not to respond" and "Not applicable to me" responses were defined as missing values and excluded from the analysis. 1,137 responses remained for the analysis. An overview of respondents' profile is provided in Table 1 . Means, standard deviations, and relative frequencies are shown in Table 2 , ordered from highest to lowest mean score. As shown by Table 2 , the highest mean rating was obtained for using the partially automated car with speed and steering assist (M = 4.70, SD = 1.44, on a scale from 1 (never) to 6 (at least five times a week)). The second-highest mean rating was obtained for always knowing when the car is in partially automated driving mode (M = 4.42, SD = 0.87), and the third-highest was obtained for trusting the partially automated car to maintain the speed and distance to the car ahead (M = 4.41, SD = 0.73). The fourth highest rating was obtained for monitoring the performance of the partially automated car most of the time (M = 4.34, SD = 0.87). More than 50% of respondents agreed with these questionnaire items. The lowest mean rating was obtained for feeling anxious most of the time (M = 1.94, SD = 0.93). The second-lowest and third-lowest ratings were obtained for using the partially automated car for other activities unrelated to driving (M = 2.17, SD = 1.15), and for feeling bored most of the time (M = 2.26, SD = 0.93). The fourth-lowest mean rating was obtained for engaging in other activities while driving the partially automated car (M = 2.27, SD = 1.21). As shown by The proportion of respondents indicating to always monitor the road during manual driving was 79% compared to 62% of respondents reporting to monitor the road during partially automated driving. The results also indicated a modest shift from monitoring the road towards non-driving related activities. Surprisingly, only 1% of respondents (n = 8) reported to never monitor the road ahead in PAD. An additional 1% of respondents reported to always (n = 3), frequently (n = 1), or occasionally (n = 3) engage in sleeping during PAD. The results of the confirmatory factor analysis are shown in Table 3 and Fig 2. Several items measuring perceived safety (PS4-PS6), trust (TRU3-TRU8), driver engagement (DE4-DE5), performance expectancy (PE3-PE4) were omitted from the analysis as their loading was below the recommended threshold of 0.60. The questionnaire item TRU7 ("I engage in other activities while driving my partially automated car") that did not load strongly enough on trust and the Perceived safety and trust in SAE Level 2 partly automated cars: Results from an online questionnaire item PE3 ("I use my partly automated car because it helps me to use my time for other activities unrelated to driving") that did not load strongly enough on performance expectancy were merged into the new construct 'non-driving related task engagement (NDRTE)' due to their semantic similarity and interpretability. The questionnaire item TRU8 ("I monitor the performance of my partly automated car most of the time") was merged with questions on the construct 'driver engagement' due to the interpretability of this item. The fit parameters of the measurement model were acceptable (Confirmatory Fit Index (CFI) = 0.93, Root Mean Square Error Approximation (RMSEA) = 0.07, and Standardized Root Mean Square Residual (SRMR) = 0.05). The chi-square statistic (χ 2 / df) (= 3.31) exceeded the recommended threshold of 3. Composite reliability and Cronbach's alpha both Table 2 . 
Composite reliability and Cronbach's alpha both exceeded the recommended threshold of 0.70 for trust, perceived safety, driver engagement, non-driving related task engagement, and performance expectancy, confirming internal consistency reliability for these constructs. The average variance extracted (AVE) values exceeded the recommended minimum threshold of 0.50 for all constructs except for driver engagement (AVE = 0.44) and trust (AVE = 0.45). As shown by Table 4, which reports the Pearson inter-construct correlations, discriminant validity was acceptable for all latent variables.

We analyzed two structural models capturing the relationships between our study constructs. In the first model, perceived safety and trust were included as predictors of automation use. As shown by Fig 3a, the relationship between trust and automation use was significant (β = 0.69, p = 0.001). Trust explained 41.3% of the variance in behavioral intention. Perceived safety did not influence automation use directly (β = -0.08, p = 0.29). As the direct effect of perceived safety on automation use was negative and not significant, we tested whether trust mediated the relationship between perceived safety and automation use in a second structural model (Fig 3b). In addition, we added the predictors driver engagement, non-driving related task engagement, and performance expectancy to the model. The analysis revealed that performance expectancy had the strongest effect on automation use (β = 0.31, p = 0.001), followed by driver engagement (β = 0.30, p = 0.001) and non-driving related task engagement (β = 0.14, p = 0.01). Trust mediated the relationship between perceived safety and automation use. The path from trust to automation use was positive and significant (β = 0.21, p = 0.02). Perceived safety had a significant positive effect on trust (β = 0.69, p = 0.001). As in the first model, perceived safety did not predict automation use directly (β = -0.07, p = 0.46). The variance explained in automation use was still 41.3%, meaning that the addition of the other predictor variables did not increase the explanatory power of the model.

Notes to Table 3 (https://doi.org/10.1371/journal.pone.0260953.t003): λ are the factor loadings, which are interpreted as correlation coefficients for the relationship between the questionnaire items and their underlying constructs. α is the Cronbach's alpha reliability coefficient, which is a measure of the internal consistency of a latent construct assuming that the correlations between the questionnaire items underlying a latent construct are equal. Composite reliability is also a measure of the internal consistency of a latent construct, using the varying factor loadings of the questionnaire items on their underlying constructs and their error variances as input for the calculation. AVE is the average variance extracted, i.e., the share of variance in the questionnaire items that is accounted for by their underlying latent construct.
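The second structural model, including the mediation of perceived safety through trust, can be specified in lavaan along the following lines. This extends the measurement-model sketch above under the same assumptions (hypothetical data frame and item assignments, with USE1 and USE2 as assumed indicators of automation use), so it is an illustrative sketch rather than the authors' exact model syntax.

# Structural model sketch: perceived safety -> trust -> automation use, with
# performance expectancy, driver engagement, and NDRT engagement as further predictors.
structural_model <- paste(measurement_model, '
  automation_use =~ USE1 + USE2            # assumed indicators of automation use

  trust          ~ a * perceived_safety
  automation_use ~ b * trust + c * perceived_safety +
                   performance_expectancy + driver_engagement + ndrt_engagement

  indirect := a * b                        # effect of perceived safety mediated via trust
  total    := c + a * b
')

fit_sem <- sem(structural_model, data = survey, estimator = "ML")
summary(fit_sem, standardized = TRUE, fit.measures = TRUE, rsquare = TRUE)

The standardized path coefficients and the defined indirect parameter correspond conceptually to the β values and the mediation effect discussed above.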
The present study surveyed drivers of partially automated cars to address the following research questions.

Research question 1: What are the activities that drivers of partially automated cars engage in during manual and partially automated driving?

One of the lowest mean ratings was obtained for engaging in other activities while driving the partially automated car (M = 2.27). Respondents indicated that they most frequently monitored the road ahead, talked to fellow travellers, and observed the landscape, while they least frequently watched videos or TV shows, slept, or used the phone for texting during partially automated driving. Our respondents seemed to take their monitoring obligations during partially automated driving seriously, reporting to monitor the performance of their car most of the time (M = 4.34). These findings stand in contrast to the studies and videos showing inappropriate use of Tesla's Autopilot system (e.g., prolonged hands-free driving, ignoring warnings to place hands back on the steering wheel, testing the limits of the operational design domain, mode confusion, engagement in secondary activities, using the system in bad weather conditions, using the system not on highways, misleading the hand detection by attaching objects to the steering wheel, leaving the driver seat, falling asleep) [4-8, 68-73]. Partial automation did, however, increase reported engagement in secondary tasks that are already performed during manual driving (i.e., the proportion of respondents reporting to observe the landscape, use the phone for texting, navigation, music selection and calls, and eat during partially automated driving was higher in comparison to manual driving). Unsafe behaviour (automation misuse) was hardly reported, as only 1% of respondents indicated that they rarely monitor the road while using partially automated driving, and another 1% of respondents reported to always, frequently, or occasionally sleep during partially automated driving. However, such rare behaviours can still lead to a relevant number of accidents. Note that there is a paucity of scientific studies with real-world SAE Level 2 passenger cars [70, 74-76]. Therefore, it is not clear to what extent these unsafe behaviors of drivers of partially automated cars represent long- or short-term effects of automation, and why these behaviors actually occur. It is also plausible that some drivers believed that taking their hands off the wheel and watching a video was safe [77], or that some staged falling asleep in order to contribute to the hype around Tesla's Autopilot system.

Previous studies on road vehicle automation have operationalized perceived safety and trust by generic items, such as: "Overall, AVs would help make my journeys safer than they are when I use conventional vehicles" [78, p. 3; 79, p. 874], "I am worried that the general safety of using an AV is worse than that of driving a common vehicle" [80, p. 109848], and "Overall, I can trust autonomous vehicles" [29, p. 697]. Other studies asked respondents to rate their level of trust and safety or changes in these using items such as: "To what extent do you trust the driving automation according to the previous performance of the system?" [81], "Ranked the buttons on safety perception scale" [82, p. 351], and "Please indicate the degree that your trust has changed after this encounter" [83].
These items were not tailored to the specific nature of partial automation, which requires permanent supervision by human drivers. Trust in a partially automated driving system was tested for parking ("To what extent do you trust the Tesla's ability to park itself?") [84, p. 197]. The present study contributed to the development of scales to measure trust and perceived safety in partially automated cars.

The confirmatory factor analysis revealed that trust mainly depended on longitudinal automation performance ("TRU1: I trust my partly automated car to maintain speed and distance to the car ahead"), lateral performance ("TRU2: I trust my partly automated car to keep the car centered in the lane"), and overall trust ("TRU3: I can trust my partly automated car"). The self-developed items "TRU4: I feel hesitant about activating the partially automated car mode from time to time" (reverse-coded), "TRU5: I am unwilling to hand over control to my partially automated car from time to time" (reverse-coded), and "TRU8: I monitor the performance of my partially automated car most of the time" were dropped as their loadings on trust were insufficient. These items imply an evaluation of the frequency (i.e., from time to time, most of the time) of situational usage behaviour (i.e., activation of partially automated driving, handing over control to the car, and driver monitoring). Our results do not support the usage of such items to assess trust in partial automation. This finding also points to a distinction between general trust ("TRU3: I can trust my partly automated car") and behavioural trust in partially automated driving ("TRU4: I feel hesitant about activating the partially automated car mode from time to time", "TRU5: I am unwilling to hand over control to my partially automated car from time to time", and "TRU8: I monitor the performance of my partially automated car most of the time"). Our findings indicate that respondents have an accurate understanding of where, when and how to use their partially automated cars in order to trust them. This supports the notion that trust in automation is inherently context-/situation-specific (e.g., "I trust the automation in this situation") [85, p. 41] and differs across driving scenarios [86]. The items "TRU6: I always know when my car is in partially automated driving mode" and "TRU7: I engage in other activities while driving my partially automated car", which were based on the literature, were excluded as indicators of trust. This is plausible as the formulation of these items is very specific and tailored to aspects that may be conceptually unrelated to trust, such as non-driving related task engagement and driver engagement. In partially automated cars, drivers are not allowed to engage in non-driving related activities. Furthermore, drivers need to know the mode the partially automated car is in regardless of trust, since the mode dictates the task distribution.

Perceived safety was measured by three items established from the literature. The item "PS2: I feel relaxed most of the time" had the strongest loading on perceived safety, indicating that feelings of relaxation may be most decisive for feelings of perceived safety in partially automated cars. While the item "PS1: I feel safe most of the time" had the second-strongest loading on perceived safety, the reverse-coded item "PS5: I am concerned about my general safety most of the time" did not load sufficiently.
The item "PS3: I feel anxious most of the time" (reverse-coded) had the third-strongest loading on perceived safety. This indicates that perceptions of safety in partially automated cars are strongly associated with emotional and affective dimensions. Others [87, p. 4 ] validated a scale for perceived safety for intelligent connected vehicles (ICVs), measuring cognitive components (e.g., "I think the potential danger of an ICV is acceptable") and emotional components of safety (e.g., "I think it's relaxing to operate an ICV"). The item "PS4: I feel bored most of the time" was also omitted from the analysis, meaning that feeling bored was not associated with perceptions of safety in the context of partial automation. The evaluation of the acceptance of SAE Level 4 driverless shuttles using data from respondents who physically experienced automated shuttles resulted in the distinction between the theoretical constructs perceived safety and boredom [52] . The questionnaire item "PS6: I entrust the safety of a close relative to my partially automated car" was also omitted from the scale perceived safety. This suggests that feelings of perceived safety are more oriented towards the individual rather than close relatives. Future studies should test whether the questions that were not included as valid and reliable indictors of trust and perceived safety in the present study, respectively, can become so in SAE Level 2+ vehicles, taking into account the corresponding role of human drivers. Respondents rated the perceived safety and trust while using their partially automated cars as very high. Over 80% of respondents indicated to feel safe and relaxed most of the time, while only 8% reported to have feelings of anxiety during partially automated driving. This is in line with reports from manufacturers claiming that their partially automated cars are indeed safer (than manual driving) [88] . Over 80% of respondents testing different L3 conditionally automated driving functions in the context of the L3Pilot project indicated to feel safe when driving with the system active, and more than 60% indicated to feel safe in take-over situations [89] . The assumption that partially automated driving is safer than manual driving may hold if the car is used appropriately. Note, however, that a formal assessment of automation effects on safety would require substantially more data than is currently available [90] . Regarding their ratings of trust, 89% of respondents reported to trust their partially automated car maintaining the speed and distance to the car ahead. Over 70% agreed with the statements to trust their partially automated car, and to trust their partially automated car keeping the car centered in the lane. This matches a survey [70] where 90% of respondents considered the partially automated driving system dependable and 78% of respondents reported to trust it. Our respondents seemed to have a solid understanding of the car's capabilities and limitations: A high mean rating (M = 4.02) was obtained for the partially automated car helping drivers to use it as advised by the manual. This suggests that respondents were aware of the car's capabilities and limitations as well as of their role as driver. Other studies have shown inaccurate expectations of the capabilities of partially automated cars [86] . In [91] , 57% of respondents reported to know "very little" and 23% of respondents "a moderate amount" of autonomous vehicles. 
In our recent study [61] with 18,631 respondents from 17 countries, respondents held inaccurate beliefs about the operation of conditionally automated cars (SAE Level 3) being limited to operational design domains. Furthermore, only 5% and 8% of respondents from the Dominican Republic knew of Intelligent Transportation Systems (ITS) in 2018 and 2019, respectively [92]. One plausible explanation for our positive finding regarding the understanding of automation is that our respondents were experienced drivers of partially automated cars.

Structural equation modeling revealed a positive relationship between perceived safety and trust (β = 0.69, p = 0.001), which corresponds with other studies [38, 52, 81]. Our finding suggests that individuals who provided higher ratings of the safety of their partially automated cars were more likely to consider partially automated cars as trustworthy than individuals who provided lower ratings of perceived safety. This matches various studies showing positive effects of perceived safety on trust [50, 52, 93]. The finding implies that increasing the perceived safety of partially automated cars is a useful avenue to promote trust in partially automated cars.

How do performance expectancy, perceived safety, trust, driver engagement, and non-driving related task engagement relate to the acceptance of partially automated cars? In the second structural model, performance expectancy had the strongest effect on automation use (β = 0.31, p = 0.001), followed by driver engagement (β = 0.30, p = 0.001), trust (β = 0.21, p = 0.02), and non-driving related task engagement (β = 0.14, p = 0.01). In previous studies, the intention to use automated vehicles was strongly related to performance expectancy and the perceived benefits of automated cars [38, 50, 51, 79, 80, 93, 94]. This suggests that individuals who appreciate the benefits of (partially) automated cars are more likely to form positive intentions to use these cars. Driver engagement, trust in partially automated cars, and non-driving related task engagement were the second, third, and fourth strongest predictors of automation use, respectively. This suggests that keeping the driver engaged in the driving task, promoting trust in partially automated cars, and encouraging engagement in non-driving related activities can be useful ways to promote the use of partially automated cars. We recommend future research to revisit our scale measuring driver engagement, as driver engagement is pivotal in SAE Level 2-4 cars. The relationship between perceived safety and automation use was not significant in either structural model. This is in contrast to research studies showing positive effects of perceived safety on the intention to use automated vehicles [38, 79]. However, positive effects of perceived safety on trust were found, indicating that the effect of perceived safety on automation use was mediated by trust, which is in line with other studies [93].

The present study has several limitations. First, respondents may not necessarily be representative of the general population of drivers of partially automated cars. The interest in, knowledge about, and enthusiasm for this technology may be higher among our respondents compared to the general population, possibly because the majority of respondents were recruited from platforms attracting people with a high interest in automated vehicles. On the other hand, it could be argued that most previous studies with partially automated cars addressed expected behaviour in future partially automated cars not yet experienced by respondents.
The present study evaluated actual, experienced drivers of partial automation, documenting adequate understanding and behaviour. Second, respondents reporting safe behaviour may not always use automation safely. They may have answered questions in a socially desirable way given their awareness of the misuse of Tesla's Autopilot system. Future work should investigate to what extent partially automated driving encourages risky driving in comparison to manual driving through observations of actual behaviour in naturalistic driving settings, and analysis of real-world accident statistics. Third, the causality of the relationship between perceived safety and trust cannot be established due to the cross-sectional nature of the survey data. We recommend future research to examine the nature of the relationship between perceived safety and trust. That is, do drivers feel safe because they trust partially automated cars, or do they trust partially automated cars because they feel safe? Is this relationship of a correlational rather than causal nature? This can be pursued by studying the development of perceived safety and trust over time and across conditions of varying automation performance and criticality of driving conditions. Fourth, it should be noted that respondents may find it difficult to clearly discriminate between the constructs of perceived safety and trust in survey research, as it is likely that respondents attach a similar meaning to these constructs. We recommend future research to use neuro-physiological, objective data (e.g., number of manual interventions, eye glance behaviour, heart rate frequency) [74, 84, 95-97], and to link these with the subjective self-reported measures of perceived safety and trust.

Respondents reported high levels of perceived safety and trust in their partially automated cars. We also found high mean ratings for always knowing when the car is in partially automated mode, and for monitoring the performance of the partially automated car most of the time. One of the lowest mean ratings was obtained for engaging in secondary activities while driving the partially automated car. Unsafe behaviour was rare, with 1% reporting rarely monitoring the road and 1% reporting sleeping in their partially automated cars. Structural equation modeling analysis revealed positive effects of perceived safety on trust. Perceived safety did not directly influence automation use but affected it indirectly through trust. Trust significantly affected automation use, in addition to performance expectancy, driver engagement and non-driving related task engagement. The present study contributed to the development of scales for trust, perceived safety, driver engagement, and non-driving related task engagement.

References

Comparing the relative strengths of EEG and low-cost physiological devices in modeling attention allocation in semiautonomous vehicles
New GM technology lets cars go an eye for an eye
Conceptual design of the elderly healthcare services in-vehicle using IOT
Is partially automated driving a bad idea? Observations from an on-road study
Autonomous Driving Systems: A preliminary naturalistic study of the Tesla model S. J Cogn Eng Decis Mak
Trust and risky technologies: Aligning and coping with Tesla Autopilot
An interview study exploring Tesla drivers' behavioural adaptation
Driver trust & mode confusion in an on-road study of level-2 automated vehicle technology
When will most cars be able to drive fully automatically? Projections of 18,970 survey respondents
How can humans understand their automated cars? HMI principles, problems and solutions
Advancements, prospects, and impacts of automated driving systems
Autonowashing: The Greenwashing of Vehicle Automation
Tesla full self-driving is going to have 'quantum leap' w/new rewrite
A Paradigm shift in autonomous cars (and More) at vehicle displays. Information Display
SAE (Society of Automotive Engineers) International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles
Misuse of automated decision aids: Complacency, automation bias and the impact of training experience
Behavioural impacts of Advanced Driver Assistance Systems - an overview
Humans and automation: Use, misuse, disuse, abuse
Supporting drivers of partially automated cars through an adaptive digital in-car tutor
Extending the Technology Acceptance Model to assess automation
Calibrating trust through knowledge: Introducing the concept of informed safety for automation in vehicles
Trust in automation: Designing for appropriate reliance
Not all trust is created equal: Dispositional and history-based trust in human-automation interactions
A Framework for Analyzing and Calibrating Trust in Automated Vehicles
Trust in market relationships
An integrative model of organizational trust
Not so different after all: A cross-discipline view of trust
Interpersonal trust, trustworthiness, and gullibility
Investigating the importance of trust on adopting an autonomous vehicle
Does initial experience affect consumers' intention to use autonomous vehicles? Evidence from a field experiment in Beijing
Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust
Public perceptions of autonomous vehicle safety: An international comparison
Situational awareness, drivers trust in automated driving systems and secondary task performance
Control task substitution in semiautomated driving: does it matter what aspects are automated? Hum Factors
Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving
What's new for 2020?
A Wizard of Oz Field Study to Understand Non-Driving-Related Activities, Trust, and Acceptance of Automated Vehicles. 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications; Virtual Event, DC, USA: Association for Computing Machinery
Structural equation modeling in practice: A review and recommended two-step approach
Evaluating structural equation models with unobservable variables and measurement error
Multivariate data analysis. Pearson New International Edition
lavaan: An R Package for Structural Equation Modeling
Likert scale examples for surveys
Profiling the enthusiastic, neutral, and sceptical users of conditionally automated cars in 17 Countries: A questionnaire study
Theoretical considerations and development of a questionnaire to measure trust in automation
Driver acceptance of partial automation after a brief exposure
Trust in automation - before and after the experience of take-over scenarios in a highly automated vehicle
Establishing face and content validity of a survey to assess users' perceptions of automated vehicles
An assessment of the willingness to choose a self-driving bus for an urban trip: A public transport user's perspective
Technology acceptance modeling based on user experience for autonomous vehicles
Watch these unsettling videos of all the times Tesla autopilot drivers were caught asleep at the wheel in
Is Driving Automation Used as Intended? Real-World Use of Partially Automated Driving Systems and their Safety Consequences
Real-world use of partially automated driving systems and driver impressions
Tesla driver repeatedly spotted in backseat on Autopilot is begging to be arrested 2021
Tesla Driver Caught On Camera Apparently Asleep At The Wheel 2019
Acclimatizing to automation: Driver workload and stress during partially automated car following in real traffic
Driver behavior and the use of automation in real-world driving
Driver-initiated Tesla Autopilot disengagements in naturalistic driving
Knowledge gap: New studies highlight driver confusion about automated systems
Perceived benefits and constraints in vehicle automation: Data to assess the relationship between driver's features and their attitudes towards autonomous vehicles
Perceived safety and attributed value as predictors of the intention to use autonomous vehicles: A national study with Spanish drivers
Critical factors influencing acceptance of automated vehicles by Hong Kong drivers
Investigating perceived safety and trust in driving automation with a simulator experiment
How people perceive and expect safety in autonomous vehicles: An empirical study for risk sensitivity and risk-related feelings
Real-time estimation of drivers' trust in automated driving systems
Trust and distrust of automated parking in a
Situational trust scale for automated driving (STS-AD): Development and initial validation
Changes in trust after driving Level 2 automated cars
The development and validation of the perceived safety of intelligent connected vehicles scale
Tesla updates Autopilot safety numbers; almost 9x safer than average driving
Driving to Safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? RAND Corporation
Dimensions of attitudes to autonomous vehicles. Urban, Planning and Transport Research
Is there a predisposition towards the use of new technologies within the traffic field of emerging countries? The case of the Dominican Republic
"Baby, you can drive my car": Psychological antecedents that drive consumers' adoption of AI-powered autonomous vehicles
User acceptance of automated public transport: Valence of an autonomous minibus experience
This is your brain on Autopilot: Neural indices of driver workload and engagement during partial vehicle automation
Automated driving reduces perceived workload, but monitoring causes higher cognitive load than manual driving
The effect of partial automation on driver attention: A naturalistic driving study