key: cord-0314034-ipb5cna8 authors: Cools, Robbe; Han, Jihae; Simeone, Adalberto L. title: SelectVisAR: Selective Visualisation of Virtual Environments in Augmented Reality date: 2021-04-17 journal: nan DOI: 10.1145/3461778.3462096 sha: c98590ff9b7caa5052e58313030f728ad5382dd6 doc_id: 314034 cord_uid: ipb5cna8

* Both authors contributed equally to this research.

When establishing a visual connection between a virtual reality user and an augmented reality user, it is important to consider whether the augmented reality user faces a surplus of information. Augmented reality, compared to virtual reality, involves two, not one, planes of information: the physical and the virtual. We propose SelectVisAR, a selective visualisation system of virtual environments in augmented reality. Our system enables an augmented reality spectator to perceive a co-located virtual reality user in the context of four distinct visualisation conditions: Interactive, Proximity, Everything, and Dollhouse. We explore an additional two conditions, Context and Spotlight, in a follow-up study. Our design uses a human-centric approach to information filtering, selectively visualising only parts of the virtual environment related to the interactive possibilities of a virtual reality user. The research investigates how selective visualisations can be helpful or trivial for the augmented reality user when observing a virtual reality user.

Virtual Reality (VR) and see-through Augmented Reality (AR) devices are becoming increasingly affordable. VR enables users to immerse themselves in a Virtual Environment (VE). A see-through AR device can overlay virtual content on top of the physical environment. VR and AR users can interact with each other through Collaborative VEs [23], and the interaction between a VR and AR user is considered a type of "Cross-Reality Interaction". Cross-Reality (CR) is a field of research that looks at how users of different realities can interact with each other. These realities can be described through Milgram's Reality-Virtuality continuum [13, 16], which ranges from the real world to the virtual world and the spectrum of hybrid realities in between. This paper will focus on scenarios that involve interactions between two different points in this continuum: AR and VR. This type of scenario can be beneficial when the roles of the AR and VR user in the collaboration are asymmetrical. It is important to note the differences in how users perceive their VEs: VR users benefit from more immersion and AR users benefit from more nonverbal cues. In a CR context, nonverbal cues refer to the advantage AR users have over VR users when communicating with an external user; for instance, both VR and AR users can talk to an external user, but only AR users can see the physical body and gestures of an external user in real life. The VR user cannot see the external user, and at most can only perceive the external user's virtual avatar. As such, AR retains most of the nonverbal cues lost to VR users. In contrast, while VR users retain high immersion, AR users will be less immersed in the VE due to their view of the physical environment. For some users VR might be more desirable, such as in a training simulation where the user needs a sense of being present at the training location. Other users can benefit from AR, which enables them to see the nonverbal cues of other co-located users. In this study, we aim to develop a CR scenario that exploits both the immersive benefits of VR and the nonverbal communication features of AR.
We question how an AR user can spectate and interact with the VR user in their VE [6]. We propose a selective visualisation system that enables AR users to see only select virtual elements of the VE, whilst VR users see the entire VE to maintain their immersion. We investigated two design factors in terms of visualising a VE: level of information and scale. We designed a framework where we presented the VE to the AR user at a 1:5 dollhouse scale and at 1:1 room scale with three levels of information: no selection, a dynamic selection following the VR user, and a predetermined static selection. This study was then repeated with an improved dynamic and an improved static selection technique, implementing feedback from the main study. In our studies we found that participants felt they had a better overview of the VE at the small 1:5 dollhouse scale; however, this had the drawback that it was more difficult to see smaller movements of the VR user. We found that our dynamic selection methods were preferred by fewer participants than the static selections. No significant differences were found in participant competence in recognising events in the VE.

CR collaboration refers to users on different points of the Reality-Virtuality continuum [13, 16] working together. In order to support collaboration, users need to be aware of each other's "realities" and be able to interact with other users and their reality. Different technologies can be used to support CR visualisation and interaction. TransceiVR [12] enables a non-immersed tablet user to view the VE from the perspective of the immersed user by freezing the frame and making annotations that are communicated back into the VE. The real environment is disconnected from the VE, as the external user sees it from the perspective of the immersed user. FaceDisplay [5] mounted screens on the Head-Mounted Display (HMD), through which the external user can view the VE. This presents the VE from the perspective of the external user, but only when they are looking directly at the VR user's HMD. Silhouette Games [11] presents an approach with a screen behind a one-way mirror. The screen displays a simulated reflection of the VE calculated based on the position of the non-immersed user. The non-immersed user can then view the VE and the physical reflection of the VR user simultaneously. Seeing both the VR user and their reflection caused some confusion among participants. Our AR-based approach does not rely on a reflection of the VR user, but visualises the VE directly around them. This does, however, require the external user to wear an AR HMD, which is more invasive than the approach presented in Silhouette Games. Wang et al. [21] mounted a projector on the HMD, which allows visualisation of the VE on the floor around the immersed user. The VR user had control over the content that was shown in the projection. In this paper we investigate techniques that visualise the area of the VE around the VR user, changing the visualisation as the VR user moves. ShareVR [4] combined a static floor projection, covering the entire space available to the VR user, with a handheld screen to enable interaction between an immersed user and an external user. ReverseCAVE [8] also used a projection-based approach, where the VE was not projected on the floor but on four translucent screens around the VR user, which external users can then spectate from the outside.
In our work we also investigated static visualisations covering the entire available space; however, we used see-through AR instead of a projection. Pham et al. [15] investigated the effect of the scale of AR visualisation on gestures, examining models at 'in-air' scale, tabletop scale and room scale. They found that these different scales elicited different gestures from users. We will investigate the effect of the scale of the visualisation on an AR user spectating a VR user, inspired by Dollhouse VR [7] and World In Miniature [14]. We find further investigation on scale relevant as neither Dollhouse VR [7] nor its follow-up study [17] specifies or justifies the use of a specific scale when implementing the 'dollhouse', only detailing a relative size difference between visualisations. World In Miniature provides more, but still relatively abstract, detail regarding its implementation of scale, remarking that a World In Miniature may be 'hand-held' but not specifying a scalar value [14]. Grandi et al. [2] investigated collaboration between VR and tablet AR users. AR and VR users were co-located and collaborated on solving a docking task. The AR user saw the virtual objects with the same spatial orientation as the VR user. The shared virtual elements were limited to tabletop-size objects, whereas we investigated how to visualise the VR user in the context of their VE to the AR user at room scale. ObserVAR [18] explores the use of see-through AR for a teacher to visualise their students, who are immersed in a VE using 3-DOF VR devices. Three visualisations were tested: First Person View, World in Miniature and World Scale. Participants found World Scale easier to use than World in Miniature, though scale was not the only factor because World in Miniature showed a separate miniature per VR user. The ObserVAR user study had multiple remote VR users, while our study had a single co-located VR user and focused on visualising them in the context of the VE. Slice of Light [20] presents a method for a VR user to see and move between the VEs of other VR users. The other users' VEs are visualised as slices around the user, who can then enter a VE by stepping towards it. Presenting the VEs as slices allows multiple users and their VEs to be shown at once. We investigated whether filtering what is shown of the VE can improve AR users' understanding of the VR user's actions. To do this we tested both static and dynamic selections of virtual content. A summary positioning our work relative to the related work can be seen in Table 1.

The selection methods we designed aim to visually emphasise the actions of the VR user to an AR spectator by selectively filtering relevant parts of a VE; specifically, the visual artefacts the VR user is interested in or interacting with. This VE is asymmetrically filtered only for the AR user; the VR user would see a fully visualised VE to maintain user immersion. We hypothesise that it is possible to remove part of the VE without hurting an AR user's ability to perceive the actions of a VR user.

We conducted a pilot study with three HCI experts and a prototype 5 m × 5 m virtual room as the VE. The HCI experts had varying degrees of experience with CR technologies: one expert, one with previous experience, and one without any experience.
We conducted a Think-Aloud Protocol, with one researcher taking notes and the other assuming the role of the VR user, to prototype our framework and reduce the number of techniques being tested for the main study. We investigated six different visualisation techniques based around the interactive range and possibilities of the VR user, categorised as either static or dynamic visualisations.

Static visualisations select parts of the VE to visualise at all times during the simulation. (1) Everything visualises the entire VE to the AR user. This is the control condition in which AR users see the VE in the same way as the VR user. No changes are made to augment or filter information in the visualisation of the VE. (2) Interactive is a predetermined selection of interactive objects in the VE. This is inspired by literature that suggests only relevant information should be visualised to prevent overloading the user with irrelevant information [3, 9]. As 'relevant information' is an abstract term, we attempt to draw thresholds in information filtering using Interactive. Lastly, (3) Dollhouse visualises a smaller, scaled model of the VE. Directly based on Ibayashi et al.'s Dollhouse VR [7], this visualisation is grounded in previous literature [14, 15] arguing that scaled visualisations of VEs enable more efficient navigation of a VE. However, instead of a 2D top-down view of the VE as investigated in Dollhouse VR [7], we investigate a 3D scaled model of a VE in AR.

Dynamic visualisations select different parts of the VE to visualise, depending on where the VR user is located or what the VR user is doing. (4) Head-Direction only visualises the part of the VE that the VR user is facing. (5) Proximity visualises a radial area of the VE nearby the VR user. The conditions Head-Direction and Proximity are inspired by Slice of Light [20], a visualisation which shows only part of the VE to the guest VR user and dynamically changes depending on the user's location or actions. We based these two conditions on common tracking methods for VR: head-direction tracking for Head-Direction and position tracking for Proximity. Lastly, (6) Dynamic-Interaction visualises the virtual objects that the VR user is currently interacting with using the controllers. Dynamic-Interaction is a responsive implementation of the static condition, Interactive, which filters information depending on the VR user's hand motions.

Based on a preference ranking and informal interviews, we decided to remove from the main study the two visualisation techniques that participants liked the least: Head-Direction and Dynamic-Interaction. Regarding Head-Direction, participants found that frequent changes to the VR user's line of sight, and thus to the visualisation, made the technique confusing. Regarding Dynamic-Interaction, participants found the visualisation difficult to understand as too little information was being shown. We also improved some visualisations, such as Dollhouse, which participants complained was too small to see clearly. We thus increased the scale of the visualisation to 1:5 (1 m × 1 m) from its original 1:10 (0.5 m × 0.5 m) scale.

Using our pilot study to refine our design, we selected four visualisation techniques for the main implementation of the study:
• Everything: A VR-mimicking condition where the entire VE is visualised to the AR user at 1:1 scale. No modifications are made and the visualisation is symmetrical between the VR and AR user.
• Proximity: An 'arm's-reach' approach that dynamically visualises a 1 m radius of the VE around the VR user, with an additional 0.5 m radius of decreasing opacity to fade out the visible boundary of the technique. The parts of the VE that are visualised change depending on the location of the VR user.
• Interactive: A static, predetermined visualisation of interactive movable objects in the VE. This is a selection of objects that the VR user can pick up and interact with using their controllers. As a static visualisation, all the interactive objects are visualised at all times.
• Dollhouse: A 1:5 scaled visualisation that provides a top-down overview of the VE, hovering 1 m above floor level. The walls and ceiling of the virtual model are removed to facilitate looking into its interior.

These techniques can be seen as diagrams in Figure 2 and from the AR user's perspective in Figure 3.

We developed this selective visualisation system in Unreal Engine 4.25 as a networked application, which runs on two computers on the same LAN, with one acting as the server and the other as a client. We used an HTC Vive Pro and a Microsoft HoloLens 2, both of which have their own coordinate systems: lighthouses managed by SteamVR and embedded camera-based tracking, respectively. The coordinate systems are aligned using a custom calibration procedure. This procedure consists of scanning two QR codes with the HoloLens and placing the HTC Vive controllers on top of these codes. With two corresponding points known in both coordinate systems, the origin of the HoloLens coordinate system is transformed so that these points overlap with the corresponding points in the SteamVR coordinate system (a simplified sketch of this alignment is given below). In operation, drift of up to 5 cm between the coordinate systems can be observed.

For the purposes of testing the visualisation system, we created a 'bartender simulation' as the VE. The participant assumes the role of the AR user within this simulation because we are investigating the AR user's perception of how a VR user interacts with a VE. The researcher assumes the role of the VR user and conducts a 'performance' using a predetermined script for the AR user to observe. This performance consists of making cocktails using three different recipes; the order of the recipes and the actions performed for making them differed between the visualisation techniques. The VR user used three types of interactive objects to perform this script: fruits, bottles and glasses. All these objects can be picked up and moved with the VR user's motion controllers. The glasses can hold slices of fruit and liquids. The contents of the glass are indicated by floating text above it. A fruit is added when it enters the collision box of the glass. Liquids are only added when the top of the bottle collides with the glass, to mimic a pouring motion. When the glass is held upside down, its contents are emptied. On the bar counter there is floating text indicating the current order and a simplified three-ingredient recipe. Below this text is a collision box that checks the glass contents on collision; when the contents are correct, it empties the glass and advances to the next recipe. These are the different events that the AR user can perceive in the VE that are triggered by the VR user.

We recruited thirteen participants for the main study, aged between 21 and 57 (M=30.62, SD=12.48; 6 male, 7 female). They had low self-reported experience with VR and AR technologies (M=3.08, SD=1.19 on a 7-point scale).
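To illustrate the two-point calibration described earlier in this section, the following is a minimal sketch of how such an alignment can be computed. It is an illustrative simplification under our own assumptions (the two markers are treated as ground-plane points, both tracking systems share the same scale and vertical axis, and all names are hypothetical); it is not the system's actual implementation.

```python
# Minimal sketch of a two-point alignment between tracking systems (illustrative only).
# Two physical markers are measured once in the HoloLens frame (h1, h2) and once in the
# SteamVR frame (s1, s2); we solve for the yaw rotation and horizontal translation that
# map HoloLens ground-plane coordinates onto SteamVR ground-plane coordinates.
import numpy as np

def align_two_points(h1, h2, s1, s2):
    """Return (R, t) such that R @ h + t ≈ s for 2D ground-plane points h, s."""
    h1, h2, s1, s2 = map(np.asarray, (h1, h2, s1, s2))
    # The angle between the marker baselines gives the yaw offset between the frames.
    dh, ds = h2 - h1, s2 - s1
    yaw = np.arctan2(ds[1], ds[0]) - np.arctan2(dh[1], dh[0])
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    # The translation aligns the rotated midpoint of the HoloLens pair with the SteamVR pair.
    t = (s1 + s2) / 2 - R @ ((h1 + h2) / 2)
    return R, t

def hololens_to_steamvr(p, R, t):
    """Transform a ground-plane point from the HoloLens frame into the SteamVR frame."""
    return R @ np.asarray(p) + t

# Example with made-up marker positions: the HoloLens frame is rotated 90° and offset.
R, t = align_two_points(h1=(0.0, 0.0), h2=(1.0, 0.0), s1=(2.0, 1.0), s2=(2.0, 2.0))
print(hololens_to_steamvr((0.5, 0.0), R, t))  # ≈ (2.0, 1.5), the midpoint between the markers
```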
Participants were tasked with using the HoloLens 2 to observe a VR user performing a bartender simulation. The HoloLens was set to the highest brightness setting. Holographic remoting was used to stream the image from the computer to the HoloLens. Participants were given an 'event recognition task', a list of events which they needed to recognise as they happened. These events are triggered by the VR user's actions. The researcher used an HTC Vive Pro with the Vive wireless attachment to perform the role of the VR user, following a predetermined set of actions on each trial. During each trial the VR bartender made three drinks, each consisting of combining three ingredients in a glass. Participants performed four trials, one for each technique, during which they could move around the lab to adjust their viewpoint. The study lasted about 40 min.

Before taking part in the study, participants signed a consent form and filled in a demographics questionnaire; we then explained the event recognition task to them and instructed them on how to use the Microsoft HoloLens 2. Before starting the first trial, participants were given some time to look around the bar environment and get to know the positions of all the objects. The techniques were presented in counterbalanced order, using a balanced Latin square. During each trial participants were required to pay attention to the VR user and the VE. After each trial participants filled in which events they saw happen, the Slater-Usoh-Steed (SUS) presence questionnaire [19], Kennedy's Simulator Sickness Questionnaire [10] and a questionnaire with custom questions on a 7-point Likert scale. After the last trial participants were asked to rank the techniques (1st, 2nd, 3rd and 4th), and were interviewed on their thoughts on the techniques and the experience in general.

The study took place during the COVID-19 pandemic. The keyboard, mouse and desk area used by the participants were disinfected before and after the study, as was the HoloLens 2, for which a Cleanbox UV-C decontamination device was used. Participants and researcher disinfected their hands before and after the study, wore face masks and maintained a distance of at least 1.5 m between them. There were at least 30 min between participants, to avoid them meeting and to allow time to disinfect and ventilate our lab. The study and COVID measures were approved by the university's privacy and ethics board (PRET).

Preference Ranking. Participant preference of the techniques can be seen in Figure 4. Following a pairwise Wilcoxon signed-rank test, the Everything technique was ranked significantly higher than Interactive (p<0.05) and Proximity (p<0.05), with 77% of participants ranking it first or second. Both Interactive and Dollhouse were ranked first or second by 46% of participants. Only 31% of participants ranked Proximity first or second.

Event Recognition. We used a competence calculation (True Positive Rate − False Positive Rate) [22] to analyse how well the participants understood the VR user's actions during the event recognition task. 'Competence' is the probability of knowing a correct answer without guessing and not by chance, and in this context refers to the probability of an AR user correctly identifying the actions conducted by the VR user. A Kruskal-Wallis test showed no significant difference (p=0.93) in competence across the four visualisation conditions. However, we observed a marginal difference in the mean competence that favoured the filtered visualisations.
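As an illustration of this analysis, the sketch below shows how a per-trial competence score (true positive rate minus false positive rate) and the Kruskal-Wallis comparison across conditions could be computed. The event names and score values are fabricated placeholders, and the code is our own simplification rather than the analysis script used in the study.

```python
# Illustrative sketch of the event-recognition analysis: competence = TPR - FPR per trial,
# followed by a Kruskal-Wallis test across the four visualisation conditions.
from scipy.stats import kruskal

def competence(reported, occurred, all_events):
    """TPR - FPR for one trial, given the events the participant reported."""
    reported, occurred = set(reported), set(occurred)
    tpr = len(reported & occurred) / len(occurred)
    non_events = all_events - occurred          # events that did not happen in this trial
    fpr = len(reported & non_events) / len(non_events) if non_events else 0.0
    return tpr - fpr

# Hypothetical single trial: the participant missed one event but reported no false ones.
all_events = {"add fruit", "pour liquid", "empty glass", "serve drink"}
print(competence(reported={"add fruit", "pour liquid"},
                 occurred={"add fruit", "pour liquid", "serve drink"},
                 all_events=all_events))        # -> 0.67

# Hypothetical per-participant competence scores for the four conditions.
scores = {
    "Everything":  [0.8, 0.7, 0.9, 0.6],
    "Interactive": [0.9, 0.8, 0.7, 0.8],
    "Proximity":   [0.7, 0.6, 0.8, 0.7],
    "Dollhouse":   [0.8, 0.9, 0.6, 0.7],
}
stat, p = kruskal(*scores.values())
print(f"Kruskal-Wallis H={stat:.2f}, p={p:.2f}")
```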
Interviews. We analysed the interviews using thematic analysis [1]. We categorised user responses into three themes: firstly, the ability to focus on the simulation; secondly, the presence of the VR user; and thirdly, feedback on the visualisation methods. Participants found it harder to focus in the Everything condition, with five participants finding real objects distracting and two finding virtual objects distracting. In contrast, five participants found it easier to focus in the more visually filtered Interactive condition. This is higher than the number of participants who stated that Dollhouse or Proximity helped them focus, which was two. Regarding the presence of the VR user: five participants commented that rather than the physical appearance of the VR user, they found themselves focusing on the actions being conducted. Any mentions of the physical appearance of the VR user only arose from room-scale conditions, even though the VR user was co-located in all conditions. Regarding feedback on the visualisation conditions: for Interactive, six participants found they missed the bar counter as a point of reference in the scene. For Proximity, two participants complained about having less control over the visibility of virtual artefacts, and two other participants wanted to stay aware of the invisible part of the VE.

Issues with the selective techniques were found in the main study; these were addressed, and the resulting improved techniques were evaluated in a follow-up study. In feedback given during the interviews, participants mentioned issues with our selective techniques: for Interactive they missed the bar counter as a point of reference, and for Proximity it was confusing that the environment disappeared completely, which removed all context in which to see the highlighted area around the VR user. We thus implemented two new selective techniques to address this feedback:
• Spotlight: An improved version of Proximity in which the VE in proximity of the VR user is rendered opaque, modified so that the AR user can see the rest of the VE as simple outlines. This allows users to see the highlighted area in the context of the rest of the VE without it obstructing their view of the physical environment.
• Context: A refinement of Interactive that responds to the participants' desire to see more of the VE. The furniture that supports the interactive objects can now be seen, i.e., the counter and sink.

These techniques can be seen as diagrams and from the AR user's perspective in Figure 5. The follow-up study followed the same procedure as the first study described in subsection 4.1, with the Proximity technique replaced by Spotlight and the Interactive technique replaced by Context. For the follow-up study we recruited 13 participants, aged between 19 and 57 (M=29.77, SD=14.35; 6 male, 7 female), who had low self-reported experience with VR and AR (M=2.92, SD=1.25 on a 7-point scale).

Preference Ranking. Participant preferences can be seen in Figure 6. A pairwise Wilcoxon signed-rank test showed that Spotlight ranked significantly lower than the other techniques (p<0.01 for Everything and Context, and p<0.05 for Dollhouse), with 92% of participants ranking it third or fourth. Dollhouse was ranked significantly lower (p<0.05) than Everything, with 54% of participants ranking it third or fourth and 77% ranking Everything first or second. Context was ranked first or second by 69% of participants.
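For illustration, pairwise comparisons of preference ranks of this kind could be computed as in the sketch below. The rankings shown are fabricated placeholders rather than the study data, and the script is our own assumption, not the authors' analysis code; corrections for multiple comparisons are omitted.

```python
# Sketch of the pairwise preference-ranking comparison: each participant ranks the four
# techniques 1-4 (1 = most preferred), and paired Wilcoxon signed-rank tests compare them.
from itertools import combinations
from scipy.stats import wilcoxon

# Hypothetical ranks given by five participants to each technique (placeholder data).
ranks = {
    "Everything": [1, 2, 1, 1, 2],
    "Context":    [2, 1, 2, 3, 1],
    "Dollhouse":  [3, 3, 4, 2, 3],
    "Spotlight":  [4, 4, 3, 4, 4],
}

for a, b in combinations(ranks, 2):
    # zero_method="wilcox" discards pairs whose rank difference is zero.
    stat, p = wilcoxon(ranks[a], ranks[b], zero_method="wilcox")
    print(f"{a} vs {b}: W={stat:.1f}, p={p:.3f}")
```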
Interviews. The same three themes were identified in the interviews as in the main study: the ability to focus on the task, the presence of the VR user, and feedback on the methods. Four participants found the VE distracting in Everything, while three participants were distracted by the VE represented as outlines in Spotlight, finding the out-of-focus environment unclear. Three participants also found that Spotlight helped them focus more on the bartender. Four participants only saw the bartender as an avatar, while six others mentioned that they could see the physical person behind the avatar in the room-scale conditions. Three participants said they could not see the bartender well enough. Six participants found Everything and Context very similar; five participants even expressed difficulty in discerning these two techniques. Three participants expressed frustration with the limited vision in Spotlight. Nine participants found that Dollhouse gave them a good overview of the VE, while five participants found it too small.

We hypothesised that it would be possible to remove parts of the VE that are non-essential to the task being performed in it without altering an external user's perception of the task itself. Our results indicated that removing a large part of the VE indeed does not create a significant difference in how well an AR user can identify the events triggered by a VR user. However, our findings also reveal that competence does not necessarily correspond with user preference for the different visualisations: we identified that participants preferred to see the supporting furniture of visible objects and did not want to lose control over the visualisation. The initial static selection, Interactive, was also not preferred; participants indicated that they could not see enough of the VE. We developed an improved iteration, Context, which showed relevant furniture in addition to the original objects of the VE. Participants responded more positively to this visualisation, commenting that Context was similar to seeing the entire VE. Some participants even expressed being unable to tell the difference between this selection and seeing Everything. Between the main and follow-up studies, the preference ranking of this static visualisation increased by one rank.

On the effect of scale, we found that Dollhouse provided a better overview of the VE, but also that many participants found it too small. We were able to use a 1:5 scale because our VE was only as large as the physical size of the room. Larger VEs need to be scaled down further, which can exacerbate the issue of them being too small, or they cannot be shown in their entirety, which can make users lose their overview of the VE. In ObserVAR [18] participants found the World Scale condition easier to use than the World in Miniature condition, which is supported by our results, where participants preferred the Everything and static-selection room-scale techniques over the Dollhouse technique. The results from ObserVAR indicate that their World Scale provided a better overview, which contradicts our results indicating that Dollhouse provided a better overview. This can be explained by ObserVAR's implementation of World in Miniature, which visualises a separate miniature for each VR user, thus splitting up the information required by their user study participants.

Two types of selection were investigated: a predetermined static selection of objects and a dynamic selection that follows the VR user.
Participants did not prefer the dynamic selection, citing a lack of control over what they could see in the VE. A similar trend was cited in HMD Light, in which external users looking into a VR user's VE wanted to have more control over the visualisation of the VE [21]. Further comparisons can be made with HMD Light regarding user preferences on the stability of the visualisation. Comparing a third-person view with a first-person view, most users in HMD Light chose the third-person view because it was more stable and holistic than the first-person view [21]. In our study, more holistic and static visualisations such as Everything and Dollhouse were preferred over more dynamic visualisations such as Proximity or Spotlight.

Participants in selective visualisations such as Interactive or Context identified VE events marginally more accurately, on average, than when seeing Everything. Compared to other visualisations, the majority of the participants highlighted some distracting features in the Everything visualisation. In contrast, a number of participants cited Interactive in particular as useful for maintaining focus, despite its lower preference rating compared to Everything. However, it is important to note that we found no statistically significant differences showing that more selective visualisations improve focus. We have only observed that the mean values for competence are marginally higher for the selective visualisations in this instance of a 'bartender' VE. Further investigation is necessary, perhaps with a range of different levels of information that incorporate tasks of greater complexity.

For researchers and developers in CR, we recommend different visualisations depending on the purpose and appearance of the VE. For a VE that requires the AR user to have an overview of the space, the Dollhouse condition has been shown to be the most effective of those evaluated in this study. It is important to note that the scale of the Dollhouse depends on the size of the VE, as very large VEs are potentially limited by the physical space available even when scaled, and there exists a limit to how small a VE can be visualised before the AR user no longer understands what the VE represents. Additionally, the VE should be visualised as a static selection as opposed to a dynamic selection whenever possible, as static selections have been shown to rank higher in terms of user preference. Lastly, it is possible to remove all non-essential information and preserve the recognition of events, but showing the immediate context matters for user preference.

In two studies we investigated how a selective visualisation system of VEs can influence an AR user's perception of a co-located VR user. We looked at two variables: the level of visual information and the effect of scale. Regarding the level of visual information, we observed that filtering specific selections of the VE did not significantly affect how well people could identify events in the VE. These selections were based on the interactive range and possibilities of the VR user. Regarding scale, users generally agreed that smaller visualisations provide a better overview of the VE, but risked decoupling the user from the task at hand. In terms of user preference, our qualitative data showed that participants tended to prefer static visualisations over dynamic visualisations, disliking the lack of control they could exercise over the visualisation of the VE.
In future work we would like to improve these visualisations to apply to a greater variety of VE contexts. Techniques such as Proximity are generalisable, but techniques such as Context are very specific to the context of the VE, as they use a predetermined selection of virtual objects. This selection of visualised objects can be made in different ways: instead of a predetermined selection, future work could investigate the creation of an interface for the AR, or VR, user to make this selection themselves. Moreover, we would like to incorporate these visualisations into an interactive implementation of this visualisation system, as currently the AR user only assumes a passive spectator role in the task. We could test our visualisations in a collaborative task that requires both the AR user and the VR user to interact with elements of the VE.

This research is supported by Internal Funds KU Leuven (C14/20/078).

References:
[1] Using thematic analysis in psychology
[2] Characterizing asymmetric collaborative interactions in virtual and augmented realities
[3] Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality
[4] ShareVR: Enabling co-located experiences for virtual reality between HMD and Non-HMD users
[5] FaceDisplay: Towards asymmetric multi-user interaction for nomadic virtual reality
[6] The Body in Cross-Reality: A Framework for Selective Augmented Reality Visualisation of Virtual Objects
[7] Dollhouse VR: A multi-view, multi-user collaborative design workspace with VR technology
[8] Let Your World Open: CAVE-based Visualization Methods of Public Virtual Reality towards a Shareable VR Experience
[9] Information filtering for mobile augmented reality
[10] Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness
[11] Silhouette Games: An Interactive One-Way Mirror Approach to Watching Players in VR
[12] TransceiVR: Bridging Asymmetrical Communication Between VR Users and External Collaborators
[13] Augmented reality: a class of displays on the reality-virtuality continuum
[14] Navigation and locomotion in virtual worlds via flight into hand-held miniatures
[15] Scale impacts elicited gestures for manipulating holograms: Implications for AR gesture design
[16] Revisiting Milgram and Kishino's Reality-Virtuality Continuum
[17] An asymmetric collaborative system for architectural-scale space design
[18] ObserVAR: Visualization system for observing virtual reality users using augmented reality
[19] Using presence questionnaires in reality
[20] Slice of light: Transparent and integrative transition among realities in a multi-HMD-user environment
[21] HMD Light: Sharing In-VR Experience via Head-Mounted Projector for Asymmetric Interaction
[22] Cultural Consensus Model. In Encyclopedia of Social Measurement
[23] SpaceTime: Enabling fluid individual and collaborative editing in virtual reality