Virtual & Augmented Reality

Research topics of particular importance: The team specializes in the human-centered design of virtual & augmented reality applications in various application areas. These include, for example, situation awareness, human-robot interaction, and maintenance.

The human-centered design of virtual reality (VR) and augmented reality (AR) applications is at the core of the research area. Special emphasis is placed on the exploration of new design and interaction paradigms for three-dimensional interfaces. Natural interaction techniques as well as the use of different modalities such as gesture, voice or gaze control, navigation in virtual environments, technology-enhanced learning, multi-user AR/VR and interaction with autonomous systems are of particular importance as research topics.

The team ensures the suitability of the developed design solutions through systematic, context-specific evaluations with prospective users. In this way, the applications can be tested against objective performance measures as well as subjective metrics. The insights gained are incorporated into the solutions, which are iteratively improved and can thus optimally support users in performing complex tasks.

Extensive methodological expertise is leveraged: in the analysis phase, the approach covers, for example, the development of personas, user journey maps, or scenarios. During development, current industry solutions and expertise with 3D engines, among other things, come into play. This enables efficient implementation of the design solutions, for which modern hardware is used and dedicated solutions are developed. User behavior is then captured in controlled environments and statistically evaluated.

Augmented Reality for aircraft maintenance and training (ARIEL)

Advanced military transport aircraft such as the Airbus A400M Atlas were developed with the aim of addressing increasing requirements in the field of tactical and strategic logistics. This also increases the complexity of the aircraft: one of the A400M's four engines alone has more than 10,000 individual parts. The maintenance of these complex systems places the highest demands on the personnel performing the work and on their training. Augmented reality (AR) technologies augment the user's real environment with computer-generated content, creating an actionable link between the physical world and electronic information.

The goal of the ARIEL project is to investigate the potential of AR in aircraft maintenance and training. To this end, the project analyzes which three-dimensional information and data visualization techniques are suitable in this context, and examines the suitability of input modalities and interaction techniques. In collaboration with domain experts, a demonstrator was developed within the framework of human-centered design that addresses representative use cases in maintenance and training.

Virtualized Control Center for Unmanned Systems (VOPZ UAS)

Drones or Unmanned Aerial Vehicles (UAVs) are used by the German Navy, Air Force and Army. The corresponding UAV Control Systems (UCS) range from tablets to entire operations centers and are often only usable for a specific drone. In the Virtualized Control Center for Unmanned Systems (VOPZ UAS) project, an environment is being designed that can be viewed in virtual reality (VR) and serves as a tool for rapid prototyping as well as a virtual control center.

This enables virtual controls to be quickly adapted to a specific drone. Using VR, 360° views are also possible, allowing free and seamless arrangement of elements, as well as stereoscopic display of 3D data such as waypoints, airspace data or sensor data. The research focus is on collaboration issues as well as the identification, adaptation and generalization of typical UCS elements for a three-dimensional environment. The knowledge gained in the VOPZ UAS project can lay the foundation for the design of future drone and aerial systems.

re:pair – Remote Procedure Assistance in Immersive Reality

The COVID-19 pandemic has shown how important distributed collaboration is, and how difficult it remains despite advanced digital solutions. Not being on site hampers discussions and the identification of problems, with negative effects on the development of possible solutions. In the re:pair project, immersive technologies such as augmented and virtual reality (AR/VR) are used to create the impression of being on site, even if participants are several hundred miles apart.

The technician uses an AR-HMD, with real-time information overlaid on the real component. The expert uses a VR-HMD and is in a fully computer-generated, three-dimensional world in which they see a virtual representation of the defective component in front of them. In addition, the expert has several virtual tools at their fingertips that allow them to interact with both the component and the technician. One of these tools allows the expert to examine the inside of the component by cutting into it at any point. Another tool acts as a laser pointer.
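How such a cutting tool can be realized is not detailed in the project description; as a rough illustration, a cross-section can be obtained by keeping only the geometry on one side of a user-defined plane. The following is a minimal sketch under that assumption, not the project's actual tool:

```python
import numpy as np

# Illustrative sketch of a cross-section tool: the expert defines a cutting
# plane (point + normal), and only the geometry on one side of it is kept
# for rendering, exposing the component's interior. Function names and the
# vertex-mask approach are assumptions for illustration.

def visible_vertex_mask(vertices, plane_point, plane_normal):
    """Return a boolean mask of vertices on the kept side of the cutting plane."""
    vertices = np.asarray(vertices, dtype=float)
    normal = np.asarray(plane_normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    # Signed distance of each vertex to the plane; keep the non-negative side.
    signed_dist = (vertices - np.asarray(plane_point, dtype=float)) @ normal
    return signed_dist >= 0.0

# Example: cut a unit cube's corner vertices with a vertical plane at x = 0.5.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
mask = visible_vertex_mask(cube, plane_point=(0.5, 0, 0), plane_normal=(1, 0, 0))
print(mask)  # vertices with x >= 0.5 are kept
```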

Mobile user interfaces and augmented reality for microdrones

Civilian rescue missions or military operations in buildings are dangerous and highly dynamic for emergency forces, as walls restrict visibility enormously. Microdrones could reduce the risk: they accompany the emergency forces, explore the situation and identify dangers lurking around the next corner at an early stage. However, currently available models are primarily designed for outdoor use. The possibilities for using them indoors or in canals are very limited. To ensure that they can also be used there, the control and display of information must be as user-friendly and reliable as possible.

The Human-Systems-Engineering department has investigated how reconnaissance data can be captured and collected with microdrones in indoor and outdoor environments. The focus was on the design and visualization of user interfaces. In addition to a mobile user interface, an augmented reality (AR) interface was also designed and implemented.

Special attention was given to the use of multimodal and situational communication. The work investigated how AR can best be used to provide a spatial representation of information, as well as how different modalities (touch, gestures, and head tracking) can be used to explore information and control the drone.

Virtual Reality in Mission Planning

The AutoAuge project investigated the extent to which unmanned systems can support a dismounted infantry platoon within a human-multi-robot team. In this context, virtual reality (VR) was used to provide situational awareness to facilitate mission planning. By using VR goggles, the user can explore the mission terrain and work out appropriate use of the unmanned systems. In doing so, they can identify relevant locations in the terrain, which are automatically transferred to the command and control system.

In addition to the interaction design in virtual reality, another aspect was the integration of 3D models, which can come from external agencies such as the Center for Geoinformation Systems of the German Armed Forces or be generated by the unmanned systems themselves. Two core components of the VR system were examined in more detail: locomotion and system control in VR. Finally, the overall VR mission planning application was evaluated in a user study, the results of which suggest excellent usability (System Usability Scale) and user experience (User Experience Questionnaire).
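The System Usability Scale referenced here follows a fixed scoring scheme (Brooke, 1996): ten items rated on a five-point scale, with odd- and even-numbered items contributing differently, scaled to a 0-100 score. A minimal sketch of that standard scoring, shown for illustration only and not as the study's actual analysis code:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring (Brooke, 1996).

    `responses` is a list of 10 answers on a 1-5 Likert scale, in
    questionnaire order. Illustrative sketch, not the study's code.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 Likert responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # odd-numbered items: r-1, even-numbered: 5-r
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5          # scale the 0-40 sum to 0-100

# Example: a fairly positive response pattern
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 1]))  # -> 90.0
```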

A freehand gesture framework for systems with fully articulated hand tracking

The use of gestures as a means of communicating with interactive systems is now widespread: Touch gestures simplify the operation of smartphones, hand gestures control industrial robots, and full-body gestures replace game controllers for controlling games. With immersive technologies like VR and AR, the way we communicate with these systems is also changing. For example, interaction based on touch gestures or the mouse and keyboard input established in the PC sector is only applicable to a limited extent in a virtual, three-dimensional space.

The latest generation of VR and AR hardware allows hands to be captured in real time, enabling the use of hand gestures as an input method. However, such systems typically support only a limited number of predefined gestures. In order to use arbitrary hand gestures in an immersive system, the ability to define custom gestures and have them recognized in real time is required.

The goal of this work was the conception and development of such an environment. It was implemented in the form of a framework that can be easily integrated into software projects that use gesture control. The feasibility of integration was demonstrated using an example application for navigating PDF files in VR. The framework's ability to reliably recognize previously defined freehand gestures was evaluated in an exploratory user study.
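The framework's internals are not described here; as a rough illustration, recognition of user-defined static gestures can be implemented as template matching on normalized joint positions. A minimal sketch under those assumptions (joint layout, normalization scheme, and threshold are illustrative choices, not the framework's actual API):

```python
import numpy as np

def normalize(joints):
    """Make a hand pose translation- and scale-invariant.

    `joints` is an (N, 3) array of joint positions from the hand tracker.
    Positions are expressed relative to the wrist (joint 0) and divided by
    the hand size so that recordings from different users align.
    """
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints[0]                 # wrist-relative coordinates
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale if scale > 0 else centered

class GestureRecognizer:
    """Illustrative template matcher for user-defined static hand gestures."""

    def __init__(self, threshold=0.25):
        self.templates = {}        # gesture name -> normalized joint array
        self.threshold = threshold # maximum per-joint distance to accept a match

    def define(self, name, joints):
        """Record a custom gesture from a single demonstration frame."""
        self.templates[name] = normalize(joints)

    def recognize(self, joints):
        """Return the best-matching gesture name for a live frame, or None."""
        pose = normalize(joints)
        best_name, best_dist = None, float("inf")
        for name, template in self.templates.items():
            dist = np.linalg.norm(pose - template) / len(pose)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist < self.threshold else None
```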

A Systematic Literature Review of Virtual Reality Locomotion Taxonomies

The change of the user's viewpoint in an immersive virtual environment, called locomotion, is one of the key components in a virtual reality interface. Effects of locomotion, such as simulator sickness or disorientation, depend on the specific design of the locomotion method and can influence the task performance as well as the overall acceptance of the virtual reality system. Thus, it is important that a locomotion method achieves the intended effects. The complexity of this task has increased with the growing number of locomotion methods and design choices in recent years.

Locomotion taxonomies are classification schemes that group multiple locomotion methods and can aid in the design and selection of locomotion methods. Like locomotion methods themselves, there exist multiple locomotion taxonomies, each with a different focus and, consequently, a different possible outcome. However, there is little research that focuses on locomotion taxonomies.

We performed a systematic literature review to provide an overview of possible locomotion taxonomies and an analysis of possible decision criteria such as impact, common elements, and use cases for locomotion taxonomies. We aim to support future research on the design, choice, and evaluation of locomotion taxonomies and thereby support future research on virtual reality locomotion.

Impact of Scene Transitions on Spatial Knowledge during Teleportation in Virtual Reality

Teleportation is a locomotion technique in Virtual Reality (VR), widely used in immersive games and applications. It allows open virtual space exploration, unrestricted by physical space, but leads to disorientation due to a lack of sensory information. Scene transitions can help counter disorientation by providing visual cues of relative motion. This thesis investigated user-controlled teleportation, augmented with three types of scene transitions: instant (i.e. regular teleportation), pulsed, and continuous.
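The three transition types can be thought of as different interpolation schedules for the user's viewpoint. A minimal sketch of that idea (frame counts and durations are illustrative assumptions, not the values used in the study):

```python
# Illustrative sketch of the three scene transition styles as interpolation
# schedules for the user's virtual position.

def lerp(a, b, t):
    """Linear interpolation between two 3D positions."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def instant_transition(start, target):
    yield target                                   # regular teleportation: one jump

def continuous_transition(start, target, frames=30):
    for i in range(1, frames + 1):                 # smooth motion spread over many frames
        yield lerp(start, target, i / frames)

def pulsed_transition(start, target, pulses=4):
    for i in range(1, pulses + 1):                 # a few discrete intermediate jumps
        yield lerp(start, target, i / pulses)

# Each yielded position would be applied to the VR camera rig, frame by frame.
for pos in pulsed_transition((0.0, 1.7, 0.0), (4.0, 1.7, 6.0)):
    print(pos)
```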

A formal user study was conducted in a realistic context to observe the impact of the transitions on spatial knowledge acquisition, which was evaluated by checking the accuracy of the cognitive map developed during virtual navigation. The effects on spatial orientation and VR sickness were also individually measured. It was found that instant transition induced the most disorientation while causing negligible VR sickness. On the other hand, pulsed and continuous transitions, when combined with regular teleportation, facilitated a more detailed view of the environment and helped maintain orientation better than instant transition, without inducing severe sickness.

Optical see-through augmented reality can induce severe motion sickness

The aim was to investigate whether severe symptoms of visually induced motion sickness (VIMS) can occur in augmented reality (AR) optical see-through applications. VIMS has been extensively studied in virtual reality (VR), whereas it has received little attention in the context of AR technology, in which the real world is enhanced by virtual objects. AR optical see-through glasses are becoming increasingly popular as technology advances. Previous studies showed minor oculomotor symptoms of VIMS with the aforementioned technology. New applications with more dynamic simulations could alter previously observed symptom severity and patterns.

In experiment 1, we exposed subjects to a traditional, static AR application for pilot candidate training. In experiment 2, subjects completed tasks in a dynamic starfield simulation. We analyzed symptom profiles pre- and post-exposure with the Simulator Sickness Questionnaire (SSQ) and during exposure with the Fast Motion Sickness Scale (FMS). We also developed a new FMS-D that captures symptoms of dizziness during simulation.
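The SSQ scores follow the standard weighting scheme by Kennedy et al. (1993), in which the raw sums of the items assigned to each subscale are multiplied by fixed factors. A minimal sketch of that scoring step (the item-to-subscale assignment is omitted and the example values are invented):

```python
# Standard SSQ subscale weighting (Kennedy et al., 1993). Illustrative only;
# not the analysis code used in the experiments.

SSQ_WEIGHTS = {"nausea": 9.54, "oculomotor": 7.58, "disorientation": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(raw_nausea, raw_oculomotor, raw_disorientation):
    """Convert raw subscale sums (items rated 0-3) into weighted SSQ scores."""
    scores = {
        "nausea": raw_nausea * SSQ_WEIGHTS["nausea"],
        "oculomotor": raw_oculomotor * SSQ_WEIGHTS["oculomotor"],
        "disorientation": raw_disorientation * SSQ_WEIGHTS["disorientation"],
    }
    scores["total"] = (raw_nausea + raw_oculomotor + raw_disorientation) * TOTAL_WEIGHT
    return scores

# Pre/post comparison as used in the experiments: score both questionnaires
# and inspect the difference (values here are made up).
post = ssq_scores(3, 5, 4)
pre = ssq_scores(1, 2, 1)
delta = {key: post[key] - pre[key] for key in post}
```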

As expected, in experiment 1 we found low VIMS symptomatology with predominantly oculomotor symptoms. In experiment 2, on the other hand, we detected severe VIMS symptoms in some subjects, with disorientation (SSQ subscale) as the main symptom group. The present work demonstrates that VIMS can be of serious concern in modern AR applications. The FMS-D represents a new tool to measure symptoms of dizziness during exposure. VIMS symptoms need to be considered in the design and usage of future AR applications with dynamic virtual objects, e.g., for flight training or machine maintenance work.

Emotions are associated with the genesis of visually induced motion sickness in virtual reality

Visually induced motion sickness (VIMS) is a well-known side effect of virtual reality (VR) immersion, with symptoms including nausea, disorientation, and oculomotor discomfort. Previous studies have shown that pleasant music, odor, and taste can mitigate VIMS symptomatology, but the mechanism by which this occurs remains unclear. We predicted that positive emotions influence the VIMS-reducing effects. To investigate this, we conducted an experimental study with 68 subjects divided into two groups. The groups were exposed to either positive or neutral emotions before and during the VIMS-provoking stimulus. Otherwise, they performed exactly the same task of estimating the time-to-contact while confronted with a VIMS-provoking moving starfield stimulation. Emotions were induced by means of pre-tested videos and with International Affective Picture System (IAPS) images embedded in the starfield simulation.

We monitored emotion induction before, during, and after the simulation, using the Self-Assessment Manikin (SAM) valence and arousal scales. VIMS was assessed before and after exposure using the Simulator Sickness Questionnaire (SSQ) and during simulation using the Fast Motion Sickness Scale (FMS) and FMS-D for dizziness symptoms. VIMS symptomatology did not differ between groups, but valence and arousal were correlated with perceived VIMS symptoms.

For instance, reported positive valence prior to VR exposure was found to be related to milder VIMS symptoms and, conversely, experienced symptoms during simulation were negatively related to subjects’ valence. This study sheds light on the complex and potentially bidirectional relationship of VIMS and emotions and provides starting points for further research on the use of positive emotions to prevent VIMS.

An Overview and Analysis of Publications on Locomotion Taxonomies

In immersive virtual environments, locomotion allows users to change their viewpoint in the virtual world and is one of the most common tasks. Locomotion taxonomies can describe relationships between the locomotion techniques and thus represent a common understanding, form the backbone of many studies and publications, and can increase the comparability of studies.

Therefore, it is relevant for VR researchers, developers, and designers to get an overview of previous research on taxonomies including benefits, drawbacks, and possible research gaps. Current literature reviews focus on locomotion techniques instead of locomotion taxonomies. Thus, a time-consuming search, evaluation and comparison of many publications is required to get such an overview.

We present the design of an ongoing systematic literature review examining taxonomies of locomotion techniques. In addition, we present initial results, including an overview of publications introducing locomotion taxonomies, their relationships, and their impact. We aim to provide a reference to potential taxonomies to support the choice of a locomotion taxonomy, and insights into the evolution of the research field to aid the design of novel locomotion taxonomies.

Augmented reality in training on helicopter consoles

AR applications offer new possibilities for imparting knowledge and supporting self-directed learning. In this paper, we take a closer look at the use of AR learning tools for learning procedures in helicopter pilot training. For this purpose, a prototypical AR learning system was built, which is characterized by spatially anchored information and individual configuration options for users.

In a user study (n = 32), the developed learning tool was compared with a paper manual in a between-subjects design. Analysis of usage data, learning outcomes, and surveys showed that subjects successfully learned the procedure with both learning tools. There were no differences in learning outcomes between the groups; however, the results suggest that AR technology facilitated the learning task compared to paper instructions. In addition, learning with AR was rated as more original, attractive, and stimulating by the subjects.

An Evaluation of Pie Menus for System Control in Virtual Reality

While natural interaction techniques in virtual reality (VR) seem suitable for most tasks such as navigation and manipulation, force-fitting natural metaphors for system control is often inconvenient for the user. Focusing on traditional 2D techniques like pie menus and exploiting their potential in VR offers a promising approach.

Given that, we design and examine four pie menu variants: pick ray (PR), pick hand (PH), hand rotation (HR), and stick rotation (SR), addressing usability, user experience (UX), presence, error rate, and selection time. In terms of UX and usability, PH was rated significantly better than HR and SR, and PR better than SR. Presence was not affected by menu design. Selection times for PH were significantly shorter than for SR. PH and PR resulted in significantly lower error rates than SR and HR, respectively. Based on these findings, we derive implications for developers of VR applications.
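All four variants ultimately reduce the user's input to an angle on the menu plane that selects one sector. A minimal sketch of that shared selection logic (function names and parameters are illustrative, not the study's implementation):

```python
import math

def angle_to_item(angle_rad, n_items):
    """Map an angle in radians to a pie menu sector index (0..n_items-1)."""
    sector = (2 * math.pi) / n_items
    # Offset by half a sector so that item 0 is centered at angle 0.
    return int(((angle_rad + sector / 2) % (2 * math.pi)) // sector)

def pick_to_item(x, y, n_items):
    """Pick ray / pick hand style: a 2D position on the menu plane."""
    return angle_to_item(math.atan2(y, x), n_items)

def rotation_to_item(twist_rad, n_items):
    """Hand rotation / stick rotation style: a single 1-DoF rotation angle."""
    return angle_to_item(twist_rad, n_items)

print(pick_to_item(0.0, 1.0, 8))      # pointing straight "up" -> sector 2
print(rotation_to_item(math.pi, 8))   # half a turn -> opposite sector 4
```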

Evaluation of Immersive Teleoperation Systems using Standardized Tasks and Measurements

Despite advances in autonomous functionality for robots, teleoperation remains a means for performing delicate tasks in safety-critical contexts like explosive ordnance disposal (EOD) and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed in this regard, but bring about their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms.

The focus was placed on testing three optronic systems with differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit, providing fast pan, tilt, and yaw rotation, stereoscopic view, and spatial audio. The stereoscopic systems yielded significantly faster task completion only for the maneuvering task.

As expected, they also induced simulator sickness, among other effects. However, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion combined with careful system design can reduce the expected increase in simulator sickness compared to the monoscopic camera baseline, while making the interface subjectively more effective for certain tasks.

Point-and-Lift: 3DoF Travel in Virtual Environments

Travel techniques enabling 3 degrees of freedom (DoF) locomotion allow the user to obtain an overview of the virtual environment (VE). As a result, spatial information gathering as well as the user's orientation skills in the VE are supported. Existing 3DoF travel techniques are, however, rather inefficient and can exacerbate simulator sickness symptoms, which can lead to a reduced sense of presence. We propose a novel method called Point-and-Lift to address these problems. Furthermore, we present a user study concept comparing our method to state-of-the-art travel techniques regarding simulator sickness, presence, orientation skills, and performance.

Exploring Pie Menus for System Control Tasks in Virtual Reality

The growing popularity of Virtual Reality (VR) makes it possible to address a growing user base that can draw on extensive prior knowledge of traditional, desktop-based applications and related two-dimensional interaction techniques using menus. The transfer of these techniques into the three-dimensional interaction space benefits VR applications, in particular those that are characterized by extensive system control options.

The main focus of this work is on the adaptation of pie menus for VR. Four different implementations have been developed: Pick-Ray (PR) and Pick-Hand (PH), each supporting six degrees of freedom (6-DoF) for selection, as well as Hand-Rotation (HR) and Stick-Rotation (SR), each supporting one degree of freedom (1-DoF). To examine the influence of the four implementations on selection time, error rate, user experience, usability, and presence, we propose a corresponding user study.

Touch-based Eyes-free Input for Head-Mounted Augmented Reality Displays

Interacting with head-mounted augmented reality displays using natural user interfaces like speech recognition or gesture recognition is not practical in many situations, notably in public spaces. Since these displays can be used in combination with smartphones or wearables like smartwatches, the user interaction elements can be distributed across these devices. An eyes-free touch input concept for implementation on a connected mobile device with a touchscreen is presented here. An experiment was carried out to investigate user performance on three different input devices, a smartphone, a smartwatch, and a head-mounted touch panel (HMT), using the same set of touch gestures.
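A gesture set can be reused across such devices when the classification only relies on relative motion on the touch surface. A minimal sketch of a device-independent classifier of this kind (gesture names and thresholds are illustrative assumptions, not the concept's actual implementation):

```python
import math

def classify_gesture(start, end, duration_s, min_swipe_fraction=0.2, surface_size=1.0):
    """Classify a single-finger gesture from normalized touch coordinates.

    Works identically on a smartphone, smartwatch, or head-mounted touch
    panel because only relative motion on the surface is used.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)
    if distance < min_swipe_fraction * surface_size:
        return "long_press" if duration_s > 0.5 else "tap"
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# The recognized gesture would then be mapped to a command on the AR display,
# e.g. moving a selection cursor, without looking at the touch surface.
print(classify_gesture((0.2, 0.5), (0.8, 0.5), 0.2))   # -> swipe_right
print(classify_gesture((0.5, 0.5), (0.52, 0.5), 0.7))  # -> long_press
```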

Mobile devices are often used while walking; accordingly, the interaction was investigated both while standing and while walking. The evaluation showed no significant differences between the devices in user performance, response time, or errors. A significant difference in subjective performance between the HMT and the smartphone was found using the NASA TLX questionnaire. As expected, the subjective estimates of mental, physical, and temporal demands as well as effort and frustration were significantly higher while walking compared to standing.

It was expected that the size of the touch screen would affect performance across the various input devices, but this could not be confirmed. The touch gestures used were well suited for all three touch devices. Since the HMT is an integrated controller and showed no significant drawbacks in performance compared to the smartphone or smartwatch, it is worth exploring how to optimize gesture control as well as the size and placement of HMTs for future augmented reality displays.

An Eyes-Free Input Concept for Smartglasses

With the advancement of augmented reality (AR) technology in recent years, several AR head-mounted displays and smartglasses are set to enter the consumer market in the near future. Smartglasses are usually employed as output devices to display information and require a paired input device, an integrated touch panel, or buttons for input.

They can communicate and collaborate with other wearables like smartphones or smartwatches. This makes it possible to distribute the user interaction elements over multiple connected wearables. A concept for eyes-free input for smartglasses using wearables such as a smartphone or smartwatch, or an additional accessory, is presented here.