DOI: 10.1145/3613904.3641912

Choosing the Right Reality: A Comparative Analysis of Tangibility in Immersive Trauma Simulations

Published: 11 May 2024

Abstract

In the field of medical first responder training, the choice of training modality is crucial for skill retention and real-world application. This study introduces the Green Manikin, an advanced Mixed Reality (MR) tool, conceptually combining the immersiveness of Virtual Reality (VR) with the tangibility of real-world training, and compares it against traditional real-world simulations and VR training. Our findings indicate that MR and real-world settings excel in Self and Social Presence, and in intention to use, offering heightened psychological presence suitable for complex training scenarios. Effort expectancy was highest in real-world environments, suggesting their ease of use for basic skill acquisition. This nuanced understanding allows for better tailoring of training modalities to specific educational objectives. Our research validates the utility of MR and offers a framework for selecting the most effective training environment for different learning outcomes in medical first responder training.
Figure 1: The three first responder training modalities compared in this work: real training (left), mixed reality training (middle) and virtual reality training (right). Red marks the physical/real elements, and blue the virtual elements.

1 Introduction

In their daily operations, medical first responders (MFRs) are regularly immersed in highly interactive and stressful environments. These situations demand a blend of cognitive, emotional, and tangible skills, such as knowledge of medical processes and rules, resilience under pressure, effective decision-making, and hands-on interaction with medical equipment. Recognizing these multifaceted requirements, our research aims to address a critical gap in MFR training: we seek to develop a training solution that not only imparts knowledge but also provides immersive and tangible experiences to facilitate skill acquisition and retention for professionals in this life-saving occupation.
While invaluable for experiential learning, traditional training methods–which often involve real-world simulations with physical equipment and multiple actors–can be resource-intensive and logistically complicated to set up, making them infrequent occurrences in most training curricula. Large-scale exercises in particular can thus not be seen as training but rather as a rehearsal of already trained skills. From an HCI perspective, traditional training methods often fall short in terms of environmental realism and the accurate simulation of injuries, areas where interactive technology could offer significant improvements. Virtual Reality (VR) addresses some of these challenges by providing immersive environments for decision-making and cognitive skill development. However, VR falls short in tactile interaction design. Given that a medical professional’s primary diagnostic tool is often noted as their hands, this is a significant shortcoming that calls for an HCI-centered approach to create more balanced and effective training systems.
In recent years, Mixed Reality (MR) has been suggested as a promising solution to bridge this gap. As a blend of physical and digital interaction [32, 48], MR offers the immersive decision-making training of VR while incorporating tangible, real-world elements. However, despite the increased development and use of these novel training modalities, empirical comparisons between them remain sparse. This raises the question of whether advancements in training technology genuinely benefit medical first responders, or whether they are simply driven by the momentum of technological progress itself. The objectives of this work are therefore to compare different training modalities for MFRs (specifically MR, VR, and traditional simulation training) in terms of training acceptance and presence, and to assess the contexts in which these modalities should be employed, envisioning a blended training curriculum containing traditional and virtual forms of training.
This work further introduces the Green Manikin, a unique MR training tool that aims to bridge the gap between cognitive and tactile training experiences. We present a comparative study involving this tool, along with a real-world training vignette and a state-of-the-art VR simulation, to assess the strengths and limitations of each modality. Our goal is to determine which training environments are most effective for specific aspects of MFR training.
Our research specifically addresses two key questions:
RQ1: How do the three training modalities compare in terms of presence and technology acceptance?
By focusing on these two constructs, we aim to determine whether the immersive benefits of VR, when combined with the tactile, real-world aspects of traditional training, result in a training modality that is both immersive and more readily accepted by medical professionals.
RQ2: What modalities are most beneficial and best matching for which training objectives?
This aims to understand how different training modalities meet the specific user experience and interaction design needs of medical first responders.
In summary, this work makes three key contributions to the HCI community. First, we introduce the full-body tracked Green Manikin, a tangible MR tool for immersive and tactile first responder training. Second, we provide one of the first empirical comparison studies among real-world, VR, and MR training, focusing on presence and technology acceptance. Lastly, we offer actionable insights into tailoring virtual training towards specific training objectives for MFRs. The long-term goal of our research is to guide the development of future technology and training programs, ensuring that they effectively meet the complex and demanding training requirements of medical first responders.

2 Related Work

2.1 Mixed Reality and Presence

MR as a term has been used somewhat loosely in recent years, sometimes for overlaying reality with virtual elements, as with Microsoft’s HoloLens, and sometimes to describe a wider array of technologies on the reality-virtuality spectrum [32]. Recently, Skarbez et al. [48] have enhanced the reality-virtuality spectrum and put forth a framework spanning the space of MR technologies along three factors: Extent of World Knowledge (EWK), Immersion (IMM) and Coherence (COH).
EWK relates to how aware a system is of its environment or the objects within it. In the case of virtual training, tangible tools serve this role, as they can be tracked in space. IMM is concerned with how complete the sensory illusion of a simulated environment is. This is closely tied to Slater’s Place Illusion, but also Body Illusion [49] (also termed Physical Presence and Self-Presence, respectively). This not only includes the fidelity of audio-visual stimuli but also, more recently, multi-sensory stimuli like haptics, olfaction, or taste. Coherence closely relates to Slater’s Plausibility Illusion [49]–one of his key concepts of presence–and describes the amount of context-specific realism of what happens in a simulated environment, i.e. ‘This is really happening’. In the context of patient care training, a big factor in the plausibility of the training is the amount of social realism, leading to Social Presence, but also the amount of medical realism.
As immersion is defined as the extent to which a system tracks the actions of the user and gives adequate feedback on as many sensory modalities as possible [49], it can be increased by increasing the range of sensorimotor contingencies (SCs). These are described by Slater (ibid.) as ‘actions that we know to carry out in order to perceive, like moving the head and eyes to change gaze direction’. By including tangible interaction in MR, the range of SCs extends, which should lead to a higher immersion of the system, and therefore higher presence.
In this work, we use this framework, as it closely relates to presence and allows for a mapping of the three training modalities onto the three factors: real training is low in IMM and EWK, VR is high in IMM but low in EWK, and our proposed solution should be high in both IMM and EWK, as it involves tangible props tracked in space in a virtual environment. COH, though crucial, should be kept constant between the modalities to ensure comparability.
When speaking of tangible interaction and presence, a closely linked concept related to the IMM factor of a system is that of haptic realism, i.e. how closely the haptic sensation in a virtual environment approximates reality. Muender et al. [36] provide a framework for haptic feedback in MR/VR, in which haptic feedback is classified in a two-dimensional model comprising haptic fidelity (i.e., the realism of haptic feedback, abstract vs. realistic) and versatility (i.e., generic vs. specific). In their validation of the framework, they found a strong correlation between haptic fidelity and haptic realism, indicating the need for high-fidelity simulations of haptic experiences for an increased experience of haptic realism (relating to higher IMM and therefore higher presence within the Skarbez framework). In the context of this work, increasing the haptic fidelity of interacting with medical tools in virtual environments should therefore lead to more immersive training.

2.2 Training for Medical First Responders

A first responder is a professional trained to be the first to arrive and provide assistance at the scene of an accident or emergency. Different occupations are considered FRs (e.g. police officers, firefighters, medical FRs, CBRN specialists, and disaster management personnel). Medical first responders (MFRs) focus on triaging and treating injured persons at the scene of emergencies, equipped with various medical tools and medicine stored in backpacks and the ambulance.
Training for these emergencies takes on different forms and modalities. According to Baetzner et al. [3], different training methods are used in the domain of first responders to disasters: lectures, real-life scenario training, discussion-based learning, practical skill training, field visits, debriefings, and computer-based learning, including VR and MR training.
How a specific training program is designed depends on the training objectives or skills that are focused on [53]. In the existing literature, four main training objectives can be identified: (1) technical and physical skills of the occupation (e.g. handling a tool, treating patients, endurance, fatigue), (2) psychological (cognitive and emotional) skills relating to environmental threats, (3) communication, and (4) decision-making skills relating to the knowledge of processes, rules and situational demands.
In the following sections, we will give an overview of existing real and virtual training methods for these four objectives and how they relate towards the design of MR MFR training.

2.2.1 Technical and Physical Skills.

Technical skills are important to train for first responders, as muscle memory is an important aspect of performing them [25]. Interestingly, according to Baetzner et al. [3], practical skill training in disaster response was never the sole objective, but was always combined with other aspects like decision-making or communication.
Haskins et al. [18] provided a concept of use for VR training for first responders (police, firefighters, medical first responders) and noted that ‘VR can teach how to use specific equipment in a stressful but safe simulation’ [18, p. 60]. Physical mockups and passive haptics were especially mentioned as possible solutions. However, this is still an area where real-world training often has better results. Recently, due to technical improvements in the area of MR, more approaches towards training technical skills in MR have been published.
Escobar-Castillejos et al. [13] provide a review of haptic simulations in medical training. Examples of MR training in the medical domain are often in surgery, e.g. [17, 19, 22, 38, 40]. In the area of medical first responder training, dedicated skills have been addressed. This includes first aid for patients with seizures [2], handling of equipment and tools [25], first aid and resuscitation [6], COVID-19 nasopharyngeal swabs [7, 62], and FR responses to mass casualty incidents [58]. An approach using a physical manikin in VR in combination with data gloves and a head-mounted display was described in [46]. Similarly, Uhl et al. [54] describe a manikin integrated inside VR using chroma-keying, and Scherfgen et al. [45] proposed a system that ‘estimates the pose of a medical manikin’ and ‘haptically augments a 3D human model’ for VR-based training, allowing users to physically touch a virtual patient. Apart from these immersive approaches [46, 54], Gasques et al. [14] proposed ‘HoloSim’, where a physical manikin is augmented with visual overlays.
Based on related work, we suggest that training technical skills, especially in combination with other skills [3], is an important aspect that can be served well by MR training. Thus, for the Green Manikin solution presented in this work, we included the use of real tools (e.g. stethoscope, tourniquet), in combination with the other training modalities.

2.2.2 Psychological Skills.

Due to their exposure to dramatic, critical incidents and chronic stressors, first responders have a higher prevalence of negative stress- and trauma-related health outcomes compared to other professions. Psychological resilience and training for psychological resilience are important factors in preventing negative occupation-related health effects [21]. Therefore, training psychological skills, and especially resilience against stress and trauma, is an important aspect of current research on first responder training.
A review by Wild et al. [57] showed only limited ‘evidence for interventions aimed to improve well-being and resilience to stress’. However, they also noted that ‘current trials [...] show promise in preventing stress-related psychopathology’, but further evaluations in high-quality trials are needed. Multiple VR training solutions to improve resilience against stress and trauma have been developed, e.g. in the military domain [5, 39] and for police forces [37].
Based on the aforementioned work, we conclude that immersive technologies are well suited for psychological training, due to the highly immersive environment. Especially stress-inducing environments [5, 37] can support responders' resilience for upcoming stressful situations. We therefore integrated a realistic, stressful environment into our Green Manikin solution, to support training for stressful real-world situations [21].

2.2.3 Communication Skills.

Haskins et al. [18] describe challenges for VR training identified through user research with first responders. According to Haskins et al., ‘the most common response we got when we asked what the biggest pain point is for first responders is communication.’
Communication is a complex process that is influenced by many factors and variables [44, 50], including misunderstandings [51] and the communicators' knowledge base [44, p. 31]. When looking into communication during first responder operations, we can distinguish two categories: (1) inward-facing communication (communication among participating first responders) and (2) outward-facing communication (e.g. with people in civil society) [23].
There are examples of training for communication inside a hospital, e.g. diagnosing patients [29] or training for the patient consultation process before surgery [28]. However, they do not address first responders in their complex, stressful and sometimes dangerous context. In the FR domain, Montan et al. [34] evaluated an interactive course for the whole disaster management chain, including communication and coordination. Uhl et al. [54] integrated communication with patients in their MR simulation, where the trainer role-plays the patients using a microphone. Commercial approaches for training communication in stressful scenarios also exist, e.g. the Enhanced Dynamic Geo-Social Environment (EDGE), ETC Simulation’s Advanced Disaster Management Simulator, or XVR Simulation.
Communication training is often done via real-world roleplay. However, Uhl et al. [52] showed that VR can lead to similar results to real-world roleplay in terms of social presence; thus, communication training can be done in MR if the dialogue and the responses of NPCs are properly designed. In the Green Manikin solution, we integrated the possibility of talking to the victim (see Section 3.2.4) to address outward-facing communication. Inward-facing communication was not a focus of the presented study.

2.2.4 Decision-making Skills.

Decision-making skills, including the knowledge of processes, rules, and situational demands, are crucial in the education and training of medical first responders, as well as for nurses, physicians, and medical students. This is particularly true in mass casualty incidents and disaster situations, where decision-making and triage are essential skills.
Various forms of triage training exist, including video [1], real-world high-fidelity simulation [8, 9], educational review sessions [10], exercises with patient dummies [11], card-based training [24], serious games [24], educational courses [60] and Mixed Reality (MR) [54].
Special situations are also a focus of training. For example, Montan et al. [34] describe an interactive course that covers the entire ‘chain of response’ from the scene to the hospital, including communication, coordination, and command, using magnetized cards in a tabletop simulation. Motola et al. [35] evaluated training of medical first responders to CBRN responses using a training video that corresponds to the trained CBRN scenario. Jones et al. [20] assessed training for Emergency Medical Services’ response tactics to active shooter incidents, and Edinger et al. [12] evaluated online education for dealing with individuals with developmental disabilities.
Simulations, increasingly including computer/VR/MR-based methods, are common for decision-making training. We suggest that MR is a fruitful medium for training in medical decision-making, as the immersive environment enhances the perception of stressful situations and supports realistic decision-making processes. The Green Manikin solution we present supports decision-making training on multiple levels, from situation assessment to planning and executing the appropriate response, including the selection and application of the right tools for patient treatment.

2.3 Comparative Studies Between MR, VR, and Reality for Training

In the specific context of medical first responder training, we are currently not aware of comparative studies directly evaluating MR, VR, and traditional real-world training modalities in terms of user experience and acceptance.
Mills et al. [33] compared VR triage training and live simulation training, focusing on physical demand, satisfaction, and performance. They found that although no significant differences in the number of correct triages or general satisfaction with the modality were observed, VR offered some advantages in usability, as triage cards could be allocated much more quickly in VR. Beyond this, findings from other fields can provide valuable insights into the relative advantages and limitations of different training environments.
Although not focused on medical training, Rettinger et al. [41] highlighted the effectiveness and cost-efficiency of VR training compared to traditional methods for health professionals. This finding could have potential implications for our domain, particularly regarding cost-effectiveness.
A subsequent study by the same authors [43] examined various types of interactions within VR training, including Real-World, Controller-VR, Free-Hand-VR, and Tangible-VR. Their findings on the importance of tactile feedback, especially in medical scenarios where touch-based diagnostics and treatments are critical, are noteworthy. They found that Free-Hand-VR, which lacks haptic realism, showed the poorest training outcomes.
In the realm of high-risk training, such as explosive ordnance disposal, another study [42] found that both MR and VR offer advantages over real-world training in terms of usability and cognitive workload. While not directly applicable, these findings could offer valuable insights into medical training situations that similarly involve high-stakes, high-pressure decisions.
Winther et al. [59] explored VR training for sequential maintenance tasks and concluded that traditional hands-on training still yielded better outcomes for specific tasks. This emphasizes that, despite advancements in virtual technologies, there remain training needs best met through real-world, hands-on experiences.
In summary, although these studies were either not focused on HCI-related topics or were not conducted in the medical first responder training context, their collective insights into the comparative effectiveness of MR, VR, and traditional training environments offer helpful perspectives for HCI and for the design of our study.

3 Design Process

3.1 Background and Design Goals

The MR solution presented in this work builds upon prior work [54], where–based on the wants and needs of end-users from an extensive requirements phase of the project–a prototype for an MR manikin using chroma-keying and real tools was developed. Using a Varjo XR-3 headset in combination with chroma-keying enables the overlay of the participants’ real hands and tools on top of the virtual environment. Because the manikin and the floor are green, they are removed and replaced with virtual content.
The setup in said work consisted of a green training manikin torso and a virtual scene in which different (real) tools had to be used to treat an accident victim. During a formative evaluation that accompanied this first prototype, we gathered feedback which guided the next iteration of the solution presented in this section [ibid.].
The main needs that emerged from the formative evaluation guided our design goals for the next iteration presented in this work:
Full manikin incl. extremities
Moveable manikin
Larger selection of tools
Real clothing
Improved social realism of the virtual agent
We employed a participatory design process for the further iterative development of our solution, including a research and education organization for medical first responders in all stages of the process (ideation, prototype design, scenario creation and evaluation, see [61] for more details). In this process, our collaboration involved multiple experts in the field of medical first response from our partner organization. This team included seven medical professionals with diverse specializations to ensure a comprehensive understanding of the requirements and challenges in medical emergency scenarios. Their varied perspectives (including researchers, trainers, paramedics and doctors) were instrumental in refining the design and functionality of our MR solutions. Two of the experts, due to their specialized expertise in MR technology and direct experience with the simulated medical scenarios, were primarily involved in the iterative design process to ensure a focused and technically accurate development after the initial requirement workshops.
To capture the essence of the various medical tasks that occur during an emergency first response, we combined the knowledge of our experts with existing schemes for trauma assessment, namely the Airway, Breathing, Circulation, Disability, Exposure (ABCDE) method and the SAMPLER (Signs/symptoms, Allergies, Medication, Past medical history, Last meal, Events prior to incident, Risk factors) scheme [47]. These schemes are acronyms used to remember the crucial aspects during trauma assessment. During the workshops it became clear that any training for trauma assessment, be it virtual or not, would need to enable the trainee to work through this scheme, as this is what they learn and have to adhere to. This informed the training scenario design, detailed in Section 3.4. All relevant information from the scheme needs to be either observable (e.g. skin turning blue) or obtainable by questioning the role-player or the virtual patient. Regarding the interactions in the resulting scenario, we mapped out cause-and-effect schemes during the formative evaluation [ibid.], to fully account for the specifics of the tasks. This included information assessment from the schemes, combined with the appropriate actions of the MFR in response, as well as the timings of the different actions. For example, for the ventilation bag, this included the positioning of the MFR when applying it, the location and pressure of the application, the pumping frequency and intensity, and the effect that the patient's chest would slightly lift in accordance with the pumping motion.
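As a concrete illustration of such a cause-and-effect mapping, the sketch below encodes part of the ABCDE scheme as a lookup table. This is not the authors' actual implementation; the cues, actions, and effects merely paraphrase the scenario described in Section 3.4.

```python
# Illustrative sketch only: a cause-and-effect table for the ABCDE trauma
# assessment, mapping each step to observable cues and scenario reactions.
from dataclasses import dataclass

@dataclass
class AssessmentStep:
    cue: str                  # what the trainee can observe or ask about
    valid_actions: list[str]  # appropriate MFR responses
    effect: str               # how the simulation reacts to those actions

ABCDE = {
    "Airway": AssessmentStep(
        "patient responds verbally", ["check mouth and airway"],
        "speech interaction continues"),
    "Breathing": AssessmentStep(
        "heavy breathing, bump mark on chest",
        ["auscultate with stethoscope", "ventilate with bag valve mask"],
        "lung sounds become audible; chest lifts with pumping motion"),
    "Circulation": AssessmentStep(
        "strongly bleeding thigh wound", ["apply tourniquet"],
        "virtual bleeding stops"),
    "Disability": AssessmentStep(
        "declining consciousness", ["talk to patient, check responsiveness"],
        "answers slow down, then stop"),
    "Exposure": AssessmentStep(
        "clothed torso", ["open the zip hoodie"],
        "bump mark becomes visible"),
}
```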

3.2 Implementation and System Description of the MR Modality

3.2.1 Technical Setup.

The MR training prototype was developed using Unity 3D version 2022.3.7f1 with the High Definition Render Pipeline (HDRP) for enhanced graphical fidelity. The Varjo XR-3 head-mounted display (HMD) was employed in conjunction with the application Varjo Base for tracking, depth-sensing and calibration, and Varjo Lab Tools to enable chroma-key masking surfaces that combine the real and virtual worlds. The system ran on a Dell Alienware Aurora R12 PC with an 11th-generation Intel i9 CPU, 64 GB of RAM and a GeForce RTX 3080 graphics card, which rendered at approximately 40 frames per second during play.
For chroma-keying, a specialized green screen setup is used, comprising green fabric and floor tape. This enables the integration of physical objects, such as a stethoscope, into the virtual 3D environment. Additionally, it allows for a layered composition in which the physical Green Manikin is augmented with a virtual avatar. The depth-testing feature of the Varjo XR-3 supports the seamless integration of the user’s real hands and medical instruments into the virtual environment, utilizing depth information captured by the device’s integrated LiDAR sensors.
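As a rough illustration of the principle (the actual masking is performed by Varjo Lab Tools on the headset's video passthrough), a chroma-key composite can be sketched as follows; the HSV bounds are assumptions:

```python
# Illustrative chroma-key composite: pixels within a green hue range in the
# passthrough image are replaced by the rendered virtual image.
import cv2
import numpy as np

def chroma_key(passthrough_bgr, virtual_bgr,
               lower=(40, 80, 80), upper=(85, 255, 255)):
    """Replace green-screen pixels with virtual content; bounds are HSV."""
    hsv = cv2.cvtColor(passthrough_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8),
                       np.array(upper, np.uint8))
    mask = cv2.medianBlur(mask, 5)          # suppress speckle at mask edges
    out = passthrough_bgr.copy()
    out[mask > 0] = virtual_bgr[mask > 0]   # green areas become virtual
    return out
```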
The position and orientation of tools like the stethoscope are tracked through printable fiducial markers attached to the objects. These markers are identified by the pass-through cameras integrated into the Varjo HMD. To provide an immersive auditory experience, the trainee is equipped with an audio headset that delivers spatialized sound: ambient sounds, breathing noises (internal and external) and the voice of the virtual agent had distance attenuation and sound source localization properties.
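Marker tracking itself is provided by Varjo Base; purely to illustrate the underlying computation, the pose of a square fiducial can be recovered from its four detected corner pixels with a perspective-n-point solve (marker size, corner order, and calibration inputs below are assumptions):

```python
# Illustrative pose recovery for a square fiducial marker via solvePnP.
import cv2
import numpy as np

MARKER_SIZE = 0.04  # assumed marker edge length in meters
# 3D marker corners (top-left, top-right, bottom-right, bottom-left),
# centered at the marker origin, lying in the z = 0 plane
OBJ_PTS = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * (MARKER_SIZE / 2)

def marker_pose(corner_pixels, camera_matrix, dist_coeffs):
    """Return rotation and translation of the marker relative to the
    passthrough camera, used to anchor the virtual tool overlay."""
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corner_pixels.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    return rvec, tvec
```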
The contextual 3D avatars (NPCs) were produced using Reallusion’s Character Creator 4 software and custom-animated with Reallusion’s iClone 8. The main virtual agent simulating the victim had no pre-programmed animations, but reacted to sound cues at run-time using the SALSA LipSync Suite animation tool. For example, it compressed and expanded its chest according to an intensity value derived from the integrated breathing sound patterns, and it moved its lips and jaw according to the voice generated by the text-to-speech engine.
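The intensity value driving the chest animation can be thought of as a short-time energy envelope of the audio. The prototype derives this inside Unity via the SALSA LipSync Suite; the sketch below only shows the same idea in a minimal form, assuming a 16-bit mono WAV file:

```python
# Illustrative envelope extraction: one normalized intensity per time window,
# usable to drive chest-compression or jaw-opening blend shapes.
import wave
import numpy as np

def rms_envelope(path, window_s=0.05):
    """Return an RMS intensity per window, normalized to [0, 1].
    Assumes a 16-bit mono WAV file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float32) / 32768.0
    win = max(1, int(rate * window_s))
    n = len(samples) // win
    rms = np.sqrt((samples[: n * win].reshape(n, win) ** 2).mean(axis=1))
    return rms / (rms.max() or 1.0)
```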
Two Walimex Pro Daylight 1260 photography studio lights were used when lighting conditions affected the shades of green and more control over the environmental brightness and shadows was required.

3.2.2 The Green Manikin.

We used a simple full-body training manikin, made of hard plastic, with moveable limbs. In a first step, the manikin was painted green, using plastic primer and green acrylic spray paint for the individual parts. After assembly, we installed the Vive trackers on the now Green Manikin using Vive’s body tracking straps, attached to the hands, feet and hip. An additional tracker was mounted on top of the manikin’s head. To ensure tracking even when users would cover the trackers from multiple sides, four lighthouses were placed around the manikin and the green screen floor.
In Unity, the inverse kinematics package VR IK was used to map the Vive trackers on the manikin’s body (see Figure 2) to the respective parts of the virtual 3D model in Unity. The mapping process also required scaling the existing 3D model to match the proportions of the individual parts of the manikin. Ideally, the manikin should be scanned to produce a 3D model with exact proportions that can be further modified; doing this should result in a more accurate and faster mapping process. Once finished, the manikin and the 3D model move in sync, and further natural interactions are possible, e.g., lifting the hand to have a closer look at the virtual pulse oximeter on the finger, or moving the leg for better placement of the tourniquet. However, the trackers always need to be visible to the surrounding base stations. If their path gets blocked, the mapping between the manikin and the 3D avatar is broken, which leads to awkward positions of the avatar. This is a limitation, given that some treatments performed by MFRs require that they adopt positions that block the trackers, e.g., turning the patient onto their back for visual inspection of injuries, or placing their knees around the head of the patient to use a bag valve mask. Additionally, a calibration offset, which can occur for several reasons, e.g., relocating the manikin or inadvertently moving trackers or base stations, will cause a mismatch between the manikin and the 3D avatar. In these cases, a re-calibration of the play area or the trackers is needed.
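The proportion-matching step can be sketched as a per-limb scale derived from distances between tracked points. The prototype does this with the VR IK package in Unity; the two-point rule and coordinates below are illustrative assumptions only:

```python
# Illustrative limb scaling: ratio between the physical tracker distance on
# the manikin and the corresponding joint distance on the unscaled 3D model.
import numpy as np

def limb_scale(tracker_a, tracker_b, joint_a, joint_b):
    """Scale factor that makes the model limb span the tracked limb."""
    real = np.linalg.norm(np.subtract(tracker_a, tracker_b))
    model = np.linalg.norm(np.subtract(joint_a, joint_b))
    return real / model

# e.g. scale the avatar's leg so its joints land on the manikin's trackers
hip_tracker, foot_tracker = (0.00, 0.90, 0.0), (0.10, 0.05, 0.0)
hip_joint, foot_joint = (0.00, 1.00, 0.0), (0.12, 0.05, 0.0)
print(limb_scale(hip_tracker, foot_tracker, hip_joint, foot_joint))
```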
Figure 2: The full body tracked Green Manikin on the green screen mat, including Vive trackers on feet, hands, hip and head. On the right of the manikin is the Varjo XR-3 HMD.

3.2.3 Tools and Interactions.

A total of four (physical) medical tools were integrated into the training scenario, stored in a first responder backpack. To enable our research on tangible tool interaction, we used a mix of wizard-of-oz simulation and automated interactions.
A tourniquet could be applied to the manikin's leg to stop a (virtual) bleeding wound (see Section 3.4) on the virtual patient; the bleeding was stopped manually by the operator. A fiducial marker was applied to the stethoscope, allowing it to be tracked in space. A simple collision detection script enables a tangible way of blending in the internal lung sounds once the stethoscope is placed on the chest of the manikin. The bag valve mask could be placed on the patient’s mouth and nose and squeezed to ventilate the patient. This was synced with the virtual avatar’s chest moving up and down, which was achieved manually by the operator moving a slider. Lastly, to communicate vital signs like heart rate and blood oxygen levels, a virtual oximeter was added to the scene that would dynamically display the vitals of the patient, depending on their state.
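To illustrate the stethoscope trigger (implemented as a Unity collision script in the prototype; the chest position, radius, and fade band below are assumptions), the target lung-sound volume can be computed from the tracked stethoscope position:

```python
# Illustrative proximity trigger: lung sounds play at full volume inside the
# trigger radius around the chest and fade smoothly to silence outside it.
import numpy as np

CHEST_CENTER = np.array([0.0, 0.25, 0.05])  # assumed chest position (meters)
TRIGGER_RADIUS = 0.08                       # assumed trigger distance (meters)

def lung_sound_volume(stethoscope_pos, fade=0.02):
    """Return the target volume in [0, 1] for the internal lung sounds."""
    d = np.linalg.norm(np.asarray(stethoscope_pos) - CHEST_CENTER)
    return float(np.clip((TRIGGER_RADIUS + fade - d) / fade, 0.0, 1.0))
```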

3.2.4 Communication With the Virtual Agent.

In order for the users to talk to the virtual patient, we employed a mixture of speech recognition, a Large Language Model (LLM) based on ChatGPT 3.5 and conditioned with a specifically designed prompt, and voice output. This system will be described in detail in future work, so we only give a short summary here.
The system captures a user’s voice through a noise-cancelling microphone and sends the audio to OpenAI’s Whisper for initial processing. Whisper, an Automatic Speech Recognition module, converts the speech into text. This transcribed text is then merged with role characteristics set in a prompt for ChatGPT. The prompt serves as a constraining feature to guide the model’s responses, which can be tailored to evoke a specific emotional tone or context. ChatGPT then generates a text-based response that considers both the transcribed message and the prompt. ElevenLabs’ Text to Speech engine transforms the text response back into audio, offering various customization options like pitch and emphasis. Finally, the resulting audio file is played in the virtual scenario, automatically moving the patient’s lips. The ElevenLabs voice was custom recorded with a pained, shocked timbre, to simulate a patient in pain.
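As the full system is deferred to future work, the following is only a minimal sketch of such a loop under stated assumptions: the OpenAI Python client for the Whisper and ChatGPT calls, ElevenLabs' public text-to-speech REST endpoint, and a placeholder role prompt. None of this is the authors' published code.

```python
# Illustrative speech loop: transcribe -> role-constrained reply -> voice.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
ROLE_PROMPT = ("You are a cyclist injured in a traffic accident. "
               "You are in pain and in shock; answer briefly and anxiously.")

def patient_reply(wav_path, voice_id, xi_api_key):
    # 1. speech -> text (Whisper)
    with open(wav_path, "rb") as f:
        text = client.audio.transcriptions.create(
            model="whisper-1", file=f).text
    # 2. text -> role-constrained response (prompted chat model)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": ROLE_PROMPT},
                  {"role": "user", "content": text}],
    ).choices[0].message.content
    # 3. response -> audio with the custom-recorded voice (ElevenLabs)
    audio = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": xi_api_key},
        json={"text": reply},
    )
    return audio.content  # audio bytes played back in the virtual scenario
```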

3.3 Implementation and System Description of the VR Modality

To enable a valid comparison between traditional VR and the proposed MR approach, we developed a VR version of the same environment and scenario as in MR. Our goal with this was to assess the added benefit of tangible MR over VR.

3.3.1 Technical Setup.

The Virtual Reality prototype was developed using Unity 3D version 2022.3.7f1 with the Universal Render Pipeline (URP) to optimize graphical performance and resolution. The Meta Quest Pro HMD was employed as a standalone device with its integrated hand-tracking, depth sensing, play area calibration, and speakers. The Quest Pro’s internal Qualcomm Snapdragon XR2+ with 12 GB of RAM rendered at approximately 30 frames per second during play. No controllers were used in the VR modality, since participants only needed to move freely within a 2m x 2m play area, i.e., no teleportation was necessary. All visual elements, e.g., the NPCs and the main virtual agent simulating the victim with its corresponding sound, reactive chest movements and facial gestures, remained the same as in the MR modality; only the materials, textures, skybox and lighting systems were adjusted to URP.

3.3.2 Tools and Interactions.

The same four medical tools as in the other modalities, i.e., tourniquet, stethoscope, bag valve mask and oximeter, were available in VR as virtual models. Participants were able to manipulate these 3D objects with a natural hand grab gesture and could release them anywhere by spreading their fingers apart. No snapping to the virtual patient was applied to these objects, in order to study their ease of use with free rotation and translation affordances. These interactions were designed to map more closely to the other modalities, which use real-world physical objects.

3.3.3 Communication With the Virtual Agent.

The same mode of communication as in the MR modality was employed for the VR version (see Section 3.2.4).

3.4 Virtual Environment and Scenario

As coherence is one of the three main factors contributing to a system's level of immersion [48], we collaborated with an expert with a background in medicine to design a scenario that would be (1) realistic and plausible and (2) allow us to test the tools and interactions we designed. We extended the scenario described in Uhl et al. [54] by adding a strongly bleeding wound on the patient’s leg and by moving the other patients in the scenario so that the participants would focus on the treatment of one patient.
The resulting scenario depicts a traffic accident next to a square in a small city, involving a cyclist (the patient), a bus, and two cars (see Figure 3 a). Most of the uninjured persons involved have already left the scene, leaving two mildly injured persons beside the cyclist on park benches in the middle of the square. These two persons are in conversation with two MFRs, signaling to the participants that they should focus on the cyclist, as the others are being taken care of. The cyclist is lying on the ground near the participant’s starting point with a bleeding wound on the left thigh and a thorax trauma (visible via a bump mark on his chest) resulting from the accident. The training scenario was designed around three phases, inspired by real-life training:
Phase 1: Assessment In the first phase, the trainees get an overview of their surroundings and the accident site, approach the cyclist, and talk to him to assess his mental state and get information about the course of the accident.
Phase 2: Treatment In the treatment phase, participants would first stop the bleeding on the thigh with a tourniquet, and afterwards carry out their standard questioning schemes and checks, including checking the airways and lung sounds with the stethoscope. During this stage, the cyclist’s state would start to deteriorate, forcing the participants to use the bag valve mask for respiratory support.
Phase 3: Patient transfer The last stage marks the end of the scenario: the emergency doctor arrives (who legally can perform different procedures) to take over the patient. The participant’s job is to report any information they have about the patient and what they have done.
Figure 3: Different stages of the virtual scenario (VR and MR). (a) Depicts the environment of the scenario, including a crashed bus, bicycle, multiple involved victims and first responders. (b) shows the patient without the t-shirt at the beginning of his deterioration and (c) shows the unconscious and pale state at the end, when participants started using the bag valve mask to assist respiration.

4 Method

4.1 Study Design

To address our research questions, we designed a comparative study that featured three distinct training modalities: (1) Real Simulation (REAL), (2) Virtual Reality (VR), and (3) Mixed Reality (MR). The same scenario was replicated closely across all three modalities, employing a within-subjects design for the study. Our objectives were to evaluate variations in both technology acceptance and the sense of presence among the three modalities and to identify which modality might be most apt for differing training objectives.

4.2 Scenario Implementation in the Three Modalities

In all three modalities, the first responder trainer would brief the participant on the backstory of the scenario: the participants had just arrived at the scene of an accident involving a bus, two cars and a cyclist. Two of the involved victims were already cared for by two colleagues on the scene (see Figure 3 a), leaving the injured cyclist to be treated by the participant. In the real training modality, further contextual information (like the weather, time of day, and information about the environment) was given, which would be apparent in the virtual environments.
Upon the start of the scenario, the participants would approach the cyclist, who was lying on the pavement with a strongly bleeding open wound on the left thigh, breathing heavily. Participants were free to choose how to proceed with the treatment, but would ultimately use the tourniquet to stop the bleeding, check the patient and discover the bump mark on the cyclist's chest, check the airway, and use the stethoscope to check the lung sounds. Finally, as the cyclist's state declined, they would use the bag valve mask to ventilate him. The ventilation marked the end of the scenario, when the trainer would tell participants that the emergency doctor had arrived and that they should hand over the patient. See Figure 5 for an overview of the three main interactions in all three modalities.

4.2.1 Real Training.

In the real training simulation, a role-player was recruited to lie down in an office room. A bicycle was placed beside him, and printed images of a wound and a blood puddle were positioned on his thigh and adjacent to him on the floor, respectively (see Figure 4 for an overview of the steps in the real training scenario). The role-player was thoroughly briefed on the scenario and was given the same information that made up the prompt of the speech interaction system in the virtual modalities. The scenario was performed as in the other modalities, with information about vitals (like the heart rate, respiratory rate, or whether the bleeding had stopped) being given by the trainer when asked, as these cannot be ‘acted’. The participants would get the contextual information of the scenario (weather, time of day, information about the environment and how they arrived on the scene) from the trainer before entering the office room. Upon entering the room, they would approach the role-player (Figure 4 b) and talk to him; he would make loud pain noises and answer within the constraints of the information given to him during the briefing. Participants would then apply pressure on the simulated wound (often first with their knees, see Figure 4 c) and then apply the tourniquet. They would check the airway and use the stethoscope, after which the trainer reported the respiration rate (Figure 4 d). The deterioration of the patient was actively reported by the trainer (e.g. ‘His skin is getting pale and sweaty and his oxygen level is dropping.’), which would also lead to the role-player no longer answering and falling unconscious. Some participants first used an oxygen mask (Figure 4 e), but ultimately switched to the bag valve mask (Figure 4 f) when the vitals (especially the oxygen level) would not improve.
Figure 4: Scenario progression in the REAL condition: a) The patient lies injured on the floor, next to his bike and the MFR backpack. b) The MFR approaches the patient and (c) applies the tourniquet. They use the stethoscope to check respiration (d) and apply an oxygen mask (e) and later the bag valve mask (f) once the patient deteriorates.

4.2.2 Mixed Reality.

After equipping the Varjo XR-3, participants would first see the passthrough video of the lab environment. The MR application was launched directly from the PC, which would blend the passthrough into the MR view of the accident environment (Figure 3 a). Because of the chroma-keying of the green floor and manikin, participants could see their own hands and body overlaid on the virtual scene.
They would further see the physical backpack with their tools next to the tangible patient. As the manikin is green, it would be replaced with the virtual patient, whereas the physical zip hoodie would be visible around the virtual, but tangible, patient (see Figure 5, bottom row). To stop the bleeding, they would take out the tourniquet and apply it to the manikin, with the tourniquet being overlaid on top of the virtual thigh (see Figure 5, bottom left). They would talk to the patient via the system described in Section 3.2.4. When placing their hands on the chest where the bump mark was, the patient would make a sound of pain. Participants would uncover the chest by opening the (physical) zip hoodie and proceed to use the stethoscope. When the stethoscope was placed on the chest, the internal breathing sounds would become audible. Lastly, upon the deterioration of the patient, participants would apply the bag valve mask to the tangible manikin (see Figure 5, bottom right); upon squeezing the bag, the chest of the cyclist would be moved up and down accordingly by the operator (i.e. wizard-of-oz).

4.2.3 Virtual Reality.

Upon equipping the Meta Quest Pro, participants would first see the passthrough video of the lab environment. The VR application would be launched directly from Unity, which would switch the headset to VR. Just as in the MR modality, participants would appear at the scene of the accident on a square (see Figure 3 a). Participants would first look around the virtual scene and then attend to the patient. The medical tools were placed near the patient as 3D assets, matching the look of the real tools (see Figure 3 b). Using the hand-tracking feature of the Meta Quest Pro, participants could grab the virtual tools and apply them to the patient by moving them to the appropriate location. The patient wore a t-shirt, which the participants could remove by means of a pinching gesture that was explained to them during the briefing. When touching the bump mark on the patient’s chest with their virtual hands, the patient would grunt with pain. The tourniquet, when moved to the correct position on the leg, would trigger the blood flow to stop after 2 seconds. For the stethoscope, the internal lung sounds would be audible while the virtual stethoscope collided with various areas on the chest of the patient and become inaudible once the stethoscope was removed. The ventilation bag could be moved to the correct position on the patient’s mouth, with the participants then performing a pumping gesture on the bag, which would animate it accordingly as well as trigger corresponding chest movements. The patient’s vitals could be checked on an oximeter on the patient’s left hand. Once the initial treatment was completed (i.e., once the tourniquet and stethoscope had been used), the experimenter would trigger the patient’s deterioration, visualized through changing vitals on the oximeter and the patient turning blueish, sweaty and finally unconscious. This would prompt the participants to use the ventilation bag, which then marked the end of the scenario.
Figure 5: Three core interactions with the patient in the three modalities: applying a tourniquet (left column), using the stethoscope (middle column) and the bag valve mask (right column). Each modality is a different row (top row reality, middle row VR and bottom row MR).

4.3 Participants

15 first responders participated in the study (3 female, 12 male) with a mean age of 31.6 years (SD = 7.5) and varied professional experience (M = 9.8 years, SD = 8.1). The participants reported varied experiences with VR and MR: most participants had tried out VR (8/15) and MR (7/15) once or twice; fewer were frequent users of VR (5/15) and MR (2/15). Only 2/15 participants had never used VR before and 6/15 participants had never used MR before. The participants’ experience with VR and MR was assessed to control for novelty effects in the study’s outcomes, which is crucial for interpreting their responses. The participants were recruited via email through the network of our partner research and education organization for medical first responders. Selection criteria included individuals currently working as MFRs or in training, with the goal of including a wide range of experience levels. This approach ensures a sample reflective of the broader MFR community, from novices to experienced professionals. The recruitment process was based on convenience sampling, tapping into the readily available network provided by our partner organization. Although this method was efficient, it is important to acknowledge that convenience sampling may limit the generalizability of our findings to all MFRs.

4.4 Measurements

4.4.1 Technology Acceptance.

To evaluate the acceptance of first responders for the three training modalities (RQ1), we utilized the framework provided by the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), as described by Venkatesh et al. [55]. We adapted items from the original UTAUT2 model based on their German translation by Harborth & Pape [16] to better align with the training contexts of the three modalities. Four sub-dimensions of the UTAUT2 model—Habit, Social Influence, Price Value, and Use Behavior—were deemed irrelevant to our research focus and were therefore excluded. The scales used in our study thus included Performance Expectancy (PE), Effort Expectancy (EE), Hedonic Motivation (HM), Facilitating Conditions (FC), and Behavioral Intention (BI). We recorded responses to the Technology Acceptance items on a 5-point Likert scale.

4.4.2 Presence.

To measure the sense of presence experienced by participants in the three training modalities (RQ1), we utilized the German-translated version [56] of the Multimodal Presence Scale (MPS) originally developed for VR settings by Makransky et al. [30]. This scale is rooted in Lee’s unified theory of presence [27], and categorizes presence into three distinct dimensions: physical, social, and self-presence. Physical presence relates to how participants experience objects and environments in the virtual setting. Social presence covers the sense of interaction with other virtual characters or users. Self-presence measures how much participants feel they are part of the virtual environment. Responses to MPS items were collected using a 5-point Likert scale.

4.4.3 Open Questions.

To gain qualitative feedback for our research questions–mainly RQ2–three open-ended questions were answered by the participants at the end of the questionnaire:
(1) What is your overall impression of the three training methods (MR, VR, and real simulation)?
(2) If you had to choose one training method to prepare for real-life scenarios, which one would it be and why?
(3) Are there specific skills or situations where you think one method would be more effective than the others?

4.5 Procedure

Participants were initially greeted and provided with a briefing on the study’s contents and purpose. After completing the informed consent forms, they were randomly assigned to one of three training modalities as their first station. At each training station, participants received a specific briefing outlining the interactions expected from them.
Those assigned to the VR and MR stations were equipped with the Meta Quest Pro and Varjo XR-3 headsets, respectively, and were given time to acclimate themselves in a virtual waiting room. A detailed description of the procedure during each scenario is presented in Sections 4.2.1 to 4.2.3.
Following the completion of the scenario at each station, participants filled out a questionnaire, comprised of the MPS and UTAUT scales, using a tablet. This procedure was repeated after each of the three training stations. After completing the questionnaire for the final station, participants were also prompted to answer the open-ended questions using the tablet. Upon completion of all tasks, participants were thanked and seen off.

5 Results

5.1 Quantitative Results

5.1.1 Friedman Test.

Figure 6: Boxplots and distributions for the three presence scales Physical Presence, Self Presence and Social Presence.
Figure 7: Boxplots and distributions for the five technology acceptance scales Behavioral Intention, Effort Expectancy, Facilitating Conditions, Hedonic Motivation and Performance Expectancy. Due to the high concentration of values in Effort Expectancy and Hedonic Motivation, the distribution curves for those plots are cut off for readability.
To assess the impact of the three different conditions (MR, VR, REAL) on the multiple dependent variables regarding presence and technology acceptance, we employed Friedman tests, a non-parametric statistical method for analyzing ordinal data across multiple related samples. This test is particularly useful when the data violate the assumptions of normality and homogeneity of variance, allowing for a robust evaluation of differences among the training modalities.
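For illustration, such a test can be computed with SciPy as follows; the rating vectors are placeholders, not the study data:

```python
# Illustrative Friedman test over one rating per participant and condition.
from scipy.stats import friedmanchisquare

# Placeholder 5-point ratings for the 15 participants (not the study data)
ratings_mr   = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5]
ratings_vr   = [3, 3, 4, 2, 3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 4]
ratings_real = [4, 4, 4, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5]

stat, p = friedmanchisquare(ratings_mr, ratings_vr, ratings_real)
print(f"chi2(2) = {stat:.2f}, p = {p:.4f}")
```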
For the variable Physical Presence, the Friedman test showed no statistically significant differences, with χ2(2) = 5.92 and p = 0.052. However, for Self Presence, a statistically significant difference was observed, χ2(2) = 8.67, p = 0.013. The test for Social Presence also revealed a highly statistically significant difference, with χ2(2) = 26.07 and p < 0.0001. For Behavioral Intention, the test was not significant, showing χ2(2) = 5.15 and p = 0.076. A highly significant difference was detected for Effort Expectancy, with χ2(2) = 18.73 and p < 0.0001. The test for Facilitating Conditions yielded a significant result, χ2(2) = 7.68 and p = 0.021. Lastly, no significant differences were found for Hedonic Motivation and Performance Expectancy, with χ2(2) = 3.21, p = 0.201 and χ2(2) = 4.29, p = 0.117, respectively. Means and standard deviations are presented in Table 1, boxplots and distributions of the scales for presence and technology acceptance are displayed in Figures 6 and 7 respectively.
Variable | MR | REAL | VR
Physical Presence | 3.89 (1.01) | 3.61 (0.82) | 3.33 (1.08)
Self Presence | 4.21 (0.95) | 3.96 (1.02) | 3.05 (0.80)
Social Presence | 3.12 (0.95) | 4.51 (0.45) | 2.35 (0.81)
Behavioral Intention | 4.60 (0.88) | 4.53 (0.52) | 3.89 (1.07)
Effort Expectancy | 4.12 (0.90) | 4.93 (0.11) | 3.57 (0.93)
Facilitating Conditions | 4.38 (0.72) | 4.55 (0.71) | 4.05 (0.54)
Hedonic Motivation | 4.56 (0.66) | 4.71 (0.42) | 4.24 (0.84)
Performance Expectancy | 4.29 (0.85) | 4.42 (0.70) | 3.76 (1.00)
Table 1: Means (standard deviations) for the three modalities across all presence and technology acceptance variables.

5.1.2 Post Hoc tests.

To further explore the observed differences, post hoc Wilcoxon signed-rank tests were performed, and resulting p-values were corrected using the False-Discovery-Rate (FDR) [4], which is a less conservative alternative to the Bonferroni correction for multiple comparisons. The results for the significant variables are detailed in Table 2.
Variable | Group 1 | Group 2 | p | r
Self Presence | MR | VR | 0.005 | 0.349
Self Presence | VR | REAL | 0.010 | −0.240
Social Presence | MR | VR | 0.010 | 0.240
Social Presence | MR | REAL | < 0.001 | −0.416
Social Presence | VR | REAL | < 0.001 | −0.493
Effort Expectancy | MR | REAL | 0.010 | −0.280
Effort Expectancy | VR | REAL | 0.004 | −0.449
Facilitating Conditions | MR | VR | 0.044 | 0.180
Facilitating Conditions | VR | REAL | 0.038 | −0.258
Table 2: Post hoc Wilcoxon signed-rank tests with FDR-corrected p-values and the rank-biserial correlation (r) as an effect size.
The post hoc tests revealed significant differences in Self Presence between MR and VR as well as between VR and REAL, with medium effect sizes (|r| ≥ 0.24). Similarly, Social Presence demonstrated significant differences across all pairwise comparisons, with a medium effect size for MR and VR (r = 0.24) and large effect sizes for MR and REAL (r = −0.416) and VR and REAL (r = −0.493). For Effort Expectancy, significant differences were found between MR and REAL (r = −0.280) with a medium effect size, and between VR and REAL (r = −0.449) with a large effect size. Facilitating Conditions also showed significant differences between MR and VR, and between VR and REAL, with moderate effect sizes.
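A sketch of this post hoc procedure is shown below; the rating vectors are placeholders again, and the rank-biserial correlation follows the matched-pairs definition:

```python
# Illustrative post hoc analysis: pairwise Wilcoxon signed-rank tests with
# Benjamini-Hochberg FDR correction [4] and rank-biserial effect sizes.
import numpy as np
from scipy.stats import wilcoxon, rankdata
from statsmodels.stats.multitest import multipletests

# Placeholder paired ratings (not the study data)
mr   = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5])
vr   = np.array([3, 3, 4, 2, 3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 4])
real = np.array([4, 4, 4, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5])

def rank_biserial(x, y):
    """Matched-pairs rank-biserial r: signed share of the rank sum of
    nonzero paired differences favoring x over y."""
    d = (x - y).astype(float)
    d = d[d != 0]
    ranks = rankdata(np.abs(d))
    return (ranks[d > 0].sum() - ranks[d < 0].sum()) / ranks.sum()

pairs = {("MR", "VR"): (mr, vr), ("MR", "REAL"): (mr, real),
         ("VR", "REAL"): (vr, real)}
pvals = [wilcoxon(a, b).pvalue for a, b in pairs.values()]
_, p_fdr, _, _ = multipletests(pvals, method="fdr_bh")
for (g1, g2), p in zip(pairs, p_fdr):
    a, b = pairs[(g1, g2)]
    print(f"{g1} vs {g2}: p_fdr = {p:.3f}, r = {rank_biserial(a, b):+.3f}")
```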

5.2 Qualitative Feedback / Lessons Learned

For the three open questions we adopted an inductive categorization approach grounded in Mayring’s principles of qualitative content analysis [31]. This method facilitated a systematic examination of subjective responses to open-ended questions, enabling us to identify emergent themes from the data without the constraint of predefined categories.

5.2.1 Overall Impression of the Three Training Methods.

With the first open-ended question, the goal was to gather overall feedback for each of the three modalities.
Real Simulation was often referred to as the established standard in training, familiar to most participants (4 mentions). It was lauded for its naturalistic human interaction and tangible experience but noted for its limitations in simulating complex or high-risk scenarios (4 mentions).
VR generated mixed responses, leaning towards the view that the hand tracking implementation has not yet reached its full potential (8 mentions). Participants pointed out issues like a lack of haptic feedback, problems in communication, and limited equipment as areas that need development (4 mentions). Despite these limitations, several participants recognized its potential for organizational and leadership training (3 mentions).
MR received the most favorable reviews, often described as the best of both worlds, combining elements from real-life scenarios and virtual elements (8 mentions). Participants noted that MR could simulate scenarios more realistically and provide an effective training environment (3 mentions).
Participants’ responses provide insightful perspectives on the utility and limitations of Real Simulation, VR, and MR in training scenarios. Real Simulation, while reliable, is limited in its ability to handle complex training needs. VR shows promise, particularly for ‘simulation of organizational and leadership tasks’. MR stands out as the most balanced and promising modality, particularly for ‘training of patient treatment’, gaining the highest number of positive mentions. The choice of modality should therefore align with specific training objectives to maximize effectiveness.

5.2.2 Preferred Method.

The second open-ended question sought to understand the participants’ preferred training modality when preparing for real-life medical scenarios. Participants were asked: ‘If you had to choose a training method to prepare for real scenarios, which one would it be and why?’. The preferences were categorically divided into MR, VR, and Real Training for the purpose of analysis.
Mixed Reality. A majority of the participants, 11 out of 16, indicated a preference for MR as their training modality of choice.
Realistic Simulation: The ability of MR to seamlessly integrate real and virtual elements was cited as a significant advantage. Participants highlighted that MR provides realistic patient and environmental details (7 mentions).
Haptic Feedback: MR’s feature of allowing actual ‘hand movements’ and tactile feedback was noted as crucial for effective training (3 mentions).
Resource-Intensive but Worthwhile: Although one participant mentioned that MR is resource-intensive, they noted that the high-quality training experience justified the resource expenditure (1 mention).
Speech Interaction: One participant suggested that MR could be even more effective if improvements in voice interaction are made (1 mention).
Real Training. Four participants expressed that Real Training would be their preferred modality.
Immediate Feedback: Participants appreciated the instant communication and feedback possible with real patients (2 mentions).
Physical Interactions: The ability to perform physical tasks like bandaging and carrying patients was highlighted as an advantage (2 mentions).
Scenario-Specific: One participant additionally noted that Real Training is optimal for certain types of scenarios like confined spaces and neurological conditions (1 mention).
Virtual Reality. Notably, none of the participants selected VR as their preferred training modality. However, some did mention its utility for specific types of training scenarios, e.g., mass casualty events, where the focus is on triage rather than treatment: the tangible aspect becomes less important, while large environments with multiple injured persons can still be simulated.
Two participants indicated that a combination of MR, VR, and Real Training in a training curriculum could be beneficial, suggesting a multimodal approach depending on the specific training needs and scenarios.

5.2.3 Which Modality for Which Training Goal?

Participants often mentioned MR and VR in the same context when it comes to simulating a virtual environment effectively (4 mentions). They indicated that both these modalities are superior for creating an immersive scenario, particularly when it comes to generating a ‘general impression of an emergency scene’ and mimicking real-world distractions. Moreover, these modalities were said to offer flexibility and realism in training scenarios that are otherwise difficult or costly to stage in real life, such as mass casualty incidents or dangerous situations (1 mention).
A synergy between MR and Real Training was noted in several responses (3 mentions). MR was praised for its realistic simulation of the environment and specific medical tasks, while Real Training was cited as essential for hands-on practice with actual medical equipment and patient care. The sentiment was that, while MR offers a ‘training advantage’ by portraying the environment more realistically, the care of the patient still feels more natural in a real-world setting (1 mention).
MR was specifically highlighted for its realistic depiction of the environment (3 mentions) and for skill-specific training such as stemming a bleeding wound. It was perceived as particularly useful for training where understanding the context or environment is crucial.
VR was appreciated for its organizational training potential (2 mentions). It was also favored for the assessment of emergency patients, particularly in continuing education where less equipment is needed (1 mention). It also allows factors like skin color and respiratory rate to be simulated more effectively than other methods do.
Real Training was explicitly mentioned for its benefit in the initial stages of training (1 mention). It was also touted for its unique ability to facilitate genuine physical interactions with patients, an aspect that is crucial in medical practice but challenging to simulate effectively in MR or VR (1 mention).
In summary, the findings reinforce the idea that each training modality has unique strengths that make it particularly well-suited for specific aspects of emergency medical training. As one participant succinctly put it: ‘VR [is best] for organizational and leadership tasks, MR [is best] for immersive patient treatment and real training [is best] for entry-level training’. This statement encapsulates well the general sentiment expressed across the responses and affirms that the best training approach depends on what needs to be achieved, while still highlighting MR as a promising future direction for MFR training.

6 Discussion

6.1 RQ1: How Do the Three Training Modalities Compare in Terms of Presence and Technology Acceptance?

The results on presence paint a mixed picture across the three modalities (MR, VR, and REAL). Starting with Self Presence, the results revealed a significant difference, with MR and REAL standing out as superior to VR. This aligns well with the qualitative findings, where MR was frequently cited for its realistic simulation of audio, visual, and haptic information, which likely contributes to a greater feeling of 'being there' with one's own body in the training scenario. Participants expressed similar sentiments for REAL training, pointing to the tangible experience and naturalistic human interaction as strengths. In terms of Skarbez's framework [48], the increased immersion due to haptic feedback and the increased world knowledge gained through tangible interaction led to higher levels of self-presence. By including highly specific and realistic tangible devices in MR, and thereby increasing haptic realism [36], the sense of self-presence in particular could be raised nearly to that of training in reality.
Social Presence scores also differed significantly across the modalities: real training stood out as the best, followed by MR and then VR. This finding is intuitive, as having a tangible manikin or a real actor in the scenario appears to improve the perception of actually interacting with another person. Though MR's social presence was not as high as in real training, it nonetheless showed a significant improvement over VR, which is in line with previous research on social presence in VR training [52]. This can be a critical factor in patient treatment training scenarios, where social interactions are essential. The qualitative data corroborated these results; participants praised the real training for the naturalness of its speech interaction. Social presence is closely related to the coherence of the simulation [48]. Given that VR and MR used the same speech interaction system, the blend of reality and virtuality in the MR condition apparently makes it more plausible that one is interacting with a living, embodied patient.
One notable aspect was the absence of significant differences in Physical Presence across the modalities, although on a descriptive level MR outperformed real training, which in turn was rated higher than VR. This is somewhat counterintuitive, as the qualitative data suggest that both VR and MR provide realistic environmental simulations, while real training, despite having physical objects like a bicycle and backpack, falls short in communicating the environmental context as vividly as the digital modalities do. The environmental context is important for the perceived realism of the training and can also add a stress-inducing element [37].
Nonetheless, the lack of statistical significance suggests that the three modalities are more similar in terms of Physical Presence than one might initially assume, indicating that technological augmentations are not necessarily superior in creating a sense of physical engagement.
The results for technology acceptance were also quite informative. Real training led in Effort Expectancy, which makes sense considering it is an already established training method and would be easier to integrate, as participants are familiar with it. MR also trended higher than VR, although this difference was not statistically significant. This aligns with participants' open-ended responses, where real training was often referred to as the established standard in training scenarios.
Hedonic Motivation was similar across all modalities, implying that participants found each form of training engaging. There was also a trend in Performance Expectancy, with MR and REAL outperforming VR, but again, statistical significance was not reached.
For Facilitating Conditions, both MR and REAL outperformed VR, indicating that they were easier and more intuitive to use. This can be particularly important for training scenarios where ease of use can significantly affect training outcomes. It is in line with our prior work [54], where we observed that using the actual tools in a virtual environment requires very little time to learn and is very intuitive, as the sensorimotor contingencies are preserved. In contrast, the VR environment might introduce an additional layer of complexity due to the need for specialized controllers or gestures, which could interrupt the flow of the training and impose additional cognitive load. This potentially hampers the effectiveness of training in the VR modality, making it less conducive to quick and intuitive learning. Interestingly, MR, which blends digital and physical elements, was able to maintain a high level of facilitating conditions, perhaps due to its ability to superimpose digital information directly onto real-world objects, thus reducing the disconnect between the digital and physical worlds. This seamless integration likely makes it easier for users to adapt and perform tasks, contributing to its higher rating in facilitating conditions. These findings suggest that while VR has benefits in terms of immersive experiences, when it comes to ease of use and the facilitation of effective training, MR and REAL environments may offer superior advantages.
Regarding Behavioral Intention, there were no significant differences in the quantitative data. In contrast, the qualitative responses partly speak to the intention to use: MR was chosen by 11 of 15 participants as their preferred training method, while the remaining 4 chose real training. Furthermore, MR was often cited as offering 'the best of both worlds'. This relates to [13], who state (with regard to medical simulators) that many approaches limit themselves to merely mimicking the real world in terms of haptic feedback. In the proposed approach, there is no need for mimicking, as real tools are integrated, a capability called for since early manikin integrations into VR [46], which expressed the need to utilize interaction devices instead of (in that case) data gloves.
This discrepancy between the quantitative and qualitative data might suggest that while the measurable intention to engage with a certain modality may not vary significantly, people’s subjective preferences lean more towards MR. The appeal of MR as ‘the best of both worlds’ likely stems from its ability to provide a highly immersive experience, while still preserving the intuitiveness and ease-of-use found in real-world training.
In summary, our findings suggest that MR offers a balanced training solution, exhibiting higher levels of presence and technology acceptance than VR and coming close to real training in terms of social presence. These findings are supported by the qualitative data, which revealed that MR's integration of real and virtual elements and its realistic simulations make it a highly promising and effective modality for training, even in its prototypical state. They are also in line with previous research comparing social agents in virtual vs. real training, e.g., in the context of the workplace [52]. The results suggest that while REAL training remains the gold standard, MR's capabilities make it a strong candidate for future training programs, particularly those that demand a blend of realistic interaction and complex, risk-free simulations.

6.2 RQ2: What Modalities Are Most Beneficial and Best Matching for Which Training Objectives?

Our study indicates that the selection of a training modality–MR, VR, or Real Training–can have a significant impact on emergency medical training programs, depending on the training goals. Consistent with participant perspectives, the data suggest that each modality offers specific advantages that make it particularly well-suited for distinct training contexts.
MR training was consistently highlighted for its realistic environmental portrayal and for skill-specific training. This finding aligns with previous research indicating that MR can offer a blended experience that retains the realism of a physical environment while incorporating virtual elements [15]. As participants in our study pointed out, MR can be invaluable in scenarios where understanding the context or the environment is crucial. Its capability to render details (e.g., bleeding, bystanders, or environmental threats) makes it ideal for specialized, high-fidelity training scenarios.
VR training was particularly appreciated for its potential in organizational training and continuing education. This supports the notion that VR's strength lies in abstract scenario training and procedural knowledge development [41, 59]. Moreover, VR's ability to simulate factors like skin color and respiratory rate efficiently adds an extra layer of fidelity to continuing medical education.
However, Real Training still offers a unique ability to facilitate actual physical interactions with real patients. It was also cited as particularly beneficial in the initial stages of training, highlighting its foundational role in building basic skills. These findings are supported by existing literature emphasizing the importance of hands-on experience in medical training [59] and also the importance of training muscle memory [26].
The quantitative data further nuanced these perspectives. For instance, the Self Presence and Social Presence scales showed significant differences among the modalities, corroborating participant claims about the immersive capabilities of MR and Real Training. MR scored highest in Self Presence, which might suggest that trainees felt most present or engaged when training in a mixed-reality environment. Real Training excelled in Social Presence, possibly capturing the irreplaceable value of human interaction in medical training scenarios.
Similarly, Effort Expectancy showed a highly significant difference, indicating that Real Training is perceived as less effortful than MR and VR. This aligns with the qualitative data suggesting that Real Training offers a more 'natural' or intuitive interface, especially in contexts that require the manipulation of actual medical equipment.
Interestingly, no significant differences were observed in technology acceptance variables like Behavioral Intention, Hedonic Motivation, and Performance Expectancy, suggesting that acceptance of these modalities may be relatively uniform. It may also imply that the 'novelty' factor associated with technological training modalities like MR and VR is not a dominant influence on trainees' willingness to use them.

6.3 Limitations

There are certain limitations in this work that need to be discussed. First, the relatively small sample size of 15 first responders left our study statistically underpowered: only large differences between the modalities reached statistical significance, making it harder to detect smaller, yet potentially meaningful, differences between the training modalities. In addition, the study was conducted with first responders from a single institution in a single country. As procedures and training curricula may differ between countries and institutions, the generalizability of the results needs to be confirmed in future studies involving first responders from other institutions and/or countries.
Despite this limitation, the study provides an exploration of an under-researched area within the HCI community. Given the specialized nature of this field, even a small sample can offer valuable insights into the practicality and usability of MR tools like the Green Manikin.
Second, the fidelity of the three training modalities is crucial when comparing them, so as not to measure artifacts. In the case of real training, it could be argued that our variant represents a rather low-fidelity training: higher-fidelity real training would, for example, involve make-up and silicone wounds attached to the role-player. Still, this low-fidelity form of training is much more common in practice due to its simplicity, a quality that the MR and VR alternatives aim to enable as well.
Lastly, a limitation of this study lies in its primary focus on Human-Computer Interaction (HCI) contributions, particularly in the evaluation and comparison of different training modalities. While this emphasis offers important insights into the interface and user experience aspects of medical training technologies, it may not comprehensively address other critical dimensions such as psychological factors, educational pedagogy, or training effectiveness.

6.4 Future Work

Based on our findings, there are several directions for future research in the human-computer interaction (HCI) domain.
One next step is to conduct longitudinal studies. While our research provides an initial snapshot of user experiences across MR, VR, and REAL modalities, a longitudinal approach could track these experiences over time. This would offer a more comprehensive understanding of how user adaptability and the sustained effectiveness of these interactive technologies evolve in the context of medical training.
Another critical research direction pertains to the concept of presence, which yielded mixed results across the modalities in our study. It would be beneficial for HCI research to delve deeper into the specific elements that contribute to presence in these environments within this particular context. Identifying the factors, be it realism or interaction capabilities, that influence self, social, and physical presence could greatly inform the design and application of future interactive systems for training.
The issue of technology acceptance also extends to the type of interfaces and tools used in the training modalities. Specialized controllers and tangible input devices, for example, could significantly impact the training experience in MR and VR scenarios. Future work in HCI could explore how these specialized tools compare to generic ones in terms of engagement and training effectiveness.
Lastly, understanding how these new modalities can be integrated into existing medical training curricula is an essential question. As educational institutions often rely on well-established teaching methods, the seamless integration of emerging interactive technologies like MR and VR could offer a substantial advantage. Future research should therefore also focus on the practical aspects of such integration, potentially guiding institutions on how to best complement traditional methods with interactive systems.

7 Conclusion

Medical first responders increasingly train for medical emergencies in various modalities. Virtual Reality and Mixed Reality solutions are on the rise and rival traditional forms of training, such as role-player-based simulation training. In this work, we introduced the Green Manikin, a full-body, fully tracked physical manikin that is integrated into MR, enabling the use of real tools and hands to train patient treatment in an immersive and tangible environment. We compared the proposed MR solution to two other training modalities, VR and real training. MR yielded promising results as the modality that encapsulates the best of both virtual and real-world training. Participants experienced high levels of presence: Social presence was significantly higher in MR than in VR and came closer to the naturalistic social interaction of reality. Self presence was rated similarly in MR and reality, but significantly lower in VR. Interactions in MR were furthermore experienced as intuitive as in reality, again with VR rated significantly lower. Together with the qualitative feedback, this suggests that MR has the potential to offer highly immersive and tangible training, where skills can be practiced in a safe yet realistic environment.

Acknowledgments

This work was funded by the project MED1stMR (No 101021775) of the European Union's Horizon 2020 Research and Innovation Program and by the industrial PhD program of the Austrian Research Promotion Agency (FFG) (No FO999887876).

Footnotes

4. www.johanniter.at
10. www.johanniter.at


References

[1]
Hamidreza Aghababaeian, Soheila Sedaghat, Noorallah Tahery, Ali Sadeghi Moghaddam, Mohammad Maniei, Nosrat Bahrami, and Ladan Araghi Ahvazi. 2013. A Comparative Study of the Effect of Triage Training by Role-Playing and Educational Video on the Knowledge and Performance of Emergency Medical Service Staffs in Iran. Prehospital and Disaster Medicine 28, 6 (2013), 605–609. https://doi.org/10.1017/S1049023X13008911
[2]
Narmeen N Al-Hiyari and Shaidah S Jusoh. 2021. Healthcare Training Application: 3D First Aid Virtual Reality. In International Conference on Data Science, E-learning and Information Systems 2021. 107–116.
[3]
Anke S Baetzner, Rafael Wespi, Yannick Hill, Lina Gyllencreutz, Thomas C Sauter, Britt-Inger Saveman, Stefan Mohr, Georg Regal, Cornelia Wrzus, and Marie O Frenkel. 2022. Preparing medical first responders for crises: a systematic literature review of disaster training programs and their effectiveness. Scandinavian journal of trauma, resuscitation and emergency medicine 30, 1 (2022), 76.
[4]
Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological) 57, 1 (1995), 289–300.
[5]
Olaf Binsch, Charelle Bottenheft, Annemarie Landman, Linsey Roijendijk, and Eric HGJM Vermetten. 2021. Testing the applicability of a virtual reality simulation platform for stress training of first responders. Military Psychology 33, 3 (2021), 182–196.
[6]
Kristina Bucher, Tim Blome, Stefan Rudolph, and Sebastian von Mammen. 2019. VReanimate II: training first aid and reanimation in virtual reality. Journal of Computers in Education 6, 1 (2019), 53–78.
[7]
Joe Cecil, Sam Kauffman, Avinash Gupta, Vern McKinney, and MD Miguel Pirela-Cruz. 2021. Design of a human centered computing (HCC) based virtual reality simulator to train first responders involved in the Covid-19 pandemic. In 2021 IEEE International Systems Conference (SysCon). IEEE, 1–7.
[8]
Mark Xavier Cicero, Travis Whitfill, Frank Overly, Janette Baird, Barbara Walsh, Jorge Yarzebski, Antonio Riera, Kathleen Adelgais, Garth D. Meckler, Carl Baum, David Christopher Cone, and Marc Auerbach. 2017. Pediatric Disaster Triage: Multiple Simulation Curriculum Improves Prehospital Care Providers’ Assessment Skills. Prehospital Emergency Care 21, 2 (2017), 201–208. https://doi.org/10.1080/10903127.2016.1235239 PMID: 27749145.
[9]
Laura Cowling, Kylen Swartzberg, and Anita Groenewald. 2021. Knowledge retention and usefulness of simulation exercises for disaster medicine - what do specialty trainees know and think? African Journal of Emergency Medicine 11, 3 (2021), 356–360.
[10]
Glen Cuttance, Kathryn Dansie, and Tim Rayner. 2017. Paramedic Application of a Triage Sieve: A Paper-Based Exercise. Prehospital and Disaster Medicine 32, 1 (2017), 3–13. https://doi.org/10.1017/S1049023X16001163
[11]
Michael S Dittmar, Philipp Wolf, Marc Bigalke, Bernhard M Graf, and Torsten Birkholz. 2018. Primary mass casualty incident triage: evidence for the benefit of yearly brief re-training from a simulation study. Scandinavian journal of trauma, resuscitation and emergency medicine 26, 1 (2018), 1–8.
[12]
Zachariah S. Edinger, Kelly A. Powers, Kathleen S. Jordan, and David W. Callaway. 2019. Evaluation of an Online Educational Intervention to Increase Knowledge and Self-efficacy in Disaster Responders and Critical Care Transporters Caring for Individuals with Developmental Disabilities. Disaster Medicine and Public Health Preparedness 13, 4 (2019), 677–681. https://doi.org/10.1017/dmp.2018.129
[13]
David Escobar-Castillejos, Julieta Noguez, Luis Neri, Alejandra Magana, and Bedrich Benes. 2016. A Review of Simulators with Haptic Devices for Medical Training. J. Med. Syst. 40, 4 (apr 2016), 1–22. https://doi.org/10.1007/s10916-016-0459-8
[14]
Danilo Gasques Rodrigues, Ankur Jain, Steven R. Rick, Liu Shangley, Preetham Suresh, and Nadir Weibel. 2017. Exploring Mixed Reality in Specialized Surgical Environments. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA ’17). Association for Computing Machinery, New York, NY, USA, 2591–2598. https://doi.org/10.1145/3027063.3053273
[15]
Jens Emil Sloth Grønbæk, Ken Pfeuffer, Eduardo Velloso, Morten Astrup, Melanie Isabel Sønderkær Pedersen, Martin Kjær, Germán Leiva, and Hans Gellersen. 2023. Partially Blended Realities: Aligning Dissimilar Spaces for Distributed Mixed Reality Meetings. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–16.
[16]
David Harborth and Sebastian Pape. 2018. German translation of the unified theory of acceptance and use of technology 2 (UTAUT2) questionnaire. Available at SSRN 3147708 (2018).
[17]
Cuan M Harrington, Dara O Kavanagh, John F Quinlan, Donncha Ryan, Patrick Dicker, Dara O’Keeffe, Oscar Traynor, and Sean Tierney. 2018. Development and evaluation of a trauma decision-making simulator in Oculus virtual reality. The American Journal of Surgery 215, 1 (2018), 42–47.
[18]
Jason Haskins, Bolin Zhu, Scott Gainer, Will Huse, Suraj Eadara, Blake Boyd, Charles Laird, JJ Farantatos, and Jason Jerald. 2020. Exploring VR training for first responders. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 57–62.
[19]
Filip Jaśkiewicz, Krystyna Teresa Frydrysiak, Katarzyna Starosta-Głowińska, and Dariusz Timler. 2019. The applicability of virtual reality in cardiopulmonary resuscitation training - opinion of medical professionals and students. Emergency Medical Service. Ratownictwo Medyczne 6, 1 (2019), 32–36.
[20]
Jerrilyn Jones, Ricky Kue, Patricia Mitchell, Sgt. Gary Eblan, and K. Sophia Dyer. 2014. Emergency Medical Services Response to Active Shooter Incidents: Provider Comfort Level and Attitudes Before and After Participation in a Focused Response Training Program. Prehospital and Disaster Medicine 29, 4 (2014), 350–357. https://doi.org/10.1017/S1049023X14000648
[21]
Joshua Benjamin Kaplan, Aaron L Bergman, Michael Christopher, Sarah Bowen, and Matthew Hunsinger. 2017. Role of resilience in mindfulness training for first responders. Mindfulness 8, 5 (2017), 1373–1380.
[22]
Hannes Götz Kenngott, Anas Amin Preukschas, Martin Wagner, Felix Nickel, Michael Müller, Nadine Bellemann, Christian Stock, Markus Fangerau, Boris Radeleff, Hans-Ulrich Kauczor, et al. 2018. Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: a prospective pilot study. Surgical Endoscopy 32, 6 (2018), 2958–2967.
[23]
Peter Kern. 2017. Polizei und taktische Kommunikation. Springer Fachmedien Wiesbaden GmbH, Wiesbaden s.l. https://doi.org/10.1007/978-3-658-17197-1
[24]
James F Knight, Simon Carley, Bryan Tregunna, Steve Jarvis, Richard Smithies, Sara de Freitas, Ian Dunwell, and Kevin Mackway-Jones. 2010. Serious gaming technology in major incident triage training: a pragmatic controlled trial. Resuscitation 81, 9 (2010), 1175–1179.
[25]
George Koutitas, Scott Smith, and Grayson Lawrence. 2021. Performance evaluation of AR/VR training technologies for EMS first responders. Virtual Reality 25 (2021), 83–94.
[26]
George Koutitas, Scott Smith, Grayson Lawrence, and Keith Noble. 2020. Smart responders for smart cities: A VR/AR training approach for next generation first responders. In Smart Cities in Application. Springer, 49–66.
[27]
Kwan Min Lee. 2004. Presence, explicated. Communication theory 14, 1 (2004), 27–50.
[28]
Jie Li, Guo Chen, Huib de Ridder, and Pablo Cesar. 2020. Designing a Social VR Clinic for Medical Consultations. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–9. https://doi.org/10.1145/3334480.3382836
[29]
Elhassan Makled, Amal Yassien, Passant Elagroudy, Mohamed Magdy, Slim Abdennadher, and Nabila Hamdi. 2019. PathoGenius VR: VR Medical Training. In Proceedings of the 8th ACM International Symposium on Pervasive Displays (Palermo, Italy) (PerDis ’19). Association for Computing Machinery, New York, NY, USA, Article 31, 2 pages. https://doi.org/10.1145/3321335.3329694
[30]
Guido Makransky, Lau Lilleholt, and Anders Aaby. 2017. Development and validation of the Multimodal Presence Scale for virtual reality environments: A confirmatory factor analysis and item response theory approach. Computers in Human Behavior 72 (2017), 276–285.
[31]
Philipp Mayring. 2004. Qualitative content analysis. A Companion to Qualitative Research 1, 2 (2004), 159–176.
[32]
Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. 1995. Augmented reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and telepresence technologies, Vol. 2351. Spie, 282–292.
[33]
Brennen Mills, Peggy Dykstra, Sara Hansen, Alecka Miles, Tim Rankin, Luke Hopper, Luke Brook, and Danielle Bartlett. 2020. Virtual Reality Triage Training Can Provide Comparable Simulation Efficacy for Paramedicine Students Compared to Live Simulation-Based Scenarios. Prehospital Emergency Care 24, 4 (2020), 525–536. https://doi.org/10.1080/10903127.2019.1676345 PMID: 31580178.
[34]
Kristina Lennquist Montán, Per Örtenwall, and Sten Lennquist. 2015. Assessment of the accuracy of the Medical Response to Major Incidents (MRMI) course for interactive training of the response to major incidents and disasters. American journal of disaster medicine 10, 2 (2015), 93–107.
[35]
Ivette Motola, William A Burns, Angel A Brotons, Kelly F Withum, Richard D Rodriguez, Salma Hernandez, Hector F Rivera, Saul Barry Issenberg, and Carl I Schulman. 2015. Just-in-time learning is effective in helping first responders manage weapons of mass destruction events. Journal of trauma and acute care surgery 79, 4 (2015), S152–S156.
[36]
Thomas Muender, Michael Bonfert, Anke Verena Reinschluessel, Rainer Malaka, and Tanja Döring. 2022. Haptic fidelity framework: Defining the factors of realistic haptic feedback for virtual reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–17.
[37]
Quynh Nguyen, Emma Jaspaert, Markus Murtinger, Helmut Schrom-Feiertag, Sebastian Egger-Lampl, and Manfred Tscheligi. 2021. Stress Out: Translating Real-World Stressors into Audio-Visual Stress Cues in VR for Police Training. In IFIP Conference on Human-Computer Interaction – INTERACT 2021. Springer, Cham, 551–561. https://doi.org/10.1007/978-3-030-85616-8_32
[38]
Felix Nickel, Julia A Brzoska, Matthias Gondan, Henriette M Rangnick, Jackson Chu, Hannes G Kenngott, Georg R Linke, Martina Kadmon, Lars Fischer, and Beat P Müller-Stich. 2015. Virtual reality training versus blended learning of laparoscopic cholecystectomy: a randomized controlled trial with laparoscopic novices. Medicine 94, 20 (2015).
[39]
Federica Pallavicini, Luca Argenton, Nicola Toniazzi, Luciana Aceti, and Fabrizia Mantovani. 2016. Virtual reality applications for stress management training in the military. Aerospace medicine and human performance 87, 12 (2016), 1021–1030.
[40]
George Papagiannakis, Paul Zikas, Nick Lydatakis, Steve Kateros, Mike Kentros, Efstratios Geronikolakis, Manos Kamarianakis, Ioanna Kartsonaki, and Giannis Evangelou. 2020. MAGES 3.0: Tying the Knot of Medical VR. In ACM SIGGRAPH 2020 Immersive Pavilion (Virtual Event, USA) (SIGGRAPH ’20). Association for Computing Machinery, New York, NY, USA, Article 6, 2 pages. https://doi.org/10.1145/3388536.3407888
[41]
Maximilian Rettinger, Niklas Müller, Christopher Holzmann-Littig, Marjo Wijnen-Meijer, Gerhard Rigoll, and Christoph Schmaderer. 2021. VR-Based Equipment Training for Health Professionals. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, Article 252, 6 pages. https://doi.org/10.1145/3411763.3451766
[42]
Maximilian Rettinger and Gerhard Rigoll. 2022. Defuse the Training of Risky Tasks: Collaborative Training in XR. In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 695–701. https://doi.org/10.1109/ISMAR55827.2022.00087
[43]
Maximilian Rettinger and Gerhard Rigoll. 2023. Touching the Future of Training: Investigating Tangible Interaction in Virtual Reality. Frontiers in Virtual Reality 4 (2023).
[44]
Jessica Röhner and Astrid Schütz. 2020. Psychologie der Kommunikation (3., aktualisierte und überarbeitete auflage ed.). Springer, Berlin.
[45]
David Scherfgen and Jonas Schild. 2021. Estimating the Pose of a Medical Manikin for Haptic Augmentation of a Virtual Patient in Mixed Reality Training. In Symposium on Virtual and Augmented Reality (Virtual Event, Brazil) (SVR’21). Association for Computing Machinery, New York, NY, USA, 33–41. https://doi.org/10.1145/3488162.3488166
[46]
Federico Semeraro, Antonio Frisoli, Massimo Bergamasco, and Erga L. Cerchiari. 2009. Virtual reality enhanced mannequin (VREM) that is well received by resuscitation experts. Resuscitation 80, 4 (2009), 489–492. https://doi.org/10.1016/j.resuscitation.2008.12.016
[47]
Thomas Semmel. 2020. ABCDE-die Beurteilung von Notfallpatienten. Elsevier Health Sciences.
[48]
Richard Skarbez, Missie Smith, and Mary C Whitton. 2021. Revisiting milgram and kishino’s reality-virtuality continuum. Frontiers in Virtual Reality 2 (2021), 647997.
[49]
Mel Slater. 2009. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences 364, 1535 (2009), 3549–3557.
[50]
Mario S. Staller, Swen Koerner, Valentina Heil, Andy Abraham, and Jamie M. Poolton. 2021. Police Recruits’ Perception of Skill Transfer from Training to the Field. Unpublished manuscript. https://doi.org/10.13140/RG.2.2.16219.90404
[51]
Mario S. Staller, Swen Koerner, and Benjamin Zaiser. 2021. Professionelle polizeiliche Kommunikation: sich verstehen. Forensische Psychiatrie, Psychologie, Kriminologie 15, 4 (Nov. 2021), 345–354. https://doi.org/10.1007/s11757-021-00684-7
[52]
Jakob Carl Uhl, Klaus Neundlinger, and Georg Regal. 2023. Social presence as a training resource: comparing VR and traditional training simulations. Research in Learning Technology 31, 1063519 (2023), 1–14. https://doi.org/10.25304/rlt.v31.2827
[53]
Jakob C Uhl, Georg Regal, Helmut Schrom-Feiertag, Markus Murtinger, and Manfred Tscheligi. 2023. XR for First Responders: Concepts, Challenges and Future Potential of Immersive Training. In International Conference on Virtual Reality and Mixed Reality. Springer, 192–200.
[54]
Jakob Carl Uhl, Helmut Schrom-Feiertag, Georg Regal, Katja Gallhuber, and Manfred Tscheligi. 2023. Tangible Immersive Trauma Simulation : Is Mixed Reality the next level of medical skills training?. In ACM CHI Conference on Human Factors in Computing Systems 2023.
[55]
Viswanath Venkatesh, James YL Thong, and Xin Xu. 2012. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS quarterly (2012), 157–178.
[56]
Torben Volkmann, Daniel Wessel, Nicole Jochems, and Thomas Franke. 2018. German Translation of the Multimodal Presence Scale. Mensch und Computer 2018-Tagungsband (2018).
[57]
Jennifer Wild, Neil Greenberg, Michelle L Moulds, Marie-Louise Sharp, Nicola Fear, Samuel Harvey, Simon Wessely, and Richard A Bryant. 2020. Pre-incident training to build resilience in first responders: recommendations on what to and what not to do. Psychiatry 83, 2 (2020), 128–142.
[58]
William Wilkerson, Dan Avstreih, Larry Gruppen, Klaus-Peter Beier, and James Woolliscroft. 2008. Using immersive simulation for training first responders for mass casualty incidents. Academic emergency medicine 15, 11 (2008), 1152–1159.
[59]
Frederik Winther, Linoj Ravindran, Kasper Paabøl Svendsen, and Tiare Feuchtner. 2020. Design and Evaluation of a VR Training Simulation for Pump Maintenance. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3334480.3375213
[60]
Youichi Yanagawa, Kazuhiko Omori, Kouhei Ishikawa, Ikuto Takeuchi, Kei Jitsuiki, Toshihiko Yoshizawa, Jun Sato, Hideyuki Matsumoto, Masaru Tsuchiya, Hiromichi Osaka, et al. 2018. Difference in First Aid Activity During Mass Casualty Training Based on Having Taken an Educational Course. Disaster Medicine and Public Health Preparedness 12, 4 (2018), 437–440. https://doi.org/10.1017/dmp.2017.99
[61]
Olivia Zechner, Daniel García Guirao, Helmut Schrom-Feiertag, Georg Regal, Jakob Carl Uhl, Lina Gyllencreutz, David Sjöberg, and Manfred Tscheligi. 2023. NextGen Training for Medical First Responders: Advancing Mass-Casualty Incident Preparedness through Mixed Reality Technology. Multimodal Technologies and Interaction 7, 12 (2023), 113.
[62]
Paul Zikas, Manos Kamarianakis, Ioanna Kartsonaki, Nick Lydatakis, Steve Kateros, Mike Kentros, Efstratios Geronikolakis, Giannis Evangelou, Achilles Apostolou, Paolo Alejandro Catilo, and George Papagiannakis. 2021. Covid-19 - VR Strikes Back: Innovative Medical VR Training. In ACM SIGGRAPH 2021 Immersive Pavilion (Virtual Event, USA) (SIGGRAPH ’21). Association for Computing Machinery, New York, NY, USA, Article 11, 2 pages. https://doi.org/10.1145/3450615.3464546
