[go: up one dir, main page]
More Web Proxy on the site http://driver.im/
Next Article in Journal
Eye Tracking and Semantic Evaluation for Ceramic Teapot Product Modeling
Next Article in Special Issue
Steering Assist Control for Bicycles with Variable Trail Effect
Previous Article in Journal
Experimental Study on the Effects of Dynamic High Water Pressure on the Deformation Characteristics of Limestone
Previous Article in Special Issue
Autonomous Navigation for Personal Mobility Vehicles Considering Passenger Tolerance to Approaching Pedestrians
You seem to have javascript disabled. Please note that many of the page functionalities won't work as expected without javascript enabled.
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Comparing User Acceptance in Human–Machine Interfaces Assessments of Shared Autonomous Vehicles: A Standardized Test Procedure

1
Department of Design, Politecnico di Milano, 20158 Milan, Italy
2
Department of Mechanical Engineering, Politecnico di Milano, 20156 Milan, Italy
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(1), 45; https://doi.org/10.3390/app15010045
Submission received: 17 November 2024 / Revised: 19 December 2024 / Accepted: 21 December 2024 / Published: 25 December 2024
(This article belongs to the Special Issue Advances in Autonomous Driving and Smart Transportation)
Figure 1
<p>AV is classified into two categories: the vehicle’s ownership attributes (private or shared) and the implementation scenarios (private destination or uniform journey) [<a href="#B10-applsci-15-00045" class="html-bibr">10</a>].</p> ">
Figure 2
<p>The three related pillars of the acceptance concept include definition, acceptance model, and assessment structure.</p> ">
Figure 3
<p>Correlation between the users’ response and ergonomic analysis.</p> ">
Figure 4
<p>Overview of the different approaches to select relevant use cases of a human–machine interface (HMI) and their categorization.</p> ">
Figure 5
<p>Fifteen selected examples of Autonomous Shuttle Buses (ASBs) for public transportation to deliver humans.</p> ">
Figure 6
<p>Configurations and classifications for external HMI in selected Autonomous Shuttle Buses (ASBs).</p> ">
Figure 7
<p>Display configurations and interior layout classifications for internal HMI in selected Autonomous Shuttle Buses (ASBs).</p> ">
Figure 8
<p>Display-based approach with one filter. Grey squares indicate redundant display locations.</p> ">
Figure 9
<p>The testing environment should include the three conflicting situations. The blue cube represents the interaction partner, and the gray arrow indicates the motion trajectory of the interaction object.</p> ">
Figure 10
<p>Procedure with measured parameters.</p> ">
Figure 11
<p>Virtual display of the testing scene inside the vehicle.</p> ">
Figure 12
<p>Schematic diagram of the specific scenarios and tasks of the entire test.</p> ">
Figure 13
<p>One possible scenario for the eHMI test is to check whether the participant wearing the HMD crosses the road from point A to point B based on the sign displayed by the SAV.</p> ">
Figure 14
<p>Comprehensive data collection approach: quantitative and qualitative data perspectives through diverse methodologies [<a href="#B47-applsci-15-00045" class="html-bibr">47</a>,<a href="#B74-applsci-15-00045" class="html-bibr">74</a>].</p> ">
Versions Notes

Abstract

:
Human–Machine Interfaces (HMIs) in autonomous driving technology have recently gained significant research interest in public transportation. However, most of the studies are biased towards qualitative methods, while combining quantitative and qualitative approaches has yet to receive commensurate attention in measuring user acceptance of design outcome evaluation. To the best of our knowledge, no standardized test procedure that combines quantitative and qualitative methods has been formed to evaluate and compare the interrelationships between different designs of HMIs and their psychological effects on users. This paper proposes a practical and comprehensive protocol to guide assessments of user acceptance of HMI design solutions. We first defined user acceptance and analyzed the existing evaluation methods. Then, specific ergonomic factors and requirements that the designed output HMI should meet were identified. Based on this, we developed a protocol to evaluate a particular HMI solution from in- and out-of-vehicle perspectives. Our theoretical protocol combines objective and subjective measures to compare users’ behavior when interacting with Autonomous Vehicles (AVs) in a virtual experimental environment, especially in public transportation. Standardized testing procedures provide researchers and interaction designers with a practical framework and offer theoretical support for subsequent studies.

1. Introduction

With the advent of innovative vehicular technologies, the application of autonomous driving (AD) technologies in public transportation systems is increasingly widespread, and its commercialization is expected to be soon realized on a large scale [1]. AD has already been implemented in fixed areas like underground [2,3]. Shared Autonomous Vehicles (SAV) [4], especially automated shuttles, have garnered significant attention in this context. In this study, SAV refers to shared ownership of public vehicles with private destinations (see Figure 1), commonly known as robot taxis or automated shuttle buses (ASB) [5]. This type of vehicle demonstrates the capability to autonomously navigate through most driving scenarios without the presence of a service attendant onboard, achieving a Level 4 autonomy rating on a scale ranging from 0 to 5 [6]. Distinct levels of autonomy correspond to varying degrees of human intervention, with levels 1–3 necessitating a human driver, whereas levels 4 and 5 facilitate driverless vehicle operation [7]. Contemporary vehicles are equipped with various advanced technologies, including optical radar, cameras, touch panels, GPS sensors, and algorithms that analyze the driver’s steering behavior. Future vehicles are expected to expand the scope of potential interactions, shifting gradually from driving as the central task to prioritizing leisure and entertainment [8]. The communication interfaces between users and AVs will become more prominent in this evolving landscape. Consequently, examining the transformations in HMIs is crucial [9].
Users’ acceptance of a new technology determines its development and smooth implementation [11,12,13]. Therefore, this research focuses on the user acceptance and experience of HMI in AVs [14]. Since most people have no experience with AVs, they are unfamiliar with this technology and cannot imagine what higher levels of automation could be [15]. As a result, people’s acceptance of this technology is low [16,17,18,19]. While some studies utilize user acceptance models to statistically analyze user experience data related to AVs, most research evaluates user acceptance by examining respondents’ intentions to use and purchase behavior, emphasizing subjective data collection. Moreover, scholars have different definitions of user acceptance of AVs.
Standardized test protocols are currently lacking for evaluating user acceptance of SAV’s HMIs. Agreement has not been reached on the pertinent use cases, assessment criteria, and appropriate experimental design process. Standardizing the evaluation process is essential for advancing HMI development. In response to this need, our study introduces a novel methodological guideline that standardizes the assessment of HMI acceptability in SAVs.

2. Literature Review

“Acceptance” denotes the level of receptivity towards utilizing innovative technologies or products [13], contrasting with rejection [20]. Within the realm of design, “user acceptance” is examined as a crucial aspect for exploring potential users’ overall attitudes and behavioral reactions toward introducing novel technologies [21]. In the field of market economics, three criteria are established to gauge user acceptance, encompassing broad acceptance, willingness to pay (WTP), and behavioral intention (BI) [12]. Studies on acceptance can indirectly influence users’ readiness to pay for new technology, which holds significant importance in the commercial deployment of autonomous vehicles. Nevertheless, given our focus on public transportation, the willingness to pay is not a predominant factor influencing user acceptance. Our primary interest lies in users’ favorable attitudes and feedback after interacting with AD technology. However, it is crucial to assess the level of user acceptance. The following subsection will describe the standard methods to detect user acceptance.

2.1. Analysis Methods for User Acceptance Evaluation

Utilizing key terms related to our research topic, we searched through the primary scholarly publication databases—Web of Science, Scopus, and Google Scholar—to retrieve published studies on the acceptance of AVs. The search terms comprised a combination of keywords such as (Automated OR Autonomous OR Driverless OR Self-driving OR Smart Vehicles OR Robot car) AND (Acceptance OR Acceptability OR accept* OR adopt*) AND (HMI OR HCI OR Interaction Design OR User and AVs) AND (test OR experiment). Given the innovative nature of AD technology, our search was confined to the last decade, specifically from 2014 to 2024. This search effort resulted in 472 findings, including academic papers, journal publications, conference proceedings articles, reports, and presentation slides, with duplicates excluded.
By reviewing the relevant literature, we found that research in this field usually forms a user acceptance model through development and testing, thus reducing the likelihood of user resistance or rejection [22]. A detailed description of the evolution of user acceptance models will be provided hereafter. The most representative models and those related to autonomous driving technology are summarized in Table 1.
In 1975, Fishbein and Arjen proposed the Theory of Reasoned Action (TRA) model, developing sociology and psychology research. This model uses three main cognitive components to predict and explain user behavior, including attitudes (people’s feeling of being unfavorable or beneficial), social norms (social influence), and intentions (individual’s decision to do or not to do a behavior) [25,26]. Subsequently, in 1985, Ajzen combined TRA with perceived behavioral control (PBC) and proposed the Theory of Planned Behaviors (TPB) [32]. Three factors affect Behavioral Intention to Use (BI): perceived behavioral control, subjective norm, and behavioral attitude. Combining the above two models, the theory of interpersonal behavior (TIB) has been generated, which includes all aspects of TRA and TPB and habits, conveniences, and influences that have been added to improve predictive capabilities. Another extension of TRA is the Technology Acceptance Model (TAM) proposed by Davis et al. in 1989. This model primarily explains users’ acceptance motivation through perceived usefulness, ease of use, and attitude toward use [37]. However, due to its focus on individual characteristics and neglect of the cognitive impact of technology during the acceptance process and its social influence, it has some limitations in application [38]. Regardless, it is one of the most frequently used user acceptance models and can also be helpful in autonomous driving.
Venkatesh et al. (2003) introduced the Unified Theory of Acceptance and Use of Technology (UTAUT) [39], which is a modification of TAM and provides a comprehensive explanation of acceptance by integrating determinants from various models. UTAUT posits that the intention to use information technology (IT) can be influenced by three factors: performance expectancy, effort expectancy, and social norms (subjective norms) [25]. Recent research has proposed new models to comprehend the acceptance and adoption of autonomous vehicles (AVs). For instance, Osswald et al. (2012) developed the Car Technology Acceptance Model (CTAM) [40,41] as an extension of UTAUT, incorporating additional attitudinal constructs such as safety, anxiety, task-related factors, self-efficacy, and general attitude towards technology in the context of manual car adoption. Additionally, Ghazizadeh et al. (2012) expanded the TAM model by introducing compatibility and trust, leading to the Automation Acceptance Model (AAM) [42]. Compatibility refers to the alignment between users, technology, task performance, and context, assessing the consistency of technology with user values, experiences, and needs. Trust, on the other hand, directly influences behavioral willingness. Trust and compatibility impact attitude and Behavioral Intention to Use (BI) through Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) [43]. This model holds significant value in the age of artificial intelligence [44].
Expanding on previous research, Hewitt et al. (2019) enhanced the Car Technology Acceptance Model (CTAM) by developing the Autonomous Vehicle Acceptance Model (AVAM) to establish a standardized approach for acceptance studies in the AV field [15]. The AVAM is a modification of the Unified Theory of Acceptance and Use of Technology (UTAUT) and CTAM [35]. It integrates the eight critical factors from UTAUT—Performance Expectancy, Effort Expectancy, Attitude Towards Technology, Social Influence, Facilitating Conditions, Self-Efficacy, Anxiety, and Behavioral Intention—and an additional factor introduced by CTAM, Perceived Safety. Additionally, Nordhoff et al. (2019) introduced the Model of Automated Vehicle Acceptance (MAVA), which considers personal exposure to AVs, systematic evaluation, and individual variances such as socio-demographics, personality traits, and travel behavior [45]. The MAVA model is based on the Unified Theory of Acceptance and Use of Technology (UTAUT3, Venkatesh et al., 2003) [36] and CTAM.

2.2. Existing Research Gaps

The above models and their extensions have improved the understanding of the factors affecting user acceptance related to autonomous driving technology. Nevertheless, most studies that utilize user acceptance models to examine AD technology are restricted to evaluating the general perception of this technology and defining each pertinent factor. Although there are various definitions and assessment methods for interactive experiences and user acceptance studies, there is no user acceptance model designed explicitly for the evaluation process of HMI in SAV [22], which means there is no standardized test procedure to stipulate relevant use cases, evaluation requirements, and appropriate experimental design. It is essential to standardize the testing procedures to facilitate the development and comparison of HMIs.

2.3. Research Aim

A conceptual model and experimental testing procedure that can measure and compare user acceptances are required before AD technologies are fully developed and applied to public transportation. This research aims to develop a method for testing whether HMIs can compensate for the communication deficit between users and AVs in public transportation systems and encourage user acceptance. Therefore, the focus of this study is to summarize existing AV acceptance models and update the multi-level model to explain the block diagram by identifying decisive factors that influence users’ support or opposition to this new technology [26,46]. This requires comparing existing experimental methods and processes by understanding the specific measurement methods for influencing factors to summarize a protocol suitable for this field to reflect the needs and motivations of users to use SAV.

3. Materials and Methods

To compare user acceptance effectively, it is necessary to employ clearly defined measurement methods and tools informed by an understanding of the concept of acceptance (definition) and its contextual boundaries within the domain of acceptance measurement (such as transportation or driver assistance systems). Figure 2 illustrates the three interconnected pillars of the acceptance concept. Therefore, the research strategy and methodology for the standardized test procedure we proposed in this study are as follows:
  • Definition of user acceptance requirements: The acceptance criteria for HMI were defined based on the original user acceptance models proposed by other scholars. Thus, the acceptance of an SAV in an HMI is based on it being helpful, efficient, compelling, learnable, satisfying, and accessible. In assessing the fulfillment of these criteria by an HMI, we established suitable parameters and criteria for each requirement.
  • Definition of relevant use cases: Identifying pertinent use cases forms the foundation for a testing protocol to assess the user acceptance of HMIs. We proposed a method to classify relevant use cases and develop corresponding comparative testing procedures for different classification results.
  • Test protocol for empirical studies: It outlines the methodological details for empirically evaluating a specific HMI through a user study. This includes the experimental framework, such as the sample, test environment and apparatus, procedure and instruction, and data collection and analysis methods.

3.1. Definition of User Acceptance Requirements

Prior research on SAV’s HMI has not yet reached a consensus on user acceptance criteria. As summarized in Section 2.1, many scholars have developed a series of models to evaluate the HMI of autonomous driving systems and summarize the relevant factors that can affect user acceptance. However, no standardized requirements exist to compare user acceptance between different HMIs. Table 2 consolidates nearly all the factors that impact user acceptance in those models and categorizes these variables. The influencing factors with personal characteristics such as “social demographics” and “travel behavior” are eliminated. Hence, we establish six criteria for determining the acceptability parameters for an HMI in a SAV.
The HMI should support users in completing the defined task while interacting with the ASB (usefulness) efficiently and accurately (efficiency). Furthermore, it should prevent wrong behavior from passengers and other road users (effectiveness) without requiring advanced competencies (learnability). Additionally, it should be designed for easy comprehension, promote positive user perception (satisfaction), and be accessible to individuals with disabilities (accessibility).
Since this study aims to compare user acceptance between different HMIs, six parameters require a relative instead of an absolute criterion. In addition, the test procedure needs to determine the specific method for evaluating these six parameters. As Van der Laan et al. (1997) stated, user acceptance is directed more toward assessing the system’s ergonomics [47]. According to Crilly et al. (2004), the consumer’s post-use reaction to HMI can be divided into three categories: behavioral response, cognitive response, and affect [48]. Their specific performance factors are also sorted out minutely in Figure 3, which shows the categories and directions of user responses in the field of ergonomics.
Clarify the six parameters that must be examined while engaging with AV interfaces in an ergonomic study environment. In conjunction with the analysis presented in Table 2, categorize the precise description of each influencing factor in the field of ergonomics, together with the specific metrics in which they were measured and the corresponding type of user response, as outlined in Table 3.
The outcome is presented in Table 4, which displays the assessment metrics for user acceptance. It encompasses five categories of ergonomic analysis and incorporates the stated indications for each factor analysis of user acceptability. Additionally, it includes the specific measurement items and procedures for data collection.
The assessments can be achieved by defining the factors that need to be measured. Using the above scale, participants’ interactive behavior could be recorded. Once the specific measurement indicators for different user acceptance factors are defined, it is convenient to collect the embodied data by analyzing user interaction behavior and HMIs. Jeffries et al. (1991) examined four methodologies for assessing user interfaces: heuristic evaluation, software guidelines, cognitive walkthroughs, and usability testing [53]. These four methods have room for application in this test protocol. As data collection methods are shown in Table 4, investigation techniques can be classified by the timeline based on qualitative and quantitative ways.
(1)
Pre-test Questionnaire survey for the public before the experiment;
(2)
During the experiment, record the physiological signal data and process of training subjects through different physical devices and collect the real-time data through guidelines of the staff;
(3)
Post-experiment interviews, questionnaires, and heuristic evaluations.
The above methods should be guided by user interaction and ergonomics. The Likert scale should assign a value from 1 to 5 to each metric, which could reflect and compare subjective and objective indicators. In addition, cameras should be equipped throughout the experiment to capture user interaction dynamics continuously. Recording the specified tasks and the activities of sample users, such as conversations, nonverbal interactions, interface usage, etc., is helpful.

3.2. Definition of Relevant Use Cases

HMI [54] encompasses the devices that facilitate interaction between individuals and vehicles to accomplish seamless human-computer collaboration in AD scenarios [55]. As Johannsen (2009) has noted, HMI is a communication channel that enables interaction between human users and AVs [53], divided into internal HMI (iHMI) and external HMI (eHMI).
Previous studies have predominantly concentrated on the interactions between vehicles and pedestrians at crosswalks in urban settings, particularly at low speeds, examining eHMI aspects and the transition between AD systems and human drivers, focusing on iHMI perspectives. Nonetheless, this research scope is limited and does not encompass the full range of potential HMI applications. To assess user acceptance of an HMI in a standardized manner, study participants must encounter the HMI within a diverse set of relevant use cases. Consequently, defining pertinent use cases is pivotal in the evaluation process, ensuring that the testing protocol yields comparable and meaningful outcomes.
This research employed a systematic process to address a comprehensive range of use cases for an HMI. Figure 4 provides a visual representation of it. Specifically, the procedure includes collecting and consolidating use cases and their specifications, refining them iteratively by eliminating redundancies, and considering various factors.

3.2.1. Defining the Use Case of an eHMI and iHMI

The foundation of this methodology was the articulation of an HMI use case. According to the different use scenarios, HMI in SAV can be divided into external HMI (eHMI) and internal HMI (iHMI) [56].
A scenario involving an SAV and at least one human road user sharing the same space and time is considered a use case for an eHMI [57]. In this situation, eHMI plays a crucial role in helping human road users comprehend and anticipate the actions of AVs. This assistance allows them to promptly modify their intended maneuvers, for instance, by altering speed or direction. Traffic conflicts arise when the movements of two or more road users intersect, underscoring the importance of eHMI as a necessary interactive medium between road users and AVs to mitigate potential traffic conflicts [58].
In addition, an iHMI use case is defined as a situation in which at least one passenger interacts within the SAV [56]. In this case, the HMI is dedicated to enhancing passenger engagement and usability through demand-driven interactive services. The interactive behavior here means that passengers can understand the intention of the SAV through the interface and acquire as much information as possible about the SAV’s driving situation and other information during the ride. Using iHMI as a means of in-vehicle information transmission can enhance the transparency of communication and help users understand the necessity and realizability of remote operation and control transfer [59].
This allows users to draw conclusions about their own interactive behavior from both an eHMI perspective and an iHMI perspective.

3.2.2. Display-Based Approach and Layout-Based Approach

Those two approaches gather all the possible information that an HMI of an SAV can present and may appear in all the different locations and carriers. An examination of 15 prevalent concept ASBs (refer to Figure 5) was carried out via content analysis of a diverse range of media, showcasing the interactive display capabilities of the HMI.
The existing HMI solutions from these case studies are summarized in two aspects:
  • Information exchange of external HMI:
External HMI (eHMI) elucidates ASBs’ communication and interaction capabilities with other road users (ORU), encompassing VRUs like pedestrians, cyclists, and wheelchair users, as well as human drivers of manually operated vehicles [60].
The external interaction is progressively being emphasized. As depicted in Figure 6, the ASB’s eHMI information display and notification mechanisms currently fall into three categories. The emphasis on user experience has increased from communicating information solely through lights to installing displays at the front-end/rear-end/body-end of ASB for text, expressions, and other signals. Therefore, all cases where there is no display of information and carriers on the outside of the vehicle are filtered out because, in this case, the user cannot interact substantially with the SAV. Moreover, it is worthwhile to investigate how to avoid information overload while captivating users’ attention and allowing them to trust ASB through appropriate interaction.
  • Collaborative construction of internal HMI:
The absence of a driver will increase the demand for reliable and easily accessible information, which gives the passenger access to non-driving-related activities (NDRA). As shown in Figure 7, the interior HMIs from the 15 selected ASBs are analyzed in terms of the different interior layouts and display carriers. Displays and physical controls dominate the ASB for the internal HMI, with an application on the user’s mobile device serving as a secondary means. ASBs have essentially fixed routes and locations, and an app is used to obtain information about scheduled stops in the case of on-demand routes. It is important to exclude cases where only an app serves as the interface for human–vehicle interaction. In such instances, users do not engage with the vehicle in any physical sense, and therefore, we consider this to be beyond the scope of our research. Additionally, due to the layout and seating position of the ASB, the internal display has progressively evolved from a single primary screen to multiple screens of diverse sizes and positions [8]. The seating arrangement of ASBs differs from that of conventional buses; the majority have a circular layout, and passengers face various directions during operation, necessitating multiple displays to convey information.
The resulting five generic displays obtained by the display-based approach are depicted in the lower row of Figure 8, which illustrates the complete process of implementing this result. Furthermore, as shown in Figure 4, the layout-based method is tailored explicitly for iHMI. Based on the analysis mentioned above, passenger seating arrangements within the vehicle can be categorized into the following three configurations: semi-circular C-shaped layout, front and back rows of seats facing each other, and all seats uniformly facing forward.

3.2.3. Situation-Based Approach

Providing sufficient travel services for residents and tourists is a complex socio-technical task for urban public transportation systems. People’s flow is at the core of future urban transportation planning and decision-making. The complexity of the situation lies in defining all the factors that interact with traffic participants. For users, the only scenario inside the vehicle is riding, while there are various scenarios outside the vehicle; hence, this approach focuses on eHMI analysis. When the driving direction of a bus does not conflict with the interactive objects outside (such as pedestrians, other AVS, motor vehicles, non-motor vehicles, etc.), the vehicle the application of AD technology on the bus does not have a substantial impact, and these scenarios do not affect users’ acceptance of this technology. As summarized by Christina et al. in 2020 [58], the following three situations contain all cases of traffic conflicts, as shown in Figure 9. Additionally, since this situation-based approach does not consider the specific background of the occurrence, such as the urban environment, highways, intersections, or parking areas, these factors will be gathered as part of the Collection of Situation-Specific Factors in Section 3.2.6.
(1)
The automated vehicle is approached frontally by the interaction partner;
(2)
Orthogonally from the side;
(3)
Merges in front of the automated vehicle with a lateral approach direction.

3.2.4. Maneuvers-Based Approach

This approach, similar to the situation-based approach discussed in Section 3.2.3, applies only to the scenarios involving eHMI. Specifically, the maneuvers-based method was utilized to compile all potential driving actions that an SAV can perform. Driving actions were classified into longitudinal and lateral maneuvers. Longitudinal maneuvers involve maintaining a constant speed and reducing and increasing speed while the vehicle is in motion. Lateral maneuvers include driving straight, making turns (left, right), and changing lanes (left, right). As the analysis of traffic conflict cases in Section 3.2.3 already covered lateral maneuvers, all such maneuvers are filtered in this section.
Although objects interacting with AVs can infer the vehicle’s anticipated behavior through subtle implicit information such as the vehicle’s speed, turn signals, and brake lights, indicating actions like maintaining a constant speed, accelerating, or decelerating, eHMI can help other road users anticipate the driving intentions of autonomous vehicles through explicit signals. Therefore, this study’s five longitudinal driving maneuvers include maintaining a constant speed while driving, accelerating, decelerating, and starting or stopping.

3.2.5. Collection of Displayed Information

To ensure a comprehensive range of use cases, we gathered all presented information that could impact the SAV’s partner’s interaction. This information displayed by the interior and exterior HMIs is summarized in Table 5 in addition to their form and classification, which lists the critical information and notifications the SAV should provide to the user at various phases. In the detailed planning of the test protocol, it is crucial to select the specific types of display information to be tested to enhance the precision of the testing process.

3.2.6. Collection of Situation-Specific Factors

As described in Section 3.2.3, all specific contextual factors influencing the interaction between the SAV and its interaction partner should also be considered. Therefore, we have collected a comprehensive set of relevant use cases. Drawing on the methodology outlined by Fuest et al. (2017) [61] and Kaß, Christina, et al. (2020) [58], we assigned value facets to each of the identified factors. The collected situational factors and their corresponding detailed classification are presented in Table 6.

3.2.7. Selection of Relevant Use Cases

This approach outlined in this research offers an explicit and repeatable procedure for selecting relevant use cases to evaluate user acceptance of HMIs in SAVs. The current set of use cases encompasses all relevant scenarios for testing user acceptance of HMIs during interactions with SAVs. Researchers and practitioners intending to utilize this systematic process to define relevant use cases should exercise caution in its application, ensuring its expansibility and validity are strengthened.
Based on Section 3.1 and 3.2, the experimental protocol will be detailed in Section 4.

4. Test Protocol for Empirical Studies

The proposed requirements and criteria in Section 3.1 contribute to standardizing test procedures for evaluating the user acceptance of AV’s HMI in public transportation from an ergonomic perspective. We developed a test protocol for empirical evaluation using Virtual Reality (VR) technology to demonstrate whether the HMI design meets ergonomic-related requirements. Considering cost and safety concerns related to AD technology, this testing protocol will be applied in the design and iteration phases of the HMI to reduce error costs. It is divided into three steps: (1) Preliminary preparation, (2) Executing the test, (3) Aggregating data. The first step encompasses the selection of testing environments and instruments and the identification of testing personnel, as described in Section 4.1 and 4.2. The specific categorization and process details of the second step are outlined in Section 4.3, with Section 4.4 addressing data aggregation.

4.1. Test Environment and Apparatus

As a public vehicle that serves multiple target user communities [62,63], the implementation and planning of SAVs require the support and collaboration of various entities such as government agencies, city road planners, and transportation systems [64]. Therefore, the primary obstacle in assessing user acceptance of SAVs is the need for a tangible prototype [65]. Due to manufacturing costs and lead times, user research should be conducted on autonomous shuttles before their formal market deployment to ensure user acceptance. Thus, technologies other than physical prototypes are more suitable for testing in such research.
As summarized by previous studies, Wizard-of-Oz experiments [66], structured questionnaires [67], and VR experiments [53] are three standard methods for testing user acceptance [8]. Wizard-of-Oz experiments pose safety risks as they require simulation in actual road conditions. At the same time, structured questionnaires are unsuitable for this study as they cannot verify whether participants’ responses align with their actual behavior. In contrast, VR technology is highly applicable in investigating user acceptance of emerging technologies. The advantages of this technology in experimental testing, as highlighted by Rebelo et al. (2012), include safety, availability, and data provision [68]. Safety is unquestionably crucial, as VR technology can prevent injuries compared to experimental procedures in real-world settings. Availability refers to the ability to repeatedly simulate specific tasks without the time and cost required to set up actual scenes. Data provision entails collecting participants’ test data through software programs to aid researchers in updating and iterating designs. VR technology also has its limitations. A potential risk involves the potential for “motion sickness”, which is a temporary feeling of nausea that may occur after prolonged use of a VR headset.
We recommend utilizing widely employed game development engines to construct virtual testing environments within experimental devices. Specifically, an automated shuttle is used as a prototype to maneuver in the virtual setting following a predetermined driving route, aided by a head-mounted display (HMD) to facilitate user immersion in the virtual environment for real-time interaction with SAV and non-motorized interaction partners inside and outside the vehicle.

4.2. Participants

Due to its public service nature, interaction with automated shuttles will involve the entire societal group. The selection of participants should encompass various aspects and should not be limited by personal attributes such as nationality, gender, or educational level. Furthermore, to obtain a representative age distribution, it is recommended to follow the guidelines of the National Highway Traffic Safety Administration (NHTSA) by selecting an equal number of participants from four different age groups: 18–24, 25–39, 40–54, and 54 and above [69]. In addition, participants are required to have certain capabilities. Although automated shuttles generally do not require human driver assistance, familiarity with advanced driving assistance systems (i.e., lane-keeping assistance and adaptive cruise control) and holding a driver’s license are considered favorable criteria for selection. Moreover, individuals with prior experience using VR head-mounted displays are less likely to experience motion sickness.

4.3. Procedure and Instruction

Before testing, participants should receive information regarding the study’s objectives, the potential risks associated with the experimental process, the personal data to be collected, and the procedures for handling these data post-experiment, all of which will be communicated through an informed consent form. Following this, participants will be given around 5 min for a warm-up session to acquaint themselves with the experimental equipment. During this phase, participants will be introduced to all the HMI use cases for user testing to ensure a thorough understanding of the testing protocol.
During the test, participant’s activities will be recorded. Moreover, they will be asked to complete questionnaires before and after the experiment, with personal data and information about their thoughts and experiences with SAV and VR during the test. Audio and video recording will be conducted during the VR experiment. After the testing period with the VR scenario, a short-structured interview is conducted. Figure 10 offers a comprehensive depiction of the sequential arrangement of the different stages encompassed in the test, with distinct colors to highlight the specific data that must be gathered at each step.
For HMI exploration of public transportation, inside and outside scenarios that closely interact with the public should be considered. Therefore, this protocol recommends including the following two experimental procedures. The test process and execution sequence should be defined in advance. These tasks are then grouped into the relevant use cases that include specific test locations, interaction goals, the information needed to perform the tasks, etc.

4.3.1. Testing Process of HMI Inside the SAV

The latest research on HMI systems for SAV internal scenarios primarily focuses on remote operation, enhancing information transparency, expressing the needs of different user groups, and transferring control rights [59]. The goal is to increase user acceptance and trust in SAV through interaction with interior HMIs. In-vehicle HMIs should support passengers in supervising the driving environment when necessary and assist them in handling Non-Driving Related Activities (NDRA) based on their needs [70]. Therefore, testing subjects should be placed in the vehicle environment to assess internal HMIs. Exploring all sensory feedback from participants during interactions, including visual attention, response time, the impact of notification pop-up methods on information acquisition, the role of prompt volume, and user actions, is crucial [71]. For example, creating the following scenario in a virtual environment: the experience of users navigating specific areas (such as university campuses, see Figure 11) within an automated shuttle bus. The following tasks could be defined and should be completed by participants.
  • Scenario initiation: Participants will position themselves at the entrance of an automated shuttle bus and select a seating or standing location.
  • Confirm destination: Participants will be instructed that their goal is to complete the following tasks as the bus travels and eventually get off at a specific site.
    (I)
    Homepage exploration: According to the instructions, the participants should go to the information display screen in the operation interface to understand its primary functions.
    (II)
    Visualizing the travel details: Participants locate the relevant information display area of the vehicle stop to review the on-site information.
    (III)
    Observing the vehicle’s operational status: Throughout the process, the participants could clearly understand the bus’s operation, such as approaching a zebra to wait for pedestrians crossing the street or merging with other vehicles at intersections. They were then prompted to express their real-time emotional responses.
    (IV)
    Visualize the entertainment information: Participants were asked to browse and select entertainment information and functions, such as checking today’s weather or changing the car’s background music.
    (V)
    Transition of vehicle control: Participants were instructed to execute the transfer of vehicle control among multiple users.
  • Conclusion of the scenario: Upon reaching the destination, participants will be notified by the bus through a verbal announcement.

4.3.2. Testing Process of HMI Outside the SAV

For the application scenarios of HMI outside the bus, more attention should be paid to whether the external interacting objects can better understand the driving intentions of the bus through HMI. The interacting objects here refer to pedestrians, other autonomous vehicles, motor vehicles, non-motor vehicles, etc. When the driving direction of the bus does not conflict with the external interacting objects, the application of AD technology on the bus will not have a substantial impact, and the content displayed by HMI will not affect the user’s acceptance of this technology. Therefore, the experimental design of eHMI will focus on the three conflict scenarios mentioned in Section 3.2.3 and ensure that the tests are conducted in a traffic environment without right-of-way rules, as at this point, eHMI is the sole means for users to receive information, aiding in variable control. The experiment selects pedestrians outside the vehicle as the perspective for testing the user experience, which can be divided into two processes. In the first process, the primary objective is to assess user acceptance levels when operating a conventional vehicle under human control instead of an SAV equipped with eHMI. In contrast, the second process aims to convey information to pedestrians through visual technology, thereby enabling a comparison of user acceptance across the selected eHMI concepts. The whole experiment should be conducted in a controlled environment with a trackable area to ensure the participant’s safety and range of movement.
The first process of the experiment should include specific tasks from A to E, as illustrated in Figure 12. These tasks are presented in a randomized order, with each task separated by a set time and distance to maintain experimental randomness.
  • A to B: In a virtual environment, participants start at a one-way street and are instructed to cross to point B on the opposite side. AVs or conventional vehicles pass uninterruptedly from the side, and participants use the information displayed on the eHMI to decide when to cross.
  • B to C: When approaching a vehicle from a rear diagonal position, participants assess its type (AV or conventional) and status using the HMI displayed on the vehicle, enabling them to navigate the intersection safely.
  • D to E: Participants encounter an AV or conventional vehicle from the opposite side. Due to road construction, only one party can pass through first from the narrow gap, and the eHMI provides information on which party has priority.
The second phase of the experiment, focusing on the A-to-B step in Figure 12, involved instructing participants to repeatedly cross the intersection between points A and B via the pedestrian crosswalk, facing only Avs (see Figure 13). The number of repetitions corresponded to the different eHMI concepts being tested. To mitigate potential order effects and balance experimental conditions, a randomized Latin square design was implemented to determine participant assignments.

4.4. Data Aggregation and Analysis

The findings were shared and organized during the data aggregation phase to extract valuable insights relevant to the investigation focus. The data obtained from the tests helped identify the most favored alternative from the users’ perspective, offering significant guidance for further design development. As Table 4 of Section 3.1 outlines, comprehensively gathering quantitative and qualitative data is essential.
Assessing user acceptance involves evaluating six dimensions: usefulness, efficiency, effectiveness, learnability, satisfaction, and accessibility. These dimensions can be examined through operational behavior and emotional cognition. Operational behavior is commonly assessed using methods like the driving quality scale (Brookhuis, 1993) [72], the usability questionnaire (Brooke, 1996) [73], the usefulness and satisfaction scale (Van der Laan et al., 1997) [47], and a willingness-to-pay questionnaire (Brookhuis, Uneken, and Nilsson, 2001) [74]. Emotional cognition measurement typically utilizes individual interviews, focus groups, standardized questionnaires, and self-reporting methods.
The test protocol outlined in this paper involves analyzing and comparing user behavior from both operational behavior and emotional cognition perspectives. Functional factors that primarily represent the user’s operational behavior include physical human factors (e.g., posture, occlusion, and mental load), physiological data (e.g., heart rate variability (HRV) and Galvanic Skin Response (GSR)), and task-related metrics (e.g., completion times, number of clicks, and success and failure rates). Furthermore, using an eye tracker to capture information on eye-fixation position, sequence, and duration on the interface can aid in evaluating design rationale [8]. In contrast, emotional factors focus on the user’s subjective feelings and satisfaction, making them more suitable for qualitative data collection methods. The specific data that need to be collected are detailed in Figure 14.

5. Discussion and Conclusions

The fundamental purpose of HMI is to facilitate seamless communication between humans and vehicles, particularly in scenarios where vehicles operate at higher levels of AD technology [75]. Future research on SAV design is expected to focus on understanding users’ perceptions, optimizing information organization, and advancing HMI development. Enhancing user acceptance and fostering receptiveness to new technologies is crucial for businesses and researchers seeking to improve HMI design and predict users’ reactions [76].
In this study, we introduce a methodological framework to standardize the assessment of user acceptance of HMIs in SAV. The test protocol encompasses deriving relevant use cases, defining parameters associated with user acceptance, and specifying the influencing factors for each parameter. By implementing this framework, the test protocol can effectively evaluate whether these requirements are met, providing a reliable basis for HMI evaluations and enabling meaningful comparisons among different HMI variants [77]. The standardized test protocol offers a valuable framework for researchers and practitioners and benefits the field. This contribution is poised to lay the groundwork for future interaction design and testing standards within autonomous driving.
However, as this article remains theoretical, the proposed test protocol has yet to be applied. Additionally, mapping the data collected during experiments to the six criteria influencing user acceptance is challenging. Therefore, the following research phase will involve the practical application of the test protocol with diverse HMI design variations and specifications of autonomous driving systems. The method can be further enhanced through iterative refinement and detailed adjustments based on accumulated experience.
Furthermore, due to its focus on public transportation, this study suggests that special groups in society, such as pregnant women, elderly individuals, persons with disabilities, and other mobility-challenged populations, should receive more attention. Future research efforts should compare the acceptance levels of different user groups towards various HMIs in different circumstances, such as extreme weather conditions, hazardous road sections, and traffic congestion, to evaluate the effectiveness of different HMIs.

Author Contributions

Conceptualization, M.Y. and G.C.; methodology, M.Y. and G.C.; formal analysis, M.Y.; investigation, M.Y.; resources, M.Y.; data curation, M.Y.; writing—original draft preparation, M.Y.; writing—review and editing, L.R. and G.C.; visualization, M.Y.; supervision, L.R. and G.C.; project administration, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The current study was conducted at Politecnico di Milano’s i.Drive (Interaction of Driver, Road, Infrastructure, Vehicle, and Environment) Laboratory (http://www.idrive.polimi.it/, accessed on 1 May 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yan, M.; Rampino, L.; Caruso, G. Fostering User Acceptance in Shared Autonomous Vehicles: A Framework for HMI Design. Multimodal Technol. Interact. 2024, 8, 94. [Google Scholar] [CrossRef]
  2. Lim, H.S.M.; Taeihagh, A. Autonomous Vehicles for Smart and Sustainable Cities: An in-Depth Exploration of Privacy and Cybersecurity Implications. Energies 2018, 11, 1062. [Google Scholar] [CrossRef]
  3. Hu, J.; Bhowmick, P.; Arvin, F.; Lanzon, A.; Lennox, B. Cooperative Control of Heterogeneous Connected Vehicle Platoons: An Adaptive Leader-Following Approach. IEEE Robot. Autom. Lett. 2020, 5, 977–984. [Google Scholar] [CrossRef]
  4. Yan, M.; Lu, P.; Arquilla, V.; Brevi, F.; Rampino, L.; Caruso, G. Systemic Design Strategies for Shaping the Future of Automated Shuttle Buses. Appl. Sci. 2023, 13, 11767. [Google Scholar] [CrossRef]
  5. Narayanan, S.; Chaniotakis, E.; Antoniou, C. Shared Autonomous Vehicle Services: A Comprehensive Review. Transp. Res. Part C Emerg. Technol. 2020, 111, 255–293. [Google Scholar] [CrossRef]
  6. Woolridge, E.; Chan-Pensley, J. Measuring the User Comfort of Autonomous Vehicles; Human Drive: Milton Keynes, UK, 2020. [Google Scholar]
  7. Burns, C.G.; Oliveira, L.; Thomas, P.; Iyer, S.; Birrell, S. Pedestrian Decision-Making Responses to External Human-Machine Interface Designs for Autonomous Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; Volume 2019, pp. 70–75. [Google Scholar] [CrossRef]
  8. Yan, M.; Geng, W.; Hui, P. Towards a 3D Evaluation Dataset for User Acceptance of Automated Shuttles. In Proceedings of the 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Shanghai, China, 25–29 March 2023; pp. 89–93. [Google Scholar]
  9. Ahangar, M.N.; Ahmed, Q.Z.; Khan, F.A.; Hafeez, M. A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges. Sensors 2021, 21, 706. [Google Scholar] [CrossRef]
  10. Richter, M.A.; Hagenmaier, M.; Bandte, O.; Parida, V.; Wincent, J. Smart Cities, Urban Mobility and Autonomous Vehicles: How Different Cities Needs Different Sustainable Investment Strategies. Technol. Forecast. Soc. Chang. 2022, 184, 121857. [Google Scholar] [CrossRef]
  11. Schoettle, B.; Sivak, M. A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the US, UK and Australia. UMTRI Transp. Res. Inst. 2014, 1–38. [Google Scholar]
  12. Liu, X.; He, P.; Chen, W.; Gao, J. Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. arXiv 2019, arXiv:1904.09482. [Google Scholar]
  13. Schuitema, G.; Steg, L.; van Kruining, M. When Are Transport Pricing Policies Fair and Acceptable? Soc. Justice Res. 2011, 24, 66–84. [Google Scholar] [CrossRef]
  14. Shariff, A.; Bonnefon, J.F.; Rahwan, I. Psychological Roadblocks to the Adoption of Self-Driving Vehicles. Nat. Hum. Behav. 2017, 1, 694–696. [Google Scholar] [CrossRef]
  15. Detjen, H.; Faltaous, S.; Pfleging, B.; Geisler, S.; Schneegass, S. How to Increase Automated Vehicles’ Acceptance through In-Vehicle Interaction Design: A Review. Int. J. Hum.-Comput. Interact. 2021, 37, 308–330. [Google Scholar] [CrossRef]
  16. Kim, H.-C.; Kim, H.-C. Acceptability Engineering: The Study of User Acceptance of Innovative Technologies. J. Appl. Res. Technol. 2015, 13, 230–237. [Google Scholar] [CrossRef]
  17. Bjørner, T. Aalborg Universitet A Priori User Acceptance and the Perceived Driving Pleasure in Semi-Autonomous and Autonomous Vehicles. In Proceedings of the European Transport Conference 2015, Frankfurt, Germany, 28–30 September 2015; pp. 1–13. [Google Scholar]
  18. Miglani, A.; Diels, C.; Terken, J. Compatibility between Trust and Non—Driving Related Tasks in UI Design for Highly and Fully Automated Driving. In Proceedings of the AutomotiveUI 2016—8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Adjunct Proceedings, Ann Arbor, MI, USA, 24–26 October 2016; Association for Computing Machinery, Inc.: New York, NY, USA, 2016; pp. 75–80. [Google Scholar]
  19. Xu, Z.; Zhang, K.; Min, H.; Wang, Z.; Zhao, X.; Liu, P. What Drives People to Accept Automated Vehicles? Findings from a Field Experiment. Transp. Res. Part C Emerg. Technol. 2018, 95, 320–334. [Google Scholar] [CrossRef]
  20. Fraedrich, E.; Cyganski, R.; Wolf, I.; Lenz, B. User Perspectives on Autonomous Driving; Humboldt-Universität zu Berlin: Berlin, Germany, 2016; ISBN 9783662458532. [Google Scholar]
  21. Pigeon, C.; Alauzet, A.; Paire-Ficout, L. Factors of Acceptability, Acceptance and Usage for Non-Rail Autonomous Public Transport Vehicles: A Systematic Literature Review. Transp. Res. Part F Traffic Psychol. Behav. 2021, 81, 251–270. [Google Scholar] [CrossRef]
  22. Taherdoost, H. A Review of Technology Acceptance and Adoption Models and Theories. Procedia Manuf. 2018, 22, 960–967. [Google Scholar] [CrossRef]
  23. Zhang, T.; Tao, D.; Qu, X.; Zhang, X.; Zeng, J.; Zhu, H.; Zhu, H. Automated Vehicle Acceptance in China: Social Influence and Initial Trust Are Key Determinants. Transp. Res. Part C Emerg. Technol. 2020, 112, 220–233. [Google Scholar] [CrossRef]
  24. Yuen, K.F.; Cai, L.; Qi, G.; Wang, X. Factors Influencing Autonomous Vehicle Adoption: An Application of the Technology Acceptance Model and Innovation Diffusion Theory. Technol. Anal. Strateg. Manag. 2021, 33, 505–519. [Google Scholar] [CrossRef]
  25. Holsapple, C.W.; Wu, J. User Acceptance of Virtual Worlds. In ACM SIGMIS Database: The DATABASE for Advances in Information Systems; Association for Computing Machinery: New York, NY, USA, 2007; Volume 38, pp. 86–89. [Google Scholar] [CrossRef]
  26. Madigan, R.; Louw, T.; Wilbrink, M.; Schieben, A.; Merat, N. What Influences the Decision to Use Automated Public Transport? Using UTAUT to Understand Public Acceptance of Automated Road Transport Systems. Transp. Res. Part F Traffic Psychol. Behav. 2017, 50, 55–64. [Google Scholar] [CrossRef]
  27. Liang, Y.; Zhang, G.; Xu, F.; Wang, W. User Acceptance of Internet of Vehicles Services: Empirical Findings of Partial Least Square Structural Equation Modeling (PLS-SEM) and Fuzzy Sets Qualitative Comparative Analysis (fsQCA). Mob. Inf. Syst. 2020, 2020, 6630906. [Google Scholar] [CrossRef]
  28. Yuen, K.F.; Choo, L.Q.; Li, X.; Wong, Y.D.; Ma, F.; Wang, X. A Theoretical Investigation of User Acceptance of Autonomous Public Transport. Transportation 2022, 50, 545–569. [Google Scholar] [CrossRef]
  29. Nordhoff, S.; De Winter, J.; Kyriakidis, M.; Van Arem, B.; Happee, R. Acceptance of Driverless Vehicles: Results from a Large Cross-National Questionnaire Study. J. Adv. Transp. 2018, 2018, 5382192. [Google Scholar] [CrossRef]
  30. Nordhoff, S.; Stapel, J.; He, X.; Gentner, A.; Happee, R. Perceived Safety and Trust in SAE Level 2 Partially Automated Cars: Results from an Online Questionnaire. PLoS ONE 2021, 16, e0260953. [Google Scholar] [CrossRef]
  31. Bornholt, J.; Heidt, M. To Drive or Not to Drive - A Critical Review Regarding the Acceptance of Autonomous Vehicles. In Proceedings of the International Conference on Information Systems (ICIS 2019), Munich, Germany, 15–18 December 2019; pp. 1–17. [Google Scholar]
  32. Adnan, N.; Md Nordin, S.; bin Bahruddin, M.A.; Ali, M. How Trust Can Drive Forward the User Acceptance to the Technology? In-Vehicle Technology for Autonomous Vehicle. Transp. Res. Part A Policy Pract. 2018, 118, 819–836. [Google Scholar] [CrossRef]
  33. Vokrinek, J.; Schaefer, M.; Pinotti, D. Multi-Agent Traffic Simulation for Human-in-the-Loop Cooperative Drive Systems Testing. In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), Paris, France, 5–9 May 2014; Volume 2, pp. 1691–1692. [Google Scholar]
  34. Ghazizadeh, M.; Lee, J.D.; Boyle, L.N. Extending the Technology Acceptance Model to Assess Automation. Cogn. Technol. Work 2012, 14, 39–49. [Google Scholar] [CrossRef]
  35. Hewitt, C.; Politis, I.; Amanatidis, T.; Sarkar, A. Assessing Public Perception of Self-Driving Cars. In Proceedings of the IUI’19: 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; pp. 518–527. [Google Scholar] [CrossRef]
  36. Nordhoff, S.; Kyriakidis, M.; van Arem, B.; Happee, R. A Multi-Level Model on Automated Vehicle Acceptance (MAVA): A Review-Based Study. Theor. Issues Ergon. Sci. 2019, 20, 682–710. [Google Scholar] [CrossRef]
  37. Grover, P.; Kar, A.K.; Janssen, M.; Ilavarasan, P.V. Perceived Usefulness, Ease of Use and User Acceptance of Blockchain Technology for Digital Transactions—Insights from User-Generated Content on Twitter. Enterp. Inf. Syst. 2019, 13, 771–800. [Google Scholar] [CrossRef]
  38. vom Brocke, J.; Simons, A.; Riemer, K.; Niehaves, B.; Plattfaut, R.; Cleven, A. Standing on the Shoulders of Giants: Challenges and Recommendations of Literature Search in Information Systems Research. Commun. Assoc. Inf. Syst. 2015, 37, 205–224. [Google Scholar] [CrossRef]
  39. Johnsen, A. D2.1 Literature Review on the Acceptance and Road Safety, Ethical, Legal, Social and Economic Implications of Automated Vehicles. 2018. Available online: https://www.researchgate.net/publication/325786957_D21_Literature_review_on_the_acceptance_and_road_safety_ethical_legal_social_and_economic_implications_of_automated_vehicles (accessed on 20 December 2024).
  40. Kruse, D. Consumer Acceptance of Shared Autonomous Vehicles. Master’s Thesis, Copenhagen Business School, Frederiksberg, Denmark, 2018. [Google Scholar]
  41. Reig, S.; Norman, S.; Morales, C.G.; Das, S.; Steinfeld, A.; Forlizzi, J. A Field Study of Pedestrians and Autonomous Vehicles. In Proceedings of the 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI, Toronto, ON, Canada, 23–25 September 2018; pp. 198–209. [Google Scholar] [CrossRef]
  42. Zhang, B.-H.; Fang, Y. Foundation items: Shanghai Automotive Industry Technology Development Fund (1717). J. Graph. 2020, 41, 1012. [Google Scholar]
  43. Zhang, T.; Tao, D.; Qu, X.; Zhang, X.; Lin, R.; Zhang, W. The Roles of Initial Trust and Perceived Risk in Public’s Acceptance of Automated Vehicles. Transp. Res. Part C Emerg. Technol. 2019, 98, 207–220. [Google Scholar] [CrossRef]
  44. Rahman, M.M.; Deb, S.; Strawderman, L.; Burch, R.; Smith, B. How the Older Population Perceives Self-Driving Vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 242–257. [Google Scholar] [CrossRef]
  45. Paddeu, D.; Shergold, I.; Parkhurst, G. The Social Perspective on Policy towards Local Shared Autonomous Vehicle Services (LSAVS). Transp. Policy 2020, 98, 116–126. [Google Scholar] [CrossRef]
  46. Golbabaei, F.; Yigitcanlar, T.; Paz, A.; Bunker, J. Individual Predictors of Autonomous Vehicle Public Acceptance and Intention to Use: A Systematic Review of the Literature. J. Open Innov. Technol. Mark. Complex. 2020, 6, 106. [Google Scholar] [CrossRef]
  47. Van Der Laan, J.D.; Heino, A.; De Waard, D. A Simple Procedure for the Assessment of Acceptance of Advanced Transport Telematics. Transp. Res. Part C Emerg. Technol. 1997, 5, 1–10. [Google Scholar] [CrossRef]
  48. Crilly, N.; Moultrie, J.; Clarkson, P.J. Seeing Things: Consumer Response to the Visual Domain in Product Design. Des. Stud. 2004, 25, 547–577. [Google Scholar] [CrossRef]
  49. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. Manag. Inf. Syst. 1989, 13, 319–339. [Google Scholar] [CrossRef]
  50. Lin, X.F.; Tang, D.; Lin, X.; Liang, Z.M.; Tsai, C.C. An Exploration of Primary School Students’ Perceived Learning Practices and Associated Self-Efficacies Regarding Mobile-Assisted Seamless Science Learning. Int. J. Sci. Educ. 2019, 41, 2675–2695. [Google Scholar] [CrossRef]
  51. Wang, S.; Jiang, Z.; Noland, R.B.; Mondschein, A.S. Attitudes towards Privately-Owned and Shared Autonomous Vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2020, 72, 297–306. [Google Scholar] [CrossRef]
  52. Liu, P.; Yang, R.; Xu, Z. Public Acceptance of Fully Automated Driving: Effects of Social Trust and Risk/Benefit Perceptions. Risk Anal. 2019, 39, 326–341. [Google Scholar] [CrossRef]
  53. Frenkler, F.; Stadler, S.; Cornet, H. Towards User Acceptance of Autonomous Vehicles: A Virtual Reality Study on Human-Machine Interfaces. Int. J. Technol. Mark. 2019, 13, 1. [Google Scholar] [CrossRef]
  54. Bischoff, S.; Ulrich, C.; Dangelmaier, M.; Widlroither, H.; Diederichs, F. Emotion Recognition in User-Centered Design for Automotive Interior and Automated Driving. In Proceedings of the Stuttgarter Symposium für Produktentwicklung (SSP 2017), Stuttgart, Germany, 28–29 June 2017; Volume 2017, pp. 193–200. [Google Scholar]
  55. Bevan, N.; Carter, J.; Earthy, J.; Geis, T.; Harker, S. New ISO Standards for Usability, Usability Reports and Usability Measures. Lect. Notes Comput. Sci. 2016, 9731, 268–278. [Google Scholar] [CrossRef]
  56. Yan, M.; Rampino, L.; Caruso, G. User Acceptance of Autonomous Vehicles: Review and Perspectives on the Role of the Human-Machine Interfaces. Comput. Des. Appl. 2023, 20, 987–1004. [Google Scholar] [CrossRef]
  57. Markkula, G.; Madigan, R.; Nathanael, D.; Portouli, E.; Lee, Y.M.; Dietrich, A.; Billington, J.; Schieben, A.; Merat, N. Defining Interactions: A Conceptual Framework for Understanding Interactive Behaviour in Human and Automated Road Traffic. Theor. Issues Ergon. Sci. 2020, 21, 728–752. [Google Scholar] [CrossRef]
  58. Kaß, C.; Schoch, S.; Naujoks, F.; Hergeth, S.; Keinath, A.; Neukum, A. Standardized Test Procedure for External Human–Machine Interfaces of Automated Vehicles. Information 2020, 11, 173. [Google Scholar] [CrossRef]
  59. Yan, M.; Lin, Z.; Lu, P.; Wang, M.; Rampino, L.; Caruso, G. Speculative Exploration on Future Sustainable Human-Machine Interface Design in Automated Shuttle Buses. Sustainability 2023, 15, 5497. [Google Scholar] [CrossRef]
  60. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the eHMI Jungle: A Classification Taxonomy to Guide, Compare, and Assess the Design Principles of Automated Vehicles’ External Human-Machine Interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174. [Google Scholar] [CrossRef]
  61. Fuest, T.; Sorokin, L.; Bellem, H.; Bengler, K. Taxonomy of Traffic Situations for the Interaction between Automated Vehicles and Human Road Users. In Advances in Human Aspects of Transportation; Stanton, N.A., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 708–719. [Google Scholar]
  62. Liu, M.; Wu, J.; Zhu, C.; Hu, K. Factors Influencing the Acceptance of Robo-Taxi Services in China: An Extended Technology Acceptance Model Analysis. J. Adv. Transp. 2022, 2022, 8461212. [Google Scholar] [CrossRef]
  63. Hallewell, M.J.; Hughes, N.; Large, D.R.; Harvey, C.; Springthorpe, J.; Burnett, G. Deriving Personas to Inform HMI Design for Future Autonomous Taxis: A Case Study on User Requirement Elicitation. J. Usability Stud. 2022, 17, 41–64. [Google Scholar]
  64. Golbabaei, F.; Yigitcanlar, T.; Bunker, J. The Role of Shared Autonomous Vehicle Systems in Delivering Smart Urban Mobility: A Systematic Review of the Literature. Int. J. Sustain. Transp. 2021, 15, 731–748. [Google Scholar] [CrossRef]
  65. Yan, M.; Rampino, L.; Caruso, G.; Zhao, H. Implications of Human-Machine Interface for Inclusive Shared Autonomous Vehicles. Hum. Factors Transp. 2022, 60, 542–550. [Google Scholar] [CrossRef]
  66. Ranasinghe, C.; Holländer, K.; Currano, R.; Sirkin, D.; Moore, D.; Schneegass, S.; Ju, W. Autonomous Vehicle-Pedestrian Interaction across Cultures: Towards Designing Better External Human Machine Interfaces (eHMIs). In Proceedings of the CHI EA’20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–8. [Google Scholar] [CrossRef]
  67. Faas, S.M.; Baumann, M. Yielding Light Signal Evaluation for Self-Driving Vehicle and Pedestrian Interaction. In Advances in Intelligent Systems and Computing; Springer Verlag: Berlin/Heidelberg, Germany, 2020; Volume 1026, pp. 189–194. [Google Scholar]
  68. Rebelo, F.; Noriega, P.; Duarte, E.; Soares, M. Using Virtual Reality to Assess User Experience. Hum. Factors 2012, 54, 964–982. [Google Scholar] [CrossRef] [PubMed]
  69. Naujoks, F.; Hergeth, S.; Wiedemann, K.; Schömig, N.; Forster, Y.; Keinath, A. Test Procedure for Evaluating the Human–Machine Interface of Vehicles with Automated Driving Systems. Traffic Inj. Prev. 2019, 20, S146–S151. [Google Scholar] [CrossRef]
  70. Tinga, A.M.; Cleij, D.; Jansen, R.J.; van der Kint, S.; van Nes, N. Human Machine Interface Design for Continuous Support of Mode Awareness during Automated Driving: An Online Simulation. Transp. Res. Part F Traffic Psychol. Behav. 2022, 87, 102–119. [Google Scholar] [CrossRef]
  71. Şen, G.; Şener, B. Experience Prototyping through Virtual Reality Head-Mounted Displays: Design Appraisals of Automotive User Interfaces. Des. J. 2022, 25, 807–827. [Google Scholar] [CrossRef]
  72. Picardi, A.; Ballabio, G.; Arquilla, V.; Caruso, G. A Study on Haptic Actuators to Improve the User Experience of Automotive Touchscreen Interfaces. Comput. Des. Appl. 2024, 22, 136–149. [Google Scholar] [CrossRef]
  73. Lund, A. Measuring Usability with the USE Questionnaire. Usability Interface 2001, 8, 3–6. [Google Scholar]
  74. Nordhoff, S.; de Winter, J.; Madigan, R.; Merat, N.; van Arem, B.; Happee, R. User Acceptance of Automated Shuttles in Berlin-Schöneberg: A Questionnaire Study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 843–854. [Google Scholar] [CrossRef]
  75. Vrščaj, D.; Nyholm, S.; Verbong, G.P.J. Is Tomorrow’s Car Appealing Today? Ethical Issues and User Attitudes beyond Automation. AI Soc. 2020, 35, 1033–1046. [Google Scholar] [CrossRef]
  76. Aylward, K.; Weber, R.; Man, Y.; Lundh, M.; Mackinnon, S.N. “Are You Planning to Follow Your Route?” The Effect of Route Exchange on Decision Making, Trust, and Safety. J. Mar. Sci. Eng. 2020, 8, 280. [Google Scholar] [CrossRef]
  77. Merat, N.; Madigan, R.; Nordhoff, S. Human Factors, User Requirements, and User Acceptance of Ride-Sharing in Automated Vehicles. Available online: https://www.itf-oecd.org/human-factors-user-requirements-and-user-acceptance-ride-sharing-automated-vehicles (accessed on 20 December 2024).
Figure 1. AV is classified into two categories: the vehicle’s ownership attributes (private or shared) and the implementation scenarios (private destination or uniform journey) [10].
Figure 2. The three related pillars of the acceptance concept include definition, acceptance model, and assessment structure.
Figure 3. Correlation between the users’ response and ergonomic analysis.
Figure 4. Overview of the different approaches to select relevant use cases of a human–machine interface (HMI) and their categorization.
Figure 5. Fifteen selected examples of Autonomous Shuttle Buses (ASBs) for public transportation to deliver humans.
Figure 6. Configurations and classifications for external HMI in selected Autonomous Shuttle Buses (ASBs).
Figure 7. Display configurations and interior layout classifications for internal HMI in selected Autonomous Shuttle Buses (ASBs).
Figure 8. Display-based approach with one filter. Grey squares indicate redundant display locations.
Figure 9. The testing environment should include the three conflicting situations. The blue cube represents the interaction partner, and the gray arrow indicates the motion trajectory of the interaction object.
Figure 10. Procedure with measured parameters.
Figure 11. Virtual display of the testing scene inside the vehicle.
Figure 12. Schematic diagram of the specific scenarios and tasks of the entire test.
Figure 13. One possible scenario for the eHMI test is to check whether the participant wearing the HMD crosses the road from point A to point B based on the sign displayed by the SAV.
Figure 14. Comprehensive data collection approach: quantitative and qualitative data perspectives through diverse methodologies [47,74].
Table 1. Overview of the prevailing theoretical models on technology acceptance.

| Ref. | Theory Name | Description | Influencing Factors | Definition |
| --- | --- | --- | --- | --- |
| [23,24,25] | Technology Acceptance Model (TAM) | It is a widely accepted model in information systems and expands in driving environments to predict driver behavior, such as in-vehicle navigation, cruise control, and other assistance systems. | Perceived usefulness (PU) | The extent to which an individual perceives that utilizing a specific system would improve their job effectiveness. |
| | | | Perceived ease-of-use (PEOU) | The extent to which an individual perceives that using a specific system would require minimal effort. |
| | | | Attitude Toward Using | An individual evaluates the appeal of utilizing a particular information system application. |
| | | | Behavioral intention to use (BI) | An individual’s likelihood of engaging in certain behaviors. |
| [26,27,28,29] | Unified Theory of Acceptance and Use of Technology (UTAUT) | It aims to explain user intentions and behavior. This model is frequently used in transport studies from a technology acceptance standpoint. | Performance Expectancy (PE) | The extent to which an individual perceives that utilizing a system would contribute to improving job performance. |
| | | | Effort Expectancy (EE) | The connections between the effort exerted in the workplace, the performance attained, and the rewards garnered. |
| | | | Social Influence (SI) | The attitudes, beliefs, or behavior of an individual are influenced by the presence or actions of others. |
| | | | Facilitating Conditions (FC) | The extent to which an individual perceives the presence of organizational and technical infrastructure to provide support. |
| [30,31,32] | Car Technology Acceptance Model (CTAM) | It is a variation of the UTAUT that specifically targets in-car technology instead of overall car technologies. | Perceived Safety | The extent to which an individual perceives that the use of AVs will affect his or her well-being. |
| | | | Self-Efficacy | An individual’s belief in the capacity to produce specific performance attainments. |
| | | | Attitude Towards Using Technology | One’s positive or negative evaluation towards the introduction of new technologies. |
| [33,34] | Automation Acceptance Model (AAM) | It draws upon cognitive engineering perspectives and examines the dynamic and multi-level aspects of automation utilization, emphasizing its impact on attitudes. | Compatibility | The capacity for two systems to work together without having to be altered. |
| | | | External Variables | The degree of automation is proposed to impact perceptions of compatibility with the situation and context. |
| [35] | Autonomous Vehicle Acceptance Model (AVAM) | It is an adaptation of the UTAUT and CTAM for AV technologies. | AVAM consists of the same elements as CTAM. | |
| [36] | Model of Automated Vehicle Acceptance (MAVA) | It is a process-oriented model designed to predict the acceptance of autonomous vehicles. It comprises four stages, from individual exposure to AVs to final decision-making. | Service and vehicle characteristics | Availability, adaptability, travel time/speed/expenses, ease of use, comfort, charging duration, compatibility, dimensions, exterior and interior design, illumination, visual appeal, brand, etc. |
| | | | Hedonic motivation | The influence of pleasure and pain receptors on willingness towards a goal or away from a threat. |
| | | | Perceived benefits | Higher productivity; environmental benefits; increased mobility, independence, and freedom; no need for driver’s licenses; lower repair costs and insurance premiums; etc. |
| | | | Perceived risks | Legal liability; data privacy; traffic delays; loss interacting with Vulnerable Road Users (VRUs); lack of assistance for the disabled; ethical/social consequences, etc. |
| | | | Socio-demographics | Individual characteristics, such as age, gender, income, employment and living situation, level of education, etc. |
| | | | Travel behavior | Purpose or attitude of travel; mode or frequency of travel, distance, accidents, and medical assistance. |
| | | | Personality | Trust, technology savviness, sharing AV with strangers, etc. |
Table 2. Summarized parameters from the technology acceptance theoretical models in Table 1.

| Influencing Factors | Theory/Model Name (Abbr.) | Parameters |
| --- | --- | --- |
| Perceived usefulness (PU) | TAM; AAM | Usefulness |
| Performance Expectancy (PE) | UTAUT; CTAM; AVAM; MAVA | Usefulness |
| Perceived Safety | CTAM; AVAM; MAVA | Usefulness |
| Perceived ease-of-use (PEOU) | TAM; AAM | Efficiency |
| Self-Efficacy | CTAM; AVAM | Efficiency |
| Effort Expectancy (EE) | UTAUT; CTAM; AVAM; MAVA | Effectiveness |
| Attitude Toward Using | TAM; CTAM; AAM; AVAM | Satisfaction |
| Social Influence (SI) | UTAUT; CTAM; AVAM | Satisfaction |
| Hedonic motivation | MAVA | Satisfaction |
| Perceived benefits/risks | MAVA | Satisfaction |
| Perceived ease-of-use (PEOU) | TAM; AAM | Accessibility |
| Social Influence (SI) | UTAUT; CTAM; AVAM | Accessibility |
| Facilitating Conditions (FC) | UTAUT; CTAM; AVAM; MAVA | Accessibility |
| Service and vehicle characteristics | MAVA | Accessibility |
| Facilitating Conditions (FC) | UTAUT; CTAM; AVAM; MAVA | Learnability |
| Compatibility | AAM | Learnability |
| Service and vehicle characteristics | MAVA | Learnability |
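For readers who wish to operationalize this consolidation, the mapping in Table 2 can be held in a single lookup structure. The following is a minimal sketch in Python; the dictionary contents mirror Table 2, while the structure itself and the models_for helper are illustrative assumptions rather than part of the proposed protocol.

```python
# Minimal sketch: encoding the Table 2 consolidation as a lookup structure.
# Parameter and factor names mirror Table 2; the helper below is illustrative only.

ACCEPTANCE_PARAMETERS = {
    "Usefulness": {
        "Perceived usefulness (PU)": ["TAM", "AAM"],
        "Performance Expectancy (PE)": ["UTAUT", "CTAM", "AVAM", "MAVA"],
        "Perceived Safety": ["CTAM", "AVAM", "MAVA"],
    },
    "Efficiency": {
        "Perceived ease-of-use (PEOU)": ["TAM", "AAM"],
        "Self-Efficacy": ["CTAM", "AVAM"],
    },
    "Effectiveness": {
        "Effort Expectancy (EE)": ["UTAUT", "CTAM", "AVAM", "MAVA"],
    },
    "Satisfaction": {
        "Attitude Toward Using": ["TAM", "CTAM", "AAM", "AVAM"],
        "Social Influence (SI)": ["UTAUT", "CTAM", "AVAM"],
        "Hedonic motivation": ["MAVA"],
        "Perceived benefits/risks": ["MAVA"],
    },
    "Accessibility": {
        "Perceived ease-of-use (PEOU)": ["TAM", "AAM"],
        "Social Influence (SI)": ["UTAUT", "CTAM", "AVAM"],
        "Facilitating Conditions (FC)": ["UTAUT", "CTAM", "AVAM", "MAVA"],
        "Service and vehicle characteristics": ["MAVA"],
    },
    "Learnability": {
        "Facilitating Conditions (FC)": ["UTAUT", "CTAM", "AVAM", "MAVA"],
        "Compatibility": ["AAM"],
        "Service and vehicle characteristics": ["MAVA"],
    },
}

def models_for(parameter: str) -> set[str]:
    """Return the set of acceptance models that contribute factors to a parameter."""
    return {model for models in ACCEPTANCE_PARAMETERS[parameter].values() for model in models}

print(models_for("Learnability"))  # e.g., {'UTAUT', 'CTAM', 'AVAM', 'MAVA', 'AAM'}
```

Keeping the table in one structure makes it straightforward to trace each evaluated parameter back to the theoretical models from which it was derived.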
Table 3. Description and measured metrics from ergonomics dimensions of each influencing factor in Table 2.

| Ref. | Influencing Factors | Description and Measured Metrics | Ergonomics Dimensions |
| --- | --- | --- | --- |
| [43] | Perceived Usefulness | Quality/Efficiency/Control/Productivity/Performance/Completion/Effectiveness/Useful/Critical/Difficulty of Work | Postural analysis |
| [30] | Performance Expectancy | Manifestations of fatigue or distraction include fluctuations in concentration, drowsiness, tiredness, and responsiveness. | Mental load analysis |
| [43] | Perceived Safety | Safety impressions include drowsiness, fatigue, decreased performance and variability, and less adaptability and responsiveness. | Mental load analysis; Emotional analysis |
| [49] | Perceived ease-of-use | Assess the degree of physical and mental engagement necessary to complete a designated task. | Postural analysis |
| [50] | Self-Efficacy | The choice of activities, the degree of effort expended, and the persistence of effort. | Occlusion analysis |
| [26] | Effort Expectancy | It pertains to how users interact with the HMIs, which can be assessed through the body posture adopted. | Postural analysis; Mental load analysis |
| [51] | Attitude toward using | The interaction with the system makes the driving or riding environment attractive. | Emotional analysis |
| [23] | Social Influence | Public perception of autonomous driving. | Emotional analysis |
| [27] | Hedonic motivation | Mental workload assesses the extent of cognitive stress and tension experienced during task execution, while the simplicity of action evaluates the clarity, conciseness, compatibility, and controllability of the HMIs. | Mental load analysis; Emotional analysis |
| [52] | Perceived benefits/risks | The metrics of benefits-related position, such as product pleasantness and perceived reliability / the level of risks-related position. | Emotional analysis |
| [28] | Facilitating Conditions | Information availability: the information necessary for the specified task; Information quality: influences the learnability, clarity, and understanding of the perceived information. | Occlusion analysis; Touch and feel analysis |
| [36] | Service and vehicle characteristics | Visibility: be accessible; Accessibility pertains to being reachable from the relevant body part for manipulation; Sensorial feedback encompasses touch, hearing, and sight; Interaction support guides users’ actions in the appropriate operational sequence. | Occlusion analysis; Touch and feel analysis |
| [18] | Compatibility | Measure the simplicity of actions. Users can adapt to the system without many changes. | Mental load analysis |
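The factor-to-dimension assignments in Table 3 can likewise be encoded as a lookup, so that selecting a set of influencing factors for a study immediately yields the ergonomic analyses that must be instrumented. The sketch below assumes Python; FACTOR_TO_ANALYSES and required_analyses are hypothetical names introduced only for illustration.

```python
# Minimal sketch: Table 3 as a lookup from influencing factor to the ergonomic
# analyses it requires. Keys and values mirror Table 3; the helper is illustrative only.
FACTOR_TO_ANALYSES = {
    "Perceived Usefulness": {"Postural analysis"},
    "Performance Expectancy": {"Mental load analysis"},
    "Perceived Safety": {"Mental load analysis", "Emotional analysis"},
    "Perceived ease-of-use": {"Postural analysis"},
    "Self-Efficacy": {"Occlusion analysis"},
    "Effort Expectancy": {"Postural analysis", "Mental load analysis"},
    "Attitude toward using": {"Emotional analysis"},
    "Social Influence": {"Emotional analysis"},
    "Hedonic motivation": {"Mental load analysis", "Emotional analysis"},
    "Perceived benefits/risks": {"Emotional analysis"},
    "Facilitating Conditions": {"Occlusion analysis", "Touch and feel analysis"},
    "Service and vehicle characteristics": {"Occlusion analysis", "Touch and feel analysis"},
    "Compatibility": {"Mental load analysis"},
}

def required_analyses(factors):
    """Union of the ergonomic analyses needed to cover a chosen set of factors."""
    return set().union(*(FACTOR_TO_ANALYSES[f] for f in factors))

# A study focusing on safety and effort, for example, needs only three analysis set-ups.
print(required_analyses(["Perceived Safety", "Effort Expectancy"]))
```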
Table 4. Evaluation metrics for the six parameters of user acceptance dimensions and correlation with the ergonomic analysis.

| Parameters | Measured Ergonomics Dimensions | Users Response | Type of Data Collection | Specific Measurement Objects | Data Collection Methods |
| --- | --- | --- | --- | --- | --- |
| Usefulness | Postural analysis | Behavioral | Quantitative | Completion degree; Error rate; Time to complete each task | Automatic recording by equipment |
| Usefulness | Mental load analysis | Behavioral | Quantitative; Qualitative | Symbols to understand; The level of mental stress | Recorded during and questionnaire after the test |
| Usefulness | Emotional analysis | Cognitive | Quantitative; Qualitative | The number of operation errors and user tension (muscle fatigue, psychological stress, etc.) | Physiological signals and questionnaires during the experiment |
| Efficiency | Postural analysis | Behavioral | Quantitative | Requests explanation times; Error rate | Recorded by equipment and staff |
| Efficiency | Occlusion analysis | Behavioral | Quantitative | Decision times; Error rates | Recorded during the experiment |
| Effectiveness | Postural analysis | Behavioral | Quantitative | Sitting posture | Recorded during the experiment |
| Effectiveness | Mental load analysis | Behavioral | Quantitative; Qualitative | Muscle fatigue; Degree of perceived fatigue | Questionnaire |
| Satisfaction | Emotional analysis | Cognitive | Qualitative | Questionnaire; Interview; The NASA Task Load Index | After the experiment |
| Satisfaction | Mental load analysis | Behavioral | Qualitative | The number of misunderstandings; The attitudes that indicate fatigue or distraction | Questionnaire and interview after the experiment |
| Accessibility | Occlusion analysis; Touch and feel analysis | Behavioral; Cognitive | Quantitative | The frequency of the error in tasks; The time taken to finish a designated task; The step count in a task compared to the predetermined minimum | Automatic recording by equipment during the experiment |
| Accessibility | Postural analysis | Behavioral | Quantitative | Requests explanation times; Error rate | Recorded by equipment and staff |
| Accessibility | Emotional analysis | Cognitive | Qualitative | To understand public attitudes through questionnaires | Questionnaire before and after the experiment |
| Learnability | Occlusion analysis; Touch and feel analysis | Behavioral; Cognitive | Quantitative | The duration required to accomplish a designated task; The frequency of the error in tasks | Automatic recording by equipment during the experiment |
| Learnability | Mental load analysis | Behavioral | Qualitative | User’s subjective response | Questionnaire after the experiment |
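Table 4 is, in effect, a measurement plan, and it can be made machine readable so that test software knows which measures to log automatically during a trial and which to administer as questionnaires or interviews. The following minimal sketch assumes Python dataclasses; MeasurementItem, PLAN, and quantitative_items are illustrative names, and only two of the table rows are shown.

```python
# Minimal sketch of a machine-readable measurement plan derived from Table 4.
# Field names and the two sample entries mirror the table; MeasurementItem and
# the filtering helper are illustrative, not part of the published protocol.
from dataclasses import dataclass

@dataclass(frozen=True)
class MeasurementItem:
    parameter: str              # one of the six user-acceptance parameters
    ergonomics_dimension: str   # e.g., "Postural analysis"
    user_response: str          # "Behavioral" or "Cognitive"
    data_type: tuple[str, ...]  # "Quantitative" and/or "Qualitative"
    measures: tuple[str, ...]   # specific measurement objects
    collection: str             # data collection method

PLAN = [
    MeasurementItem("Usefulness", "Postural analysis", "Behavioral",
                    ("Quantitative",),
                    ("Completion degree", "Error rate", "Time to complete each task"),
                    "Automatic recording by equipment"),
    MeasurementItem("Satisfaction", "Emotional analysis", "Cognitive",
                    ("Qualitative",),
                    ("Questionnaire", "Interview", "NASA Task Load Index"),
                    "After the experiment"),
]

def quantitative_items(plan):
    """Select the items that produce quantitative data and can be logged automatically."""
    return [item for item in plan if "Quantitative" in item.data_type]

print([item.parameter for item in quantitative_items(PLAN)])  # ['Usefulness']
```

Splitting the plan by data type in this way mirrors the protocol's combination of objective recordings and subjective self-reports.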
Table 5. HMI information content and classification for Shared Autonomous Vehicles (SAV).

| Function | Content | Information | Use Case |
| --- | --- | --- | --- |
| Communication | Control and on-trip information | Route information (Schedule + upcoming stops) | iHMI |
| Communication | Control and on-trip information | Remaining time | iHMI |
| Communication | Control and on-trip information | Map (Position: Street name + Schedule) | iHMI |
| Communication | Communication bar | Emergency situation reporting (Input methods: voice, touch screen, physical buttons) | iHMI/eHMI |
| Communication | Vehicle status | Driving status (whether it will stop) | iHMI/eHMI |
| Communication | Vehicle status | Door status (open or close) | iHMI/eHMI |
| Notifications | State | Weather | iHMI |
| Notifications | State | Interior temperature | iHMI |
| Notifications | State | Sensorics (Obstacles) | iHMI/eHMI |
| Notifications | State | Battery | iHMI |
| Notifications | Details in progress | Shuttle No. | iHMI/eHMI |
| Notifications | Incoming notification | Arrival alerts, accidents (technical malfunction), etc. | iHMI/eHMI |
| Entertainment | Audio | Music | iHMI |
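Because every item in Table 5 is tagged with a use case (iHMI, eHMI, or both), the same classification can drive which content a prototype shows on each display. The sketch below assumes Python and a Flag enumeration; the Display enum, CONTENT_ROUTING map, and render_queue helper are illustrative only, not an implementation prescribed by the protocol.

```python
# Minimal sketch: routing the Table 5 information items to the internal and/or
# external display. The routing values mirror Table 5; names are illustrative.
from enum import Flag, auto

class Display(Flag):
    IHMI = auto()          # internal HMI (passengers)
    EHMI = auto()          # external HMI (other road users)
    BOTH = IHMI | EHMI

CONTENT_ROUTING = {
    "Route information": Display.IHMI,
    "Remaining time": Display.IHMI,
    "Map (position, street name, schedule)": Display.IHMI,
    "Emergency situation reporting": Display.BOTH,
    "Driving status": Display.BOTH,
    "Door status": Display.BOTH,
    "Weather": Display.IHMI,
    "Interior temperature": Display.IHMI,
    "Sensorics (obstacles)": Display.BOTH,
    "Battery": Display.IHMI,
    "Shuttle No.": Display.BOTH,
    "Incoming notification": Display.BOTH,
    "Music": Display.IHMI,
}

def render_queue(display: Display):
    """List the information items that a given display must be able to show."""
    return [item for item, target in CONTENT_ROUTING.items() if display & target]

print(render_queue(Display.EHMI))  # the six items classified as iHMI/eHMI in Table 5
```

Keeping a single routing table also makes it easy to check that the internal and external interfaces stay consistent when the information set evolves between design iterations.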
Table 6. Compilation of situation-specific factors, their detailed classification, and the application of different use cases.

| Situation-Specific Factors | Classification | Use Case |
| --- | --- | --- |
| Type of road | Highway; Rural; Urban | iHMI/eHMI |
| Right of way | SAV; Interaction partner; Undefined | eHMI |
| Traffic environment | On the road; Intersection; Parking | iHMI/eHMI |
| Type of interaction partner | Motorized; Non-motorized | eHMI |
| Speed at the beginning of the interaction | Speed of the SAV; Speed of interaction partner | eHMI |
| Visibility conditions | Normal; Bad | eHMI |
| Interior environment | One interaction partner; Multiple interaction partners | iHMI |
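When building test scenarios, the situation-specific factors of Table 6 can be captured in a single scenario record so that each trial is fully specified and reproducible across participants. The following minimal sketch assumes Python dataclasses; the Scenario class, its field names, the speed values, and the example crossing scenario are hypothetical and serve only to illustrate how the factors combine.

```python
# Minimal sketch: one test scenario described with the situation-specific
# factors of Table 6. Field names follow the table; the class and the example
# instance are illustrative only.
from dataclasses import dataclass
from typing import Literal

@dataclass
class Scenario:
    road_type: Literal["Highway", "Rural", "Urban"]
    right_of_way: Literal["SAV", "Interaction partner", "Undefined"]
    traffic_environment: Literal["On the road", "Intersection", "Parking"]
    partner_type: Literal["Motorized", "Non-motorized"]
    sav_speed_kmh: float            # speed of the SAV at the start of the interaction
    partner_speed_kmh: float        # speed of the interaction partner at the start
    visibility: Literal["Normal", "Bad"]
    passengers_on_board: int        # interior environment: one vs. multiple interaction partners

    def hmi_under_test(self) -> set[str]:
        """eHMI cases involve the outside interaction partner; iHMI cases involve passengers."""
        cases = {"eHMI"}            # every conflict scenario exposes the external HMI
        if self.passengers_on_board >= 1:
            cases.add("iHMI")
        return cases

# One possible crossing scenario: undefined right of way at an urban intersection.
crossing = Scenario("Urban", "Undefined", "Intersection", "Non-motorized",
                    15.0, 5.0, "Normal", passengers_on_board=1)
print(crossing.hmi_under_test())
```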
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
