Cooperative Intelligence in Automated Driving-2nd Edition

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (20 October 2024) | Viewed by 9115

Special Issue Editors


Guest Editor
Human-Computer Interaction Group, Technische Hochschule Ingolstadt, 85049 Ingolstadt, Germany
Interests: user experience design; automotive user interfaces; human-computer interaction; intelligent user interfaces; AR/VR applications

Guest Editor
Department of Industrial and Systems Engineering; Department of Computer Science (by courtesy), Virginia Tech, Blacksburg, VA 24061, USA
Interests: auditory displays; affective computing; automotive user interfaces; assistive robotics; aesthetic computing

Guest Editor
Centre for Accident Research and Road Safety-Queensland (CARRS-Q), Queensland University of Technology, Brisbane, QLD 4000, Australia
Interests: automotive user interfaces; autonomous driving; intelligent transport systems; road safety; games; augmented reality; user experience research

Special Issue Information

Dear Colleagues,

Our previous Special Issue on “Cooperative Intelligence in Automated Driving” (https://www.mdpi.com/journal/mti/special_issues/Driving), published in a 2022 volume of MTI, was a great success, and we were able to publish several research and review articles in it.

Since then, you may have completed new work that fits this topic, and we want to give you the space to present it to a broad audience. That is why we decided to launch a follow-up Special Issue on "Cooperative Intelligence in Automated Driving" for the year 2023.

Although the research field of automated driving has experienced a major surge of development in recent years, major challenges remain unsolved. While the legal framework for the operation of automated vehicles has been established in most areas, a high level of usability and a good user experience still require new and innovative solutions if automated vehicles are to win widespread public acceptance and, in turn, widespread adoption. The UX/UI and human factors community faces the challenge of exploring human-centric solutions, with the goal of making automated driving a successful reality.

With this Special Issue and its scientific research papers, we would like to highlight research problems related to human interactions with automated vehicles and automated driving, drawing on fields such as human–computer interaction, human factors, and interaction design. We want to show how good system design, well-defined interfaces, aligned UI design principles, evaluation methods, and user experience metrics can help to engage the user and thus create enticing and successful automated driving products.

We encourage researchers and practitioners from academia and industry to submit novel (unpublished, according to the journal specifications/regulations) contributions. We are soliciting original research contributions on the following topics of interest:

  • HCXAI for automotive applications: e.g., “black boxes” representing artificial intelligence are starting to make safety-critical driving decisions, but what needs to be explained to the users to understand and trust these decisions (e.g., transparent displays), and how and when should this be done?
  • Engagement, situation/mode awareness: e.g., as drivers are free from the driving task, what level of engagement in the driving task is still required? What level of situation/mode awareness is needed? How can this be maintained, measured, etc.?
  • Trust in future mobility.
  • Design for marginal groups: e.g., how can we ensure marginal groups, such as users with disabilities, have user-friendly access to novel mobility technologies without being marginalized and/or patronized?
  • In-vehicle intelligent agents: what novel functions, modalities, and interactions with intelligent agents meet end-user needs and wants in the automated driving context?
  • Emotional experiences and well-being in automated driving.
  • Augmented perception and cognition (HUDs, ambient displays, sonification, olfactory displays, etc.).
  • Forms of cooperation:
    1. AVs cooperating with other AVs;
    2. AVs cooperating (communicating) with external humans (VRUs/other drivers);
    3. AVs cooperating with their users (driver/passengers).
  • Novel methods/tools, in particular, those focusing on human interactions with AVs.

Facts & Figures:

  • Abstract/title submission deadline: 17 November 2023 (optional)
  • Manuscript deadline: 20 December 2023

All deadlines are AoE (Anywhere on Earth) for the date shown.

Prof. Dr. Andreas Riener
Dr. Myounghoon Jeon (Philart)
Dr. Ronald Schroeter
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

22 pages, 7210 KiB  
Article
Unlocking Trust and Acceptance in Tomorrow’s Ride: How In-Vehicle Intelligent Agents Redefine SAE Level 5 Autonomy
by Cansu Demir, Alexander Meschtscherjakov and Magdalena Gärtner
Multimodal Technol. Interact. 2024, 8(12), 111; https://doi.org/10.3390/mti8120111 - 17 Dec 2024
Viewed by 414
Abstract
As fully automated vehicles (FAVs) advance towards SAE Level 5 automation, the role of in-vehicle intelligent agents (IVIAs) in shaping passenger experience becomes critical. Even at SAE Level 5 automation, effective communication between the vehicle and the passenger will remain crucial to ensure a sense of safety, trust, and engagement. This study explores how different types and combinations of information provided by IVIAs influence user experience, acceptance, and trust. A sample of 25 participants was recruited for the study, which experienced a fully automated ride in a driving simulator, interacting with Iris, an IVIA designed for voice-only communication. The study utilized both qualitative and quantitative methods to assess participants’ perceptions. Findings indicate that critical and vehicle-status-related information had the highest positive impact on trust and acceptance, while personalized information, though valued, raised privacy concerns. Participants showed high engagement with non-driving-related activities, reflecting a high level of trust in the FAV’s performance. Interaction with the anthropomorphic IVIA was generally well received, but concerns about system transparency and information overload were noted. The study concludes that IVIAs play a crucial role in fostering passenger trust in FAVs, with implications for future design enhancements that emphasize emotional intelligence, personalization, and transparency. These findings contribute to the ongoing development of IVIAs and the broader adoption of automated driving technologies.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)
Figures

  • Figure 1: Participant reading and playing games while driving in FAV.
  • Figure 2: Rating overview of all IVIA-generated information.
  • Figure 3: Situational Trust Scale for Automated Driving (n = 25).
  • Figure 4: Car Technology Acceptance Model (n = 25).
  • Figure 5: Subjective Assessment of Speech System Interfaces (n = 25).
  • Figure 6: UEQ with trust, novelty, and perspicuity scales (n = 25).
  • Figure 7: Ratings of information types.
  • Figure 8: Ratings of information type sub-categories.
  • Figure A1: Driving simulation route with event descriptions.
35 pages, 5660 KiB  
Article
“Warning!” Benefits and Pitfalls of Anthropomorphising Autonomous Vehicle Informational Assistants in the Case of an Accident
by Christopher D. Wallbridge, Qiyuan Zhang, Victoria Marcinkiewicz, Louise Bowen, Theodor Kozlowski, Dylan M. Jones and Phillip L. Morgan
Multimodal Technol. Interact. 2024, 8(12), 110; https://doi.org/10.3390/mti8120110 - 5 Dec 2024
Viewed by 531
Abstract
Despite the increasing sophistication of autonomous vehicles (AVs) and promises of increased safety, accidents will occur. These will corrode public trust and negatively impact user acceptance, adoption and continued use. It is imperative to explore methods that can potentially reduce this impact. The aim of the current paper is to investigate the efficacy of informational assistants (IAs) varying by anthropomorphism (humanoid robot vs. no robot) and dialogue style (conversational vs. informational) on trust in and blame on a highly autonomous vehicle in the event of an accident. The accident scenario involved a pedestrian violating the Highway Code by stepping out in front of a parked bus and the AV not being able to stop in time during an overtake manoeuvre. The humanoid (Nao) robot IA did not improve trust (across three measures) or reduce blame on the AV in Experiment 1, although communicated intentions and actions were perceived by some as being assertive and risky. Reducing assertiveness in Experiment 2 resulted in higher trust (on one measure) in the robot condition, especially with the conversational dialogue style. However, there were again no effects on blame. In Experiment 3, participants had multiple experiences of the AV negotiating parked buses without negative outcomes. Trust significantly increased across each event, although it plummeted following the accident with no differences due to anthropomorphism or dialogue style. The perceived capabilities of the AV and IA before the critical accident event may have had a counterintuitive effect. Overall, evidence was found for a few benefits and many pitfalls of anthropomorphising an AV with a humanoid robot IA in the event of an accident situation.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)
Figures

  • Figure 1: Screenshot of one of the videos created using the SSGA method for Experiment 1. In this physical-embodiment agent condition, the Nao robot was always positioned at the bottom left of the video image. The view of the robot is from the right-hand seat perspective, with the robot positioned on the dashboard on the passenger side. In this example (a speech condition), the robot turns to face the passenger as it speaks and faces the road ahead at all other times.
  • Figure 2: Mean ratings of trust (single measure) across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 3: TiAS and STS-AD mean ratings across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 4: Mean levels of blame on the AV and the pedestrian across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 5: Mean ratings of trust (single measure) across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 6: TiAS and STS-AD mean ratings across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 7: Mean levels of blame on the AV and the pedestrian across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 8: Mean ratings of perceived riskiness across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 9: Event sequence in Experiment 3.
  • Figure 10: Mean TiAS across agent embodiment and dialogue conditions after Events 1–5 ((a) Voice Only, (b) Robot). Error bars are ±SE.
  • Figure 11: Mean single-measure and STS-AD trust ratings across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 12: Mean levels of blame on the AV and the pedestrian across agent embodiment and dialogue conditions. Error bars are ±SE.
  • Figure 13: Mean ratings of RoSAS competence across agent embodiment and dialogue conditions after Events 1–5 ((a) Voice Only, (b) Robot). Error bars are ±SE.
  • Figure 14: Mean ratings of RoSAS warmth and discomfort across agent embodiment and dialogue conditions after Event 5. Error bars are ±SE.
  • Figure 15: Mean ratings of perceived riskiness across agent embodiment and dialogue conditions after Events 1–5 ((a) Speech Only, (b) Robot). Error bars are ±SE.
20 pages, 1121 KiB  
Article
Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System
by Setareh Zafari, Jesse de Pagter, Guglielmo Papagni, Alischa Rosenstein, Michael Filzmoser and Sabine T. Koeszegi
Multimodal Technol. Interact. 2024, 8(3), 20; https://doi.org/10.3390/mti8030020 - 1 Mar 2024
Viewed by 2259
Abstract
This article reports on a longitudinal experiment in which the influence of an assistive system’s malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system’s personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2 × 2 mixed design, the system’s malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measure variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)
Figures

  • Figure 1: Kinesthetic learning style. The kinesthetic learning style consists of highlighted portions of the original full text.
  • Figure 2: Auditory learning style. The auditory learning style consists of a textual summary of the original content and an additional auditory reading of the summary.
  • Figure 3: Reading/writing learning style. The reading/writing learning style consists of a series of bullet points containing key chunks of the original full text, adapted from the summary.
  • Figure 4: Visual learning style. The visual learning style consists of a graphic rendering of the key points of the original full text.
  • Figure 5: Malfunction explanation.
  • Figure 6: Trust development in each group.
30 pages, 541 KiB  
Article
How to Design Human-Vehicle Cooperation for Automated Driving: A Review of Use Cases, Concepts, and Interfaces
by Jakob Peintner, Bengt Escher, Henrik Detjen, Carina Manger and Andreas Riener
Multimodal Technol. Interact. 2024, 8(3), 16; https://doi.org/10.3390/mti8030016 - 26 Feb 2024
Cited by 2 | Viewed by 2480
Abstract
Currently, a significant gap exists between academic and industrial research in automated driving development. Despite this, there is common sense that cooperative control approaches in automated vehicles will surpass the previously favored takeover paradigm in most driving situations due to enhanced driving performance and user experience. Yet, the application of these concepts in real driving situations remains unclear, and a holistic approach to driving cooperation is missing. Existing research has primarily focused on testing specific interaction scenarios and implementations. To address this gap and offer a contemporary perspective on designing human–vehicle cooperation in automated driving, we have developed a three-part taxonomy with the help of an extensive literature review. The taxonomy broadens the notion of driving cooperation towards a holistic and application-oriented view by encompassing (1) the “Cooperation Use Case”, (2) the “Cooperation Frame”, and (3) the “Human–Machine Interface”. We validate the taxonomy by categorizing related literature and providing a detailed analysis of an exemplar paper. The proposed taxonomy offers designers and researchers a concise overview of the current state of driver cooperation and insights for future work. Further, the taxonomy can guide automotive HMI designers in ideation, communication, comparison, and reflection of cooperative driving interfaces.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)
Figures

  • Figure 1: Top-level view of our proposed 3-step taxonomy. For designing cooperation, we start by identifying (1) the cooperation use case, then (2) deduce a suitable cooperation strategy in the given frame, and (3) design the HMI based on that frame.
  • Figure 2: Overview of the PRISMA process with the number of reports included and excluded at each step.
  • Figure 3: Detailed view of the Cooperation Use Case, mainly defined by the agents, the motivation for the cooperation, and the scenario, categorized by its criticality and urgency.
  • Figure 4: Detailed view of the Cooperation Frame, defined by the Cooperation Dynamics (determinism) and the Task Agent Residuals, which are split into the task level and the exact step of the driving loop at which the cooperative task is located.
  • Figure 5: Detailed view of the HMI taxonomy, which represents the HMI part of the cooperation. The input and output channels define the interaction principles of the HMI, which in turn inspire the metaphors used for the cooperation concept.
  • Figure A1: Cooperative Driving Design Guide. The guide can be used in design sessions to facilitate ideation and communication about cooperative driving.
14 pages, 2306 KiB  
Article
Are Drivers Allowed to Sleep? Sleep Inertia Effects Drivers’ Performance after Different Sleep Durations in Automated Driving
by Doreen Schwarze, Frederik Diederichs, Lukas Weiser, Harald Widlroither, Rolf Verhoeven and Matthias Rötting
Multimodal Technol. Interact. 2023, 7(6), 62; https://doi.org/10.3390/mti7060062 - 16 Jun 2023
Cited by 3 | Viewed by 2211
Abstract
Higher levels of automated driving may offer the possibility to sleep in the driver’s seat in the car, and it is foreseeable that drivers will voluntarily or involuntarily fall asleep when they do not need to drive. Post-sleep performance impairments due to sleep inertia, a brief period of impaired cognitive performance after waking up, is a potential safety issue when drivers need to take over and drive manually. The present study assessed whether sleep inertia has an effect on driving and cognitive performance after different sleep durations. A driving simulator study with n = 13 participants was conducted. Driving and cognitive performance were analyzed after waking up from a 10–20 min sleep, a 30–60 min sleep, and after resting without sleep. The study’s results indicate that a short sleep duration does not reliably prevent sleep inertia. After the 10–20 min sleep, cognitive performance upon waking up was decreased, but the sleep inertia impairment faded within 15 min. Although the driving parameters showed no significant difference between the conditions, participants subjectively felt more tired after both sleep durations compared to resting. The small sample size of 13 participants, tested in a within-design, may have prevented medium and small effects from becoming significant. In our study, take-over was offered without time pressure, and take-over times ranged from 3.15 min to 4.09 min after the alarm bell, with a mean value of 3.56 min in both sleeping conditions. The results suggest that daytime naps without previous sleep deprivation result in mild and short-term impairments. Further research is recommended to understand the severity of impairments caused by different intensities of sleep inertia.
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)
Figures

  • Figure 1: Changes in the adverse and beneficial effects of brief (e.g., 10 min), short (e.g., 30 min), and long (e.g., 1 h) naps after waking from sleep. The figure was adapted with permission from Ref. [18]. 2022, Lovato, N.; Lack, L.
  • Figure 2: Immersive driving simulator at the Fraunhofer IAO (a) and a driver sleeping in a traffic jam while the SAE Level 4 ADS was activated (b).
  • Figure 3: Sequences of the test drive with their duration, automation status, and performance parameters.
  • Figure 4: (a) Mean speed of drivers and (b) its standard deviation after L4 time, split by the sleep conditions. Both variables showed no effects of sleep duration.
  • Figure 5: (a) Participants’ lane-keeping ability and (b) their average number of zero-crossings on the steering wheel per kilometer after L4 time, split by the sleep conditions. No significant effects of sleep duration were shown.
  • Figure 6: The average reaction time split by the sleep conditions. Brake reaction time in the L4 20 min sleep condition was significantly slower than in the L4 60 min sleep condition.
  • Figure 7: (a) The participants’ attempted sums in two minutes and (b) their percentage of correct answers in the addition test, split by sleep conditions and time points. Cognitive speed is impaired immediately after waking from a 10–20 min nap. No significant effect of sleep duration was shown for the accuracy measure.