Abstract
Designing trust-based in-car interfaces is critical for the adoption of self-driving cars. Indeed, recent studies revealed that a vast majority of drivers are not willing to trust this technology.
Although previous research showed that visually embodying a robot can have a positive impact on the interaction with a user, the influence of this visual representation on user trust is less understood.
In this study, we assessed the trustworthiness of different models of visual embodiment (abstract, human, animal, mechanical, etc.) using a survey and a trust scale. To this end, we considered a virtual assistant designed to support trust in automated driving, particularly in critical situations. This assistant's role is to take full control of the driving task whenever the driver activates the self-driving mode and to provide a trustworthy experience.
We first selected a range of visual embodiment models based on a design space for robot visual embodiment, along with visual representations for each of these models. We then used a card sorting procedure (19 participants) to select the most representative visuals for each model. Finally, we conducted a survey (146 participants) to evaluate the impact of the selected visual embodiment models on user trust and user preferences.
With our results, we attempt to answer the question of which visual embodiment best instills trust in a virtual agent's capacity to handle critical driving situations. We present possible guidelines for real-world implementation and discuss further directions for a more ecological evaluation.
1 Introduction
In the past decade, we have witnessed the birth of a new kind of technology: virtual assistants. They provide a new way of interacting with increasingly complex machines and are meant to support users in their daily activities. In the context of automated driving, the virtual assistant is the spokesperson of the automated car. It acts as a mediator between the car and the driver and embodies the car's intelligence [1]. Among the concepts presented by car manufacturers, we find invisible characters present only through their voice (YUI from Toyota) [2], cartoon characters displayed on a screen (Hana concept from Honda), and abstract representations (Dragon Drive concept from Nuance). Scientific publications supporting these design choices are lacking, especially with regard to trust, since several studies [9, 19, 20, 21, 22] have shown that a large majority of people do not trust highly automated cars.
In our study, we assess the trustworthiness of various visual embodiment models for a virtual assistant. We first needed to select a range of representative images for our models; for this purpose, we conducted a picture sorting procedure. Once our visual representation sample was defined, we ran an online survey to assess trustworthiness.
2 Related Work
2.1 Visual Embodiment and Trust
Researchers have been investigating the visual representation of virtual assistants from varying perspectives. Some focus on the question of the importance of a visual embodiment [14, 15, 17]. Their results showed that visual embodiment is important for a pleasant interaction, especially when the user's visual attention is not required; however, the realism of the embodiment might be of little importance. Others focus on the advantages of designing a humanlike face for a virtual assistant [13, 16, 18], as well as the most important features to implement on such a face. For example, DiSalvo et al. [11] found that, to project a high level of humanness, a robot face should have a mouth, a nose, and eyelids. In [10], the authors found that the faces ranked as least friendly (without pupils or mouth, with eyelids) were also the ones ranked as least trustworthy. Similarly, Li et al. [3] showed that a robot's visual embodiment has an impact on likeability, and they found a significant correlation between likeability and trust in the robots.
2.2 Design Space for Virtual Assistants
It is worth noting that all the research presented above focuses on the humanlike visual embodiment. However, the design space is much larger, and there are many other models of visual embodiment to choose from. In [5], Haake and Gulz suggested a three-dimensional design space for visual embodiment comprising basic model, graphical style and physical properties. The basic model refers to the constitution of the visual embodiment, which can follow the form of a human, an animal, a fantasy concept, an inanimate object or a combination of these. The graphical style, which can be naturalistic or stylized, refers to the degree of detail used in the visual design. Considering this design space, the possibilities are countless, and the question of which embodiment is most trustworthy, depending on the role of the virtual assistant, remains open. In our study, we focus on the basic model dimension for a virtual assistant in a highly automated car. We hypothesized that user trust in the automated system would differ depending on the assistant's visual embodiment model; hence, some visual embodiment models might instill higher levels of trust than others.
3 Research Methodology
Our study articulates two procedures. The first is a picture sorting procedure meant to help us select the images that best represented each of the predefined virtual assistant models. We then incorporated these selected visuals into our survey.
3.1 Picture Sorting Procedure
Picture sorting [6], one of the many card sorting techniques, is used to study users' mental models and how they categorize different types of images.
Nineteen participants [4] (11 male, 8 female, mean age = 28.10), recruited from IRT SystemX and from a student residence (Paris, France), took part in the card sorting procedure.
We collected 82 pictures from the website Pinterest based on ten predefined models: “Human Naturalistic”, “Human Stylized”, “Animal Naturalistic”, “Animal Stylized”, “Human Mechanical”, “Animal Mechanical”, “Mini Mechanical”, “Abstract”, “Inanimate” and “Fantasy”. These models were formed based on the first two dimensions of the design space proposed in [5].
All model labels were in French during the experiment (translated here). Many other models could be defined, but for this experiment we chose to focus on these ten.
The pictures were printed and placed on a large table. The model labels were placed on a table next to the pictures (see setting in Fig. 1 and Fig. 2).
First, we read to the participants the definition of every label and answered their questions to make sure they understood what each label meant. Then we asked them to sort the pictures by label, following two rules:
- A picture can be placed in more than one group. In that case, the participant places the picture in one group and uses an additional post-it to specify the other group(s) in which the image could also be classified.
- If a picture cannot be placed in any of the predefined groups, the participant can put it in a separate “Non-categorized” group.
At the end of the process, we asked the participants to explain their sorting, especially for the pictures that were not sorted. We then asked for their age and professional background before they left.
3.2 Picture Sorting Results
Each participant's sorting results were saved in an Excel file and analyzed using the spread rate of each image across the predefined groups [7]. Two criteria were used for selection: a picture is representative if it has been placed in the same group by at least 70% of participants, and a model is selected if it has at least four representative pictures.
Using these criteria, we were able to select five of the predefined models and, for each of them, four pictures (Fig. 3).
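As an illustration, the following is a minimal Python sketch of this selection logic, assuming the placements have already been tallied per picture; the picture names, counts and group labels below are hypothetical and do not reproduce the study's actual data.

```python
from collections import Counter

# Hypothetical sorting data: picture -> list of group labels assigned by participants.
# (Names and counts are illustrative placeholders, not the actual study data.)
sorts = {
    "robot_face_01": ["Human Mechanical"] * 16 + ["Abstract"] * 3,
    "dog_photo_02":  ["Animal Naturalistic"] * 18 + ["Fantasy"] * 1,
    # ... one entry per printed picture
}

N_PARTICIPANTS = 19
AGREEMENT_THRESHOLD = 0.70   # a picture is representative at >= 70% agreement
MIN_PICTURES_PER_MODEL = 4   # a model is kept if it has >= 4 representative pictures

representative = {}          # model -> list of representative pictures
for picture, labels in sorts.items():
    group, count = Counter(labels).most_common(1)[0]   # most frequent group
    if count / N_PARTICIPANTS >= AGREEMENT_THRESHOLD:
        representative.setdefault(group, []).append(picture)

selected_models = {model: pics for model, pics in representative.items()
                   if len(pics) >= MIN_PICTURES_PER_MODEL}
print(selected_models)
```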
3.3 Survey Procedure
Participants were invited to this online study via a link shared on social media (Facebook, Whaller) and on various mailing lists (universities, student associations and professional associations). 146 participants (88 female, mean age = 36.92, SD = 14.04, range 19 to 72) completed the online survey. 124 participants (82.87%) had a driver's license.
When they clicked on the link, they were forwarded to a version of LimeSurvey (hosted at the Université de Poitiers), the survey tool we used for this study.
On the first page, participants read general information about the study procedure and gave their informed consent and agreement to data processing. The survey consisted of five parts:
(1) An introduction with questions on driving habits and virtual assistant usage;

(2) A short text instructing participants to imagine sitting in a highly automated car with a virtual assistant handling the driving task when the automated mode is activated. We asked a few questions on participants' behavior in manual mode and once the automated mode is activated;

(3) Participants are then instructed to imagine a critical driving situation in automated mode, with the virtual assistant in charge of the driving task (a lane-change manoeuvre in busy traffic to give way to an ambulance coming from behind). They are presented with each of the five visual embodiment models selected in the picture sorting procedure (one model at a time, each model represented by four different images; see the example of the Abstract model in Fig. 4). For each model, they are asked to fill in a 16-item questionnaire (each item rated on a scale from 0 to 10) assessing perceived anthropomorphism, liking and self-reported trust towards the model. We used a translated version of the questionnaire developed and used in a simulator study by Waytz et al. (2014) [8]. The order in which the models were presented was automatically randomized by LimeSurvey;

(4) Participants choose the best and the worst assistant among the ones presented in part (3);

(5) Participants are asked to fill in their personal information: age, gender, education level, country of residence, and professional activity.
3.4 Preliminary Survey Results
Nine items of the questionnaire assessed self-reported trust towards each model. These items were averaged to form a single composite score (Cronbach's α = 0.97), namely the trust score.
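As an illustration of how such a composite and its reliability can be computed, here is a minimal Python sketch; the simulated ratings matrix and variable names are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) rating matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: 146 participants x 9 trust items, each rated 0-10.
rng = np.random.default_rng(seed=42)
ratings = rng.integers(0, 11, size=(146, 9)).astype(float)

alpha = cronbach_alpha(ratings)        # reliability of the 9-item trust scale
trust_score = ratings.mean(axis=1)     # composite trust score per participant
print(f"alpha = {alpha:.2f}, mean trust score = {trust_score.mean():.2f}")
```

With the actual survey responses in place of the random matrix, this computation yields the reported reliability and the per-participant trust scores used in the analyses below.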
Our results (Table 1) based on this trust score show that the Mechanical Human category, followed by the Human and Abstract ones, received the highest trust scores. Conversely, the Animal and Mechanical Animal categories appear to be the least appropriate to elicit trust in a virtual assistant in an autonomous car (Table 2).
Figure 5 shows, on one hand, the distribution of trust scores in our study and, on the other hand, how many times each model was rated at each particular score. First, the trust score distribution shows frequent scoring between five (the middle of the scale) and eight. This seems to indicate that participants had a relatively positive attitude towards trusting the proposed models (the median is generally above five, except for the Animal model). Very high trust scores (nine or ten) were rather rare, and a non-negligible part of our results shows that a lack of trust may also occur (scores between zero and four). Indeed, looking at model categories, our findings point out that the Animal and Mechanical-Animal categories are more represented among low trust scores. Conversely, the Human, Mechanical-Human and Abstract categories are the most represented among higher trust scores (third quartile above 7).
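A short sketch of the per-model descriptive statistics underlying this reading (medians and quartiles) is given below, computed over placeholder data rather than the real scores reported in Fig. 5.

```python
import numpy as np

# Placeholder trust scores (0-10) per embodiment model; the actual per-model
# distributions are those summarized in Fig. 5 and are not reproduced here.
rng = np.random.default_rng(seed=7)
scores_by_model = {
    "Human":             rng.normal(6.5, 2.0, 146).clip(0, 10),
    "Mechanical-Human":  rng.normal(6.8, 2.0, 146).clip(0, 10),
    "Abstract":          rng.normal(6.3, 2.0, 146).clip(0, 10),
    "Animal":            rng.normal(4.8, 2.2, 146).clip(0, 10),
    "Mechanical-Animal": rng.normal(5.0, 2.2, 146).clip(0, 10),
}

# Median and quartiles per model, as used to compare categories.
for model, scores in scores_by_model.items():
    q1, median, q3 = np.percentile(scores, [25, 50, 75])
    print(f"{model:18s} median={median:4.1f}  Q1={q1:4.1f}  Q3={q3:4.1f}")
```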
4 General Discussion
The objective of this study was to investigate the impact of a virtual assistant's visual embodiment model on user trust in a highly automated car. We measured trust through an online trust questionnaire assessing a range of visual embodiment models selected in a prior picture sorting procedure.
Our preliminary results point to the Mechanical Human model, followed by the Human and Abstract models, as the most suitable embodiments for representing a virtual assistant in an autonomous driving context, while the Animal and Mechanical Animal models should be avoided.
Of course, by its methodology, this study cannot answer all the questions that may be raised about virtual assistant assessment. In particular, online studies may introduce many uncontrolled factors (screen size, participants' reading and understanding, completing the survey alone or with someone's help). Furthermore, participants had to rely on static pictures, which might have influenced their answers; seeing a virtual assistant in motion might be very different. For better ecological validity, future studies might replicate this experiment in a controlled environment such as a driving simulator, or even in a real car with a real-life automated driving experience.
Despite these limitations, this study allowed us to demonstrate differences between visual embodiment models with regard to trust. Further analysis will be performed on our dataset, first to identify the impact of the different models on the anthropomorphism and liking measures, and also to examine correlations between these three measures. We will also investigate the hypothesis of potential user profiles related to the preference for specific visual embodiments.
References
Nuance: Automotive Assistants, Anthropomorphism and autonomous vehicles (2017). http://engage.nuance.com/wp-autonomous-driving
Okamoto, S., Sano, S.: Anthropomorphic AI agent mediated multimodal interactions in vehicles, pp. 110–114 (2017). https://doi.org/10.1145/3131726.3131736
Li, D., Rau, P.-L.P., Li, Y.: A cross-cultural study: effect of robot appearance and task. Int. J. Soc. Robot. 2, 175–186 (2010). https://doi.org/10.1007/s12369-010-0056-9
Nielsen, J.: Card Sorting: How Many Users to Test. Alertbox Column (2004). http://www.useit.com/alertbox/20040719.html
Haake, M., Gulz, A.: A look at the roles of look & roles in embodied pedagogical agents – a user preference perspective. Int. J. Artif. Intell. Educ. 19(1), 39–71 (2009)
Lobinger, K., Brantner, C.: Picture-sorting techniques: card sorting and Q-sort as alternative and complementary approaches in visual social research. In: Pauwels, L., Mannay, D. (eds.) The Sage Handbook of Visual Research Methods, 2nd Revised and Expanded Edition, pp. 309–321. Sage, London (2020)
Paul, C.L.: Analyzing card-sorting data using graph visualization. J. Usab. Stud. 9(3), 87–104 (2014)
Waytz, A., Heafner, J., Epley, N.: The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 52, 113–117 (2014). https://doi.org/10.1016/j.jesp.2014.01.005
Lazányi, K., Maráczi, G.: Dispositional trust—Do we trust autonomous cars? In: 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, pp. 000135–000140 (2017)
Kalegina, A., Schroeder, G., Allchin, A., Berlin, K., Cakmak, M.: Characterizing the design space of rendered robot faces. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI 2018, pp. 96–104 (2018). https://doi.org/10.1145/3171221.3171286
Disalvo, C., Gemperle, F., Forlizzi, J., Kiesler, S.: All robots are not created equal: the design and perception of humanoid robot heads. In: Proceedings 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, pp. 321–326 (2002). https://doi.org/10.1145/778712.778756
Klamer, T., Allouch, S.: Acceptance and use of a zoomorphic robot in a domestic setting. In: Proceedings of EMCSR, pp. 553–558 (2010)
Edsinger, A.: Designing a humanoid robot face to fulfill a social contract (2000)
Reinhardt, J., Hillen, L., Wolf, K.: Embedding conversational agents into AR: invisible or with a realistic human body? In: Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI 2020), pp. 299–310. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3374920.3374956
Yee, N., Bailenson, J.N., Rickertsen, K.: A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In: Proceedings of CHI 2007, San Jose, CA, USA, pp. 1–10 (2007)
Blow, M., Dautenhahn, K., Appleby, A., Nehaniv, C., Lee, D.: The art of designing robot faces: dimensions for human-robot interaction. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI 2006), pp. 331–332 (2006). https://doi.org/10.1145/1121241.1121301
Kim, K., Bölling, L., Haesler, S., Bailenson, J., Bruder, G., Welch, G.: Does a digital assistant need a body? The influence of visual embodiment and social behavior on the perception of intelligent virtual agents in AR. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2018), pp. 105–114 (2018). https://doi.org/10.1109/ISMAR.2018.00039
Breazeal, C.L.: Designing Sociable Robots. MIT Press, Cambridge (2002)
AAA: Americans feel unsafe sharing the road with fully self-driving cars (2017). http://newsroom.aaa.com/2017/03/americans-feel-unsafe-sharing-road-fully-self-driving-cars/
Hooft van Huysduynen, H., Terken, J., Eggen, B.: Why disable the autopilot? In: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2018), pp. 247–257 (2018). https://doi.org/10.1145/3239060.3239063
AAA: Americans fear self-driving cars, survey finds (2019). https://newsroom.aaa.com/2019/03/americans-fear-self-driving-cars-survey/
Fraedrich, E., Cyganski, R., Wolf, I., Lenz, B.: User perspectives on autonomous driving: a use-case-driven study in Germany. Arbeitsberichte des Geographischen Instituts der Humboldt-Universität zu Berlin, Heft 187 (2016)