research-article
Open access

When Do People Want an Explanation from a Robot?

Published: 11 March 2024

Abstract

Explanations are a critical topic in AI and robotics, and their importance in generating trust and enabling successful human-robot interactions has been widely recognized. However, it is still an open question when and in what interaction contexts users most want an explanation from a robot. In our pre-registered study with 186 participants, we set out to identify a set of scenarios in which users show a strong need for explanations. Participants are shown 16 videos portraying seven distinct situation types, ranging from successful human-robot interactions to robot errors and robot inabilities. Afterwards, they are asked to indicate if and how they wish the robot to communicate subsequent to the interaction in the video. The results provide a set of interactions, grounded in the literature and verified empirically, in which people show the need for an explanation. Moreover, we can rank these scenarios by how strongly users think an explanation is necessary and find statistically significant differences. Comparing explanations with other possible response types, such as the robot apologizing or asking for help, we find that why-explanations are always among the two highest-rated responses, except when the robot simply acts normally and successfully. This stands in stark contrast to the other possible response types, which are useful in a much more restricted set of situations. Lastly, we test for individual factors that might influence response preferences, such as a person's general attitude towards robots, but find no significant correlations. Our results can guide roboticists in designing more user-centered and transparent interactions and help explainability researchers develop more targeted explanations.

Supplemental Material

ZIP File
The PDF inside contains the results of all statistical tests, i.e., all p-values and effect sizes. Data and code are available at https://github.com/lwachowiak/HRI-Video-Survey-on-Preferred-Robot-Responses
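As a hedged illustration of the kind of analysis the supplement reports (p-values with multiple-testing correction across scenario types), the sketch below compares simulated explanation-need ratings across scenario groups with a Kruskal-Wallis omnibus test and Holm-corrected pairwise follow-ups. The data here is invented for illustration; the study's real data, group structure, and exact test choices live in the linked repository.

```python
# Sketch only: simulated ratings, not the study's data.
# Omnibus Kruskal-Wallis test across scenario types, then pairwise
# Mann-Whitney U tests with Holm's step-down correction.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 5-point ratings for three hypothetical scenario types
ratings = {
    "success": rng.integers(1, 4, 60),    # lower need for explanation
    "error": rng.integers(3, 6, 60),      # higher need
    "inability": rng.integers(2, 6, 60),
}

# Omnibus test: do the groups differ at all?
H, p_omnibus = stats.kruskal(*ratings.values())

# Pairwise follow-ups with Holm's step-down correction:
# sort raw p-values ascending, multiply the k-th smallest by (m - k),
# and enforce monotonicity of the adjusted values.
pairs = list(itertools.combinations(ratings, 2))
raw = [stats.mannwhitneyu(ratings[a], ratings[b]).pvalue for a, b in pairs]
order = np.argsort(raw)
m = len(raw)
adjusted = [None] * m
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (m - rank) * raw[idx])
    adjusted[idx] = min(1.0, running_max)

print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_omnibus:.4g}")
for (a, b), p in zip(pairs, adjusted):
    print(f"{a} vs {b}: Holm-adjusted p = {p:.4f}")
```

Holm's procedure controls the family-wise error rate without the independence assumptions of Bonferroni at full strength; the supplement also cites Benjamini-Hochberg-style FDR control, which would swap in a different adjustment step.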


Cited By

  • (2024) GPT-4 as a Moral Reasoner for Robot Command Rejection. Proceedings of the 12th International Conference on Human-Agent Interaction, 54-63. https://doi.org/10.1145/3687272.3688319 (24 November 2024)
  • (2024) Effects of Incoherence in Multimodal Explanations of Robot Failures. Companion Proceedings of the 26th International Conference on Multimodal Interaction, 6-10. https://doi.org/10.1145/3686215.3690155 (4 November 2024)
  • (2024) A Time Series Classification Pipeline for Detecting Interaction Ruptures in HRI Based on User Reactions. Proceedings of the 26th International Conference on Multimodal Interaction, 657-665. https://doi.org/10.1145/3678957.3688386 (4 November 2024)



Published In

HRI '24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
March 2024
982 pages
ISBN:9798400703225
DOI:10.1145/3610977
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. error mitigation
  2. explainability
  3. hri
  4. human-agent interaction
  5. user study
  6. user-centered ai
  7. xai

Qualifiers

  • Research-article

Funding Sources

  • UKRI
  • Royal Academy
  • EPSRC COHERENT
  • EPSRC LISI

Conference

HRI '24

Acceptance Rates

Overall Acceptance Rate 268 of 1,124 submissions, 24%



Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (Last 12 months)649
  • Downloads (Last 6 weeks)86
Reflects downloads up to 11 Dec 2024

Citations

  • (2024) Templated vs. Generative: Explaining Robot Failures. 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1346-1353. https://doi.org/10.1109/RO-MAN60168.2024.10731331 (26 August 2024)
  • (2024) A Taxonomy of Explanation Types and Need Indicators in Human-Agent Collaborations. International Journal of Social Robotics 16(7), 1681-1692. https://doi.org/10.1007/s12369-024-01148-8 (5 June 2024)
