DOI: 10.1145/3319502.3374804

Examining Profiles for Robotic Risk Assessment: Does a Robot's Approach to Risk Affect User Trust?

Published: 09 March 2020

Abstract

As autonomous robots move towards ubiquity, the need for robots to make trustworthy decisions under risk becomes increasingly significant, both to aid acceptance and to fully utilise their autonomous capabilities. We propose that incorporating a human approach to risk assessment into a robot's decision-making process will increase user trust. This work investigates four robotic approaches to risk: risk averse, risk seeking, risk neutral, and a human approach to risk, and explores, through a user study, the level of trust placed in each. Risk is artificially induced through performance-based compensation, in line with previous studies. The study was conducted in a virtual nuclear environment created using the Unity game engine. Forty participants completed a robot supervision task in which they observed a robot making risk-based decisions and could question the robot, question it further, and ultimately accept or alter the robot's decision. We show that a risk-seeking robot is trusted significantly less than a risk-averse robot, a risk-neutral robot, or a robot using a human approach to risk; no significant differences were found in the trust placed in the latter three profiles. We also find that the extent to which participants question a robot's decisions is not an accurate measure of trust. The results suggest that an engineer designing a robot that must make risk-based decisions during teleoperation in a hazardous environment should avoid a risk-seeking profile, but may choose whichever of the remaining risk profiles best suits the implementation, knowing that trust in the system is unlikely to be significantly affected.
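The four risk profiles can be read as different ways of scoring the same uncertain outcome. The sketch below is a minimal illustration, not the authors' implementation: the risk-neutral profile values a gamble by its expected outcome, the risk-averse and risk-seeking profiles apply concave and convex utilities respectively, and the human profile uses the prospect-theory value and probability-weighting functions of Tversky and Kahneman (1992) with their median parameter estimates. The example gamble, the utility shapes, and all function names and parameters are assumptions made for illustration.

```python
import math

# A gamble is a list of (probability, outcome) pairs, with outcomes measured
# relative to a reference point (gains positive, losses negative).

def risk_neutral(gamble):
    # Value the gamble by its expected outcome.
    return sum(p * x for p, x in gamble)

def risk_averse(gamble, r=10.0):
    # Concave (exponential) utility: certain outcomes are preferred to
    # gambles with the same expected value (r is an illustrative scale).
    return sum(p * (1.0 - math.exp(-x / r)) for p, x in gamble)

def risk_seeking(gamble, r=10.0):
    # Convex utility: gambles are preferred to certain outcomes with the
    # same expected value.
    return sum(p * (math.exp(x / r) - 1.0) for p, x in gamble)

def prospect_theory(gamble, alpha=0.88, lam=2.25, gamma=0.61, delta=0.69):
    # Tversky & Kahneman's (1992) value and weighting functions with their
    # median parameter estimates: losses loom larger than gains and small
    # probabilities are over-weighted.
    def value(x):
        return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)
    def weight(p, c):
        return p ** c / ((p ** c + (1.0 - p) ** c) ** (1.0 / c))
    return sum(weight(p, gamma if x >= 0 else delta) * value(x)
               for p, x in gamble)

# Hypothetical decision: a risky shortcut (60% chance of saving 10 minutes,
# 40% chance of losing 15) versus a safe route worth 0. Positive scores
# favour taking the shortcut.
shortcut = [(0.6, 10.0), (0.4, -15.0)]
for name, profile in [("risk neutral", risk_neutral),
                      ("risk averse", risk_averse),
                      ("risk seeking", risk_seeking),
                      ("human (prospect theory)", prospect_theory)]:
    print(f"{name:>24}: {profile(shortcut):+.2f}")
```

Under these illustrative numbers only the risk-seeking profile favours the shortcut, the risk-neutral profile is indifferent, and the risk-averse and human profiles reject it, the human profile most strongly because losses are weighted more heavily than gains.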

Supplementary Material

MP4 File (p23-bridgwater.mp4)




Information

Published In

HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
March 2020
690 pages
ISBN: 9781450367462
DOI: 10.1145/3319502
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. decision making
  2. hri
  3. nuclear
  4. performance
  5. prospect theory
  6. risk
  7. trust

Qualifiers

  • Research-article

Funding Sources

  • EPSRC

Conference

HRI '20

Acceptance Rates

Overall Acceptance Rate 268 of 1,124 submissions, 24%

Cited By

  • (2024) "Warning!" Benefits and Pitfalls of Anthropomorphising Autonomous Vehicle Informational Assistants in the Case of an Accident. Multimodal Technologies and Interaction, 8(12), 110. DOI: 10.3390/mti8120110. Online publication date: 5-Dec-2024.
  • (2024) Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3642018. Online publication date: 11-May-2024.
  • (2023) Being Trustworthy is Not Enough: How Untrustworthy Artificial Intelligence (AI) Can Deceive the End-Users and Gain Their Trust. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-17. DOI: 10.1145/3579460. Online publication date: 16-Apr-2023.
  • (2022) Real-Time Avoidance of Ionising Radiation Using Layered Costmaps for Mobile Robots. Frontiers in Robotics and AI, 9. DOI: 10.3389/frobt.2022.862067. Online publication date: 17-Mar-2022.
  • (2022) Configuring Humans: What Roles Humans Play in HRI Research. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 478-492. DOI: 10.1109/HRI53351.2022.9889496. Online publication date: 7-Mar-2022.
  • (2021) Simulating Ionising Radiation in Gazebo for Robotic Nuclear Inspection Challenges. Robotics, 10(3), 86. DOI: 10.3390/robotics10030086. Online publication date: 7-Jul-2021.
  • (2021) How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-39. DOI: 10.1145/3476068. Online publication date: 18-Oct-2021.
  • (2020) Trust in Robots: Challenges and Opportunities. Current Robotics Reports. DOI: 10.1007/s43154-020-00029-y. Online publication date: 3-Sep-2020.
