
Modeling the Human-Robot Trust Phenomenon: A Conceptual Framework based on Risk

Published: 16 November 2018

Abstract

This article presents a conceptual framework for human-robot trust that uses computational representations inspired by game theory to capture a definition of trust derived from social psychology. The framework generates several testable hypotheses related to human-robot trust. This article examines these hypotheses and a series of experiments we have conducted, some of which support our framework for trust and some of which conflict with it. We also discuss the methodological challenges associated with investigating trust. The article concludes with a description of important areas for future research on the topic of human-robot trust.
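To make the game-theoretic idea in the abstract concrete, the sketch below illustrates one way an interaction could be encoded as an outcome matrix and tested for a simple, risk-based condition under which the situation calls for trust. This is a minimal illustrative sketch, not the article's exact formulation; the class, function names, action labels, and payoff values are all hypothetical.

    # Illustrative sketch (not the article's exact formulation): encode a
    # two-agent interaction as an outcome matrix and check a simple,
    # risk-based condition under which the situation calls for trust.
    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class OutcomeMatrix:
        """Trustor payoffs indexed by (trustor_action, trustee_action).

        Actions here are "trust"/"withhold" for the trustor and
        "cooperate"/"defect" for the trustee; all values are hypothetical.
        """
        payoffs: Dict[Tuple[str, str], float]

    def situation_demands_trust(m: OutcomeMatrix) -> bool:
        # Trusting must pay off when the trustee cooperates...
        gain = m.payoffs[("trust", "cooperate")] > m.payoffs[("withhold", "cooperate")]
        # ...while exposing the trustor to a loss (risk) if the trustee defects.
        risk = m.payoffs[("trust", "defect")] < m.payoffs[("withhold", "defect")]
        return gain and risk

    # Example: following a guidance robot helps only if its directions are correct.
    matrix = OutcomeMatrix(payoffs={
        ("trust", "cooperate"): 10,    # follow the robot; it guides correctly
        ("trust", "defect"): -10,      # follow the robot; it guides incorrectly
        ("withhold", "cooperate"): 0,  # ignore the robot either way
        ("withhold", "defect"): 0,
    })
    print(situation_demands_trust(matrix))  # True: the outcome depends on the robot and carries risk

Under this reading, trusting amounts to the trustor choosing the risky option because of a belief about how the trustee will act.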





Published In

ACM Transactions on Interactive Intelligent Systems, Volume 8, Issue 4
December 2018
159 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/3292532
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 16 November 2018
Accepted: 01 September 2018
Revised: 01 November 2017
Received: 01 December 2016
Published in TIIS Volume 8, Issue 4


Author Tags

  1. Human-robot trust
  2. risk
  3. social robotics
  4. trust

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2025) TICK: A Knowledge Processing Infrastructure for Cognitive Trust in Human–Robot Interaction. International Journal of Social Robotics. DOI: 10.1007/s12369-024-01206-1. Online publication date: 4-Jan-2025.
  • (2024) A Meta-Analysis of Vulnerability and Trust in Human–Robot Interaction. ACM Transactions on Human-Robot Interaction 13, 3, 1-25. DOI: 10.1145/3658897. Online publication date: 29-Apr-2024.
  • (2024) The IDEA of Us: An Identity-Aware Architecture for Autonomous Systems. ACM Transactions on Software Engineering and Methodology 33, 6, 1-38. DOI: 10.1145/3654439. Online publication date: 28-Jun-2024.
  • (2024) Shared Bodily Fusion: Leveraging Inter-Body Electrical Muscle Stimulation for Social Play. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2088-2106. DOI: 10.1145/3643834.3660723. Online publication date: 1-Jul-2024.
  • (2024) To Err is Automation: Can Trust be Repaired by the Automated Driving System After its Failure? IEEE Transactions on Human-Machine Systems 54, 5, 508-519. DOI: 10.1109/THMS.2024.3434680. Online publication date: Oct-2024.
  • (2024) A Group-Vehicles Oriented Reputation Assessment Scheme for Edge VANETs. IEEE Transactions on Cloud Computing 12, 3, 859-875. DOI: 10.1109/TCC.2024.3406509. Online publication date: Jul-2024.
  • (2024) Interdependence and trust analysis (ITA): a framework for human–machine team design. Behaviour & Information Technology, 1-21. DOI: 10.1080/0144929X.2024.2431631. Online publication date: 25-Nov-2024.
  • (2024) Bots against Bias. The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction, 362-390. DOI: 10.1017/9781009386708.023. Online publication date: 7-Dec-2024.
  • (2024) Issues and Concerns for Human–Robot Interaction. The Cambridge Handbook of the Law, Policy, and Regulation for Human–Robot Interaction, 171-390. DOI: 10.1017/9781009386708.013. Online publication date: 7-Dec-2024.
  • (2024) Trust and robotics: a multi-staged decision-making approach to robots in community. AI & Society 39, 5, 2463-2478. DOI: 10.1007/s00146-023-01705-1. Online publication date: 1-Oct-2024.
