Trust in an AI versus a human teammate: The effects of teammate identity and performance on human-AI cooperation

Published: 01 February 2023

Abstract

Recent advances in artificial intelligence (AI) enable researchers to create AI agents competent enough to serve as teammates for humans. However, human distrust of AI is a critical factor that may impede human-AI cooperation. Although prior studies have endowed AI agents with anthropomorphic traits, such as a human-like appearance, to improve human trust in AI, it remains an open question whether humans trust an AI teammate more, and achieve better human-AI joint performance, when they are deceived into believing the AI teammate is another human. This research assesses the effects of teammate identity (“human” vs. AI) and teammate performance (low-performing vs. high-performing AI) on human-AI cooperation through a human subjects study. The results show that humans behaviorally trust the AI more than the purported human teammate, accepting their AI teammate's decisions more often. In addition, teammate performance has a significant effect on human-AI joint performance in the study, while teammate identity does not. These results caution against deceiving humans about the identity of AI in future applications involving human-AI cooperation.
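
The study described here compares two factors: teammate identity (“human” vs. AI) and teammate performance (low vs. high). As a minimal analysis sketch only, assuming a 2×2 between-subjects layout with hypothetical column names (identity, performance, joint_score) and simulated values rather than the study's data, the two reported effects could be tested with a two-way ANOVA:

```python
# Hedged sketch: two-way ANOVA for the effects of teammate identity and
# teammate performance on human-AI joint performance. All names and values
# below are illustrative placeholders, not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
# Four cells of the assumed 2x2 design, with made-up mean joint scores.
for identity, performance, mean_score in [
    ("human", "low", 0.60), ("human", "high", 0.75),
    ("AI",    "low", 0.62), ("AI",    "high", 0.80),
]:
    for score in rng.normal(mean_score, 0.08, size=20):  # 20 simulated participants per cell
        rows.append({"identity": identity, "performance": performance,
                     "joint_score": score})
df = pd.DataFrame(rows)

# Linear model with both factors and their interaction; Type II ANOVA table
# reports the main effect of each factor and the identity x performance term.
model = smf.ols("joint_score ~ C(identity) * C(performance)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Under the paper's finding, a table like this would show a significant main effect of performance but not of identity on joint performance.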

Highlights

Humans accept their AI teammate's decisions less often when they are deceived into believing the AI is another human (see the acceptance-rate sketch after this list).
Teammate performance has a significant effect on human-AI joint performance, while teammate identity does not.
Humans perceive higher temporal demand when working with the “human” teammate than with the AI teammate.
Humans rate the low-performing “human” teammate as more competent and helpful than the low-performing AI teammate.
Humans' individual expertise in the task has a considerable influence on human-AI cooperation.
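
The behavioral trust measure in the first highlight is the rate at which participants accept their teammate's decisions. A minimal sketch of how such an acceptance rate could be computed from trial-level logs, assuming a hypothetical schema (participant, identity, accepted) with made-up values, not the authors' pipeline:

```python
# Hedged sketch: per-participant decision-acceptance rate as a behavioral
# trust measure. The trial-log schema and values are hypothetical.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "identity":    ["AI"] * 6 + ["human"] * 6,
    "accepted":    [1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0],  # 1 = accepted teammate's decision
})

# Acceptance rate per participant, then the mean rate per identity condition.
rates = trials.groupby(["identity", "participant"])["accepted"].mean()
print(rates.groupby(level="identity").mean())
```

On the paper's account, the mean acceptance rate would come out higher in the AI condition than in the deceptive “human” condition.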

Cited By

  • (2024) Understanding the Evolvement of Trust Over Time within Human-AI Teams. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), 1–31. DOI: 10.1145/3687060. Online publication date: 8-Nov-2024.
  • (2024) Human vs. AI: Exploring students’ preferences between human and AI TA and the effect of social anxiety and problem complexity. Education and Information Technologies, 29(1), 1217–1246. DOI: 10.1007/s10639-023-12374-4. Online publication date: 1-Jan-2024.
  • (2023) Theory of trust and acceptance of artificial intelligence technology (TrAAIT). Journal of Biomedical Informatics, 148(C). DOI: 10.1016/j.jbi.2023.104550. Online publication date: 1-Dec-2023.
  • (2023) Digital capability requirements and improvement strategies. Information Processing and Management, 60(6). DOI: 10.1016/j.ipm.2023.103504. Online publication date: 1-Nov-2023.

    Information

    Published In

Computers in Human Behavior, Volume 139, Issue C
    Feb 2023
    740 pages

    Publisher

    Elsevier Science Publishers B. V.

    Netherlands

    Author Tags

    1. Artificial intelligence
    2. Trust
    3. Deception
    4. Anthropomorphism
    5. Human-computer interaction
    6. Decision-making

    Qualifiers

    • Research-article
