Research article · Open access

Understanding the Evolvement of Trust Over Time within Human-AI Teams

Published: 08 November 2024

Abstract

The success of human-AI teams (HATs) requires humans to work with AI teammates in trustful ways over a sustained period of time. However, how trust evolves and changes dynamically in response to human-AI team interactions is generally understudied. This work explores the evolvement of trust in HATs over time by analyzing 45 participants' experiences of trust or distrust in an AI teammate prior to, during, and after collaborating with the AI in a three-member HAT. Our findings highlight that humans' expectations of an AI's ability, integrity, benevolence, and adaptability shape their initial trust in the AI before collaboration. This initial trust can then be maintained or revised through the development of situational trust during collaboration, in response to the AI teammate's communication behaviors. Further, the trust developed through collaboration can influence individuals' subsequent expectations of the AI's ability and their future collaborations with AI. Our findings also reveal similarities and differences in the temporal dimensions of trust in AI versus human teammates. We contribute to the CSCW community by offering one of the first empirical investigations into the dynamic and temporal dimensions of trust evolvement in HATs. Our work yields insights into expanding the methodological toolkit for investigating trust development in HATs and into formulating theories of trust for the HAT context. These insights further inform the effective design of AI teammates and provide guidance on the timing, content, and methods for calibrating trust in future human-AI collaboration contexts.

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW2 (November 2024), 5177 pages
EISSN: 2573-0142
DOI: 10.1145/3703902
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 08 November 2024
    Published in PACMHCI Volume 8, Issue CSCW2

    Author Tags

    1. human-agent teaming
    2. human-ai teaming
    3. human-autonomy teaming
    4. qualitative method
    5. trust development
    6. trust evolvement
    7. trust fluctuation
