Extended Abstract
DOI: 10.1145/3491101.3503704

Workshop on Trust and Reliance in AI-Human Teams (TRAIT)

Published: 28 April 2022

Abstract

As humans increasingly interact (and even collaborate) with AI systems during decision-making, creative exercises, and other tasks, appropriate trust and reliance are necessary to ensure proper usage and adoption of these systems. Specifically, people should understand when to trust or rely on an algorithm’s outputs and when to override them. While significant research has aimed to measure and promote trust in human-AI interaction, the field lacks synthesized definitions and a shared understanding of results across contexts. Indeed, conceptualizing trust and reliance, identifying the best ways to measure these constructs, and effectively shaping them in human-AI interactions all remain challenges.
This workshop aims to establish building appropriate trust and reliance on (imperfect) AI systems as a vital, yet under-explored, research problem. The workshop will provide a venue for exploring three broad questions related to human-AI trust: (1) How do we clarify definitions and frameworks relevant to human-AI trust and reliance (e.g., what does trust mean in different contexts)? (2) How do we measure trust and reliance? And (3) how do we shape trust and reliance? As these problems and their solutions are interdisciplinary in nature, we invite participants with expertise in HCI, AI, ML, psychology, social science, or other relevant fields to foster closer communication and collaboration across these communities.
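To make question (2) concrete: reliance is often operationalized behaviorally, for example as how often people adopt an AI recommendation, switch to it after initially disagreeing, or follow it only when it is correct. The sketch below is not from the workshop proposal; it assumes a hypothetical decision-log format and illustrates three such metrics in Python.

# Illustrative sketch (not from this paper): behavioral reliance metrics
# commonly computed from human-AI decision logs. The Trial fields and the
# example data below are hypothetical.
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Trial:
    ai_advice: str        # the AI's recommendation
    initial_choice: str   # the person's answer before seeing the advice
    final_choice: str     # the person's answer after seeing the advice
    correct_answer: str   # ground truth

def reliance_metrics(trials: List[Trial]) -> Dict[str, float]:
    """Agreement, switch-to-AI, and appropriate-reliance rates."""
    n = len(trials)
    agreement = sum(t.final_choice == t.ai_advice for t in trials) / n
    disagreed = [t for t in trials if t.initial_choice != t.ai_advice]
    switch = (sum(t.final_choice == t.ai_advice for t in disagreed) / len(disagreed)
              if disagreed else 0.0)
    # Appropriate reliance: follow the AI when it is right, override it when wrong.
    appropriate = sum((t.final_choice == t.ai_advice) == (t.ai_advice == t.correct_answer)
                      for t in trials) / n
    return {"agreement": agreement,
            "switch_to_ai": switch,
            "appropriate_reliance": appropriate}

# Example with made-up data: one trial where the person switched to correct
# AI advice, one where they followed incorrect advice.
logs = [Trial("A", "B", "A", "A"), Trial("B", "B", "B", "A")]
print(reliance_metrics(logs))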

      Published In

      CHI EA '22: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
      April 2022, 3066 pages
      ISBN: 9781450391566
      DOI: 10.1145/3491101
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 28 April 2022

      Author Tags

      1. human-centered artificial intelligence
      2. reliance
      3. trust
      4. uncertainty

      Qualifiers

      • Extended-abstract
      • Research
      • Refereed limited

      Conference

      CHI '22: CHI Conference on Human Factors in Computing Systems
      April 29 - May 5, 2022
      New Orleans, LA, USA

      Acceptance Rates

      Overall Acceptance Rate 6,164 of 23,696 submissions, 26%
