
Being Trustworthy is Not Enough: How Untrustworthy Artificial Intelligence (AI) Can Deceive the End-Users and Gain Their Trust

Published: 16 April 2023

Abstract

Trustworthy Artificial Intelligence (AI) is characterized, among other things, by: 1) competence, 2) transparency, and 3) fairness. However, end-users may fail to recognize incompetent AI, which allows untrustworthy AI to exaggerate its competence under the guise of transparency and gain an unfair advantage over trustworthy AI. Here, we conducted an experiment with 120 participants to test whether untrustworthy AI can deceive end-users to gain their trust. Participants interacted with two AI-based chess engines, one trustworthy (competent, fair) and one untrustworthy (incompetent, unfair), which coached participants by suggesting chess moves in three games against another engine opponent. We varied the coaches' transparency about their competence (with the untrustworthy coach always exaggerating its competence). We quantified and objectively measured participants' trust based on how often they relied on the coaches' move recommendations. Participants were unable to assess AI competence and misplaced their trust in the untrustworthy AI, confirming its ability to deceive. Our work calls for the design of interactions that help end-users assess AI trustworthiness.
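The abstract operationalizes trust behaviorally, as the rate at which participants adopt a coach's suggested moves. A minimal sketch of such a reliance measure is below; the `reliance_rate` function and the per-turn move logs are illustrative assumptions, not the paper's actual analysis code.

```python
# Minimal sketch (assumed, not from the paper): a behavioral trust measure
# defined as the fraction of turns on which a participant played the move
# that the AI coach suggested.

def reliance_rate(suggested_moves, played_moves):
    """Return the share of turns where the played move matched the suggestion."""
    if len(suggested_moves) != len(played_moves):
        raise ValueError("move logs must align turn by turn")
    if not suggested_moves:
        return 0.0
    followed = sum(s == p for s, p in zip(suggested_moves, played_moves))
    return followed / len(suggested_moves)

# Hypothetical log: the participant follows the coach on 3 of 4 turns -> 0.75
suggestions = ["e2e4", "g1f3", "f1c4", "d2d4"]
played      = ["e2e4", "g1f3", "f1c4", "c2c4"]
print(reliance_rate(suggestions, played))
```

Under this reading, a higher reliance rate on one coach than the other would indicate more behavioral trust placed in that coach, which is the comparison the study draws between the trustworthy and untrustworthy engines.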


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1 (CSCW)
April 2023, 3836 pages
EISSN: 2573-0142
DOI: 10.1145/3593053

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 16 April 2023
Published in PACMHCI Volume 7, Issue CSCW1

Author Tags

1. XAI
2. explainability
3. explainable AI
4. fairness
5. transparency
6. trustworthiness
7. trustworthy AI

Qualifiers

• Research-article

Article Metrics

• Downloads (last 12 months): 820
• Downloads (last 6 weeks): 67
Reflects downloads up to 27 Dec 2024.

Cited By

• (2024) Uncertainty-aware explainable AI as a foundational paradigm for digital twins. Frontiers in Mechanical Engineering, Vol. 9. https://doi.org/10.3389/fmech.2023.1329146. Online publication date: 5-Jan-2024.
• (2024) The Explanation That Hits Home: The Characteristics of Verbal Explanations That Affect Human Perception in Subjective Decision-Making. Proceedings of the ACM on Human-Computer Interaction, Vol. 8, CSCW2, 1-37. https://doi.org/10.1145/3687056. Online publication date: 8-Nov-2024.
• (2024) "Something Fast and Cheap" or "A Core Element of Building Trust"? AI Auditing Professionals' Perspectives on Trust in AI. Proceedings of the ACM on Human-Computer Interaction, Vol. 8, CSCW2, 1-22. https://doi.org/10.1145/3686963. Online publication date: 8-Nov-2024.
• (2024) VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-21. https://doi.org/10.1145/3654777.3676323. Online publication date: 13-Oct-2024.
• (2024) Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research. Proceedings of the ACM on Human-Computer Interaction, Vol. 8, CSCW1, 1-43. https://doi.org/10.1145/3641009. Online publication date: 26-Apr-2024.
• (2024) "DecisionTime": A Configurable Framework for Reproducible Human-AI Decision-Making Studies. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 66-69. https://doi.org/10.1145/3631700.3664885. Online publication date: 27-Jun-2024.
• (2024) The Role of Explainability in Collaborative Human-AI Disinformation Detection. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2157-2174. https://doi.org/10.1145/3630106.3659031. Online publication date: 3-Jun-2024.
• (2024) Establishing Appropriate Trust in AI through Transparency and Explainability. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-6. https://doi.org/10.1145/3613905.3638184. Online publication date: 11-May-2024.
• (2024) Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642934. Online publication date: 11-May-2024.
• (2024) Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. https://doi.org/10.1145/3613904.3642621. Online publication date: 11-May-2024.
