DOI: 10.1145/1943403.1943424

Taking advice from intelligent systems: the double-edged sword of explanations

Published: 13 February 2011

Abstract

Research on intelligent systems has emphasized the benefits of providing explanations along with recommendations. But can explanations lead users to make incorrect decisions? We explored this question in a controlled experiment with 18 professional network security analysts performing an incident-classification task using a prototype cybersecurity system. The system provided three recommendations on each trial, displayed either with explanations (called "justifications") or without. On half the trials, one of the recommendations was correct; on the other half, none was. Users were more accurate when a correct recommendation was available. Although explanations produced no overall benefit, a segment of the analysts was more accurate with explanations when a correct choice was available but less accurate with explanations when none was. We discuss the implications of these results for the design of intelligent systems.


Published In

IUI '11: Proceedings of the 16th international conference on Intelligent user interfaces
February 2011
504 pages
ISBN:9781450304191
DOI:10.1145/1943403
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. adaptive agents
  2. evaluation
  3. explanations
  4. individual differences
  5. intelligent systems
  6. recommendations

Qualifiers

  • Research-article

Conference

IUI '11

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%


