
Are explanations always important?: a study of deployed, low-cost intelligent interactive systems

Published: 14 February 2012
DOI: 10.1145/2166966.2166996

Abstract

Intelligent interactive systems (IIS) have great potential to improve users' experience with technology by tailoring their behaviour and appearance to users' individual needs; however, these systems, with their complex algorithms and dynamic behaviour, can also suffer from a lack of comprehensibility and transparency. We present the results of two studies examining the comprehensibility of, and desire for, explanations in deployed, low-cost IIS. The first study, a set of interviews with 21 participants, reveals that i) comprehensibility is not always dependent on explanations, and ii) the perceived cost of viewing explanations tends to outweigh the anticipated benefits. Our second study, a two-week diary study with 14 participants, confirms these findings in the context of daily use, with participants indicating a desire for an explanation in only 7% of diary entries. We discuss the implications of our findings for the design of explanation facilities.




    Published In

IUI '12: Proceedings of the 2012 ACM international conference on Intelligent User Interfaces
February 2012
436 pages
ISBN: 9781450310482
DOI: 10.1145/2166966
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. comprehensibility
    2. diary studies
    3. explanations
    4. qualitative evaluations
    5. recommender systems
    6. transparency

    Qualifiers

    • Research-article

    Conference

    IUI '12

    Acceptance Rates

    Overall Acceptance Rate 746 of 2,811 submissions, 27%



    Article Metrics

• Downloads (last 12 months): 118
• Downloads (last 6 weeks): 16
Reflects downloads up to 24 Dec 2024


    Cited By

• (2024) Explainability of Image Generative AI for Novice and Expert Users: A Comparative Study of Static and Dynamic Explanations. Journal of Digital Contents Society 25:8, 2261-2272. DOI: 10.9728/dcs.2024.25.8.2261. Online publication date: 31-Aug-2024.
• (2024) Initial results on personalizing explanations of AI hints in an ITS. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 244-248. DOI: 10.1145/3627043.3659566. Online publication date: 22-Jun-2024.
• (2024) Supporting User Critiques of AI Systems via Training Dataset Explanations: Investigating Critique Properties and the Impact of Presentation Style. 2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 134-147. DOI: 10.1109/VL/HCC60511.2024.00024. Online publication date: 2-Sep-2024.
• (2024) SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments. 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom), 106-113. DOI: 10.1109/PerCom59722.2024.10494449. Online publication date: 11-Mar-2024.
• (2024) Enhancing User Trust and Interpretability in AI-Driven Feature Request Detection for Mobile App Reviews: An Explainable Approach. IEEE Access 12, 114023-114045. DOI: 10.1109/ACCESS.2024.3443527. Online publication date: 2024.
• (2024) EXplainable Artificial Intelligence (XAI)—From Theory to Methods and Applications. IEEE Access 12, 80799-80846. DOI: 10.1109/ACCESS.2024.3409843. Online publication date: 2024.
• (2024) The SAGE Framework for Explaining Context in Explainable Artificial Intelligence. Applied Artificial Intelligence 38:1. DOI: 10.1080/08839514.2024.2318670. Online publication date: 22-Feb-2024.
• (2024) Explainable Artificial Intelligence (XAI) 2.0. Information Fusion 106:C. DOI: 10.1016/j.inffus.2024.102301. Online publication date: 1-Jun-2024.
• (2024) To Explain or Not To Explain: An Empirical Investigation of AI-based Recommendations on Social Media Platforms. Electronic Markets 35:1. DOI: 10.1007/s12525-024-00741-z. Online publication date: 13-Dec-2024.
• (2024) Navigating Transparency: The Influence of On-demand Explanations on Non-expert User Interaction with AI. Artificial Intelligence in HCI, 238-263. DOI: 10.1007/978-3-031-60606-9_14. Online publication date: 1-Jun-2024.
