DOI: 10.1145/3290605.3300831

Designing Theory-Driven User-Centric Explainable AI

Published: 02 May 2019

Editorial Notes

A corrigendum was issued for this paper on September 16, 2019. You can download the corrigendum from the source materials section of this citation page.

Abstract

From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical, application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. We propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and show how XAI can mitigate common cognitive biases. We then put this framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.

Supplementary Material

  • Corrigendum (p601-wang-corrigendum.pdf): Corrigendum to "Designing Theory-Driven User-Centric Explainable AI," by Wang et al., Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
  • ZIP file (paper601pvc.zip): Preview video captions
  • MP4 file (paper601p.mp4): Preview video
  • MP4 file (paper601.mp4)





Published In

CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
May 2019
9077 pages
ISBN: 9781450359702
DOI: 10.1145/3290605
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. clinical decision making
  2. decision making
  3. explainable artificial intelligence
  4. explanations
  5. intelligibility

Qualifiers

  • Research-article

Funding Sources

  • National Research Foundation, Prime Minister's Office, Singapore
  • Ministry of Education, Singapore

Conference

CHI '19

Acceptance Rates

CHI '19 Paper Acceptance Rate: 703 of 2,958 submissions, 24%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%




Article Metrics

  • Downloads (Last 12 months): 1,706
  • Downloads (Last 6 weeks): 222
Reflects downloads up to 02 Mar 2025


Cited By

  • (2025) Overview of basic design recommendations for user-centered explanation interfaces for AI-based clinical decision support systems: A scoping review. DIGITAL HEALTH, 11. https://doi.org/10.1177/20552076241308298. Online publication date: 23-Jan-2025.
  • (2025) Designing Visual Explanations and Learner Controls to Engage Adolescents in AI-Supported Exercise Selection. Proceedings of the 15th International Learning Analytics and Knowledge Conference, 1-12. https://doi.org/10.1145/3706468.3706470. Online publication date: 3-Mar-2025.
  • (2025) XEdgeAI: A human-centered industrial inspection framework with data-centric Explainable Edge AI approach. Information Fusion, 116, 102782. https://doi.org/10.1016/j.inffus.2024.102782. Online publication date: Apr-2025.
  • (2025) ContractMind: Trust-calibration interaction design for AI contract review tools. International Journal of Human-Computer Studies, 196, 103411. https://doi.org/10.1016/j.ijhcs.2024.103411. Online publication date: Feb-2025.
  • (2025) Calibrated explanations for regression. Machine Learning, 114(4). https://doi.org/10.1007/s10994-024-06642-8. Online publication date: 21-Feb-2025.
  • (2025) Mitigating AI-induced professional identity threat and fostering adoption in the workplace. AI & SOCIETY. https://doi.org/10.1007/s00146-024-02170-0. Online publication date: 15-Jan-2025.
  • (2025) FIPER: A Visual-Based Explanation Combining Rules and Feature Importance. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 171-184. https://doi.org/10.1007/978-3-031-74633-8_11. Online publication date: 1-Jan-2025.
  • (2025) Towards Synergistic Human-AI Collaboration in Hybrid Decision-Making Systems. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 268-275. https://doi.org/10.1007/978-3-031-74627-7_20. Online publication date: 1-Jan-2025.
  • (2025) XAI-Supported Decision-Making: Insights from NeuroIS Studies for a User Perspective. Information Systems and Neuroscience, 157-177. https://doi.org/10.1007/978-3-031-71385-9_13. Online publication date: 3-Mar-2025.
  • (2024) Explanation Ontology: A general-purpose, semantic representation for supporting user-centered explanations. Semantic Web, 15(4), 959-989. https://doi.org/10.3233/SW-233282. Online publication date: 4-Oct-2024.

