DOI: 10.1145/3490100.3516481
Extended abstract

Explaining Artificial Intelligence with Tailored Interactive Visualisations

Published: 22 March 2022

Abstract

Artificial intelligence (AI) is becoming ubiquitous in the lives of both researchers and non-researchers, but AI models often lack transparency. To make well-informed and trustworthy decisions based on these models, people require explanations that indicate how to interpret the model outcomes. This paper presents our ongoing research in explainable AI, which investigates how visual analytics interfaces and visual explanations, tailored to the target audience and application domain, can make AI models more transparent and allow interactive steering based on domain expertise. First, we present our research questions and methods, contextualised by related work at the intersection of AI, human-computer interaction, and information visualisation. Then, we discuss our work so far in healthcare, agriculture, and education. Finally, we share our research ideas for additional studies in these domains.


Cited By

  • (2024) A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal on Responsible Computing 1, 4 (Sept. 2024), 1–45. https://doi.org/10.1145/3696449
  • (2023) Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations. In Proceedings of the 28th International Conference on Intelligent User Interfaces (Mar. 2023), 204–219. https://doi.org/10.1145/3581641.3584075
  • (2023) Explanations on Demand - a Technique for Eliciting the Actual Need for Explanations. In 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW) (Sept. 2023), 345–351. https://doi.org/10.1109/REW57809.2023.00065
  • (2023) The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Computers in Biology and Medicine 166 (Nov. 2023), 107555. https://doi.org/10.1016/j.compbiomed.2023.107555
  • (2023) Rams, hounds and white boxes. Artificial Intelligence in Medicine 138, C (Apr. 2023). https://doi.org/10.1016/j.artmed.2023.102506
  • (2023) Survey on Explainable AI: From Approaches, Limitations and Applications Aspects. Human-Centric Intelligent Systems 3, 3 (Aug. 2023), 161–188. https://doi.org/10.1007/s44230-023-00038-y

Published In

IUI '22 Companion: Companion Proceedings of the 27th International Conference on Intelligent User Interfaces
March 2022, 142 pages
ISBN: 9781450391450
DOI: 10.1145/3490100

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. XAI
2. algorithmic transparency
3. explainability
4. information visualisation
5. interpretability

Qualifiers

• Extended abstract
• Research
• Refereed limited

Conference

IUI '22

Acceptance Rates

Overall Acceptance Rate: 746 of 2,811 submissions, 27%

