
Explainable AI and Interpretable Machine Learning: A Case Study in Perspective

Published: 01 January 2022

Abstract

Explainable AI (XAI), as the name implies, is a form of artificial intelligence that enables the explanation of learning models: it focuses on why a system arrived at a particular decision and explores the system's logical paradigms, in contrast to the inherently black-box nature of much of artificial intelligence. Similarly, interpretable machine learning allows users to comprehend the results of learning models by providing the reasoning behind the decisions they arrive at. This quality of XAI and Interpretable Machine Learning (IML) is particularly helpful in AI applications for healthcare and medical diagnosis. In this paper, we present a case study in which we use the ELI5 XAI toolkit in conjunction with the LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) frameworks in Python to determine whether a patient is diabetic, based on a randomized clinical trial dataset. We also point out trends and the most vital factors that can help clinicians and researchers analyze patient data alongside machine learning and artificial intelligence outputs. Having explanations for machine learning models allows a higher degree of interpretability and paves the way for accountability and transparency in medical and other fields of data analysis. We explore these paradigms in this paper, laying the groundwork for an accountable, transparent and robust data analytics framework using XAI and IML.
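
Below is a minimal Python sketch of the workflow the abstract describes: training a diabetes classifier and explaining it with ELI5, LIME and SHAP. The dataset layout (a Pima-style diabetes.csv file with a binary Outcome column) and the random-forest model are illustrative assumptions, not the authors' exact setup.

    # Sketch: explaining a diabetes classifier with ELI5, LIME and SHAP.
    # Assumes a Pima-style CSV ("diabetes.csv") with a 0/1 "Outcome" column;
    # the file name and model choice are assumptions, not the paper's setup.
    import pandas as pd
    import shap
    import eli5
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("diabetes.csv")              # hypothetical file name
    X, y = df.drop(columns="Outcome"), df["Outcome"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # ELI5: global feature weights of the fitted model, printed as text.
    print(eli5.format_as_text(
        eli5.explain_weights(model, feature_names=list(X.columns))))

    # LIME: a local explanation for a single test patient.
    lime_explainer = LimeTabularExplainer(
        X_train.values, feature_names=list(X.columns),
        class_names=["non-diabetic", "diabetic"], mode="classification")
    print(lime_explainer.explain_instance(
        X_test.values[0], model.predict_proba, num_features=5).as_list())

    # SHAP: additive feature attributions across the test set.
    shap_values = shap.TreeExplainer(model).shap_values(X_test)
    shap.summary_plot(shap_values, X_test)        # global importance plot

ELI5 gives a quick global view of feature weights, LIME explains one prediction at a time, and SHAP attributes each prediction additively across features, which is the combination of global and local perspectives the case study relies on.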

Published In

Procedia Computer Science, Volume 204, Issue C
2022
984 pages
ISSN: 1877-0509

      Publisher

Elsevier Science Publishers B.V., Netherlands

      Author Tags

      1. Explainable AI
      2. Interpretable Machine Learning
      3. Human Centered Computing
      4. Human Inspired AI
      5. HCI
      6. Artificial Intelligence

      Qualifiers

      • Research-article
