
Decrypting the Black Boxing of Artificial Intelligence Using Explainable Artificial Intelligence in Smart Healthcare

  • Chapter
  • First Online:
Connected e-Health

Part of the book series: Studies in Computational Intelligence (SCI, volume 1021)

Abstract

Artificial Intelligence (AI) is driving a revolution in the healthcare industry, fuelled by the growing availability of structured and unstructured data and rapid progress in analytic techniques. The usefulness of AI in healthcare is being recognised at the same time as concern grows over the possible lack of explainability, and the bias, in the models being created. This motivates explainable artificial intelligence (XAI), which increases the trust placed in a system and thereby encourages wider adoption of AI in healthcare. In this chapter, we offer several ways of viewing XAI concepts and the understandability and interpretability of explainable AI systems, focussing mainly on the healthcare domain. The intention is to educate healthcare providers on the understandability and interpretability of explainable AI systems. Medical models inform decisions on which lives may depend, so we must be adequately assured of a model's reliability before treating a patient according to its recommendations. This chapter presents AI explainability as a way to build trustworthiness in the medical domain and reviews recent developments in explainable AI that encourage creativity, and are at times practical necessities, in order to raise awareness.
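To make the idea of post-hoc interpretability concrete, the short Python sketch below (an illustrative example, not taken from the chapter) trains an opaque classifier on scikit-learn's built-in breast-cancer dataset, used here only as a stand-in for clinical data, and then applies permutation importance, a model-agnostic technique, to reveal which inputs the model actually relies on.

    # Illustrative sketch only: post-hoc, model-agnostic explanation of a clinical-style classifier.
    # Assumes scikit-learn is installed; dataset and model are placeholders, not the chapter's own method.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy clinical dataset: breast-cancer diagnosis from tumour measurements.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ("black-box") model from the clinician's point of view.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: shuffle one feature at a time on held-out data and
    # measure how much accuracy drops, giving a global ranking of the inputs
    # the model actually depends on.
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    for name, score in sorted(zip(X.columns, result.importances_mean),
                              key=lambda item: item[1], reverse=True)[:5]:
        print(f"{name}: mean accuracy drop {score:.3f}")

Local explanation methods such as LIME and SHAP follow the same post-hoc pattern but explain individual predictions, which is closer to what a clinician needs when reasoning about a single patient.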



Author information

Corresponding author

Correspondence to Tawseef Ayoub Shaikh.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Shaikh, T.A., Mir, W.A., Sofi, S. (2022). Decrypting the Black Boxing of Artificial Intelligence Using Explainable Artificial Intelligence in Smart Healthcare. In: Mishra, S., González-Briones, A., Bhoi, A.K., Mallick, P.K., Corchado, J.M. (eds) Connected e-Health. Studies in Computational Intelligence, vol 1021. Springer, Cham. https://doi.org/10.1007/978-3-030-97929-4_3
