Authors:
Haadia Amjad 1; Mohammad Ashraf 2; Syed Sherazi 2; Saad Khan 1; Muhammad Moazam Fraz 1; Tahir Hameed 3 and Syed Bukhari 4
Affiliations:
1 Department of Computing, National University of Sciences and Technology, Islamabad, Pakistan
2 Department of Electrical Engineering, National University of Sciences and Technology, Islamabad, Pakistan
3 Girard School of Business, Merrimack College, North Andover, MA 01845, U.S.A.
4 Division of Computer Science, Mathematics and Science, St. John's University, Queens, NY 11439, U.S.A.
Keyword(s):
Attention Mechanism, Explainability, Natural Language Processing, AI, Healthcare, Clinical Decision Support Systems.
Abstract:
Artificial intelligence (AI) systems are becoming common in decision support. However, the prevalence of black-box approaches in developing AI systems has been raised as a significant concern. Understanding how an AI system reaches its decisions is crucial, especially in healthcare, where those decisions directly impact human lives. Clinical decision support systems (CDSS) frequently use natural language processing (NLP) techniques to extract information from textual data such as electronic health records (EHRs). In contrast to prevalent black-box approaches, emerging 'explainability' research has improved our comprehension of the decision-making processes in CDSS that use EHR data. Many studies apply 'attention' mechanisms and 'graph' techniques to explain the 'causability' of machine learning models for text-related problems. In this paper, we survey the latest research on explainability and its application in CDSS and healthcare AI systems using NLP. We searched medical databases for explainability components used in NLP tasks in healthcare and extracted 26 papers relevant to this review based on their main approach to developing explainable NLP models. We then excluded papers whose architectures lacked components for inherent explainability, or whose explanations came directly from medical experts, leaving 16 studies in this review. We found that attention mechanisms are the most dominant approach for explainability in healthcare AI and CDSS. There is an emerging trend toward graph-based and hybrid techniques, but most of the projects we studied employed attention mechanisms in various ways. The paper discusses the inner workings, merits, and issues of the underlying architectures. To the best of our knowledge, this is among the few papers summarizing the latest explainability research in the healthcare domain, mainly to support future work on NLP-based AI models in healthcare.