
Paper: Attention-Based Explainability Approaches in Healthcare Natural Language Processing

Authors: Haadia Amjad 1; Mohammad Ashraf 2; Syed Sherazi 2; Saad Khan 1; Muhammad Moazam Fraz 1; Tahir Hameed 3 and Syed Bukhari 4

Affiliations: 1 Department of Computing, National University of Sciences and Technology, Islamabad, Pakistan; 2 Department of Electrical Engineering, National University of Sciences and Technology, Islamabad, Pakistan; 3 Girard School of Business, Merrimack College, North Andover, MA 01845, U.S.A.; 4 Division of Computer Science, Mathematics and Science, St. John's University, Queens, NY 11439, U.S.A.

Keyword(s): Attention Mechanism, Explainability, Natural Language Processing, AI, Healthcare, Clinical Decision Support Systems.

Abstract: Artificial intelligence (AI) systems are becoming common for decision support. However, the prevalence of the black-box approach in developing AI systems has been raised as a significant concern. Understanding how an AI system makes decisions is crucial, especially in healthcare, where those decisions directly impact human life. Clinical decision support systems (CDSS) frequently use Natural Language Processing (NLP) techniques to extract information from textual data, including Electronic Health Records (EHRs). In contrast to the prevalent black-box approaches, emerging 'explainability' research has improved our comprehension of the decision-making processes in CDSS using EHR data. Many studies use 'attention' mechanisms and 'graph' techniques to explain the 'causability' of machine learning models for text-related problems. In this paper, we survey the latest research on explainability and its application in CDSS and healthcare AI systems using NLP. We searched medical databases for explainability components used in NLP tasks in healthcare and extracted 26 papers relevant to this review based on their main approach to developing explainable NLP models. We excluded papers whose architectures lacked components for inherent explainability, or whose explanations came directly from medical experts, leaving 16 studies in this review. We found that attention mechanisms are the most dominant approach to explainability in healthcare AI and CDSS. There is an emerging trend toward graph and hybrid techniques, but most of the projects we studied employed attention mechanisms in different ways. The paper discusses the inner workings, merits, and issues of the underlying architectures.
To the best of our knowledge, this is among the few papers summing up the latest explainability research in the healthcare domain, mainly to support future work on NLP-based AI models in healthcare.
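The attention-based explainability idea the abstract refers to can be sketched very roughly: the softmax attention weights a model assigns to input tokens are read as per-token importance scores for a prediction. The snippet below is a purely illustrative, self-contained sketch of scaled dot-product attention over toy embeddings; the tokens, embeddings, and classification-style query are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_explanation(query, keys):
    """Scaled dot-product attention weights, read as token importances."""
    d = keys.shape[-1]
    scores = keys @ query / np.sqrt(d)  # one raw score per token
    return softmax(scores)              # weights sum to 1

rng = np.random.default_rng(0)
tokens = ["patient", "denies", "chest", "pain"]   # toy clinical-note tokens
embeddings = rng.normal(size=(4, 8))              # toy token embeddings
query = rng.normal(size=8)                        # e.g. a [CLS]-style query vector
weights = attention_explanation(query, embeddings)
for tok, w in zip(tokens, weights):
    print(f"{tok}: {w:.3f}")
```

Because the weights form a distribution over the input tokens, they can be rendered as a heatmap over the clinical text, which is the usual visualization in the surveyed systems; the paper also discusses limits of reading attention as a faithful explanation.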

CC BY-NC-ND 4.0


Paper citation in several formats:
Amjad, H.; Ashraf, M.; Sherazi, S.; Khan, S.; Moazam Fraz, M.; Hameed, T. and Bukhari, S. (2023). Attention-Based Explainability Approaches in Healthcare Natural Language Processing. In Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies - CCH; ISBN 978-989-758-631-6; ISSN 2184-4305, SciTePress, pages 689-696. DOI: 10.5220/0011927300003414

@conference{cch23,
author={Haadia Amjad and Mohammad Ashraf and Syed Sherazi and Saad Khan and Muhammad {Moazam Fraz} and Tahir Hameed and Syed Bukhari},
title={Attention-Based Explainability Approaches in Healthcare Natural Language Processing},
booktitle={Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies - CCH},
year={2023},
pages={689-696},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011927300003414},
isbn={978-989-758-631-6},
issn={2184-4305},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies - CCH
TI - Attention-Based Explainability Approaches in Healthcare Natural Language Processing
SN - 978-989-758-631-6
IS - 2184-4305
AU - Amjad, H.
AU - Ashraf, M.
AU - Sherazi, S.
AU - Khan, S.
AU - Moazam Fraz, M.
AU - Hameed, T.
AU - Bukhari, S.
PY - 2023
SP - 689
EP - 696
DO - 10.5220/0011927300003414
PB - SciTePress
