Explainable Machine Learning in Critical Decision Systems: Ensuring Safe Application and Correctness
Figure 1. Overview of the paper's structure, with the results obtained in each step and the connections between steps.
Figure 2. Development of the dataset across the filtering steps: of the original 508 publications, 2 were dropped as duplicates and 422 were dropped because they do not contribute to answering the RQs.
Figure 3. Distribution of publication years in the final selection of papers: most titles were selected from the IEEE Digital Library (blue) and only a few from the ACM Digital Library (orange); 2023 is the year with the most publications.
Figure 4. Keyword cloud for the keywords in the selected publications; terms related to ML are dominant.
Figure 5. Histogram of the 15 most common keywords in the literature selection; deep learning, explainable AI and machine learning are the most common keywords (25–36 counts).
Figure 6. Classification of the most common XAI methods in the examined papers within the taxonomy of XAI.
Figure 7. Grouping of the purposes pursued by XAI in critical decision systems: supporting users and model improvement are the two big groups of XAI purposes.
Figure 8. The R4VR framework for enhancing safety through XAI: the developer creates an accountable ML model using XAI in the steps Reliability, Validation and Verification; the user applies the model for safe decisions in the reverse steps Verification, Validation and Reliability.
Abstract
1. Introduction
1.1. Aims and Contributions
- RQ 1: For what purposes is XAI currently leveraged in critical decision systems?
- RQ 2: What XAI techniques are commonly applied in critical decision systems?
- RQ 3: How can XAI increase the safety of ML applications in critical decision systems?
- An overview of common XAI applications in critical decision systems
- A clustering of XAI applications in critical decision systems
- An outline of promising approaches to enhance safety of ML applications in critical decision systems
- An identification of research gaps and relevant future directions of research for enhancing safety
- A proposal of a conceptual three-layered framework to enhance safety of ML applications in critical decision systems
1.2. Methods and Structure
2. Background
2.1. Taxonomy of XAI
- Feature summary statistics: These methods provide statistics for each feature. These statistics can be as simple as the feature importance or more complex, such as pairwise feature correlation strength. One example is SHAP, which treats the output of a model as the sum of the contributions of all features and calculates a value that quantifies each feature's contribution to the model behavior [2] (see the sketch after this list).
- Visualization of feature summary: These methods provide a visualization of the features' impact on the prediction. One example is partial dependence plots (PDPs), which show how the average predicted value changes when a feature is changed [16].
- Model internals: These methods provide internal structures or rules of a model, for example, the structure of a decision tree or the weights in a linear regression [17].
- Data points: These methods provide a data point that helps the user understand which features are important for a prediction. An example is counterfactual explanations, where a data point close to the sample of interest is provided for which the predicted outcome is different (another class) [18].
- Interpretable model: These methods train explainable models that approximate the behavior of the black-box model to be explained. The surrogate model is trained on a dataset and the predictions the black-box model makes on that data. For example, neural networks can be approximated using decision trees [18].
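To make SHAP's additive decomposition concrete, here is a minimal sketch in Python, assuming the shap and scikit-learn packages are installed; the dataset and model are illustrative placeholders, not the setups used in the reviewed papers:

```python
# Minimal sketch of SHAP's additive decomposition (illustrative only):
# base value + sum of per-feature contributions recovers the prediction.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Additivity check: the expected value plus the contributions of all
# features reconstructs the model output for each sample.
predictions = model.predict(X.iloc[:100])
reconstructed = explainer.expected_value + contributions.sum(axis=1)
print(np.allclose(predictions, reconstructed, atol=1e-6))  # expected: True
```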
2.2. Critical Infrastructure
“Critical entities, as providers of essential services, play an indispensable role in the maintenance of vital societal functions or economic activities in the internal market in an increasingly interdependent Union economy.” [4]
- Every member state has to define a strategy to identify critical entities and carry out risk assessments regularly
- Entities belonging to critical infrastructure have to carry out risk assessments as well and take measures to enhance their resilience
- If a critical entity provides services in six or more member states, it receives support from the European authorities in risk assessment and in taking measures
- Member states have to provide support to entities of critical infrastructure; in turn, the Commission offers support to the member states
2.3. Critical Decision Systems
- Critical infrastructure operation and management
- Vocational training and education
- Employment, employee management and entry into self-employment
- Access to and utilization of basic private and public services and benefits
- Execution of processes ensuring compliance with the law
- Management of migration, asylum and border controls
- Support in interpretation and application of the law
- Has the potential to kill a human immediately
- Has the potential to save a human
- Has an impact on the stability of national or international financial markets
- Has an impact on the security of supply of goods vital for life
- Has the potential to affect the health of a human
- Has the potential to affect a human’s opportunities for a good and dignified life
3. Related Work
3.1. Reviews on XAI in Health Sector
3.2. Reviews on XAI in Digital Infrastructure
3.3. Reviews on XAI in Energy Sector
3.4. Reviews on XAI in Transport Sector
3.5. Summary of Related Work
- Focus on trust rather than safety: Many existing works emphasize improving transparency and user trust in ML models. However, the question of how XAI techniques can be systematically leveraged to enhance the safety of ML applications is only a minor focus of researchers.
- Lack of analysis across sectors: Many researchers already address individual domains of critical infrastructure or certain critical decision systems. A holistic analysis of XAI across different sectors of critical infrastructure is largely absent, even though these sectors share similar challenges regarding the safety and correctness of decisions.
- Lack of methodological foundations: Detailed methodological approaches on how XAI can be used to enhance the safety of ML applications through feedback mechanisms or cross-sectoral standards are missing. This also includes integrating human feedback and adapting XAI techniques to different target audiences and contexts.
4. Methodology of Literature Review
4.1. Assembling
4.2. Arranging
- Is XAI the primary topic of the paper?
- Does the paper contribute to answering our RQ?
4.3. Assessing
- Type of paper: What type of research is conducted in the paper?
- Sector of critical infrastructure: What sector of critical infrastructure (cf. Section 2.2) is mainly addressed by the researchers?
- XAI technique: What XAI techniques are applied in the paper?
4.3.1. Type of Paper
- Framework: Researchers propose a new theoretical or practical framework for solving current challenges in XAI. Usually, the framework incorporates a systematic structure and integrates different concepts and components.
- Method: Researchers propose a new method or technique to solve specific issues in XAI. In contrast to a framework, a method is more technical and tailored to specific use cases.
- Empirical study: Researchers collect data to examine certain aspects of topics related to XAI. One example is a study which examines how users perceive explanations provided by an XAI method.
- Use case: Researchers examine a certain use case in XAI. Usually, qualitative methods are applied.
- Conceptual work: Researchers develop new concepts or theories. Usually, there is no link to practical use cases.
4.3.2. Sector of Critical Infrastructure
4.3.3. XAI Method
- Shapley additive explanations (SHAP): The method treats the output of a model as the sum of the contributions of all features. SHAP calculates a value which quantifies the contribution of each feature [14].
- Local interpretable model-agnostic explanations (LIME): The method explains a prediction by building a local, explainable surrogate model based on the black-box model’s behavior [14] (see the sketch after this list).
- Class activation mapping (CAM): The method calculates the gradients of the output with respect to the extracted features or the input via back-propagation and estimates attribution scores from them. CAM is mainly applicable to CNNs for image classification [15].
- Layer-wise relevance propagation (LRP): The method breaks down the prediction of a neural network into relevance scores for the single input dimensions of an instance [108].
- Decision trees: Decision trees are interpretable models. From the tree structure of the model, decision rules can be extracted [14].
- Explain like I am 5 (ELI5): ELI5 is not a single XAI technique but a model-agnostic framework that provides different XAI techniques for generating global and local explanations [109].
- Custom: This category includes all papers which propose new XAI techniques.
- Others: This category includes all papers which use other methods that are only seldom represented in the selected publications. Examples are fuzzy logic and integrated gradients (IG).
- None: This category is for papers that do not apply XAI but only theoretically reason about the topic.
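As an illustration of the local surrogate idea behind LIME, here is a minimal sketch in Python, assuming the lime and scikit-learn packages; the dataset, model and parameter choices are illustrative placeholders:

```python
# Minimal sketch of LIME on tabular data (illustrative only): a weighted
# linear surrogate is fitted around a single instance of interest.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the black-box model on the perturbed
# samples, and fits a local linear surrogate weighted by proximity.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top-5 features with their local weights
```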
5. Use Cases of XAI in Critical Decision Systems
5.1. XAI Use Cases in Health
5.2. XAI Use Cases in Digital Infrastructure
5.3. XAI Use Cases in Transport
5.4. XAI Use Cases in Energy
6. Can XAI Be Harnessed to Enhance Safety?
6.1. Purposes of XAI in Examined Use Cases
6.1.1. XAI for Gaining Trust
6.1.2. XAI for Model Improvement
6.1.3. XAI for Gaining Insights
6.1.4. Clustering Use Cases
6.2. Enhancing Safety
6.2.1. Reliability of Decisions
6.2.2. Validation
6.2.3. Verification
6.2.4. General Perspectives on XAI for Enhancing Safety
6.3. Beyond Enhancing Safety of ML Applications
6.3.1. Uncertainty Quantification
6.3.2. Regulation and Certification of ML
6.4. The R4VR-Framework to Enhance Safety of ML Applications
How the R4VR-Framework Compares to the EU AI Act
7. Analysis and Discussion
7.1. Summary of Results
7.2. Gaps in Research and Open Challenges
7.2.1. Efficient Verification of Models for Certain Ranges
7.2.2. Combination of Models
7.2.3. Human Feedback
7.2.4. Providing Explanations to Different Users
7.2.5. Trade-Off Between Accuracy and Explainability
7.2.6. Scalability
7.2.7. Regulations and Certification Standards
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
AI | Artificial Intelligence |
CAM | Class Activation Mapping |
CP-Nets | Coloured Petri nets |
DNN | Deep neural network |
DOAJ | Directory of open access journals |
ELI5 | Explain like I am 5 |
EU | European Union |
GRU | Gated recurrent unit |
HMI | Human machine interface |
IG | Integrated Gradient |
LIME | Local interpretable model-agnostic explanations |
LRP | Layer-wise relevance propagation |
LSTM | Long short-term memory |
MDPI | Multidisciplinary Digital Publishing Institute |
ML | Machine learning |
R4VR | Reliability, validation, verification, verification, validation, reliability |
RQ | Research question |
SHAP | Shapley additive explanations |
SPAR-4-SLR | Scientific procedures and rationales for systematic literature reviews |
XAI | Explainable artificial intelligence |
Appendix A. Search String Used for IEEE Digital Library
Appendix B. Search String Used for ACM Digital Library
References
- Alimonda, N.; Guidotto, L.; Malandri, L.; Mercorio, F.; Mezzanzanica, M.; Tosi, G. A Survey on XAI for Cyber Physical Systems in Medicine. In Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26–28 October 2022; pp. 265–270. [Google Scholar] [CrossRef]
- Khan, N.; Nauman, M.; Almadhor, A.S.; Akhtar, N.; Alghuried, A.; Alhudhaif, A. Guaranteeing Correctness in Black-Box Machine Learning: A Fusion of Explainable AI and Formal Methods for Healthcare Decision-Making. IEEE Access 2024, 12, 90299–90316. [Google Scholar] [CrossRef]
- Renjith, V.; Judith, J. A Review on Explainable Artificial Intelligence for Gastrointestinal Cancer using Deep Learning. In Proceedings of the 2023 Annual International Conference on Emerging Research Areas: International Conference on Intelligent Systems (AICERA/ICIS), Kerala, India, 16–18 November 2023; pp. 1–6. [Google Scholar] [CrossRef]
- European Parliament and Council of the European Union. Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the Resilience of Critical Entities and Repealing Council Directive 2008/114/EC (Text with EEA Relevance). Official Journal of the European Union, L 333, 27 December 2022. 2022, pp. 164–196. Available online: https://eur-lex.europa.eu/eli/dir/2022/2557/oj (accessed on 17 September 2024).
- European Parliament. EU AI Act: First Regulation on Artificial Intelligence. 2023. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 28 September 2024).
- Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- Kaur, D.; Uslu, S.; Rittichier, K.J.; Durresi, A. Trustworthy Artificial Intelligence: A Review. ACM Comput. Surv. (CSUR) 2022, 55, 1–38. [Google Scholar] [CrossRef]
- Mahajan, P.; Aujla, G.S.; Krishna, C.R. Explainable Edge Computing in a Distributed AI-Powered Autonomous Vehicular Networks. In Proceedings of the 2024 IEEE International Conference on Communications Workshops (ICC Workshops), Denver, CO, USA, 9–13 March 2024; pp. 1195–1200. [Google Scholar] [CrossRef]
- Paul, S.; Vijayshankar, S.; Macwan, R. Demystifying Cyberattacks: Potential for Securing Energy Systems With Explainable AI. In Proceedings of the 2024 International Conference on Computing, Networking and Communications (ICNC), Hawaii, HI, USA, 19–22 February 2024; pp. 430–434. [Google Scholar] [CrossRef]
- Afzal-Houshmand, S.; Papamartzivanos, D.; Homayoun, S.; Veliou, E.; Jensen, C.D.; Voulodimos, A.; Giannetsos, T. Explainable Artificial Intelligence to Enhance Data Trustworthiness in Crowd-Sensing Systems. In Proceedings of the 2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), Pafos, Cyprus, 19–21 June 2023; pp. 568–576. [Google Scholar] [CrossRef]
- Moghadasi, N.; Piran, M.; Valdez, R.S.; Baek, S.; Moghaddasi, N.; Polmateer, T.L.; Lambert, J.H. Process Quality Assurance of Artificial Intelligence in Medical Diagnosis. In Proceedings of the 2024 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 8–10 May 2024; pp. 1–8. [Google Scholar] [CrossRef]
- Masud, M.T.; Keshk, M.; Moustafa, N.; Linkov, I.; Emge, D.K. Explainable Artificial Intelligence for Resilient Security Applications in the Internet of Things. IEEE Open J. Commun. Soc. 2024. [Google Scholar] [CrossRef]
- Crook, B.; Schlüter, M.; Speith, T. Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI). In Proceedings of the 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW), Hannover, Germany, 4–8 September 2023; pp. 316–324. [Google Scholar] [CrossRef]
- Molnar, C. Interpretable Machine Learning, 2nd ed.; Independently Published: Munich, Germany, 2022. [Google Scholar]
- Gizzini, A.K.; Shukor, M.; Ghandour, A.J. Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation. arXiv 2023, arXiv:2310.01837. [Google Scholar] [CrossRef]
- Das, T.; Samandar, S.; Rouphail, N.; Williams, B.; Harris, D. Examining Factors Influencing the Acceleration Behavior of Autonomous Vehicles Through Explainable AI Analysis. In Proceedings of the 2024 Smart City Symposium Prague (SCSP), Prague, Czech Republic, 23–24 May 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Adams, J.; Hagras, H. A Type-2 Fuzzy Logic Approach to Explainable AI for regulatory compliance, fair customer outcomes and market stability in the Global Financial Sector. In Proceedings of the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Jia, Y.; McDermid, J.; Lawton, T.; Habli, I. The Role of Explainability in Assuring Safety of Machine Learning in Healthcare. IEEE Trans. Emerg. Top. Comput. 2022, 10, 1746–1760. [Google Scholar] [CrossRef]
- European Commission. Critical Infrastructure Resilience at EU Level. 2024. Available online: https://home-affairs.ec.europa.eu/policies/internal-security/counter-terrorism-and-radicalisation/protection/critical-infrastructure-resilience-eu-level_en (accessed on 28 September 2024).
- Tjoa, E.; Guan, C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4793–4813. [Google Scholar] [CrossRef]
- Farkhadov, M.; Eliseev, A.; Petukhova, N. Explained Artificial Intelligence Helps to Integrate Artificial and Human Intelligence Into Medical Diagnostic Systems: Analytical Review of Publications. In Proceedings of the 2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT), Tashkent, Uzbekistan, 7–9 October 2020; pp. 1–4. [Google Scholar] [CrossRef]
- Jagatheesaperumal, S.K.; Pham, Q.V.; Ruby, R.; Yang, Z.; Xu, C.; Zhang, Z. Explainable AI Over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions. IEEE Open J. Commun. Soc. 2022, 3, 2106–2136. [Google Scholar] [CrossRef]
- Zhang, Z.; Hamadi, H.A.; Damiani, E.; Yeun, C.Y.; Taher, F. Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. IEEE Access 2022, 10, 93104–93139. [Google Scholar] [CrossRef]
- Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.; Belikov, J.; Mannor, S.; Levron, Y. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy AI 2022, 9, 100169. [Google Scholar] [CrossRef]
- Kuznietsov, A.; Gyevnar, B.; Wang, C.; Peters, S.; Albrecht, S.V. Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review. IEEE Trans. Intell. Transp. Syst. 2024, 25, 19342–19364. [Google Scholar] [CrossRef]
- Paul, J.; Lim, W.M.; O’Cass, A.; Hao, A.; Bresciani, S. Scientific Procedures and Rationales for Systematic Literature Reviews (SPAR-4-SLR). Int. J. Consum. Stud. 2021, 45, O1–O16. [Google Scholar] [CrossRef]
- European Commission, Directorate-General for Communications Networks, Content and Technology. Ethik-Leitlinien für eine Vertrauenswürdige KI (Ethics Guidelines for Trustworthy AI); Publications Office: Luxembourg, 2019. [Google Scholar] [CrossRef]
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, Version 2. Technical Report, IEEE Standards Association. 2017. Available online: https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf (accessed on 11 October 2024).
- Amin, A.; Hasan, K.; Zein-Sabatto, S.; Chimba, D.; Ahmed, I.; Islam, T. An Explainable AI Framework for Artificial Intelligence of Medical Things. In Proceedings of the 2023 IEEE Globecom Workshops (GC Wkshps), Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 2097–2102. [Google Scholar] [CrossRef]
- Oseni, A.; Moustafa, N.; Creech, G.; Sohrabi, N.; Strelzoff, A.; Tari, Z.; Linkov, I. An Explainable Deep Learning Framework for Resilient Intrusion Detection in IoT-Enabled Transportation Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1000–1014. [Google Scholar] [CrossRef]
- Shtayat, M.M.; Hasan, M.K.; Sulaiman, R.; Islam, S.; Khan, A.U.R. An Explainable Ensemble Deep Learning Approach for Intrusion Detection in Industrial Internet of Things. IEEE Access 2023, 11, 115047–115061. [Google Scholar] [CrossRef]
- Mridha, K.; Uddin, M.M.; Shin, J.; Khadka, S.; Mridha, M.F. An Interpretable Skin Cancer Classification Using Optimized Convolutional Neural Network for a Smart Healthcare System. IEEE Access 2023, 11, 41003–41018. [Google Scholar] [CrossRef]
- Jahan, S.; Alqahtani, S.; Gamble, R.F.; Bayesh, M. Automated Extraction of Security Profile Information from XAI Outcomes. In Proceedings of the 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), Toronto, ON, Canada, 26–29 September 2023; pp. 110–115. [Google Scholar] [CrossRef]
- Gu, R.; Wang, G.; Song, T.; Huang, R.; Aertsen, M.; Deprest, J.; Ourselin, S.; Vercauteren, T.; Zhang, S. CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation. IEEE Trans. Med. Imag. 2021, 40, 699–711. [Google Scholar] [CrossRef]
- Shen, Z.; Jiang, X.; Huang, X. Deep Learning-based Interpretable Detection Method for Fundus Diseases: Diagnosis and Information Mining of Diseases based on Fundus Photography Images. In Proceedings of the 2023 3rd International Conference on Bioinformatics and Intelligent Computing, Sanya, China, 10–12 February 2023; pp. 305–309. [Google Scholar] [CrossRef]
- Han, D.; Wang, Z.; Chen, W.; Zhong, Y.; Wang, S.; Zhang, H.; Yang, J.; Shi, X.; Yin, X. DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, New York, NY, USA, 15–19 November 2021; pp. 3197–3217. [Google Scholar] [CrossRef]
- Apon, T.S.; Hasan, M.M.; Islam, A.; Alam, M.G.R. Demystifying Deep Learning Models for Retinal OCT Disease Classification using Explainable AI. In Proceedings of the 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Brisbane, Australia, 8–10 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Kapcia, M.; Eshkiki, H.; Duell, J.; Fan, X.; Zhou, S.; Mora, B. ExMed: An AI Tool for Experimenting Explainable AI Techniques on Medical Data Analytics. In Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Virtual, 1–3 November 2021; pp. 841–845. [Google Scholar] [CrossRef]
- Gürbüz, E.; Turgut, Ö.; Kök, I. Explainable AI-Based Malicious Traffic Detection and Monitoring System in Next-Gen IoT Healthcare. In Proceedings of the 2023 International Conference on Smart Applications, Communications and Networking (SmartNets), Istanbul, Turkey, 25–27 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Nguyen, T.N.; Yang, H.J.; Kho, B.G.; Kang, S.R.; Kim, S.H. Explainable Deep Contrastive Federated Learning System for Early Prediction of Clinical Status in Intensive Care Unit. IEEE Access 2024, 12, 117176–117202. [Google Scholar] [CrossRef]
- Drichel, A.; Meyer, U. False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers. In Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, Hong Kong, China, 16–18 October 2023; pp. 330–345. [Google Scholar] [CrossRef]
- Friedrich, M.; Küls, J.; Findeisen, M.; Peinecke, N. HMI Design for Explainable Machine Learning Enhanced Risk Detection in Low-Altitude UAV Operations. In Proceedings of the 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC), Barcelona, Spain, 1–5 October 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Li, J.; Chen, Y.; Wang, Y.; Ye, Y.; Sun, M.; Ren, H.; Cheng, W.; Zhang, H. Interpretable Pulmonary Disease Diagnosis with Graph Neural Network and Counterfactual Explanations. In Proceedings of the 2023 2nd International Conference on Sensing, Measurement, Communication and Internet of Things Technologies (SMC-IoT), Changsha, China, 29–31 December 2023; pp. 146–154. [Google Scholar] [CrossRef]
- Gyawali, S.; Huang, J.; Jiang, Y. Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection. In Proceedings of the 2024 19th Annual System of Systems Engineering Conference (SoSE), Tacoma, WA, USA, 23–26 June 2024; pp. 92–97. [Google Scholar] [CrossRef]
- Dutta, J.; Puthal, D.; Yeun, C.Y. Next Generation Healthcare with Explainable AI: IoMT-Edge-Cloud Based Advanced eHealth. In Proceedings of the GLOBECOM 2023—2023 IEEE Global Communications Conference, Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 7327–7332. [Google Scholar] [CrossRef]
- Haque, E.; Hasan, K.; Ahmed, I.; Alam, M.S.; Islam, T. Towards an Interpretable AI Framework for Advanced Classification of Unmanned Aerial Vehicles (UAVs). In Proceedings of the 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 6–9 January 2024; pp. 644–645. [Google Scholar] [CrossRef]
- Astolfi, D.; De Caro, F.; Vaccaro, A. Wind Power Applications of eXplainable Artificial Intelligence Techniques. In Proceedings of the 2023 AEIT International Annual Conference (AEIT), Rome, Italy, 5–7 October 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Porambage, P.; Pinola, J.; Rumesh, Y.; Tao, C.; Huusko, J. XcARet: XAI based Green Security Architecture for Resilient Open Radio Access Networks in 6G. In Proceedings of the 2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), Gothenburg, Sweden, 6–9 June 2023; pp. 699–704. [Google Scholar] [CrossRef]
- Tahmassebi, A.; Martin, J.; Meyer-Baese, A.; Gandomi, A.H. An Interpretable Deep Learning Framework for Health Monitoring Systems: A Case Study of Eye State Detection using EEG Signals. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; pp. 211–218. [Google Scholar] [CrossRef]
- Hamilton, D.; Kornegay, K.; Watkins, L. Autonomous Navigation Assurance with Explainable AI and Security Monitoring. In Proceedings of the 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 13–15 October 2020; pp. 1–7. [Google Scholar] [CrossRef]
- Kommineni, S.; Muddana, S.; Senapati, R. Explainable Artificial Intelligence based ML Models for Heart Disease Prediction. In Proceedings of the 2024 3rd International Conference on Computational Modelling, Simulation and Optimization (ICCMSO), Phuket, Thailand, 14–16 June 2024; pp. 160–164. [Google Scholar] [CrossRef]
- Tan, B.; Zhao, J.; Su, T.; Huang, Q.; Zhang, Y.; Zhang, H. Explainable Bayesian Neural Network for Probabilistic Transient Stability Analysis Considering Wind Energy. In Proceedings of the 2022 IEEE Power & Energy Society General Meeting (PESGM), Austin, TX, USA, 17–21 July 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Nazat, S.; Li, L.; Abdallah, M. XAI-ADS: An Explainable Artificial Intelligence Framework for Enhancing Anomaly Detection in Autonomous Driving Systems. IEEE Access 2024, 12, 48583–48607. [Google Scholar] [CrossRef]
- Sutthithatip, S.; Perinpanayagam, S.; Aslam, S. (Explainable) Artificial Intelligence in Aerospace Safety-Critical Systems. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022; pp. 1–12. [Google Scholar] [CrossRef]
- Rožman, J.; Hagras, H.; Andreu-Perez, J.; Clarke, D.; Müeller, B.; Fitz, S. A Type-2 Fuzzy Logic Based Explainable AI Approach for the Easy Calibration of AI models in IoT Environments. In Proceedings of the 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Luxembourg, 11–14 July 2021; pp. 1–8. [Google Scholar] [CrossRef]
- Zhang, X.; Han, L.; Zhu, W.; Sun, L.; Zhang, D. An Explainable 3D Residual Self-Attention Deep Neural Network for Joint Atrophy Localization and Alzheimer’s Disease Diagnosis Using Structural MRI. IEEE J. Biomed. Health Inform. 2022, 26, 5289–5297. [Google Scholar] [CrossRef]
- Ren, C.; Xu, Y.; Zhang, R. An Interpretable Deep Learning Method for Power System Transient Stability Assessment via Tree Regularization. IEEE Trans. Power Syst. 2022, 37, 3359–3369. [Google Scholar] [CrossRef]
- Jing, Y.; Liu, H.; Guo, R. An Interpretable Soft Sensor Model for Power Plant Process Based on Deep Learning. In Proceedings of the 2023 IEEE 7th Conference on Energy Internet and Energy System Integration (EI2), Hangzhou, China, 15–18 December 2023; pp. 2079–2085. [Google Scholar] [CrossRef]
- Watson, M.; Al Moubayed, N. Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 8180–8187. [Google Scholar] [CrossRef]
- Liu, S.; Müller, S. Reliability of Deep Neural Networks for an End-to-End Imitation Learning-Based Lane Keeping. IEEE Trans. Intell. Transp. Syst. 2023, 24, 13768–13786. [Google Scholar] [CrossRef]
- Manju, V.N.; Aparna, N.; Krishna Sowjanya, K. Decision Tree-Based Explainable AI for Diagnosis of Chronic Kidney Disease. In Proceedings of the 2023 5th International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 3–5 August 2023; pp. 947–952. [Google Scholar] [CrossRef]
- Rodríguez-Barroso, N.; Del Ser, J.; Luzón, M.V.; Herrera, F. Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability. In Proceedings of the 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Yokohama, Japan, 30 June–5 July 2024; pp. 1–8. [Google Scholar] [CrossRef]
- Shukla, A.; Upadhyay, S.; Bachan, P.R.; Bera, U.N.; Kshirsagar, R.; Nathani, N. Dynamic Explainability in AI for Neurological Disorders: An Adaptive Model for Transparent Decision-Making in Alzheimer’s Disease Diagnosis. In Proceedings of the 2024 IEEE 13th International Conference on Communication Systems and Network Technologies (CSNT), Jabalpur, India, 6–7 April 2024; pp. 980–986. [Google Scholar] [CrossRef]
- Haque, E.; Hasan, K.; Ahmed, I.; Alam, M.S.; Islam, T. Enhancing UAV Security Through Zero Trust Architecture: An Advanced Deep Learning and Explainable AI Analysis. In Proceedings of the 2024 International Conference on Computing, Networking and Communications (ICNC), Hawaii, HI, USA, 19–22 February 2024; pp. 463–467. [Google Scholar] [CrossRef]
- Duamwan, L.M.; Bird, J.J. Explainable AI for Medical Image Processing: A Study on MRI in Alzheimer’s Disease. In Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments, New York, NY, USA, 5–7 July 2023; pp. 480–484. [Google Scholar] [CrossRef]
- Ray, I.; Sreedharan, S.; Podder, R.; Bashir, S.K.; Ray, I. Explainable AI for Prioritizing and Deploying Defenses for Cyber-Physical System Resiliency. In Proceedings of the 2023 5th IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), Atlanta, GA, USA, 1–3 November 2023; pp. 184–192. [Google Scholar] [CrossRef]
- Hellen, N.; Marvin, G. Explainable AI for Safe Water Evaluation for Public Health in Urban Settings. In Proceedings of the 2022 International Conference on Innovations in Science, Engineering and Technology (ICISET), Kumira, Bangladesh, 25–28 February 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Rjoub, G.; Bentahar, J.; Wahab, O.A. Explainable AI-based Federated Deep Reinforcement Learning for Trusted Autonomous Driving. In Proceedings of the 2022 International Wireless Communications and Mobile Computing (IWCMC), Dubrovnik, Croatia, 30 May–3 June 2022; pp. 318–323. [Google Scholar] [CrossRef]
- Bi, C.; Luo, Y.; Lu, C. Explainable Artificial Intelligence for Power System Security Assessment: A Case Study on Short-Term Voltage Stability. In Proceedings of the 2023 IEEE Belgrade PowerTech, Belgrade, Serbia, 25–29 June 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Wang, K.; Yin, S.; Wang, Y.; Li, S. Explainable Deep Learning for Medical Image Segmentation with Learnable Class Activation Mapping. In Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning, Shanghai, China, 17–19 May 2023; pp. 210–215. [Google Scholar] [CrossRef]
- Wickramasinghe, C.S.; Amarasinghe, K.; Marino, D.L.; Rieger, C.; Manic, M. Explainable Unsupervised Machine Learning for Cyber-Physical Systems. IEEE Access 2021, 9, 131824–131843. [Google Scholar] [CrossRef]
- Yang, J.; Tang, D.; Yu, J.; Zhang, J.; Liu, H. Explaining Anomalous Events in Flight Data of UAV With Deep Attention-Based Multi-Instance Learning. IEEE Trans. Veh. Technol. 2024, 73, 107–119. [Google Scholar] [CrossRef]
- Rezazadeh, F.; Chergui, H.; Mangues-Bafalluy, J. Explanation-Guided Deep Reinforcement Learning for Trustworthy 6G RAN Slicing. In Proceedings of the 2023 IEEE International Conference on Communications Workshops (ICC Workshops), Rome, Italy, 28 May–1 June 2023; pp. 1026–1031. [Google Scholar] [CrossRef]
- Kalakoti, R.; Nõmm, S.; Bahsi, H. Improving Transparency and Explainability of Deep Learning Based IoT Botnet Detection Using Explainable Artificial Intelligence (XAI). In Proceedings of the 2023 International Conference on Machine Learning and Applications (ICMLA), Jacksonville, FL, USA, 15–17 December 2023; pp. 595–601. [Google Scholar] [CrossRef]
- Ouhssini, M.; Afdel, K.; Akouhar, M.; Agherrabi, E.; Abarda, A. Interpretable Deep Learning for DDoS Defense: A SHAP-based Approach in Cloud Computing. In Proceedings of the 2024 International Conference on Circuit, Systems and Communication (ICCSC), Fez, Morocco, 28–29 June 2024; pp. 1–8. [Google Scholar] [CrossRef]
- Reza, M.T.; Ahmed, F.; Sharar, S.; Rasel, A.A. Interpretable Retinal Disease Classification from OCT Images Using Deep Neural Network and Explainable AI. In Proceedings of the 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, 14–16 September 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Rani, J.V.; Saeed Ali, H.A.; Jakka, A. IoT Network Intrusion Detection: An Explainable AI Approach in Cybersecurity. In Proceedings of the 2023 4th International Conference on Communication, Computing and Industry 6.0 (C216), Bangalore, India, 15–16 December 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Krishnaveni, S.; Sivamohan, S.; Chen, T.M.; Sathiyanarayanan, M. NexGuard: Industrial Cyber-Physical System Défense Using Ensemble Feature Selection and Explainable Deep Learning Techniques. In Proceedings of the 2023 2nd International Conference on Futuristic Technologies (INCOFT), Belagavi, India, 24–26 November 2023; pp. 1–10. [Google Scholar] [CrossRef]
- Cavaliere, F.; Cioppa, A.D.; Marcelli, A.; Parziale, A.; Senatore, R. Parkinson’s Disease Diagnosis: Towards Grammar-based Explainable Artificial Intelligence. In Proceedings of the 2020 IEEE Symposium on Computers and Communications (ISCC), Rennes, France, 7–10 July 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Feifel, P.; Bonarens, F.; Köster, F. Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 20–25 June 2021; pp. 29–37. [Google Scholar] [CrossRef]
- Kur, J.; Chen, J.; Huang, J. Scalable Industrial Control System Analysis via XAI-Based Gray-Box Fuzzing. In Proceedings of the 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), Kirchberg, Luxembourg, 11–15 September 2023; pp. 1803–1807. [Google Scholar] [CrossRef]
- Ahmad Khan, M.; Khan, M.; Dawood, H.; Dawood, H.; Daud, A. Secure Explainable-AI Approach for Brake Faults Prediction in Heavy Transport. IEEE Access 2024, 12, 114940–114950. [Google Scholar] [CrossRef]
- Duell, J.; Fan, X.; Burnett, B.; Aarts, G.; Zhou, S.M. A Comparison of Explanations Given by Explainable Artificial Intelligence Methods on Analysing Electronic Health Records. In Proceedings of the 2021 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), Virtual, 27–30 July 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Wu, W.; Keller, J.M.; Skubic, M.; Popescu, M. Explainable AI for Early Detection of Health Changes Via Streaming Clustering. In Proceedings of the 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Padua, Italy, 18–23 July 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Yukta; Biswas, A.P.; Kashyap, S. Explainable AI for Healthcare Diagnosis in Renal Cancer. In Proceedings of the 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0, Raigarh, India, 5–7 June 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Okolo, C.T. Navigating the Limits of AI Explainability: Designing for Novice Technology Users in Low-Resource Settings. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 8–10 August 2023; pp. 959–961. [Google Scholar] [CrossRef]
- Ketata, F.; Masry, Z.A.; Yacoub, S.; Zerhouni, N. A Methodology for Reliability Analysis of Explainable Machine Learning: Application to Endocrinology Diseases. IEEE Access 2024, 12, 101921–101935. [Google Scholar] [CrossRef]
- Pawlicki, M.; Pawlicka, A.; Kozik, R.; Choraś, M. Explainability versus Security: The Unintended Consequences of xAI in Cybersecurity. In Proceedings of the 2nd ACM Workshop on Secure and Trustworthy Deep Learning Systems, New York, NY, USA, 2–20 July 2024; pp. 1–7. [Google Scholar] [CrossRef]
- Vuppala, S.K.; Behera, M.; Jack, H.; Bussa, N. Explainable Deep Learning Methods for Medical Imaging Applications. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020; pp. 334–339. [Google Scholar] [CrossRef]
- Solano-Kamaiko, I.R.; Mishra, D.; Dell, N.; Vashistha, A. Explorable Explainable AI: Improving AI Understanding for Community Health Workers in India. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 11–16 May 2024. [Google Scholar] [CrossRef]
- Hu, Q.; Liu, W.; Liu, Y.; Liu, Z. Interpretability Analysis of Pre-trained Convolutional Neural Networks for Medical Diagnosis. In Proceedings of the 2nd International Conference on Artificial Intelligence, Big Data and Algorithms (CAIBDA 2022), Nanjing, China, 17–19 June 2022; pp. 1–8. [Google Scholar]
- Masood, U.; Farooq, H.; Imran, A.; Abu-Dayya, A. Interpretable AI-Based Large-Scale 3D Pathloss Prediction Model for Enabling Emerging Self-Driving Networks. IEEE Trans. Mob. Comput. 2023, 22, 3967–3984. [Google Scholar] [CrossRef]
- Tabassum, S.; Parvin, N.; Hossain, N.; Tasnim, A.; Rahman, R.; Hossain, M.I. IoT Network Attack Detection Using XAI and Reliability Analysis. In Proceedings of the 2022 25th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 17–19 December 2022; pp. 176–181. [Google Scholar] [CrossRef]
- Oba, Y.; Tezuka, T.; Sanuki, M.; Wagatsuma, Y. Interpretable Prediction of Diabetes from Tabular Health Screening Records Using an Attentional Neural Network. In Proceedings of the 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), Porto, Portugal, 6–9 October 2021; pp. 1–11. [Google Scholar] [CrossRef]
- Srivastava, D.; Pandey, H.; Agarwal, A.K.; Sharma, R. Opening the Black Box: Explainable Machine Learning for Heart Disease Patients. In Proceedings of the 2023 International Conference on Advanced Computing Technologies and Applications (ICACTA), Mumbai, India, 6–7 October 2023; pp. 1–5. [Google Scholar] [CrossRef]
- Sherry, L.; Baldo, J.; Berlin, B. Design of Flight Guidance and Control Systems Using Explainable AI. In Proceedings of the 2021 Integrated Communications Navigation and Surveillance Conference (ICNS), Virtual Event, 20–22 April 2021; pp. 1–10. [Google Scholar] [CrossRef]
- Sutthithatip, S.; Perinpanayagam, S.; Aslam, S.; Wileman, A. Explainable AI in Aerospace for Enhanced System Performance. In Proceedings of the 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), San Antonio, TX, USA, 3–7 October 2021; pp. 1–7. [Google Scholar] [CrossRef]
- Sun, S.C.; Guo, W. Approximate Symbolic Explanation for Neural Network Enabled Water-Filling Power Allocation. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerpen, Belgium, 27–31 May 2020; pp. 1–4. [Google Scholar] [CrossRef]
- Zhang, K.; Xu, P.; Zhang, J. Explainable AI in Deep Reinforcement Learning Models: A SHAP Method Applied in Power System Emergency Control. In Proceedings of the 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2), Wuhan, China, 30 October–1 November 2020; pp. 711–716. [Google Scholar] [CrossRef]
- Lee, H.; Lim, H.; Lee, B. Explainable AI-based approaches for power quality prediction in distribution networks considering the uncertainty of renewable energy. In Proceedings of the 27th International Conference on Electricity Distribution (CIRED 2023), Rome, Italy, 12–15 June 2023; Volume 2023, pp. 584–588. [Google Scholar] [CrossRef]
- Mahamud, A.H.; Dey, A.K.; Sajedul Alam, A.N.M.; Alam, M.G.R.; Zaman, S. Implementation of Explainable AI in Mental Health Informatics: Suicide Data of the United Kingdom. In Proceedings of the 2022 12th International Conference on Electrical and Computer Engineering (ICECE), Dhaka, Bangladesh, 21–23 December 2022; pp. 457–460. [Google Scholar] [CrossRef]
- Brusini, L.; Cruciani, F.; Dall’Aglio, G.; Zajac, T.; Boscolo Galazzo, I.; Zucchelli, M.; Menegaz, G. XAI-Based Assessment of the AMURA Model for Detecting Amyloid-β and Tau Microstructural Signatures in Alzheimer’s Disease. IEEE J. Transl. Eng. Health Med. 2024, 12, 569–579. [Google Scholar] [CrossRef]
- Price, J.; Yamazaki, T.; Fujihara, K.; Sone, H. XGBoost: Interpretable Machine Learning Approach in Medicine. In Proceedings of the 2022 5th World Symposium on Communication Engineering (WSCE), Nagoya, Japan, 16–18 September 2022; pp. 109–113. [Google Scholar] [CrossRef]
- Zahoor, K.; Bawany, N.Z.; Ghani, U. Explainable AI for Healthcare: An Approach Towards Interpretable Healthcare Models. In Proceedings of the 2023 24th International Arab Conference on Information Technology (ACIT), Ajman, United Arab Emirates, 6–8 December 2023; pp. 1–7. [Google Scholar] [CrossRef]
- Abella, J.; Perez, J.; Englund, C.; Zonooz, B.; Giordana, G.; Donzella, C.; Cazorla, F.J.; Mezzetti, E.; Serra, I.; Brando, A.; et al. SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI. In Proceedings of the 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerpen, Belgium, 17–19 April 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Seetharaman, T.; Sharma, V.; Balamurugan, B.; Grover, V.; Agnihotri, A. An Efficient and Robust Explainable Artificial Intelligence for Securing Smart Healthcare System. In Proceedings of the 2023 Second International Conference on Smart Technologies for Smart Nation (SmartTechCon), Singapore, 18–19 August 2023; pp. 1066–1071. [Google Scholar] [CrossRef]
- Li, B.; Qi, P.; Liu, B.; Di, S.; Liu, J.; Pei, J.; Yi, J.; Zhou, B. Trustworthy AI: From Principles to Practices. ACM Comput. Surv. 2023, 55, 1–46. [Google Scholar] [CrossRef]
- Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K.R.; Samek, W. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers. In Proceedings of the Artificial Neural Networks and Machine Learning—ICANN 2016, Barcelona, Spain, 6–9 September 2016; pp. 63–71. [Google Scholar] [CrossRef]
- Korobov, M.; Lopuhin, K. ELI5 Documentation: Overview. 2024. Available online: https://eli5.readthedocs.io/en/latest/overview.html (accessed on 11 October 2024).
- Onyeaka, H.; Anumudu, C.K.; Al-Sharify, Z.T.; Egele-Godswill, E.; Mbaegbu, P. COVID-19 pandemic: A review of the global lockdown and its far-reaching effects. Sci. Prog. 2021, 104, 00368504211019854. [Google Scholar] [CrossRef] [PubMed]
- Mercaldo, F.; Brunese, L.; Cesarelli, M.; Martinelli, F.; Santone, A. Respiratory Disease Detection through Spectogram Analysis with Explainable Deep Learning. In Proceedings of the 2023 8th International Conference on Smart and Sustainable Technologies (SpliTech), Split, Croatia, 20–23 June 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Kalakoti, R.; Bahsi, H.; Nõmm, S. Improving IoT Security With Explainable AI: Quantitative Evaluation of Explainability for IoT Botnet Detection. IEEE Internet Things J. 2024, 11, 18237–18254. [Google Scholar] [CrossRef]
- European Parliament and Council of the European Union. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on In Vitro Diagnostic Medical Devices and Repealing Directive 98/79/EC and Commission Decision 2010/227/EU. Official Journal of the European Union. 2017. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0746 (accessed on 18 October 2024).
- Patel, U.; Patel, V. A comprehensive review: Active learning for hyperspectral image classifications. Earth Sci. Inform. 2023, 16, 1975–1991. [Google Scholar] [CrossRef]
Inclusion Criteria | Exclusion Criteria |
---|---|
Written in English | Not publicly available |
Conference paper or journal | Book, magazine, inbook, patent, etc. |
 | Published before 2020 |
Term for AI | Term for Explainability | Term for Critical |
---|---|---|
machine learning, ml, deep learning, artificial intelligence, ai, neural network, supervised learning, xai | xai, explanation, explainable, explaining, interpretable, interpretability, explanations | critical, security, safety, legal, ethic, civil protection, nuclear, defense, public administration, autonomous driving, autonomous vehicle, autonomous vehicular, self driving, medicine, medical, health, disease, healthcare, hospital, 5g, 6g, iot, internet of things, cps, cyber physical system, digital infrastructure, control, industry 4.0, agriculture, food supply, energy, electricity, gas, oil, power, aerospace, uav, wastewater, drinking water, transport, transportation, banking, financial, telecommunication, space |
Type of Research | Number of Publications | Publications |
---|---|---|
Framework | 31 | [2,8,9,17,18,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54] |
Method | 28 | [55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82] |
Empirical study | 15 | [83,84,85,86,87,88,89,90,91,92,93,94,95,96,97] |
Use case | 8 | [16,98,99,100,101,102,103,104] |
Conceptual work | 2 | [20,105] |
Sector | Number of Publications | Publications |
---|---|---|
Health | 38 | [2,18,20,29,32,34,35,37,38,39,40,45,49,51,56,59,61,62,63,65,70,76,79,84,85,86,87,89,90,91,94,95,101,102,103,104,106,107] |
Digital infrastructure | 20 | [30,31,33,36,41,44,48,55,60,66,71,73,74,75,77,78,81,88,92,93] |
Transport | 14 | [8,16,42,46,50,53,54,64,68,72,80,82,96,97] |
Energy | 9 | [9,47,52,57,58,69,98,99,100] |
Drinking water | 1 | [67] |
Financial market infrastructure | 1 | [17] |
General | 1 | [105] |
XAI Method | Number of Publications | Publications |
---|---|---|
SHAP | 32 | [2,8,29,30,31,39,44,45,47,49,51,52,53,58,61,67,68,73,75,76,78,82,83,84,87,92,94,95,99,101,102,104] |
LIME | 18 | [2,9,29,31,33,37,39,40,44,45,51,65,76,77,82,83,85,87,100,102,104] |
Others | 13 | [17,18,39,41,42,52,55,60,62,71,79,80,81,91,107] |
CAM | 8 | [29,32,35,42,56,70,85,89,104] |
Custom | 6 | [34,36,66,69,72,106] |
None | 5 | [48,74,90,97,105] |
Decision Trees | 4 | [50,57,77,103] |
ELI5 | 2 | [39,76] |
LRP | 2 | [41,42] |