
Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction

Published: 18 October 2024

Abstract

Artificial intelligence models face significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations of how these models make decisions and predictions, thereby supporting transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques; however, a gap remains in the literature, as no comprehensive review delves into the detailed mathematical representations and design methodologies of XAI models, and other associated aspects. This paper provides a comprehensive literature review covering common terminologies and definitions, the need for XAI, the beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods across various domains. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
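
To make concrete what "providing explanations" can look like in practice, the sketch below applies one widely used post-hoc, model-agnostic technique, permutation feature importance, to an ordinary scikit-learn classifier. It is an illustrative example only, not drawn from the surveyed paper; the dataset, model, and scikit-learn utilities are assumptions chosen for brevity.

```python
# Minimal, illustrative sketch (not from the surveyed paper): a post-hoc,
# model-agnostic explanation via permutation feature importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a tabular dataset and train an opaque ("black-box") model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# larger drops mean the model relies more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features with their mean importance scores.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<30} {result.importances_mean[idx]:.4f}")
```

Because this technique only queries the trained estimator's predictions, it can be applied to any model regardless of its internal structure, which is what "model-agnostic" means in the taxonomy of XAI methods discussed in the paper.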

Published In

Neurocomputing, Volume 599, Issue C (September 2024), 712 pages

Publisher

Elsevier Science Publishers B.V., Netherlands

Publication History

Published: 18 October 2024

Author Tags

  1. XAI
  2. Explainable artificial intelligence
  3. Interpretable deep learning
  4. Machine learning
  5. Neural networks
  6. Evaluation methods
  7. Computer vision
  8. Natural language processing
  9. NLP
  10. Transformers
  11. Time series
  12. Healthcare
  13. Autonomous cars

Qualifiers

  • Short-survey
