Article

Expressive Explanations of DNNs by Combining Concept Analysis with ILP

Published: 21 September 2020
DOI: 10.1007/978-3-030-58285-2_11

Abstract

Explainable AI has emerged as a key component of black-box machine learning approaches in domains with a high demand for reliability or transparency. Examples are medical assistant systems and applications subject to the General Data Protection Regulation of the European Union, which features transparency as a cornerstone. Such demands require the ability to audit the rationale behind a classifier’s decision. While visualizations are the de facto standard of explanations, they fall short in expressiveness in several ways: they cannot distinguish between different attribute manifestations of visual features (e.g. eye open vs. closed), and they cannot accurately describe the influence of the absence of features or of relations between features. An alternative would be more expressive symbolic surrogate models. However, these require symbolic inputs, which are not readily available in most computer vision tasks. In this paper we investigate how to overcome this: we use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN). The semantics of the features are mined by a concept analysis approach trained on a set of human-understandable visual concepts. The explanation is found by an Inductive Logic Programming (ILP) method and presented as first-order rules. We show that our explanation is faithful to the original black-box model. (The code for our experiments is available at https://github.com/mc-lovin-mlem/concept-embeddings-and-ilp/tree/ki2020.)
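To make the two-stage pipeline of the abstract concrete, below is a minimal sketch: a linear concept probe fitted on intermediate DNN activations (the concept-analysis step), whose detections are then symbolized into Prolog-style facts that an ILP learner could consume. All names and shapes here (eye_open, the 512-dimensional activations, the has_concept/2 predicate) are illustrative assumptions, not the paper's actual setup; the linked repository contains the real implementation.

```python
# Hedged sketch of the concept-analysis -> ILP pipeline. Random stand-ins
# replace real images, activations, and concept annotations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: concept analysis. Fit a linear probe that detects a
# human-understandable visual concept from intermediate-layer activations
# (in the spirit of TCAV / Net2Vec).
activations = rng.normal(size=(200, 512))      # one 512-d vector per image
concept_labels = rng.integers(0, 2, size=200)  # 1 = concept present (assumed given)
probe = LogisticRegression(max_iter=1000).fit(activations, concept_labels)

# Stage 2: symbolization. Emit Prolog-style background facts for an ILP
# system such as Aleph.
def concept_facts(image_id: str, activation: np.ndarray) -> list[str]:
    """Return facts like 'has_concept(img_0, eye_open).' for one image."""
    facts = []
    if probe.predict(activation.reshape(1, -1))[0] == 1:
        facts.append(f"has_concept({image_id}, eye_open).")
    return facts

print(concept_facts("img_0", activations[0]))
```

From such background facts, together with the DNN's predicted class as the target predicate, an ILP learner can induce first-order rules of the form class(X, smiling) :- has_concept(X, mouth_open). (again an illustrative rule, not one from the paper). This is what makes the resulting explanation global and verbal rather than a per-image saliency map.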


Cited By

  • (2024) Explaining deep convolutional models by measuring the influence of interpretable features in image classification. Data Mining and Knowledge Discovery 38(5), 3169–3226. DOI: 10.1007/s10618-023-00915-x. Online publication date: 1 Sep 2024.
  • (2024) A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts. Data Mining and Knowledge Discovery 38(5), 3043–3101. DOI: 10.1007/s10618-022-00867-8. Online publication date: 1 Sep 2024.
  • (2021) Unsupervised Anomaly Detection for Financial Auditing with Model-Agnostic Explanations. In: KI 2021: Advances in Artificial Intelligence, pp. 291–308. DOI: 10.1007/978-3-030-87626-5_22. Online publication date: 27 Sep 2021.


Published In

KI 2020: Advances in Artificial Intelligence: 43rd German Conference on AI, Bamberg, Germany, September 21–25, 2020, Proceedings
September 2020, 366 pages
ISBN: 978-3-030-58284-5
DOI: 10.1007/978-3-030-58285-2

Publisher

Springer-Verlag, Berlin, Heidelberg

        Author Tags

        1. Explainable AI
        2. Concept analysis
        3. Concept embeddings
        4. Inductive Logic Programming
