
AI Explainability and Acceptance: A Case Study for Underwater Mine Hunting

Published: 06 March 2024

Abstract

In critical operational contexts such as mine warfare, Automatic Target Recognition (ATR) algorithms still struggle to gain acceptance. The complexity of their decision-making hampers understanding of their predictions, even though their performance approaches that of human experts. Much research has been carried out in the field of Explainable Artificial Intelligence (XAI) to counter this “black box” effect; this field aims to provide explanations for the decisions of complex networks and thereby promote their acceptability. Most explanation methods applied to image classification networks produce heat maps, which highlight pixels according to their importance in the decision.
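To make the heat-map idea concrete, the sketch below computes an occlusion-sensitivity map for a generic image classifier: each patch of the image is hidden in turn, and the drop in the predicted class score measures how important that region was to the decision. The `predict` callable, image shape, patch size, and baseline value are illustrative assumptions, not the exact methods used in the paper.

```python
# Minimal occlusion-sensitivity heat map (illustrative sketch, hypothetical model).
import numpy as np

def occlusion_heatmap(predict, image, target_class, patch=8, baseline=0.0):
    """predict maps a batch (N, H, W, C) to class probabilities (N, K).
    Returns an (H, W) map; large values mark regions important to the decision."""
    h, w = image.shape[:2]
    reference = predict(image[None])[0, target_class]
    heat = np.zeros((h, w), dtype=np.float32)
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = baseline
            score = predict(occluded[None])[0, target_class]
            # Importance of the patch = drop in the class score when it is hidden.
            heat[top:top + patch, left:left + patch] = reference - score
    return heat
```

Such a map can then be overlaid on the input image so that an operator can see which regions drove the classifier's decision.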
In this work, we first implement several XAI methods for the automatic classification of Synthetic Aperture Sonar (SAS) images by convolutional neural networks (CNNs). These methods follow a post hoc approach, and we study and compare the heat maps they produce.
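As one illustration of a post hoc explanation of a CNN classifier, the sketch below uses the shap library's GradientExplainer (SHAP is one of the methods named in the author tags) to attribute a prediction to individual pixels. The model file, background set, and sample arrays are hypothetical placeholders, and this is not necessarily the authors' exact pipeline.

```python
# Post hoc SHAP-style heat maps for a Keras image classifier (illustrative sketch).
import numpy as np
import shap
import tensorflow as tf

model = tf.keras.models.load_model("sas_cnn.h5")    # hypothetical trained SAS classifier
background = np.load("background_tiles.npy")        # small set of representative SAS tiles
samples = np.load("tiles_to_explain.npy")           # images whose decisions we want to explain

# GradientExplainer approximates SHAP values with expected gradients.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(samples)

# Render the per-pixel attributions as heat maps over the input images.
shap.image_plot(shap_values, samples)
```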
Second, we evaluate the benefits and usefulness of explainability in an operational, collaborative framework. To do so, user tests are carried out with different levels of assistance, ranging from classification by an unaided operator to classification with an explained ATR. These tests allow us to study whether heat maps are useful in this context.
The results show that operators are divided on the utility of heat-map explanations. The presence of heat maps does not improve the quality of the classifications; on the contrary, it even increases response time. Nevertheless, half of the operators see some usefulness in heat-map explanations.



Published In

Journal of Data and Information Quality, Volume 16, Issue 1
March 2024, 187 pages
EISSN: 1936-1963
DOI: 10.1145/3613486

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 06 March 2024
Online AM: 21 December 2023
Accepted: 02 November 2023
Revised: 16 September 2023
Received: 09 December 2022
Published in JDIQ Volume 16, Issue 1


Author Tags

  1. Mine countermeasures
  2. explainability
  3. automatic target recognition
  4. SHAP

Qualifiers

  • Research-article

Funding Sources

  • Direction Générale de l’Armement (French MoD) and the Association Nationale de la Recherche Technologique
