Abstract
Context-aware systems require context information, which in turn should rest on high-quality data. Object detection is one particular area that enables context information from the environment to be processed. Ensuring high-quality data is crucial for machine learning methods to detect objects with high precision. This paper presents a framework for explanation-aware visualization and adjudication in object detection. It integrates the user into a semi-automatic verification and adjudication process, in which targeted information is conveyed through visualization and explanation methods. We discuss a tool supporting such approaches and present first results and perspectives.
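To make the semi-automatic verification and adjudication idea concrete, the following minimal sketch routes each model detection to one of three queues: auto-accept, auto-reject, or human adjudication (where the user would be shown the detection together with an explanation overlay such as a saliency map). The `Detection` structure, the `triage` function, and the thresholds are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # predicted class
    confidence: float  # model score in [0, 1]
    box: tuple         # (x1, y1, x2, y2) pixel coordinates

def triage(detections, accept_thr=0.9, reject_thr=0.3):
    """Split detections into auto-accepted, auto-rejected, and
    human-adjudication queues based on model confidence."""
    accepted, rejected, review = [], [], []
    for d in detections:
        if d.confidence >= accept_thr:
            accepted.append(d)       # trusted without user input
        elif d.confidence < reject_thr:
            rejected.append(d)       # discarded without user input
        else:
            review.append(d)         # presented to the user with an explanation
    return accepted, rejected, review

dets = [
    Detection("object", 0.97, (10, 10, 40, 40)),
    Detection("object", 0.55, (50, 12, 80, 44)),
    Detection("object", 0.12, (90, 15, 120, 47)),
]
accepted, rejected, review = triage(dets)
```

Only the middle band of uncertain detections reaches the human reviewer, which is where visualization and explanation methods can transport the targeted information the abstract refers to.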
A. G. Chowdhury and D. Massanés contributed equally to this work.
Acknowledgements
This work has been supported by the funded project FRED, German Federal Ministry for Economic Affairs and Climate Action (BMWK), FKZ: 01MD22003E.
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Chowdhury, A.G., Massanés, D., Meinert, S., Atzmueller, M. (2024). A Framework for Explanation-Aware Visualization and Adjudication in Object Detection: First Results and Perspectives. In: Ferrández Vicente, J.M., Val Calvo, M., Adeli, H. (eds) Artificial Intelligence for Neuroscience and Emotional Systems. IWINAC 2024. Lecture Notes in Computer Science, vol 14674. Springer, Cham. https://doi.org/10.1007/978-3-031-61140-7_47
Print ISBN: 978-3-031-61139-1
Online ISBN: 978-3-031-61140-7