Abstract
When AI interacts with humans in complex environments, such as aerospace manufacturing, operational safety is of paramount importance. Trustworthiness of AI needs to be ensured through, among other things, explainability of its behaviour and rationale, which remains a challenge for current deep neural network-based systems.
We tackle the knowledge comprehensibility aspect of intrinsic explainability by proposing a concept-level environment awareness model that combines complementary knowledge sources: statistical learning using dedicated property detectors built on publicly available software, and crowd-sourced common-sense knowledge graphs. Our approach also addresses data-frugal learning, typical of environments with highly specific, purpose-built artefacts. We adopt Gärdenfors’s Conceptual Spaces as a cognitively motivated knowledge representation framework and apply our typicality quantification model in a use case on interpretable classification of manufacturing artefacts.
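As a minimal illustration of how typicality quantification can work in a conceptual space, the Python sketch below scores an observation against a category prototype over a few quality dimensions. The dimension names, salience weights, and exponential distance decay are illustrative assumptions on our part, not the exact formulation used in the paper.

```python
import math

def typicality(observation, prototype, weights):
    """Graded typicality in (0, 1]: exponential decay of the
    salience-weighted Euclidean distance to the category prototype."""
    distance = math.sqrt(sum(
        weights[dim] * (observation[dim] - prototype[dim]) ** 2
        for dim in prototype
    ))
    return math.exp(-distance)

# Hypothetical 'lemon' prototype over normalised quality dimensions.
lemon_prototype = {"hue": 0.15, "elongation": 0.60, "size": 0.30}
weights = {"hue": 2.0, "elongation": 1.0, "size": 0.5}  # per-dimension salience

observed = {"hue": 0.17, "elongation": 0.55, "size": 0.35}
print(f"typicality = {typicality(observed, lemon_prototype, weights):.3f}")
```

An observation close to the prototype on the highly weighted dimensions scores near 1, while distant ones decay towards 0, giving graded rather than binary category membership.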
Notes
- 1. Cf. algorithmic and model transparency [26].
- 2. Each of us has a unique conceptual space arising from our own experience of the world. Thus, category prototypes may differ across individuals, and so may the exact meaning of symbols, whilst still retaining the properties necessary for effective natural language communication.
- 3. Observed as a natural kind [37] and in its natural state (i.e., not painted over or denoting an arbitrary lemon-like artefact).
- 4. https://fastapi.tiangolo.com (accessed on 23 May 2024).
- 5. https://cyberbotics.com (accessed on 18 February 2022).
- 6. https://docs.omniverse.nvidia.com/isaacsim/latest (accessed on 18 March 2024).
- 7. E.g., https://pixabay.com/photos/tool-equipment-work-craft-allen-379596 (accessed on 10 November 2023).
- 8. https://github.com/fengsp/color-thief-py (accessed on 27 November 2023); see the property-detector sketch after this list.
- 9. http://api.conceptnet.io (accessed on 22 March 2024); see the query sketch after this list.
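To make the two complementary knowledge sources from the abstract concrete, two small sketches follow. The first uses the color-thief-py library from note 8 as a simple colour-property detector; the image filename is hypothetical.

```python
from colorthief import ColorThief  # pip install colorthief

# Dominant colour and a small palette from an artefact photograph
# ('part.jpg' is a hypothetical filename). RGB triples like these can
# populate the colour dimensions of a conceptual space.
thief = ColorThief("part.jpg")
dominant_rgb = thief.get_color(quality=10)      # e.g. (212, 180, 40)
palette_rgb = thief.get_palette(color_count=3)  # top-3 palette colours
print(dominant_rgb, palette_rgb)
```

The second queries the public ConceptNet REST API from note 9 for crowd-sourced property assertions. The endpoint and JSON shape follow the ConceptNet 5 API; the helper function, its parameters, and the choice of the /r/HasProperty relation are our own for illustration.

```python
import requests  # pip install requests

def concept_properties(concept, lang="en", limit=10):
    """Fetch (property, weight) pairs asserted about a concept
    via ConceptNet's /r/HasProperty relation."""
    resp = requests.get(
        "http://api.conceptnet.io/query",
        params={"start": f"/c/{lang}/{concept}",
                "rel": "/r/HasProperty", "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return [(e["end"]["label"], e["weight"]) for e in resp.json()["edges"]]

for prop, weight in concept_properties("lemon"):
    print(f"lemon HasProperty {prop} (weight {weight:.2f})")
```

Edge weights from ConceptNet give a rough prior on how salient a property is for a concept, which is one way such graphs can complement learned property detectors.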
References
Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fus. 58, 82–115 (2020)
Bellmund, J.L., Gärdenfors, P., Moser, E.I., Doeller, C.F.: Navigating cognition: spatial codes for human thinking. Science 362(6415), eaat6766 (2018)
Bengio, Y., et al.: A meta-transfer objective for learning to disentangle causal mechanisms. arXiv preprint arXiv:1901.10912 (2019)
Bird, S., Klein, E., Loper, E.: Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc. (2009)
Burgess, C.P., et al.: Understanding disentangling in β-VAE. arXiv preprint arXiv:1804.03599 (2018)
Constantinescu, A.O., O’Reilly, J.X., Behrens, T.E.: Organizing conceptual knowledge in humans with a gridlike code. Science 352(6292), 1464–1468 (2016)
Croft, W., Cruse, D.A.: Cognitive Linguistics. Cambridge University Press (2004)
EASA: Artificial intelligence roadmap 2.0 (2023). https://www.easa.europa.eu/en/downloads/137919/en. Accessed 23 May 2024
European Commission High-level Expert Group on Artificial Intelligence: Ethics guidelines for Trustworthy AI. European Commission (2019). https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Galetić, V.: Formalisation and quantification of a cognitively motivated conceptual space model based on the prototype theory. Ph.D. thesis, University of Zagreb (2016)
Galetić, V., Nottle, A.: Inherently interpretable knowledge representation for a trustworthy artificially intelligent agent teaming with humans in industrial environments. In: AIC, pp. 30–45 (2022)
Galetić, V., Nottle, A.: Flexible and inherently comprehensible knowledge representation for data-efficient learning and trustworthy human-machine teaming in manufacturing environments. arXiv preprint arXiv:2305.11597 (2023)
Gärdenfors, P.: Conceptual Spaces: The Geometry of Thought. MIT Press (2004)
Gärdenfors, P.: The Geometry of Meaning: Semantics Based on Conceptual Spaces. MIT Press (2014)
Gärdenfors, P., Williams, M.A.: Reasoning about categories in conceptual spaces. In: IJCAI, pp. 385–392. Citeseer (2001)
Girshick, R., Donahue, J., Darrell, T., Malik, J.: Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 38(1), 142–158 (2015)
Goyal, A., Bengio, Y.: Inductive biases for deep learning of higher-level cognition. Proc. Roy. Soc. A 478(2266), 20210068 (2022)
Hafting, T., Fyhn, M., Molden, S., Moser, M.B., Moser, E.I.: Microstructure of a spatial map in the entorhinal cortex. Nature 436(7052), 801–806 (2005)
Kirillov, A., et al.: Segment anything. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4015–4026 (2023)
Kornblith, H.: Inductive Inference and Its Natural Ground: An Essay in Naturalistic Epistemology. MIT Press (1995)
Kriegeskorte, N., Douglas, P.K.: Cognitive computational neuroscience. Nat. Neurosci. 21(9), 1148–1160 (2018)
Lake, B.M., Ullman, T.D., Tenenbaum, J.B., Gershman, S.J.: Building machines that learn and think like people. Behav. Brain Sci. 40 (2017)
Lakoff, G.: Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press (2008)
Langacker, R.W.: Foundations of Cognitive Grammar, Volume I: Theoretical Prerequisites. Stanford University Press (1987)
Lenat, D.B., Guha, R.V.: Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley Longman Publishing Co., Inc. (1989)
Lipton, Z.C.: The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16(3), 31–57 (2018)
Liu, W., et al.: SSD: single shot multibox detector. In: Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I, pp. 21–37. Springer (2016)
Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 30 (2017)
Luo, S., Bimbo, J., Dahiya, R., Liu, H.: Robotic tactile perception of object properties: a review. Mechatronics 48, 54–67 (2017)
Malt, B.C.: An on-line investigation of prototype and exemplar strategies in classification. J. Exp. Psychol. Learn. Mem. Cogn. 15(4), 539 (1989)
Miller, G.A.: WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Morgenstern, Y., Hartmann, F., Schmidt, F., Tiedemann, H., Prokott, E., Maiello, G., Fleming, R.W.: An image-computable model of human visual shape similarity. PLoS Comput. Biol. 17(6), e1008981 (2021)
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
Rosch, E., Mervis, C.B.: Family resemblances: studies in the internal structure of categories. Cogn. Psychol. 7(4), 573–605 (1975)
Rosch, E., Mervis, C.B., Gray, W.D., Johnson, D.M., Boyes-Braem, P.: Basic objects in natural categories. Cogn. Psychol. 8(3), 382–439 (1976)
Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 (2013)
Sloman, S.A., Love, B.C., Ahn, W.K.: Feature centrality and conceptual coherence. Cogn. Sci. 22(2), 189–228 (1998)
Speer, R., Chin, J., Havasi, C.: ConceptNet 5.5: an open multilingual graph of general knowledge. In: Thirty-First AAAI Conference on Artificial Intelligence (2017)
Tenenbaum, J.B., Griffiths, T.L., Kemp, C.: Theory-based Bayesian models of inductive learning and reasoning. Trends Cogn. Sci. 10(7), 309–318 (2006)
Tenenbaum, J.B., Kemp, C., Griffiths, T.L., Goodman, N.D.: How to grow a mind: statistics, structure, and abstraction. Science 331(6022), 1279–1285 (2011)
Tomsett, R., Braines, D., Harborne, D., Preece, A., Chakraborty, S.: Interpretable to whom? a role-based model for analyzing interpretable machine learning systems. arXiv preprint arXiv:1806.07552 (2018)
Zhang, Q., et al.: Towards an integrated evaluation framework for XAI: an experimental study. Procedia Comput. Sci. 207, 3884–3893 (2022)