Abstract
Interactive and collaborative approaches have been used successfully in educational settings. For machine learning and AI, however, such approaches typically require a fair amount of technical expertise. To reach everyday users of AI technologies, we propose and evaluate a new interactive approach to help end-users gain a better understanding of AI: a participatory machine learning show. During the show, participants collectively gathered corpus data for a neural network for keyword recognition, and interactively trained the network and tested its accuracy. Furthermore, the network’s decisions were explained using both an established XAI framework (LIME) and a virtual agent. In cooperation with a museum, we ran several prototype shows and interviewed participating and non-participating visitors to gain insights into their attitudes towards (X)AI. Participants generally rated the virtual agent and the XAI visualisations in our edutainment show positively, even though the frameworks we used were originally designed for experts. When comparing the two groups, we found that participants felt significantly more competent and more positive towards technology than non-participating visitors. Our findings suggest that specific user needs, personal backgrounds, and mental models about (X)AI systems should be taken into account when designing XAI for end-users.
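The abstract does not spell out how LIME was connected to the keyword-recognition network, so the following is only a minimal Python sketch of one plausible setup: applying LIME's image explainer to a spectrogram to highlight the regions that support a predicted keyword. The stand-in classifier, the number of classes, and the random spectrogram are placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): LIME's image explainer
# highlighting which spectrogram regions drive a keyword classifier's prediction.
# `predict_keywords`, NUM_KEYWORDS, and the random spectrogram are placeholders.
import numpy as np
from lime import lime_image
from skimage.segmentation import felzenszwalb

NUM_KEYWORDS = 10  # placeholder number of keyword classes


def predict_keywords(images):
    """Stand-in for the trained keyword-spotting network.

    LIME passes a batch of perturbed RGB images (n, H, W, 3); a real model
    would map them to class probabilities. Here we return uniform scores so
    the sketch runs end to end.
    """
    return np.full((len(images), NUM_KEYWORDS), 1.0 / NUM_KEYWORDS)


# Placeholder spectrogram rendered as an RGB image (H x W x 3, values in [0, 1]).
spectrogram_rgb = np.random.rand(64, 96, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    spectrogram_rgb,
    predict_keywords,
    # Graph-based segmentation groups the spectrogram into superpixels that
    # LIME perturbs; scale and min_size are illustrative values.
    segmentation_fn=lambda img: felzenszwalb(img, scale=100, min_size=20),
    top_labels=1,
    num_samples=1000,
)

# Image and mask of the superpixels that most support the top predicted keyword.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```

Treating spectrograms as images is only one option; LIME also offers tabular and text explainers, and the appropriate choice depends on how the keyword model consumes the audio.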
Notes
- 1. The study presented in this paper, as well as the collected data, was approved by the data protection officer of the University of Augsburg.
- 2.
- 3. The Mann-Whitney U-test is the non-parametric equivalent of the t-test for independent samples and is used when the requirements for a parametric procedure are not met (in our case: a lack of homogeneity of variances and a non-normal distribution of the data).
- 4. This result was no longer significant after the alpha-error correction.
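As an illustration of the test mentioned in note 3, here is a minimal sketch using SciPy's implementation of the Mann-Whitney U-test; the rating values are placeholders, not the study's data.

```python
# Minimal sketch of the group comparison described in note 3, using SciPy's
# Mann-Whitney U-test. The ratings below are placeholders, not study data.
from scipy.stats import mannwhitneyu

participants = [5, 4, 5, 3, 4, 5, 4]      # e.g. perceived-competence ratings
non_participants = [3, 2, 4, 3, 2, 3, 3]

u_stat, p_value = mannwhitneyu(participants, non_participants,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```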
Acknowledgements
This work was partially funded by the Volkswagen Stiftung in the project AI-FORA (Az. 98 563) and by the German Federal Ministry of Education and Research (BMBF) in the project DIGISTA (grant number 01U01820A). We thank the Deutsches Museum Munich for making it possible for us to conduct the study.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Weitz, K., Schlagowski, R., André, E. (2021). Demystifying Artificial Intelligence for End-Users: Findings from a Participatory Machine Learning Show. In: Edelkamp, S., Möller, R., Rueckert, E. (eds) KI 2021: Advances in Artificial Intelligence. KI 2021. Lecture Notes in Computer Science(), vol 12873. Springer, Cham. https://doi.org/10.1007/978-3-030-87626-5_19