Abstract
With the advancement of AI technology, AI tutors are already being used in classrooms around the world as teaching aides, tutors, and peer-learning partners. AI tutors can facilitate a variety of teaching and learning practices inside and outside the classroom and can support students around the clock. However, little is known about whether AI tutors can be as effective as human tutors in language learning, or about which factors drive differences in student learning outcomes between human tutors and AI tutors, especially for students who are not engaged in learning. The current study addressed these questions by crossing two factors, involvement (high vs. low) and tutor type (human tutor vs. AI tutor), under two writing-quality conditions (weak vs. strong). The findings indicate an interaction effect between user involvement and tutor type. When user involvement was low, the AI tutor was perceived to have higher writing quality than the human tutor; when user involvement was high, tutor type did not affect perceived writing quality, whether the tutor was a human or an AI. When the human tutor was preferred, it was because the human tutor was perceived to have a higher level of controllability than the AI tutor. Tutors' writing quality also affected their perceived credibility.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Chen, M., Liu, F., Lee, YH. (2022). My Tutor is an AI: The Effects of Involvement and Tutor Type on Perceived Quality, Perceived Credibility, and Use Intention. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science(), vol 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05642-0
Online ISBN: 978-3-031-05643-7