DOI: 10.1145/3584931.3607010 · CSCW Conference Proceedings · Research Article

When Gestures and Words Synchronize: Exploring A Human Lecturer's Multimodal Interaction for the Design of Embodied Pedagogical Agents

Published: 14 October 2023

Abstract

Embodied Pedagogical Agents (EPAs) hold significant potential for using multimodal interaction to convey social and educational information effectively. While previous studies have examined the impact of EPAs on learning performance, the incorporation of meaningful gestures into EPAs remains relatively unexplored. This study employs a case-study approach and systemic functional multimodal discourse analysis to investigate a human lecturer's multimodal interactions during teaching, exploring the relationship between meaningful in-class gestures and the words they accompany. We analyze the alignment between gestures, words, and contextual meanings, focusing on personal pronouns, adjectives, adverbs, and verbs. The findings contribute to the understanding of lecturers' multimodal orchestration in classroom settings. We also provide a preliminary design guideline for EPAs that integrates meaningful word-synchronized gestures, using multimodal information to enhance students' embodied learning experiences. This research advances the design of embodied pedagogical agents and the broader field of multimodal interaction.


Published In

CSCW '23 Companion: Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing
October 2023, 596 pages
ISBN: 9798400701290
DOI: 10.1145/3584931


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Gesture
  2. Human behavior
  3. Multimodal interaction
  4. Pedagogical agent
  5. Word-synchronized


Funding Sources

  • The University Grants Committee of Hong Kong

Conference

CSCW '23
Overall acceptance rate: 2,235 of 8,521 submissions, 26%
