DOI: 10.1145/3458380.3458404

Predicting Group Work Performance from Physical Handwriting Features in a Smart English Classroom

Published: 23 September 2021

Abstract

Embodied cognition theory states that students' thinking in a learning environment is embodied in physical activity. In this regard, recent research has shown that signal-level handwriting dynamics can distinguish learning performance. Although machine learning has been applied to detect how multiple modalities correlate with specific learning processes, the use of deep learning has received insufficient attention. With this in mind, we build a Group Work Performance Prediction system that analyses 3D handwriting signals (2D writing coordinates plus stroke frequency) of students in a smart English classroom using deep convolutional neural network (CNN) based regression models. The students worked together in groups, and their spoken-language performance is used to label their proficiency level. A 3D handwriting dataset (3D-Writing-DB) was collected through a collaboration platform known as the 'creative digital space': we extracted the 3D handwriting signals from a tablet during English discussion sessions, and professional English teachers subsequently annotated the English speech with scores ranging from 0 to 5. Our experimental results indicate that group work performance can be successfully predicted from physical handwriting features using deep learning, with a best result of 0.32 root mean square error (RMSE) in the regression assessment.
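As a rough illustration of the pipeline the abstract describes, the sketch below trains a 1D-CNN regressor on windows of the three handwriting channels (x, y coordinates plus stroke frequency) and reports RMSE, the paper's evaluation metric. It is a minimal sketch assuming PyTorch and synthetic data; the window length, architecture, and hyperparameters are illustrative and are not taken from the paper.

```python
# Minimal sketch (assumptions): a 1D-CNN regressor over 3-channel handwriting
# windows (x, y coordinates plus stroke frequency), trained with MSE and
# evaluated with RMSE. Architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class HandwritingCNN(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # global pooling over the time axis
        )
        self.head = nn.Linear(64, 1)      # predicts a proficiency score in [0, 5]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, window_length)
        z = self.features(x).squeeze(-1)  # (batch, 64)
        return self.head(z).squeeze(-1)   # (batch,)

# Synthetic stand-ins for 3D-Writing-DB windows and teacher-annotated scores.
model = HandwritingCNN()
x = torch.randn(16, 3, 256)               # 16 handwriting windows of length 256
y = torch.rand(16) * 5.0                  # scores in [0, 5]
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(5):                         # a few illustrative training steps
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

with torch.no_grad():
    rmse = torch.sqrt(loss_fn(model(x), y)).item()  # RMSE, as in the paper's evaluation
print(f"RMSE: {rmse:.3f}")
```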

References

[1]
Charles Antaki, Michael Billig, Derek Edwards, and Jonathan Potter. 2003. Discourse analysis means doing analysis: A critique of six analytic shortcomings. academia.edu (2003), 1–12.
[2]
Eugene Bagdasaryan, Griffin Berlstein, Jason Waterman, Eleanor Birrell, Nate Foster, Fred B Schneider, and Deborah Estrin. 2019. Ancile: Enhancing Privacy for Ubiquitous Computing with Use-Based Privacy. In Proc. Privacy in the Electronic Society. London, UK, 111–124.
[3]
Tianfeng Chai and Roland R Draxler. 2014. Root mean square error (RMSE) or mean absolute error (MAE)?–Arguments against avoiding RMSE in the literature. Geoscientific Model Development 7, 3 (2014), 1247–1250.
[4]
Bin Chen, Koki Hatada, Keiju Okabayashi, Hiroyuki Kuromiya, Ichiro Hidaka, Yoshiharu Yamamoto, and Kazumasa Togami. 2019. Group Activity Recognition to Support Collaboration in Creative Digital Space. In Proc. Computer Supported Cooperative Work and Social Computing. Austin, USA, 175–179.
[5]
Paul Cobb, Terry Wood, Erna Yackel, and Betsy McNeal. 1992. Characteristics of classroom mathematics traditions: An interactional analysis. American educational research journal 29, 3 (1992), 573–604.
[6]
Philip R Cohen and Sharon Oviatt. 2017. Multimodal Speech and Pen Interfaces. In The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations. Vol. 1. 403–447.
[7]
Eduardo Coutinho, Florian Hönig, Yue Zhang, Simone Hantke, Anton Batliner, Elmar Nöth, and Björn Schuller. 2016. Assessing the prosody of non-native speakers of english: Measures and feature sets. In Proc. Language Resources and Evaluation (LREC’16). 1328–1332.
[8]
Jun Deng, Nicholas Cummins, Jing Han, Xinzhou Xu, Zhao Ren, Vedhas Pandit, Zixing Zhang, and Björn Schuller. 2016. The university of Passau open emotion recognition system for the multimodal emotion challenge. In Proc. Chinese Conference on Pattern Recognition. Chengdu, China, 652–666.
[9]
Mohammad Sadegh Ebrahimi and Hossein Karkeh Abadi. 2018. Study of residual networks for image recognition. arXiv preprint arXiv:1805.00325(2018).
[10]
Tobias Fredlund, John Airey, and Cedric Linder. 2012. Exploring the role of physics representations: An Illustrative example from students sharing knowledge about refraction. European Journal of Physics 33, 3 (2012), 657.
[11]
Jing Han, Kun Qian, Meishu Song, Zijiang Yang, Zhao Ren, Shuo Liu, Juan Liu, Huaiyuan Zheng, Wei Ji, Tomoya Koike, Xiao Li, Zixing Zhang, Yoshiharu Yamamoto, and Björn W Schuller. 2020. An early study on intelligent analysis of speech under COVID-19: severity, sleep quality, fatigue, and anxiety. arXiv preprint arXiv:2005.00096(2020), 1–5.
[12]
Jing Han, Zixing Zhang, and Bjorn Schuller. 2019. Adversarial training in affective computing and sentiment analysis: Recent advances and perspectives. IEEE Computational Intelligence Magazine 14, 2 (2019), 68–81.
[13]
Tien Ho-Phuoc. 2018. CIFAR10 to compare visual recognition performance between deep neural networks and humans. arXiv preprint arXiv:1811.07270(2018).
[14]
Jeremy Hodgen. 2007. Formative assessment. Tools for transforming school mathematics towards a dialogic practice. In Proc. European Society for Research in Mathematics Education. Larnaca, Cyprus, 1886–1895.
[15]
Yu-Liang Hsu, Cheng-Ling Chu, Yi-Ju Tsai, and Jeen-Shing Wang. 2014. An inertial pen with dynamic time warping recognizer for handwriting and gesture recognition. IEEE Sensors Journal 15, 1 (2014), 154–163.
[16]
Michael Ignelzi. 2000. Meaning-making in the learning and teaching process. New directions for teaching and learning 82 (2000), 5–14.
[17]
Fadi Imad, Sharifah Mumtazah Syed Ahmad, Shaiful Hashim, Khairulmizam Samsudin, and Marwan Ali. 2018. Real-Time Pen Input System for Writing Utilizing Stereo Vision. System 2(2018), 1000–1009.
[18]
Olivier Janssens, Rik Van de Walle, Mia Loccufier, and Sofie Van Hoecke. 2017. Deep learning for infrared thermal image based machine health monitoring. IEEE/ASME Transactions on Mechatronics 23, 1 (2017), 151–159.
[19]
Barbara Johnstone. 2018. Discourse analysis. John Wiley & Sons.
[20]
Hyeon Woo Lee. 2015. Does Touch-based Interaction in Learning with Interactive Images Improve Students’ Learning?The Asia-Pacific Education Researcher 24, 4 (2015), 731–735.
[21]
Hang Li, Yu Kang, Wenbiao Ding, Song Yang, Songfan Yang, Gale Yan Huang, and Zitao Liu. 2020. Multimodal learning for classroom activity detection. In Proc. International Conference on Acoustics, Speech and Signal Processing. Onlinestream, 9234–9238.
[22]
Zedong Li, Hao Liu, Cheng Ouyang, Wei Hong Wee, Xingye Cui, Tian Jian Lu, Belinda Pingguan-Murphy, Fei Li, and Feng Xu. 2016. Recent advances in pen-based writing electronics and their emerging applications. Advanced Functional Materials 26, 2 (2016), 165–180.
[23]
Wang Liao, Wei Xu, SiCong Kong, Fowad Ahmad, and Wei Liu. 2019. A two-stage method for hand-raising gesture recognition in classroom. In Proc. Educational and Information Technology. Cambridge, UK, 38–44.
[24]
Jionghao Lin, Shirui Pan, Cheng Siong Lee, and Sharon Oviatt. 2019. An explainable deep fusion network for affect recognition using physiological signals. In Proc. International Conference on Information and Knowledge Management. Suzhou, China, 2069–2072.
[25]
Shu Liu, Bo Li, Yang-Yu Fan, Zhe Guo, and Ashok Samal. 2017. Facial attractiveness computation by label distribution learning with deep CNN and geometric features. In Proc. International Conference on Multimedia and Expo. Hong Kong, China, 1344–1349.
[26]
Jessica M Nolan, Bridget G Hanley, Timothy P DiVietri, and Nailah A Harvey. 2018. She who teaches learns: Performance benefits of a jigsaw activity in a college classroom.Scholarship of Teaching and Learning in Psychology 4, 2 (2018), 93.
[27]
Sharon Oviatt and Adrienne Cohen. 2013. Written and multimodal representations as predictors of expertise and problem-solving success in mathematics. In Proc. International conference on multimodal interaction. Sydney, Australia, 599–606.
[28]
Emilia Parada-Cabaleiro, Alice Baird, Nicholas Cummins, and Björn W Schuller. 2017. Stimulation of psychological listener experiences by semi-automatically composed electroacoustic environments. In Proc. International Conference on Multimedia and Expo. Hong Kong, China, 1051–1056.
[29]
Emilia Parada-Cabaleiro, Anton Batliner, Alice Baird, and Björn Schuller. 2020. The perception of emotional cues by children in artificial background noise. International Journal of Speech Technology 23, 1 (2020), 169–182.
[30]
Stephen Petrina. 2006. Advanced teaching methods for the technology classroom. IGI Global, Vancouver, Canada.
[31]
Kun Qian, Hiroyuki Kuromiya, Zhao Ren, Maximilian Schmitt, Zixing Zhang, Toru Nakamura, Kazuhiro Yoshiuchi, Björn W Schuller, and Yoshiharu Yamamoto. 2019. Automatic detection of major depressive disorder via a bag-of-behaviour-words approach. In Proc. Image Computing and Digital Medicine. Xi’an, P. R. China, 71–75.
[32]
Kun Qian, Hiroyuki Kuromiya, Zixing Zhang, Jinhyuk Kim, Toru Nakamura, Kazuhiro Yoshiuchi, Björn W Schuller, and Yoshiharu Yamamoto. 2019. Teaching machines to know your depressive state: on physical activity in health and major depressive disorder. In Proc.Engineering in Medicine and Biology Society. Berlin, Germany, 3592–3595.
[33]
Zhao Ren, Qiuqiang Kong, Jing Han, Mark D Plumbley, and Björn W Schuller. 2019. Attention-based Atrous Convolutional Neural Networks: Visualisation and Understanding Perspectives of Acoustic Scenes. In Proc. International Conference on Acoustics, Speech and Signal Processing. Brighton, UK, 56–60.
[34]
Maximilian Schmitt and Björn Schuller. 2019. End-to-end audio classification with small datasets–making it work. In Proc. European Signal Processing Conference. A Coruña, Spain, 1–5.
[35]
Orit Shaer and Eva Hornecker. 2010. Tangible user interfaces: past, present, and future directions. Foundations and Trends in Human-computer Interaction 3, 1–2(2010), 1–137.
[36]
Kenneth Silseth and Øystein Gilje. 2019. Multimodal composition and assessment: A sociocultural perspective. Assessment in Education: Principles, Policy & Practice 26, 1(2019), 26–42.
[37]
Meishu Song, Zijiang Yang, Alice Baird, Emilia Parada-Cabaleiro, Zixing Zhang, Ziping Zhao, and Björn Schuller. 2019. Audiovisual Analysis for Recognising Frustration during Game-Play: Introducing the Multimodal Game Frustration Database. In Proc. International Conference on Affective Computing and Intelligent Interaction. Cambridge, UK, 517–523.
[38]
Terry T Um, Franz MJ Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić. 2017. Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks. In Proc. International Conference on Multimodal Interaction. Glasgow, UK, 216–220.
[39]
Anwar Ali Yahya, Addin Osman, Ahmad Taleb, and Ahmed Abdu Alattab. 2013. Analyzing the cognitive level of classroom questions using machine learning techniques. Procedia-Social and Behavioral Sciences 97 (2013), 587–595.
[40]
Zijiang Yang, Kun Qian, Zhao Ren, Alice Baird, Zixing Zhang, and Björn Schuller. 2020. Learning multi-resolution representations for acoustic scene classification via neural networks. In Proc. Sound and Music Technology. Haerbin, China, 133–143.
[41]
Janez Zaletelj. 2017. Estimation of students’ attention in the classroom from kinect features. In Proc. International Symposium on Image and Signal Processing and Analysis. Ljubljana, Slovenia, 220–224.
[42]
Yu Zhang, Fei Qin, Bo Liu, Xuan Qi, Yingying Zhao, and Dan Zhang. 2018. Wearable neurophysiological recordings in middle-school classroom correlate with students’ academic performance. Frontiers in Human Neuroscience 12 (2018), 457.
[43]
Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, and Björn Schuller. 2016. Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language. In Proceedings of the 18th ACM International Conference on Multimodal Interaction. 274–278.
[44]
Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. 2017. Random erasing data augmentation. arXiv preprint arXiv:1708.04896(2017).


        Published In

        ICDSP '21: Proceedings of the 2021 5th International Conference on Digital Signal Processing
        February 2021
        336 pages
        ISBN:9781450389365
        DOI:10.1145/3458380
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. English speaking proficiency
        2. deep learning
        3. digital classroom
        4. group work
        5. handwriting

        Note: Dr. Kun Qian is the corresponding author.

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Funding Sources

        • Zhejiang Lab's International Talent Fund for Young Professionals
        • JSPS Postdoctoral Fellowship for Research in Japan
        • Grants-in-Aid for Scientific Research

        Conference

        ICDSP 2021

