Humans use gestures in most communicative acts. How are these gestures initiated and performed? What kinds of communicative roles do they play and what kinds of meanings do they convey? How do listeners extract and understand these meanings? Will it be possible to build computerized communicating agents that can extract and understand these meanings and accordingly simulate and display expressive gestures on the computer in such a way that they can be effective conversational partners? All these questions are easy to ask, but far more difficult to answer. In this thesis we address these questions regarding the synthesis and acquisition of communicative gestures.
Our approach to gesture is based on the principles of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components. LMA, developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Its Effort and Shape components provide us with a comprehensive and valuable set of parameters to characterize gesture formation. The computational model we have built (the EMOTE system) offers the power and flexibility to procedurally synthesize gestures based on predefined key pose and timing information plus Effort and Shape qualities.
To provide real quantitative foundations for a complete communicative gesture model, we have built a computational framework in which the observable characteristics of gestures—not only key pose and timing but also the underlying motion qualities—can be extracted from live performance, either from 3D motion capture data or from 2D video data, and correlated with observations validated by LMA notators. Experiments of this sort have not been conducted before; they should be of interest not only to the computer animation and computer vision communities but should also provide a powerful and valuable methodological tool for creating personalized, communicating agents.
Cited By
- Mckendrick Z, Somin L, Finn P and Sharlin E Virtual Rehearsal Suite: An Environment and Framework for Virtual Performance Practice Proceedings of the 2023 ACM International Conference on Interactive Media Experiences, (27-39)
- Otterbein R, Jochum E, Overholt D, Bai S and Dalsgaard A Dance and Movement-Led Research for Designing and Evaluating Wearable Human-Computer Interfaces Proceedings of the 8th International Conference on Movement and Computing, (1-9)
- Volioti C, Manitsaris S, Hemery E, Hadjidimitriou S, Charisis V, Hadjileontiadis L, Katsouli E, Moutarde F and Manitsaris A (2018). A Natural User Interface for Gestural Expression and Emotional Elicitation to Access the Musical Intangible Cultural Heritage, Journal on Computing and Cultural Heritage , 11:2, (1-20), Online publication date: 7-Jun-2018.
- Santos O and Eddy M Modeling Psychomotor Activity Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization, (305-310)
- Yang Y, Shum H, Aslam N and Zeng L Temporal clustering of motion capture data with optimal partitioning Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1, (479-482)
- Dardard F, Gnecco G and Glowinski D (2016). Automatic Classification of Leading Interactions in a String Quartet, ACM Transactions on Interactive Intelligent Systems, 6:1, (1-27), Online publication date: 5-May-2016.
- Lockyer M, Bartram L, Schiphorst T and Studd K Extending computational models of abstract motion with movement qualities Proceedings of the 2nd International Workshop on Movement and Computing, (92-99)
- Zacharatos H, Gatzoulis C, Chrysanthou Y and Aristidou A Emotion Recognition for Exergames using Laban Movement Analysis Proceedings of Motion on Games, (61-66)
- Baird B and Izmirli O (2011). Motion capture in a CS curriculum, Journal of Computing Sciences in Colleges, 26:6, (165-167), Online publication date: 1-Jun-2011.
- Sundström P and Höök K Hand in hand with the material Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (463-472)
- Santos L, Prado J and Dias J Human robot interaction studies on laban human movement analysis and dynamic background segmentation Proceedings of the 2009 IEEE/RSJ international conference on Intelligent robots and systems, (4984-4989)
- Swaminathan D, Thornburg H, Mumford J, Rajko S, James J, Ingalls T, Campana E, Qian G, Sampath P and Peng B (2009). A dynamic Bayesian approach to computational Laban shape quality analysis, Advances in Human-Computer Interaction, 2009, (1-17), Online publication date: 1-Jan-2009.
- Deng Z, Gu Q and Li Q Perceptually consistent example-based human motion retrieval Proceedings of the 2009 symposium on Interactive 3D graphics and games, (191-198)
- Sheppard R, Kamali M, Rivas R, Tamai M, Yang Z, Wu W and Nahrstedt K Advancing interactive collaborative mediums through tele-immersive dance (TED) Proceedings of the 16th ACM international conference on Multimedia, (579-588)
- Rett J and Dias J Human robot interaction based on Bayesian analysis of human movements Proceedings of the 13th Portuguese Conference on Progress in Artificial Intelligence, (530-541)
- Schiphorst T, Nack F, KauwATjoe M, de Bakker S, Stock, Aroyo L, Rosillio A, Schut H and Jaffe N PillowTalk Proceedings of the 1st international conference on Tangible and embedded interaction, (23-30)
- Bhuyan M, Ghosh D and Bora P Continuous hand gesture segmentation and co-articulation detection Proceedings of the 5th Indian conference on Computer Vision, Graphics and Image Processing, (564-575)
- Moen J Towards people based movement interaction and kinaesthetic interaction experiences Proceedings of the 4th decennial conference on Critical computing: between sense and sensibility, (121-124)
- Camurri A, Lagerlöf I and Volpe G (2003). Recognizing emotion from dance movement, International Journal of Human-Computer Studies, 59:1-2, (213-225), Online publication date: 1-Jul-2003.
- Allbeck J, Kipper K, Adams C, Schuler W, Zoubanova E, Badler N, Palmer M and Joshi A ACUMEN Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1, (191-198)