Abstract
In the present work we observe two subjects interacting in a collaborative task in a shared environment. One goal of the experiment is to measure the change in gaze-related behavior when one interactant wears dark glasses, so that his or her gaze is not visible to the other. The results show that if one subject wears dark glasses while telling the other subject the position of a certain object, the other subject needs significantly more time to locate and move that object. Hence the gaze of one subject looking at a certain object, when visible, speeds up the localization of that object by the other subject. The second goal of the ongoing work is to collect data on the multimodal behavior of one of the subjects, by means of audio recording, eye gaze and head motion tracking, in order to build a model that can be used to control a robot in a comparable scenario in future experiments.
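The significance claim above presumably rests on comparing localization times between the two conditions. As a minimal sketch of such a comparison, assuming entirely hypothetical timing data (the real measurements come from the experiment's annotated recordings and are not reproduced here), one could compute Welch's t statistic for two independent samples:

```python
import statistics

# Hypothetical localization times in seconds, one value per trial.
# These numbers are illustrative only, not the experiment's data.
gaze_visible = [2.1, 1.8, 2.4, 1.9, 2.2, 2.0]   # instructor's eyes visible
dark_glasses = [3.0, 2.7, 3.4, 2.9, 3.1, 2.8]   # instructor wears dark glasses

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

diff = statistics.mean(dark_glasses) - statistics.mean(gaze_visible)
print(f"mean difference: {diff:.2f} s")
print(f"Welch t = {welch_t(gaze_visible, dark_glasses):.2f}")
```

A negative t here indicates faster localization when gaze is visible; in practice the statistic would be referred to a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.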
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this chapter
Fagel, S., Bailly, G. (2011). Speech, Gaze and Head Motion in a Face-to-Face Collaborative Task. In: Esposito, A., Esposito, A.M., Martone, R., Müller, V.C., Scarpetta, G. (eds) Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues. Lecture Notes in Computer Science, vol 6456. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-18184-9_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-18183-2
Online ISBN: 978-3-642-18184-9
eBook Packages: Computer Science (R0)