
Gaze and conversational engagement in multiparty video conversation: an annotation scheme and classification of high and low levels of engagement

Published: 26 October 2012

Abstract

When using a multiparty video-mediated system, interacting participants assume a range of roles and exhibit behaviors according to how engaged in the communication they are. In this paper we focus on the estimation of conversational engagement from the gaze signal. In particular, we present an annotation scheme for conversational engagement, a statistical analysis of gaze behavior across varying levels of engagement, and a classification of vectors of computed eye-tracking measures. The results show that in 74% of cases the level of engagement can be correctly classified as either high or low. In addition, we describe the nuances of gaze behavior at the distinct levels of engagement.
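
The abstract does not detail which eye-tracking measures were computed or which classifier was used; the following Python sketch is only a minimal illustration of such a pipeline, assuming a small set of fixation- and saccade-based features per annotated segment (detected with a simple dispersion-based method), synthetic stand-in data, and an RBF-SVM as the binary high/low classifier. The function segment_features, its thresholds, and the toy data generation are all hypothetical and not taken from the paper.

# A minimal, hypothetical sketch of the kind of pipeline the abstract describes:
# compute per-segment eye-tracking measures and classify each segment as high or
# low engagement. Feature choices, thresholds, and the SVM setup are
# illustrative assumptions, not the paper's actual method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def segment_features(gaze_xy, timestamps, dispersion_thresh=30.0, min_fix_dur=0.1):
    """Hypothetical feature vector for one annotated segment: fixation count,
    mean fixation duration, and mean saccade amplitude, using a simple
    dispersion-based (I-DT-style) fixation detector."""
    fixations = []  # list of (duration_seconds, centroid_xy)
    start, n = 0, len(timestamps)
    while start < n:
        end = start + 1
        # Grow the window while gaze points stay within the dispersion threshold.
        while end < n:
            w = gaze_xy[start:end + 1]
            dispersion = (w[:, 0].max() - w[:, 0].min()) + (w[:, 1].max() - w[:, 1].min())
            if dispersion > dispersion_thresh:
                break
            end += 1
        duration = timestamps[end - 1] - timestamps[start]
        if duration >= min_fix_dur:
            fixations.append((duration, gaze_xy[start:end].mean(axis=0)))
            start = end
        else:
            start += 1
    if not fixations:
        return np.zeros(3)
    durations = np.array([d for d, _ in fixations])
    centroids = np.array([c for _, c in fixations])
    saccade_amp = (np.linalg.norm(np.diff(centroids, axis=0), axis=1).mean()
                   if len(centroids) > 1 else 0.0)
    return np.array([len(fixations), durations.mean(), saccade_amp])


# Synthetic stand-in data: one 5-second gaze trace per annotated segment,
# with toy labels (0 = low engagement, 1 = high engagement).
rng = np.random.default_rng(0)
X, y = [], []
for i in range(200):
    label = i % 2
    t = np.arange(0, 5.0, 1.0 / 60.0)             # 5 s at 60 Hz
    step = 5.0 if label else 15.0                  # toy assumption: "high" traces drift less
    xy = np.cumsum(rng.normal(scale=step, size=(len(t), 2)), axis=0)
    X.append(segment_features(xy, t))
    y.append(label)
X, y = np.vstack(X), np.array(y)

# Binary high/low classification of the feature vectors; an RBF-SVM is just one
# reasonable choice here, not necessarily what the authors used.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

On real data the feature vectors would be computed from each participant's recorded gaze over each annotated segment; the 74% figure reported in the abstract refers to the authors' own features and classifier, not to this sketch.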

References

[1]
R. Bednarik, H. Vrzakova, and M. Hradis. What do you want to do next: a novel approach for intent prediction in gaze-based interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '12, pages 83--90, New York, NY, USA, 2012. ACM.
[2]
H. H. Clark and S. E. Brennan. Grounding in Communication. In L. B. Resnick, J. M. Levine, and S. D. Teasley, editors, Perspectives on socially shared cognition, pages 127--149. American Psychological Association, 1991.
[3]
C. Cortes and V. Vapnik. Support-Vector Networks. Machine Learning, 20(3):273--297, 1995.
[4]
M. Hradis, S. Eivazi, and R. Bednarik. Voice activity detection from gaze in video mediated communication. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '12, pages 329--332, New York, NY, USA, 2012. ACM.
[5]
R. Ishii and Y. I. Nakano. An empirical study of eye-gaze behaviors: towards the estimation of conversational engagement in human-agent communication. In Proceedings of the 2010 workshop on Eye gaze in intelligent human machine interaction, EGIHMI '10, pages 33--40, New York, NY, USA, 2010. ACM.
[6]
K. Jokinen. Gaze and Gesture Activity in Communication. In UAHCI '09, volume 5615, pages 537--546. Springer Berlin/Heidelberg, 2009.
[7]
K. Jokinen. Turn taking, utterance density, and gaze patterns as cues to conversational activity. In ICMI Workshop on Multimodal Corpora for Machine Learning: Taking Stock and Roadmapping the Future, pages 1--6, 2010.
[8]
K. Jokinen, K. Harada, M. Nishida, and S. Yamamoto. Turn-Alignment Using Eye-Gaze and Speech in Conversational Interaction. Information Systems Journal, (September):2018--2021, 2010.
[9]
K. Jokinen, M. Nishida, and S. Yamamoto. Eye-gaze experiments for conversation monitoring. In IUCS '09, pages 303--308, New York, 2009. ACM.
[10]
D. D. Salvucci and J. H. Goldberg. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 symposium on Eye tracking research & applications, ETRA '00, pages 71--78, New York, NY, USA, 2000. ACM.
[11]
R. Vertegaal, R. Slagter, G. van der Veer, and A. Nijholt. Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 301--308. ACM, 2001.


Published In

Gaze-In '12: Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction
October 2012
88 pages
ISBN:9781450315166
DOI:10.1145/2401836
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. annotation
  2. conversation
  3. engagement
  4. gaze tracking
  5. machine learning

Qualifiers

  • Research-article

Conference

ICMI '12
Sponsor: ICMI '12: International Conference on Multimodal Interaction
October 26, 2012
Santa Monica, California

Acceptance Rates

Overall acceptance rate: 19 of 21 submissions (90%)


Article Metrics

  • Downloads (last 12 months): 40
  • Downloads (last 6 weeks): 1
Reflects downloads up to 12 Dec 2024

Cited By

  • (2024) Sensors, Techniques, and Future Trends of Human-Engagement-Enabled Applications: A Review. Algorithms, 17(12):560. DOI: 10.3390/a17120560. Online publication date: 6-Dec-2024.
  • (2024) MultiMediate'24: Multi-Domain Engagement Estimation. Proceedings of the 32nd ACM International Conference on Multimedia, pages 11377-11382. DOI: 10.1145/3664647.3689004. Online publication date: 28-Oct-2024.
  • (2024) DAT: Dialogue-Aware Transformer with Modality-Group Fusion for Human Engagement Estimation. Proceedings of the 32nd ACM International Conference on Multimedia, pages 11397-11403. DOI: 10.1145/3664647.3688988. Online publication date: 28-Oct-2024.
  • (2024) Automatic Context-Aware Inference of Engagement in HMI: A Survey. IEEE Transactions on Affective Computing, 15(2):445-464. DOI: 10.1109/TAFFC.2023.3278707. Online publication date: Apr-2024.
  • (2024) TCA-NET: Triplet Concatenated-Attentional Network for Multimodal Engagement Estimation. 2024 IEEE International Conference on Image Processing (ICIP), pages 2062-2068. DOI: 10.1109/ICIP51287.2024.10647692. Online publication date: 27-Oct-2024.
  • (2024) A Cross-Multi-modal Fusion Approach for Enhanced Engagement Recognition. Speech and Computer, pages 3-17. DOI: 10.1007/978-3-031-78014-1_1. Online publication date: 22-Nov-2024.
  • (2023) A multimodal approach for modeling engagement in conversation. Frontiers in Computer Science, volume 5. DOI: 10.3389/fcomp.2023.1062342. Online publication date: 2-Mar-2023.
  • (2023) MultiMediate '23: Engagement Estimation and Bodily Behaviour Recognition in Social Interactions. Proceedings of the 31st ACM International Conference on Multimedia, pages 9640-9645. DOI: 10.1145/3581783.3613851. Online publication date: 26-Oct-2023.
  • (2023) Sliding Window Seq2seq Modeling for Engagement Estimation. Proceedings of the 31st ACM International Conference on Multimedia, pages 9496-9500. DOI: 10.1145/3581783.3612852. Online publication date: 26-Oct-2023.
  • (2022) Glancee: An Adaptable System for Instructors to Grasp Student Learning Status in Synchronous Online Classes. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1-25. DOI: 10.1145/3491102.3517482. Online publication date: 29-Apr-2022.
