DOI: 10.1145/3242969.3264993

EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction

Published: 02 October 2018

Abstract

This paper details the sixth Emotion Recognition in the Wild (EmotiW) challenge. EmotiW 2018 is a grand challenge held at the ACM International Conference on Multimodal Interaction 2018, Colorado, USA. The challenge aims to provide a common platform for researchers in the affective computing community to benchmark their algorithms on 'in the wild' data. This year, EmotiW comprises three sub-challenges: a) audio-video based emotion recognition; b) student engagement prediction; and c) group-level emotion recognition. The databases, protocols and baselines are discussed in detail.
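Benchmarking on a common platform comes down to scoring predictions against held-out labels. As a minimal sketch, assuming classification accuracy for the two emotion sub-challenges and mean squared error for the continuous engagement-prediction task (the metrics typically used in such challenges; the labels and values below are illustrative, not challenge data):

```python
def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error for continuous engagement scores."""
    assert len(y_true) == len(y_pred)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical predictions for an emotion classification sub-challenge
emotions_true = ["happy", "sad", "neutral", "angry"]
emotions_pred = ["happy", "neutral", "neutral", "angry"]
print(accuracy(emotions_true, emotions_pred))  # 0.75

# Hypothetical engagement intensities on a [0, 1] scale
engagement_true = [0.0, 0.33, 0.66, 1.0]
engagement_pred = [0.1, 0.30, 0.70, 0.9]
print(mse(engagement_true, engagement_pred))  # 0.005625
```

A submission would report these scores on the validation split; the actual challenge protocols and label scales are defined in the paper's sub-challenge sections.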




    Published In
    ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction
    October 2018
    687 pages
    ISBN:9781450356923
    DOI:10.1145/3242969

    Sponsors

    • SIGCHI: Special Interest Group on Computer-Human Interaction of the ACM

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. affective computing
    2. emotion recognition

    Qualifiers

    • Short-paper

    Funding Sources

    • Nvidia

    Conference

    ICMI '18 (Sponsor: SIGCHI)

    Acceptance Rates

    ICMI '18 Paper Acceptance Rate: 63 of 149 submissions, 42%.
    Overall Acceptance Rate: 453 of 1,080 submissions, 42%.


    Cited By

    • (2024) A study on automatic identification of students’ emotional states using convolutional neural networks. Applied Mathematics and Nonlinear Sciences 9:1. DOI: 10.2478/amns-2024-3430. Online publication date: 25-Nov-2024.
    • (2024) EVAC 2024 – Empathic Virtual Agent Challenge: Appraisal-based Recognition of Affective States. Proceedings of the 26th International Conference on Multimodal Interaction, 677-683. DOI: 10.1145/3678957.3689029. Online publication date: 4-Nov-2024.
    • (2024) Predicting Student Engagement Using Sequential Ensemble Model. IEEE Transactions on Learning Technologies 17, 939-950. DOI: 10.1109/TLT.2023.3342860. Online publication date: 2024.
    • (2024) Adaptive Log-Euclidean Metrics for SPD Matrix Learning. IEEE Transactions on Image Processing 33, 5194-5205. DOI: 10.1109/TIP.2024.3451930. Online publication date: 1-Jan-2024.
    • (2024) Implementing the Affective Mechanism for Group Emotion Recognition With a New Graph Convolutional Network Architecture. IEEE Transactions on Affective Computing 15:3, 1104-1115. DOI: 10.1109/TAFFC.2023.3320101. Online publication date: Jul-2024.
    • (2024) Group-Level Emotion Recognition Using Hierarchical Dual-Branch Cross Transformer with Semi-Supervised Learning. 2024 IEEE 4th International Conference on Software Engineering and Artificial Intelligence (SEAI), 252-256. DOI: 10.1109/SEAI62072.2024.10674336. Online publication date: 21-Jun-2024.
    • (2024) Facial Expression Recognition with an Improved VGG16 Network Based on SE Modules and Residual Connections. 2024 4th International Conference on Machine Learning and Intelligent Systems Engineering (MLISE), 85-89. DOI: 10.1109/MLISE62164.2024.10674455. Online publication date: 28-Jun-2024.
    • (2024) A Unified Model for Style Classification and Emotional Response Analysis. IEEE Access 12, 91770-91779. DOI: 10.1109/ACCESS.2024.3419851. Online publication date: 2024.
    • (2024) Dynamic facial expression recognition based on spatial key-points optimized region feature fusion and temporal self-attention. Engineering Applications of Artificial Intelligence 133, 108535. DOI: 10.1016/j.engappai.2024.108535. Online publication date: Jul-2024.
    • (2024) Leveraging part-and-sensitive attention network and transformer for learner engagement detection. Alexandria Engineering Journal 107, 198-204. DOI: 10.1016/j.aej.2024.06.074. Online publication date: Nov-2024.
