Continuous Multimodal Human Affect Estimation using Echo State Networks

Published: 16 October 2016

Abstract

Continuous multimodal human affect recognition for both the arousal and valence dimensions in a non-acted, spontaneous scenario is investigated in this paper. Different regression models based on Random Forests and Echo State Networks are evaluated and compared in terms of robustness and accuracy. Moreover, an extension of Echo State Networks to a bi-directional model is introduced to improve regression accuracy. A hybrid method combining Random Forests, Echo State Networks, and linear regression fusion is developed and applied to the test subset of the AVEC 2016 challenge. Finally, label shift and prediction delay are discussed, and an annotator-specific regression model as well as a fusion architecture are proposed for future work.
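
For readers unfamiliar with the reservoir-computing approach the paper builds on, the following is a minimal sketch of an echo state network regressor in NumPy: a fixed random reservoir driven by per-frame features, with a ridge-regression readout producing a continuous affect dimension such as arousal. All sizes, hyper-parameters, and the toy data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal echo state network (ESN) regression sketch following the standard
# reservoir-computing formulation (Jaeger). Sizes and hyper-parameters are
# illustrative assumptions, not the configuration used in the paper.

rng = np.random.default_rng(0)

n_in, n_res = 10, 200              # input features per frame, reservoir units
leak, spectral_radius = 0.3, 0.9   # leaky-integration rate, reservoir scaling

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state property

def run_reservoir(X):
    """Drive the reservoir with a feature sequence X (T x n_in), return all states."""
    states = np.zeros((len(X), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(X):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

# Toy data: one feature sequence and a continuous per-frame label (e.g. arousal).
X_train = rng.standard_normal((500, n_in))
y_train = rng.standard_normal(500)

# Only the linear readout is trained, here by ridge regression on the states.
S = run_reservoir(X_train)
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y_train)

y_pred = run_reservoir(X_train) @ W_out  # continuous per-frame predictions
```

The bi-directional variant mentioned in the abstract would additionally run the reservoir over the time-reversed sequence and concatenate the two state trajectories before training the readout; a final fusion stage could then combine such predictions with Random Forest outputs via linear regression, as the abstract describes.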




      Published In

      AVEC '16: Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge
      October 2016
      114 pages
ISBN: 9781450345163
DOI: 10.1145/2988257


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. affect recognition
      2. echo state networks
      3. multi-modal fusion

      Qualifiers

      • Research-article

      Conference

MM '16: ACM Multimedia Conference
October 16, 2016
Amsterdam, The Netherlands

      Acceptance Rates

AVEC '16 Paper Acceptance Rate: 12 of 14 submissions (86%)
Overall Acceptance Rate: 52 of 98 submissions (53%)


      Cited By

• (2024) Predicting the Arousal and Valence Values of Emotional States Using Learned, Predesigned, and Deep Visual Features. Sensors, 24(13):4398. DOI: 10.3390/s24134398. Online publication date: 7-Jul-2024.
• (2023) Prediction of Continuous Emotional Measures through Physiological and Visual Data. Sensors, 23(12):5613. DOI: 10.3390/s23125613. Online publication date: 15-Jun-2023.
• (2023) A Bayesian Filtering Framework for Continuous Affect Recognition From Facial Images. IEEE Transactions on Multimedia, 25:3709-3722. DOI: 10.1109/TMM.2022.3164248. Online publication date: 1-Jan-2023.
• (2021) Spatio-Temporal Encoder-Decoder Fully Convolutional Network for Video-Based Dimensional Emotion Recognition. IEEE Transactions on Affective Computing, 12(3):565-578. DOI: 10.1109/TAFFC.2019.2940224. Online publication date: 1-Jul-2021.
• (2021) A Two-Stage Spatiotemporal Attention Convolution Network for Continuous Dimensional Emotion Recognition From Facial Video. IEEE Signal Processing Letters, 28:698-702. DOI: 10.1109/LSP.2021.3063609. Online publication date: 2021.
• (2019) Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity. PLOS ONE, 14(2):e0211735. DOI: 10.1371/journal.pone.0211735. Online publication date: 5-Feb-2019.
• (2019) Continuous Emotion Recognition in Videos by Fusing Facial Expression, Head Pose and Eye Gaze. 2019 International Conference on Multimodal Interaction, pages 40-48. DOI: 10.1145/3340555.3353739. Online publication date: 14-Oct-2019.
• (2018) Towards a Better Gold Standard. Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop, pages 73-81. DOI: 10.1145/3266302.3266307. Online publication date: 15-Oct-2018.
• (2017) AFEW-VA database for valence and arousal estimation in-the-wild. Image and Vision Computing, 65:23-36. DOI: 10.5555/3143567.3143656. Online publication date: 1-Sep-2017.
