A multimodal emotion recognition method based on multiple fusion of audio-visual modalities
Recommendations
Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects
Abstract: Emotion recognition has recently attracted extensive interest due to its significant applications in human–computer interaction. The expression of human emotion depends on various verbal and non-verbal channels such as audio, visual, and text. ...
Deep learning based multimodal emotion recognition using model-level fusion of audio–visual modalities
Abstract: Emotion identification based on multimodal data (e.g., audio, video, and text) is one of the most demanding and important research fields, with a wide range of applications. In this context, this research work has conducted a rigorous exploration of ...
Highlights: Deep learning-based feature extractor networks for video and audio data are proposed.
Emotion Recognition through Multiple Modalities: Face, Body Gesture, Speech
Affect and Emotion in Human-Computer Interaction. In this paper we present a multimodal approach for the recognition of eight emotions. Our approach integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a ...
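The kind of multimodal integration described above is often realized as decision-level fusion: each modality's classifier produces a probability distribution over the same emotion labels, and the distributions are combined before picking a label. The sketch below is illustrative only and is not the cited paper's method; the emotion labels, probability values, and equal modality weights are all placeholder assumptions.

```python
# Illustrative decision-level fusion of per-modality emotion predictions.
# Each modality classifier (face, gesture, speech) is assumed to output a
# probability distribution over the same eight emotion labels; the label
# set and the probability vectors below are hard-coded stand-ins, not
# outputs of any real model.

EMOTIONS = ["anger", "despair", "interest", "irritation",
            "joy", "pleasure", "pride", "sadness"]

def fuse_predictions(modality_probs, weights=None):
    """Combine per-modality probability vectors by (optionally weighted)
    averaging and return the fused distribution plus the winning label."""
    n = len(modality_probs)
    if weights is None:
        weights = [1.0 / n] * n  # equal trust in every modality
    fused = [0.0] * len(EMOTIONS)
    for probs, w in zip(modality_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p
    best = max(range(len(EMOTIONS)), key=lambda i: fused[i])
    return fused, EMOTIONS[best]

if __name__ == "__main__":
    # Placeholder per-modality distributions over EMOTIONS.
    face    = [0.05, 0.05, 0.10, 0.05, 0.50, 0.10, 0.10, 0.05]
    gesture = [0.10, 0.05, 0.05, 0.05, 0.40, 0.20, 0.10, 0.05]
    speech  = [0.05, 0.10, 0.05, 0.05, 0.45, 0.15, 0.05, 0.10]
    fused, label = fuse_predictions([face, gesture, speech])
    print(label)  # all three modalities lean toward "joy"
```

Soft averaging is only one fusion rule; weighting modalities by their validation accuracy, or training a second-stage (e.g., Bayesian) classifier on the concatenated per-modality scores, are common alternatives.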
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Research-article
- Research
- Refereed limited