DOI: 10.1145/3551876.3554817
Research article

The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress

Published: 10 October 2022

Abstract

The Multimodal Sentiment Analysis Challenge (MuSe) 2022 is dedicated to multimodal sentiment and emotion recognition. For this year's challenge, we feature three datasets: (i) the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, which contains audio-visual recordings of German football coaches labelled for the presence of humour; (ii) the Hume-Reaction dataset, in which reactions of individuals to emotional stimuli have been annotated with respect to seven emotional expression intensities; and (iii) the Ulm-Trier Social Stress Test (Ulm-TSST) dataset, comprising audio-visual data labelled with continuous emotion values (arousal and valence) of people in stressful situations. Using these datasets, MuSe 2022 addresses three contemporary affective computing problems: in the Humor Detection Sub-Challenge (MuSe-Humor), spontaneous humour has to be recognised; in the Emotional Reactions Sub-Challenge (MuSe-Reaction), seven fine-grained 'in-the-wild' emotions have to be predicted; and in the Emotional Stress Sub-Challenge (MuSe-Stress), emotion values have to be predicted continuously for people under stress. The challenge is designed to attract different research communities and to encourage a fusion of their disciplines; in particular, MuSe 2022 targets the communities of audio-visual emotion recognition, health informatics, and symbolic sentiment analysis. This baseline paper describes the datasets as well as the feature sets extracted from them. A recurrent neural network with LSTM cells is used to set competitive baseline results on the test partitions of each sub-challenge. We report an Area Under the Curve (AUC) of .8480 for MuSe-Humor, a mean Pearson's Correlation Coefficient of .2801 (averaged over the seven classes) for MuSe-Reaction, and Concordance Correlation Coefficients (CCC) of .4931 and .4761 for valence and arousal, respectively, for MuSe-Stress.
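For reference, below is a minimal sketch of the three evaluation metrics quoted above (AUC for MuSe-Humor, mean Pearson's correlation for MuSe-Reaction, and CCC for MuSe-Stress). This is illustrative NumPy/scikit-learn code following the standard definitions of these metrics, not the official challenge evaluation script; the function names and toy data are assumptions.

```python
# Illustrative re-implementation of the challenge metrics (standard
# definitions); NOT the official MuSe 2022 evaluation script.
import numpy as np
from sklearn.metrics import roc_auc_score


def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient (MuSe-Stress)."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return float(2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2))


def mean_pearson(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean Pearson correlation over classes (MuSe-Reaction).

    Expects arrays of shape (n_samples, n_classes), e.g. 7 classes.
    """
    rhos = [np.corrcoef(y_true[:, c], y_pred[:, c])[0, 1]
            for c in range(y_true.shape[1])]
    return float(np.mean(rhos))


# MuSe-Humor is binary humour detection scored with AUC (toy data):
labels = np.array([0, 1, 1, 0, 1])
scores = np.array([0.2, 0.9, 0.6, 0.4, 0.7])
print(round(roc_auc_score(labels, scores), 4))
```

Similarly, the LSTM baseline mentioned above can be pictured as a small recurrent sequence model. The PyTorch sketch below shows a plausible shape for such a model; the layer sizes, names, and the 88-dimensional acoustic input (the dimensionality of eGeMAPS-style features) are illustrative assumptions, not the official MuSe 2022 baseline configuration.

```python
# A minimal LSTM sequence regressor in the spirit of the baseline
# described above; all sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class LSTMBaseline(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int = 64, out_dim: int = 1):
        super().__init__()
        # Recurrent encoder over per-frame feature vectors.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Linear head giving one prediction per time step,
        # e.g. a continuous arousal or valence trace (MuSe-Stress).
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) -> (batch, time, out_dim)
        h, _ = self.rnn(x)
        return self.head(h)


# Example: a batch of 4 sequences, 500 frames, 88-dim features.
model = LSTMBaseline(feat_dim=88)
preds = model(torch.randn(4, 500, 88))  # shape: (4, 500, 1)
```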





        Published In

        MuSe '22: Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge
        October 2022
        118 pages
        ISBN: 9781450394840
        DOI: 10.1145/3551876

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 10 October 2022


        Author Tags

        1. affective computing
        2. benchmark
        3. challenge
        4. emotion recognition
        5. humor detection
        6. multimodal fusion
        7. multimodal sentiment analysis

        Qualifiers

        • Research-article

        Conference

        MM '22

        Acceptance Rates

        MuSe '22 Paper Acceptance Rate: 14 of 17 submissions, 82%
        Overall Acceptance Rate: 14 of 17 submissions, 82%


        Article Metrics

        • Downloads (Last 12 months): 117
        • Downloads (Last 6 weeks): 5
        Reflects downloads up to 01 Jan 2025

        Cited By
        • (2024) Beyond Deep Learning: Charting the Next Frontiers of Affective Computing. Intelligent Computing 3. https://doi.org/10.34133/icomputing.0089. Online publication date: 16-Sep-2024
        • (2024) Emotion Recognition from Videos Using Multimodal Large Language Models. Future Internet 16:7 (247). https://doi.org/10.3390/fi16070247. Online publication date: 13-Jul-2024
        • (2024) EVAC 2024 – Empathic Virtual Agent Challenge: Appraisal-based Recognition of Affective States. Proceedings of the 26th International Conference on Multimodal Interaction, 677-683. https://doi.org/10.1145/3678957.3689029. Online publication date: 4-Nov-2024
        • (2024) Video-Based Emotional Reaction Intensity Estimation Based on Multimodal Feature Extraction. 2024 International Russian Automation Conference (RusAutoCon), 838-842. https://doi.org/10.1109/RusAutoCon61949.2024.10693960. Online publication date: 8-Sep-2024
        • (2024) Exploring Multimodal Features to Understand Cultural Context for Spontaneous Humor Prediction. Intelligent Human Computer Interaction, 143-152. https://doi.org/10.1007/978-3-031-53827-8_14. Online publication date: 29-Feb-2024
        • (2023) Computational charisma—A brick by brick blueprint for building charismatic artificial intelligence. Frontiers in Computer Science 5. https://doi.org/10.3389/fcomp.2023.1135201. Online publication date: 2-Nov-2023
        • (2023) ECG-Coupled Multimodal Approach for Stress Detection. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 67-72. https://doi.org/10.1145/3606039.3613103. Online publication date: 1-Nov-2023
        • (2023) COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 46:2, 805-822. https://doi.org/10.1109/TPAMI.2023.3325770. Online publication date: 18-Oct-2023
        • (2023) Emotion Recognition of Humans using modern technology of AI: A Survey. 2023 7th International Symposium on Innovative Approaches in Smart Technologies (ISAS), 1-10. https://doi.org/10.1109/ISAS60782.2023.10391385. Online publication date: 23-Nov-2023
        • (2023) Integrating Holistic and Local Information to Estimate Emotional Reaction Intensity. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 5934-5939. https://doi.org/10.1109/CVPRW59228.2023.00631. Online publication date: Jun-2023
