DOI: 10.1145/3551876.3554815
keynote

The Dos and Don'ts of Affect Analysis

Published: 10 October 2022

Abstract

As an inseparable and crucial component of communication, affects play a substantial role in human-device and human-human interaction. They convey information about a person's specific traits and states [1, 4, 5], about how one feels regarding the aims of a conversation, about the trustworthiness of one's verbal communication [3], and about the degree of adaptation in interpersonal speech [2]. This multifaceted nature of human affects poses a great challenge for machine learning systems that aim to recognise and understand them automatically. Contemporary self-supervised learning architectures such as Transformers, which define the state of the art (SOTA) in this area, show noticeable deficits in explainability, while more conventional, non-deep machine learning methods, which offer more transparency, often fall (far) behind SOTA systems. So, is it possible to get the best of these two 'worlds'? And more importantly, at what price? In this talk, I provide a set of Dos and Don'ts guidelines for addressing affective computing tasks with respect to (i) preserving the privacy of affective data and of individuals/groups, (ii) processing such data efficiently and transparently, (iii) ensuring reproducibility of the results, (iv) knowing the difference between causation and correlation, and (v) properly applying social and ethical protocols.
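
Of the five guidelines, reproducibility (iii) is the most directly actionable in code. As a minimal sketch (not taken from the keynote itself, and assuming a hypothetical PyTorch-based affect-recognition pipeline), pinning every common source of randomness is the usual first step toward comparable, repeatable experiments:

    import os
    import random

    import numpy as np
    import torch

    def seed_everything(seed: int = 42) -> None:
        """Pin all common sources of randomness so that repeated
        training runs of an affect-recognition model are comparable."""
        random.seed(seed)                 # Python's built-in RNG
        np.random.seed(seed)              # NumPy (shuffling, augmentation)
        torch.manual_seed(seed)           # PyTorch CPU RNG
        torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs, all devices
        os.environ["PYTHONHASHSEED"] = str(seed)
        # Trade some speed for determinism in cuDNN kernels.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    seed_everything(42)

Note that seeding alone does not guarantee bit-identical results across hardware or library versions; reporting seeds, package versions, and data splits alongside the numbers is the stronger habit.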

References

[1] Shahin Amiriparian, Lukas Christ, Andreas König, Eva-Maria Meßner, Alan Cowen, Erik Cambria, and Björn W. Schuller. 2022. MuSe 2022 Challenge: Multimodal Humour, Emotional Reactions, and Stress. In Proceedings of the 30th ACM International Conference on Multimedia (MM '22), October 10-14, 2022, Lisbon, Portugal. Association for Computing Machinery. 3 pages, to appear.
[2] Shahin Amiriparian, Jing Han, Maximilian Schmitt, Alice Baird, Adria Mallol-Ragolta, Manuel Milling, Maurice Gerczuk, and Björn Schuller. 2019. Synchronization in Interpersonal Speech. Frontiers in Robotics and AI 6 (2019). https://doi.org/10.3389/frobt.2019.00116
[3] Shahin Amiriparian, Jouni Pohjalainen, Erik Marchi, Sergey Pugachevskiy, and Björn Schuller. 2016. Is Deception Emotional? An Emotion-Driven Predictive Approach. In Interspeech 2016. 2011-2015. https://doi.org/10.21437/Interspeech.2016-565
[4] Lukas Christ, Shahin Amiriparian, Alice Baird, Panagiotis Tzirakis, Alexander Kathan, Niklas Müller, Lukas Stappen, Eva-Maria Meßner, Andreas König, Alan Cowen, Erik Cambria, and Björn W. Schuller. 2022. The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress. In Proceedings of the 3rd Multimodal Sentiment Analysis Challenge. Association for Computing Machinery, Lisbon, Portugal. Workshop held at ACM Multimedia 2022, to appear.
[5] Björn Schuller, Stefan Steidl, Anton Batliner, Alessandro Vinciarelli, Klaus Scherer, Fabien Ringeval, Mohamed Chetouani, Felix Weninger, Florian Eyben, Erik Marchi, et al. 2013. The INTERSPEECH 2013 Computational Paralinguistics Challenge: Social Signals, Conflict, Emotion, Autism. In Proceedings of INTERSPEECH.

Cited By

  • (2023) MuSe 2023 Challenge: Multimodal Prediction of Mimicked Emotions, Cross-Cultural Humour, and Personalised Recognition of Affects. In Proceedings of the 31st ACM International Conference on Multimedia, 9723-9725. https://doi.org/10.1145/3581783.3610943. Online publication date: 26 October 2023.

Published In

MuSe '22: Proceedings of the 3rd International Multimodal Sentiment Analysis Workshop and Challenge
October 2022
118 pages
ISBN: 9781450394840
DOI: 10.1145/3551876
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 10 October 2022

Author Tags

  1. affective computing
  2. explainability
  3. multimodal sentiment analysis
  4. reproducibility

Qualifiers

  • Keynote

Conference

MM '22

Acceptance Rates

MuSe '22 Paper Acceptance Rate: 14 of 17 submissions, 82%
Overall Acceptance Rate: 14 of 17 submissions, 82%

Bibliometrics

Article Metrics

  • Downloads (last 12 months): 2
  • Downloads (last 6 weeks): 0
Reflects downloads up to 18 Jan 2025
