
The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent

Published: 01 January 2012

Abstract

SEMAINE has created a large audiovisual database as part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an "operator" simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed appropriate nonverbal behavior) and Semi-automatic SAL (designed so that users' experience approximated interacting with a machine). We then recorded user interactions with the developed system, Automatic SAL, comparing the most communicatively competent version to versions with reduced nonverbal skills. High-quality recording was provided by five high-resolution, high-framerate cameras and four microphones, recorded synchronously. Recordings total 150 participants, for a total of 959 conversations with individual SAL characters, each lasting approximately 5 minutes. Solid SAL recordings are transcribed and extensively annotated: 6-8 raters per clip traced five affective dimensions and 27 associated categories. Other scenarios are labeled on the same pattern, but less fully. Additional information includes FACS annotation on selected extracts; identification of laughs, nods, and shakes; and measures of user engagement with the automatic system. The material is available through a web-accessible database.


Information

Published In

IEEE Transactions on Affective Computing, Volume 3, Issue 1
January 2012
128 pages

Publisher

IEEE Computer Society Press

Washington, DC, United States

Author Tags

  1. Emotional corpora
  2. affective annotation
  3. affective computing
  4. social signal processing

Qualifiers

  • Research-article


Cited By

  • (2024) MSP-GEO Corpus: A Multimodal Database for Understanding Video-Learning Experience. Proceedings of the 26th International Conference on Multimodal Interaction, pp. 488-497. DOI: 10.1145/3678957.3685737. Online publication date: 4-Nov-2024.
  • (2024) ECFCON: Emotion Consequence Forecasting in Conversations. Proceedings of the 32nd ACM International Conference on Multimedia, pp. 2233-2241. DOI: 10.1145/3664647.3681413. Online publication date: 28-Oct-2024.
  • (2024) Analyzing Continuous-Time and Sentence-Level Annotations for Speech Emotion Recognition. IEEE Transactions on Affective Computing, 15(3), pp. 1754-1768. DOI: 10.1109/TAFFC.2024.3372380. Online publication date: 1-Jul-2024.
  • (2024) A survey of dialogic emotion analysis. Pattern Recognition, 156(C). DOI: 10.1016/j.patcog.2024.110794. Online publication date: 18-Nov-2024.
  • (2024) Multimodal Emotion Recognition with Deep Learning. Information Fusion, 105(C). DOI: 10.1016/j.inffus.2023.102218. Online publication date: 1-May-2024.
  • (2024) A multi-modal driver emotion dataset and study. Engineering Applications of Artificial Intelligence, 130(C). DOI: 10.1016/j.engappai.2023.107772. Online publication date: 1-Apr-2024.
  • (2024) Virtual humans as social actors. Computers in Human Behavior, 155(C). DOI: 10.1016/j.chb.2024.108161. Online publication date: 1-Jun-2024.
  • (2024) EmoTwiCS: a corpus for modelling emotion trajectories in Dutch customer service dialogues on Twitter. Language Resources and Evaluation, 58(2), pp. 505-546. DOI: 10.1007/s10579-023-09700-0. Online publication date: 1-Jun-2024.
  • (2024) A multimodal fusion-based deep learning framework combined with local-global contextual TCNs for continuous emotion recognition from videos. Applied Intelligence, 54(4), pp. 3040-3057. DOI: 10.1007/s10489-024-05329-w. Online publication date: 1-Feb-2024.
  • (2024) A Three-stage multimodal emotion recognition network based on text low-rank fusion. Multimedia Systems, 30(3). DOI: 10.1007/s00530-024-01345-5. Online publication date: 7-May-2024.
