DOI: 10.1145/2818346.2823305

Temporal Association Rules for Modelling Multimodal Social Signals

Published: 09 November 2015

Abstract

In this paper, we present the first step of a methodology for automatically deducing the sequences of signals expressed by humans during an interaction. The aim is to link interpersonal stances with arrangements of social signals, such as modulations of Action Units and prosody, during a face-to-face exchange. The long-term goal is to infer association rules over these signals, which we plan to use as input to the animation of an Embodied Conversational Agent (ECA). We illustrate the proposed methodology on the SEMAINE-DB corpus, from which we automatically extracted Action Units (AUs), head positions, turn-taking, and prosody information. We then applied a data mining algorithm to find the sequences of social signals that characterize different social stances. We conclude by discussing our preliminary results, focusing on particular AUs (smiles and eyebrows), and the perspectives of this method.
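As a rough illustration of the kind of rules this methodology targets, the sketch below mines pairwise temporal association rules of the form "if signal A occurs, signal B follows within a given time window" from a symbolized multimodal event stream. This is a minimal sketch under assumed inputs, not the mining algorithm used in the paper; the event symbols (AU12_onset, pitch_rise, AU1_2_raise, turn_give), the timings, and the mine_pairwise_rules helper are all hypothetical.

```python
# Minimal sketch of temporal association rule mining over a symbolized
# multimodal event stream. This is NOT the paper's algorithm; it only
# illustrates the rule shape "if A occurs, B follows within [t_min, t_max]
# with some confidence". All symbols and timings are hypothetical.
from collections import defaultdict

def mine_pairwise_rules(events, t_min=0.0, t_max=2.0, min_conf=0.5):
    """events: list of (timestamp_in_seconds, symbol) pairs for one
    interaction, already symbolized (e.g. AU onsets, prosodic events).
    Returns (antecedent, consequent, support, confidence) tuples."""
    events = sorted(events)            # order events by time
    occurs = defaultdict(int)          # how often each antecedent fires
    follows = defaultdict(int)         # how often B follows A in the window
    for i, (t_a, a) in enumerate(events):
        occurs[a] += 1
        seen = set()                   # count each consequent once per firing
        for t_b, b in events[i + 1:]:
            if t_b - t_a > t_max:
                break                  # later events fall outside the window
            if t_b - t_a >= t_min and b != a and b not in seen:
                follows[(a, b)] += 1
                seen.add(b)
    rules = []
    for (a, b), n in follows.items():
        confidence = n / occurs[a]     # P(B within window | A occurred)
        if confidence >= min_conf:
            rules.append((a, b, n, confidence))
    return sorted(rules, key=lambda r: -r[3])

# Hypothetical symbolized fragment of one dyadic exchange.
stream = [(0.0, "AU12_onset"),   # smile onset
          (0.4, "pitch_rise"),   # rising pitch
          (1.1, "AU1_2_raise"),  # eyebrow raise
          (5.0, "AU12_onset"),
          (5.3, "pitch_rise"),
          (9.0, "turn_give")]    # speaker yields the turn

for a, b, n, conf in mine_pairwise_rules(stream):
    print(f"{a} -> {b} within 2s (support={n}, confidence={conf:.2f})")
```

On the toy stream above, the sketch reports, for example, that AU12_onset is followed by pitch_rise within two seconds with confidence 1.0; the paper's method mines richer multi-signal sequences of this kind and relates them to interpersonal stances.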


Cited By

  • (2019) Timing is Everything: Identifying Diverse Interaction Dynamics in Scenario and Non-Scenario Meetings. 2019 15th International Conference on eScience (eScience), pp. 203-212. DOI: 10.1109/eScience.2019.00029. Online publication date: Sep-2019.
  • (2018) Using Parallel Episodes of Speech to Represent and Identify Interaction Dynamics for Group Meetings. Proceedings of the Group Interaction Frontiers in Technology, pp. 1-7. DOI: 10.1145/3279981.3279983. Online publication date: 16-Oct-2018.

    Published In

    ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction
    November 2015
    678 pages
    ISBN: 9781450339124
    DOI: 10.1145/2818346
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 09 November 2015


    Author Tags

    1. action unit
    2. data mining
    3. data processing
    4. facial expression
    5. interpersonal stance
    6. prosody
    7. sequence mining
    8. social signal processing

    Qualifiers

    • Research-article

    Funding Sources

    • Labex SMART ANR

    Conference

    ICMI '15: International Conference on Multimodal Interaction
    November 9 - 13, 2015
    Seattle, Washington, USA

    Acceptance Rates

    ICMI '15 Paper Acceptance Rate 52 of 127 submissions, 41%;
    Overall Acceptance Rate 453 of 1,080 submissions, 42%

    Article Metrics

    • Downloads (last 12 months): 2
    • Downloads (last 6 weeks): 0
    Reflects downloads up to 04 Jan 2025

