DOI: 10.1145/3462244.3480975

2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH)

Published: 18 October 2021

Abstract

This workshop discusses how interactive, multimodal technology, such as virtual agents, can be used in social skills training to measure and train social-affective interactions. Sensing technology now makes it possible to analyze users' behavior and physiological signals, and a variety of signal processing and machine learning methods can be applied to such prediction tasks. This kind of social signal processing, and the tools built on it, can be used to measure and reduce social stress in everyday situations, including public speaking at school and in the workplace.
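As a minimal illustration of the kind of prediction task the abstract mentions, raw physiological and behavioral signals might be reduced to summary features and mapped to a stress label. Everything below (the feature set, thresholds, and the toy rule-based classifier) is a hypothetical sketch, not a method presented at the workshop:

```python
from statistics import mean, stdev

def extract_features(heart_rate, speech_pauses):
    """Summarize raw signals into fixed-length features.

    heart_rate: list of beats-per-minute samples
    speech_pauses: list of pause durations in seconds
    """
    return {
        "hr_mean": mean(heart_rate),
        "hr_std": stdev(heart_rate),
        "pause_total": sum(speech_pauses),
    }

def predict_stress(features, hr_threshold=90.0, pause_threshold=5.0):
    """Toy rule-based predictor: flag 'stressed' when mean heart rate
    or total pause time exceeds an illustrative threshold. In practice
    a trained classifier would replace these hand-picked rules."""
    flags = (features["hr_mean"] > hr_threshold) + (features["pause_total"] > pause_threshold)
    return "stressed" if flags >= 1 else "calm"

calm = extract_features([72, 75, 70, 74], [0.4, 0.6])
tense = extract_features([95, 102, 98, 110], [2.1, 3.5, 1.8])
print(predict_stress(calm))   # calm
print(predict_stress(tense))  # stressed
```

In a real system the hand-written rules would be replaced by a classifier trained on labeled recordings, but the pipeline shape (signals, then features, then prediction) stays the same.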



Published In

ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction
October 2021, 876 pages
ISBN: 9781450384810
DOI: 10.1145/3462244
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. affective computing
  2. cognitive behavioral therapy
  3. motivational interview
  4. physiological signal processing
  5. social robotics
  6. social signal processing
  7. social skills training
  8. virtual agents

Qualifiers

  • Abstract
  • Research
  • Refereed limited

Conference

ICMI '21: International Conference on Multimodal Interaction
October 18-22, 2021
Montréal, QC, Canada

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)

Article Metrics

  • Total Citations: 0
  • Total Downloads: 154
  • Downloads (last 12 months): 67
  • Downloads (last 6 weeks): 15

Reflects downloads up to 01 Jan 2025

