DOI: 10.1145/2818346.2820747 · Research article · ICMI-MLMI Conference Proceedings

Combining Two Perspectives on Classifying Multimodal Data for Recognizing Speaker Traits

Published: 09 November 2015

Abstract

Human communication conveys messages through both verbal and non-verbal channels (facial expressions, gestures, prosody, etc.). Teaching a computer to learn these patterns by combining cues from multiple modalities is challenging, because it requires an effective representation of the signals and must account for the complex interactions between them. From a machine learning perspective this presents a two-fold challenge: a) modeling the inter-modal variations and dependencies; b) representing the data with an appropriate number of features, such that the necessary patterns are captured while avoiding concerns such as over-fitting. In this work we address these aspects of multimodal recognition in the context of recognizing two essential speaker traits, namely the passion and credibility of online movie reviewers. We propose a novel ensemble classification approach that combines two different perspectives on classifying multimodal data, each of which independently addresses the two-fold challenge. In the first, we combine the features from multiple modalities but assume inter-modality conditional independence. In the second, we explicitly capture the correlation between the modalities in a low-dimensional space and explore a novel clustering-based kernel similarity approach for recognition. Additionally, this work investigates a recent technique for encoding text data that captures the semantic similarity of verbal content while preserving word ordering. Experimental results on a recent public dataset show significant improvement of our approach over multiple baselines. Finally, we analyze the most discriminative elements of a speaker's non-verbal behavior that contribute to his or her perceived credibility and passion.




Published In

cover image ACM Conferences
ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction
November 2015
678 pages
ISBN:9781450339124
DOI:10.1145/2818346
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Badges

  • Best Paper

Author Tags

  1. clustering
  2. discriminative model
  3. ensemble
  4. generative model
  5. kernels
  6. multimodal

Qualifiers

  • Research-article

Funding Sources

  • U. S. Army Research Laboratory

Conference

ICMI '15: International Conference on Multimodal Interaction
November 9-13, 2015
Seattle, Washington, USA

Acceptance Rates

ICMI '15 Paper Acceptance Rate 52 of 127 submissions, 41%;
Overall Acceptance Rate 453 of 1,080 submissions, 42%

Cited By

  • Survey on multimodal approaches to emotion recognition. Neurocomputing, 556:126693, Nov. 2023. DOI: 10.1016/j.neucom.2023.126693
  • Speaker Personality Recognition With Multimodal Explicit Many2many Interactions. 2020 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, Jul. 2020. DOI: 10.1109/ICME46284.2020.9102820
  • Personality Trait Classification Based on Co-occurrence Pattern Modeling with Convolutional Neural Network. HCI International 2020 – Late Breaking Papers: Interaction, Knowledge and Social Media, pp. 359-370, Sep. 2020. DOI: 10.1007/978-3-030-60152-2_27
  • Modeling Dyadic and Group Impressions with Intermodal and Interperson Features. ACM Transactions on Multimedia Computing, Communications, and Applications, 15(1s):1-30, Jan. 2019. DOI: 10.1145/3265754
  • Medical and health systems. The Handbook of Multimodal-Multisensor Interfaces, pp. 423-476, Jul. 2019. DOI: 10.1145/3233795.3233808
  • Multimodal integration for interactive conversational systems. The Handbook of Multimodal-Multisensor Interfaces, pp. 21-76, Jul. 2019. DOI: 10.1145/3233795.3233798
  • Multimodal prediction of the audience's impression in political debates. Proceedings of the 20th International Conference on Multimodal Interaction: Adjunct, pp. 1-6, Oct. 2018. DOI: 10.1145/3281151.3281157
  • Multimodal analysis of social signals. The Handbook of Multimodal-Multisensor Interfaces, pp. 203-226, Oct. 2018. DOI: 10.1145/3107990.3107999
  • Investigating Effectiveness of Linguistic Features Based on Speech Recognition for Storytelling Skill Assessment. Recent Trends and Future Technology in Applied Intelligence, pp. 148-157, May 2018. DOI: 10.1007/978-3-319-92058-0_14
  • Multimodal sentiment analysis with word-level fusion and reinforcement learning. Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 163-171, Nov. 2017. DOI: 10.1145/3136755.3136801
