Schels et al., 2013 - Google Patents
Multi-modal classifier-fusion for the recognition of emotions
- Document ID
- 14578113860484645473
- Author
- Schels M
- Glodek M
- Meudt S
- Scherer S
- Schmidt M
- Layher G
- Tschechne S
- Brosch T
- Hrabal D
- Walter S
- Traue H
- Palm G
- Schwenker F
- Rojc M
- Campbell N
- Publication year
- 2013
- Publication venue
- Coverbal Synchrony in Human-Machine Interaction
Snippet
Research activities in the field of human-computer interaction increasingly addressed the aspect of integrating features that characterize different types of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial …
Concepts
- fusion (abstract, description; 30 occurrences)
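Neither the snippet nor the metadata above spells out the fusion architecture. As a rough illustration of the decision-level (late) classifier fusion the title refers to, the sketch below trains one classifier per modality on toy data and averages their class posteriors; the modality names, feature dimensions, and scikit-learn classifiers are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of decision-level (late) multi-modal classifier fusion.
# The modalities, features, and classifiers are toy assumptions for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic per-modality features for the same 200 samples (e.g. audio vs. video).
n = 200
y = rng.integers(0, 2, size=n)                         # two emotion classes
X_audio = rng.normal(size=(n, 20)) + y[:, None]        # toy audio-prosody features
X_video = rng.normal(size=(n, 40)) + 0.5 * y[:, None]  # toy facial-expression features

# One classifier per modality, each producing class-membership probabilities.
clf_audio = SVC(probability=True).fit(X_audio[:150], y[:150])
clf_video = MLPClassifier(max_iter=1000).fit(X_video[:150], y[:150])

# Late fusion: average the per-modality posteriors, then take the argmax.
p_audio = clf_audio.predict_proba(X_audio[150:])
p_video = clf_video.predict_proba(X_video[150:])
p_fused = (p_audio + p_video) / 2.0
y_pred = p_fused.argmax(axis=1)

print("fused accuracy:", (y_pred == y[150:]).mean())
```

Weighted averaging or a second-stage classifier over the concatenated posteriors are common alternatives to the plain mean used here.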
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6267—Classification techniques
- G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
- G06K9/6247—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
- G06K9/00268—Feature extraction; Face representation
- G06K9/00281—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00335—Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; Recognising body parts, e.g. hand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
- G06F19/30—Medical informatics, i.e. computer-based analysis or dissemination of patient or disease data
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/18—Digital computers in general; Data processing equipment in general in which a programme is changed according to experience gained by the computer itself during a complete run; Learning machines
Similar Documents
Publication | Title
---|---
CN108805089B (en) | Multi-modal-based emotion recognition method
CN108805087B (en) | Time sequence semantic fusion association judgment subsystem based on multi-modal emotion recognition system
CN108899050B (en) | Voice signal analysis subsystem based on multi-modal emotion recognition system
CN108877801B (en) | Multi-turn dialogue semantic understanding subsystem based on multi-modal emotion recognition system
Sharma et al. | Real-time emotional health detection using fine-tuned transfer networks with multimodal fusion
Barros et al. | Developing crossmodal expression recognition based on a deep neural model
Verma et al. | Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals
Schels et al. | Multi-modal classifier-fusion for the recognition of emotions
Sharma et al. | A survey on automatic multimodal emotion recognition in the wild
Kächele et al. | Inferring depression and affect from application dependent meta knowledge
Gladys et al. | Survey on multimodal approaches to emotion recognition
Jayanthi et al. | An integrated framework for emotion recognition using speech and static images with deep classifier fusion approach
Zhang et al. | Spatio-temporal EEG representation learning on Riemannian manifold and Euclidean space
Kim et al. | Multimodal affect classification at various temporal lengths
Arumugam | Emotion classification using facial expression
Haddad et al. | Emotion recognition from audio-visual information based on convolutional neural network
Thiam et al. | A temporal dependency based multi-modal active learning approach for audiovisual event detection
Schels et al. | Multi-modal classifier-fusion for the classification of emotional states in WOZ scenarios
Zhao et al. | A review of the emotion recognition model of robots
Khorrami | How deep learning can help emotion recognition
Espino-Salinas et al. | Multimodal driver emotion recognition using motor activity and facial expressions
Sudhan et al. | Multimodal depression severity detection using deep neural networks and depression assessment scale
Schwenker et al. | Multimodal affect recognition in the context of human-computer interaction for companion-systems
Vala et al. | Analytical review and study on emotion recognition strategies using multimodal signals
Ding et al. | A Multimodal Driver Anger Recognition Method Based on Context-Awareness