Multimodal approaches for emotion recognition: a survey
Sebe et al., 2005 - Google Patents
- Document ID
- 12361233807337880680
- Authors
- Sebe N
- Cohen I
- Gevers T
- Huang T
- Publication year
- 2005
- Publication venue
- Internet Imaging VI
Snippet
Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are …
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
- G06K9/00268—Feature extraction; Face representation
- G06K9/00281—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00335—Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/001—Teaching or communicating with blind persons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/009—Teaching or communicating with deaf persons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Similar Documents
Publication | Title
---|---
Sebe et al. | Multimodal approaches for emotion recognition: a survey
Sebe et al. | Multimodal emotion recognition
Pantic et al. | Toward an affect-sensitive multimodal human-computer interaction
Marechal et al. | Survey on AI-Based Multimodal Methods for Emotion Detection
Chen | Joint processing of audio-visual information for the recognition of emotional expressions in human-computer interaction
Lisetti et al. | Automatic facial expression interpretation: Where human-computer interaction, artificial intelligence and cognitive science intersect
Cohn et al. | Multimodal assessment of depression from behavioral signals
Lisetti et al. | MAUI: a multimodal affective user interface
Vinciarelli et al. | Social signal processing: Survey of an emerging domain
Al Osman et al. | Multimodal affect recognition: Current approaches and challenges
Wu et al. | Speaking effect removal on emotion recognition from facial expressions based on eigenface conversion
Burzo et al. | Multimodal deception detection
Paleari et al. | Toward multimodal fusion of affective cues
Caridakis et al. | User and context adaptive neural networks for emotion recognition
D'Mello et al. | Multimodal-multisensor affect detection
Spaulding et al. | Frustratingly easy personalization for real-time affect interpretation of facial expression
Elkobaisi et al. | Human emotion: a survey focusing on languages, ontologies, datasets, and systems
Zhu et al. | A Review of Key Technologies for Emotion Analysis Using Multimodal Information
Singh et al. | Multi-modal Expression Detection (MED): A cutting-edge review of current trends, challenges and solutions
Vinciarelli et al. | Multimodal analysis of social signals
Schuller | Multimodal user state and trait recognition: An overview
Virvou et al. | Emotion recognition: empirical studies towards the combination of audio-lingual and visual-facial modalities through multi-attribute decision making
Bigand et al. | Person identification based on sign language motion: Insights from human perception and computational modeling
McTear et al. | Affective conversational interfaces
Chanchal et al. | Progress in Multimodal Affective Computing: From Machine Learning to Deep Learning