
Automatic Analysis of Facial Expressions: The State of the Art

Published: 01 December 2000

Abstract

Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, development of an automated system that accomplishes this task is rather difficult. There are several related problems: detection of an image segment as a face, extraction of the facial expression information, and classification of the expression (e.g., in emotion categories). A system that performs these operations accurately and in real time would form a big step in achieving a human-like interaction between man and machine. This paper surveys the past work in solving these problems. The capability of the human visual system with respect to these problems is discussed, too. It is meant to serve as an ultimate goal and a guide for determining recommendations for development of an automatic facial expression analyzer.
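The abstract describes a three-stage pipeline: detect a face in the scene, extract expression information from it, and classify the expression into an emotion category. The toy sketch below illustrates that decomposition only; every name, the stub detector, and the nearest-prototype classifier are illustrative assumptions, not methods from the paper.

```python
# Hypothetical sketch of the three-stage pipeline the abstract describes:
# (1) face detection, (2) facial expression feature extraction,
# (3) classification into emotion categories. All components are
# deliberately simplistic stand-ins.

from dataclasses import dataclass


@dataclass
class Face:
    """A detected face region within a larger scene image."""
    x: int
    y: int
    width: int
    height: int


def detect_faces(image):
    """Stage 1: return candidate face regions (stubbed here)."""
    # A real system would scan the image with a face/non-face
    # classifier; here we pretend the whole frame is one face.
    height, width = len(image), len(image[0])
    return [Face(0, 0, width, height)]


def extract_features(image, face):
    """Stage 2: reduce a face region to an expression feature vector."""
    # Placeholder features: mean intensity of the upper and lower face
    # halves, a crude stand-in for brow and mouth measurements.
    rows = image[face.y:face.y + face.height]
    half = len(rows) // 2

    def mean(block):
        return sum(map(sum, block)) / max(1, sum(len(r) for r in block))

    return [mean(rows[:half]), mean(rows[half:])]


def classify_expression(features, prototypes):
    """Stage 3: nearest-prototype assignment to an emotion category."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(features, prototypes[label]))


# End-to-end run on a toy 4x4 grayscale "image".
image = [[10, 10, 10, 10],
         [10, 50, 50, 10],
         [10, 90, 90, 10],
         [10, 10, 10, 10]]
prototypes = {"neutral": [20.0, 30.0], "happiness": [30.0, 40.0]}
for face in detect_faces(image):
    feats = extract_features(image, face)
    print(classify_expression(feats, prototypes))  # prints: neutral
```

The point of the structure is that each stage can be improved independently, which is how the survey organizes the literature it reviews.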

References

[1]
J.N. Bassili, “Facial Motion in the Perception of Faces and of Emotional Expression,” J. Experimental Psychology, vol. 4, pp. 373-379, 1978.
[2]
M.J. Black and Y. Yacoob, “Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion,” Int'l J. Computer Vision, vol. 25, no. 1, pp. 23-48, 1997.
[3]
M.J. Black and Y. Yacoob, “Tracking and Recognizing Rigid and Non-Rigid Facial Motions Using Local Parametric Models of Image Motions,” Proc. Int'l Conf. Computer Vision, pp. 374-381, 1995.
[4]
E. Boyle, A.H. Anderson, and A. Newlands, “The Effects of Visibility on Dialogue in a Cooperative Problem Solving Task,” Language and Speech, vol. 37, no. 1, pp. 1-20, 1994.
[5]
V. Bruce, Recognizing Faces. Hove, East Sussex: Lawrence Erlbaum Assoc., 1986.
[6]
V. Bruce, “What the Human Face Tells the Human Mind: Some Challenges for the Robot-Human Interface,” Proc. Int'l Workshop Robot and Human Comm., pp. 44-51, 1992.
[7]
J. Buhmann, J. Lange, and C. von der Malsburg, “Distortion Invariant Object Recognition—Matching Hierarchically Labelled Graphs,” Proc. Int'l Joint Conf. Neural Networks, pp. 155-159, 1989.
[8]
F.W. Campbell, “How Much of the Information Falling on the Retina Reaches the Visual Cortex and How Much is Stored in the Memory?” Seminar at the Pontificae Academiae Scientiarium Scripta Varia, 1983.
[9]
L.S. Chen, T.S. Huang, T. Miyasato, and R. Nakatsu, “Multimodal Human Emotion/Expression Recognition,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 366-371, 1998.
[10]
J.F. Cohn, A.J. Zlochower, J.J. Lien, and T. Kanade, “Feature-Point Tracking by Optical Flow Discriminates Subtle Differences in Facial Expression,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 396-401, 1998.
[11]
T.F. Cootes, C.J. Taylor, D.H. Cooper, and J. Graham, “Active Shape Models—Training and Application,” Computer Vision Image Understanding, vol. 61, no. 1, pp. 38-59, 1995.
[12]
T.F. Cootes, G.J. Edwards, and C.J. Taylor, “Active Appearance Models,” Proc. European Conf. Computer Vision, vol. 2, pp. 484-498, 1998.
[13]
G.W. Cottrell and J. Metcalfe, “EMPATH: Face, Emotion, Gender Recognition Using Holons,” Advances in Neural Information Processing Systems 3, R.P. Lippman, ed., pp. 564-571, 1991.
[14]
D. DeCarlo, D. Metaxas, and M. Stone, “An Anthropometric Face Model Using Variational Techniques,” Proc. SIGGRAPH, pp. 67-74, 1998.
[15]
L.C. De Silva, T. Miyasato, and R. Nakatsu, “Facial Emotion Recognition Using Multimodal Information,” Proc. Information, Comm., and Signal Processing Conf., pp. 397-401, 1997.
[16]
G. Donato, M.S. Bartlett, J.C. Hager, P. Ekman, and T.J. Sejnowski, “Classifying Facial Actions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 974-989, Oct. 1999.
[17]
M.S. Bartlett, J.C. Hager, P. Ekman, and T.J. Sejnowski, “Measuring Facial Expressions by Computer Image Analysis,” Psychophysiology, vol. 36, pp. 253-263, 1999.
[18]
G.J. Edwards, T.F. Cootes, and C.J. Taylor, “Face Recognition Using Active Appearance Models,” Proc. European Conf. Computer Vision, vol. 2, pp. 581-695, 1998.
[19]
P. Eisert and B. Girod, “Analyzing Facial Expressions for Virtual Conferencing,” IEEE Computer Graphics and Applications, vol. 18, no. 5, pp. 70-78, 1998.
[20]
P. Ekman and W.V. Friesen, Unmasking the Face. New Jersey: Prentice Hall, 1975.
[21]
P. Ekman and W.V. Friesen, Facial Action Coding System (FACS): Manual. Palo Alto: Consulting Psychologists Press, 1978.
[22]
P. Ekman, Emotion in the Human Face. Cambridge Univ. Press, 1982.
[23]
H.D. Ellis, “Processes Underlying Face Recognition,” The Neuropsychology of Face Perception and Facial Expression, R. Bruyer, ed., pp. 1-27, New Jersey: Lawrence Erlbaum Assoc., 1986.
[24]
I. Essa and A. Pentland, “Coding, Analysis, Interpretation, and Recognition of Facial Expressions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 757-763, July 1997.
[25]
A.J. Fridlund, P. Ekman, and H. Oster, “Facial Expressions of Emotion: Review of Literature, 1970-1983,” Nonverbal Behavior and Communication, A.W. Siegman and S. Feldstein, eds., pp. 143-224, Hillsdale, N.J.: Lawrence Erlbaum Assoc., 1987.
[26]
A.J. Fridlund, “Evolution and Facial Action in Reflex, Social Motive, and Paralanguage,” Biological Psychology, vol. 32, pp. 3-100, 1991.
[27]
D.J. Hand, Discrimination and Classification. John Wiley and Sons, 1981.
[28]
F. Hara and H. Kobayashi, “State of the Art in Component Development for Interactive Communication with Humans,” Advanced Robotics, vol. 11, no. 6, pp. 585-604, 1997.
[29]
R.J. Holt, T.S. Huang, A.N. Netravali, and R.J. Qian, “Determining Articulated Motion from Perspective Views,” Pattern Recognition, vol. 30, no. 9, pp. 1,435-1,449, 1997.
[30]
H. Hong, H. Neven, and C. von der Malsburg, “Online Facial Expression Recognition Based on Personalized Galleries,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 354-359, 1998.
[31]
B. Horn and B. Schunck, “Determining Optical Flow,” Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[32]
C.L. Huang and Y.M. Huang, “Facial Expression Recognition Using Model-Based Feature Extraction and Action Parameters Classification,” J. Visual Comm. and Image Representation, vol. 8, no. 3, pp. 278-290, 1997.
[33]
C.E. Izard, The Face of Emotion. New York: Appleton-Century-Crofts, 1971.
[34]
C.E. Izard, “Facial Expressions and the Regulation of Emotions,” J. Personality and Social Psychology, vol. 58, no. 3, pp. 487-498, 1990.
[35]
T. Johanstone, R. Banse, and K.S. Scherer, “Acoustic Profiles in Prototypical Vocal Expressions of Emotions,” Proc. Int'l Conf. Phonetic Science, vol. 4, pp. 2-5, 1995.
[36]
I. Kanter and H. Sompolinsky, “Associative Recall of Memory without Errors,” Physical Review, vol. 35, no. 1, pp. 380-392, 1987.
[37]
M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active Contour Models,” Proc. Int'l Conf. Computer Vision, pp. 259-269, 1987.
[38]
M. Kato, I. So, Y. Hishinuma, O. Nakamura, and T. Minami, “Description and Synthesis of Facial Expressions Based on Isodensity Maps,” Visual Computing, T. Kunii, ed., pp. 39-56, Tokyo: Springer-Verlag, 1991.
[39]
F. Kawakami, M. Okura, H. Yamada, H. Harashima, and S. Morishima, “3D Emotion Space for Interactive Communication,” Proc. Computer Science Conf., pp. 471-478, 1995.
[40]
G.D. Kearney and S. McKenzie, “Machine Interpretation of Emotion: Design of Memory-Based Expert System for Interpreting Facial Expressions in Terms of Signaled Emotions (JANUS),” Cognitive Science, vol. 17, no. 4, pp. 589-622, 1993.
[41]
S. Kimura and M. Yachida, “Facial Expression Recognition and Its Degree Estimation,” Proc. Computer Vision and Pattern Recognition, pp. 295-300, 1997.
[42]
H. Kobayashi and F. Hara, “Facial Interaction between Animated 3D Face Robot and Human Beings,” Proc. Int'l Conf. Systems, Man, Cybernetics, pp. 3,732-3,737, 1997.
[43]
H. Kobayashi and F. Hara, “Recognition of Six Basic Facial Expressions and Their Strength by Neural Network,” Proc. Int'l Workshop Robot and Human Comm., pp. 381-386, 1992.
[44]
H. Kobayashi and F. Hara, “Recognition of Mixed Facial Expressions by Neural Network,” Proc. Int'l Workshop Robot and Human Comm., pp. 387-391, 1992.
[45]
K.M. Lam and H. Yan, “An Analytic-to-Holistic Approach for Face Recognition Based on a Single Frontal View,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 673-686, July 1998.
[46]
H.K. Lee and J.H. Kim, “An HMM-Based Threshold Model Approach for Gesture Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 961-973, Oct. 1999.
[47]
H. Li and P. Roivainen, “3D Motion Estimation in Model-Based Facial Image Coding,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 6, pp. 545-555, 1993.
[48]
J.J. Lien, T. Kanade, J.F. Cohn, and C.C. Li, “Automated Facial Expression Recognition Based on FACS Action Units,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 390-395, 1998.
[49]
B. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proc. Joint Conf. Artificial Intelligence, pp. 674-680, 1981.
[50]
M.J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding Facial Expressions with Gabor Wavelets,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 200-205, 1998.
[51]
M.J. Lyons, J. Budynek, and S. Akamatsu, “Automatic Classification of Single Facial Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 12, pp. 1,357-1,362, 1999.
[52]
K. Mase, “Recognition of Facial Expression from Optical Flow,” IEICE Trans., vol. E74, no. 10, pp. 3,474-3,483, 1991.
[53]
K. Matsumura, Y. Nakamura, and K. Matsui, “Mathematical Representation and Image Generation of Human Faces by Metamorphosis,” Electronics and Comm. in Japan—3, vol. 80, no. 1, pp. 36-46, 1997.
[54]
K. Matsuno, C.W. Lee, and S. Tsuji, “Recognition of Facial Expression with Potential Net,” Proc. Asian Conf. Computer Vision, pp. 504-507, 1993.
[55]
A. Mehrabian, “Communication without Words,” Psychology Today, vol. 2, no. 4, pp. 53-56, 1968.
[56]
S. Morishima, F. Kawakami, H. Yamada, and H. Harashima, “A Modelling of Facial Expression and Emotion for Recognition and Synthesis,” Symbiosis of Human and Artifact, Y. Anzai, K. Ogawa, and H. Mori, eds., pp. 251-256, Amsterdam: Elsevier Science BV, 1995.
[57]
Y. Moses, D. Reynard, and A. Blake, “Determining Facial Expressions in Real Time,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 332-337, 1995.
[58]
R. Nakatsu, “Toward the Creation of a New Medium for the Multimedia Era,” Proc. IEEE, vol. 86, no. 5, pp. 825-836, 1998.
[59]
T. Otsuka and J. Ohya, “Recognition of Facial Expressions Using HMM with Continuous Output Probabilities,” Proc. Int'l Workshop Robot and Human Comm., pp. 323-328, 1996.
[60]
T. Otsuka and J. Ohya, “Spotting Segments Displaying Facial Expression from Image Sequences Using HMM,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 442-447, 1998.
[61]
C. Padgett and G.W. Cottrell, “Representing Face Images for Emotion Classification,” Proc. Conf. Advances in Neural Information Processing Systems, pp. 894-900, 1996.
[62]
M. Pantic and L.J.M. Rothkrantz, “Expert System for Automatic Analysis of Facial Expression,” Image and Vision Computing J., vol. 18, no. 11, pp. 881-905, 2000.
[63]
M. Pantic and L.J.M. Rothkrantz, “An Expert System for Multiple Emotional Classification of Facial Expressions,” Proc. Int'l Conf. Tools with Artificial Intelligence, pp. 113-120, 1999.
[64]
V.I. Pavlovic, R. Sharma, and T.S. Huang, “Visual Interpretation of Hand Gestures for Human-Computer Interaction: Review,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 677-695, 1997.
[65]
A. Pentland, B. Moghaddam, and T. Starner, “View-Based and Modular Eigenspaces for Face Recognition,” Proc. Computer Vision and Pattern Recognition, pp. 84-91, 1994.
[66]
V.A. Petrushin, “Emotion in Speech: Recognition and Application to Call Centers,” Proc. Conf. Artificial Neural Networks in Eng., 1999.
[67]
R.W. Picard and E. Vyzas, “Offline and Online Recognition of Emotion Expression from Physiological Data,” Emotion-Based Agent Architectures Workshop Notes, Int'l Conf. Autonomous Agents, pp. 135-142, 1999.
[68]
T.S. Polzin and A.H. Waibel, “Detecting Emotions in Speech,” Proc. Conf. Cooperative Multimedia Comm., 1998.
[69]
W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical Recipes in C, Cambridge Univ. Press, 1992.
[70]
A. Rahardja, A. Sowmya, and W.H. Wilson, “A Neural Network Approach to Component versus Holistic Recognition of Facial Expressions in Images,” SPIE, Intelligent Robots and Computer Vision X: Algorithms and Techniques, vol. 1607, pp. 62-70, 1991.
[71]
A. Ralescu and R. Hartani, “Some Issues in Fuzzy and Linguistic Modeling,” Proc. Conf. Fuzzy Systems, pp. 1,903-1,910, 1995.
[72]
M. Riedmiller and H. Braun, “A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm,” Proc. Int'l Conf. Neural Networks, pp. 586-591, 1993.
[73]
M. Rosenblum, Y. Yacoob, and L. Davis, “Human Emotion Recognition from Motion Using a Radial Basis Function Network Architecture,” Proc. IEEE Workshop on Motion of Non-Rigid and Articulated Objects, pp. 43-49, 1994.
[74]
H.A. Rowley, S. Baluja, and T. Kanade, “Neural Network-Based Face Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, Jan. 1998.
[75]
The Psychology of Facial Expression, J.A. Russell and J.M. Fernandez-Dols, eds. Cambridge: Cambridge Univ. Press, 1997.
[76]
J.A. Russell, “Is There Universal Recognition of Emotion from Facial Expression?” Psychological Bulletin, vol. 115, no. 1, pp. 102-141, 1994.
[77]
P. Ekman, “Strong Evidence for Universals in Facial Expressions: A Reply to Russell's Mistaken Critique,” Psychological Bulletin, vol. 115, no. 2, pp. 268-287, 1994.
[78]
A. Samal, “Minimum Resolution for Human Face Detection and Identification,” SPIE Human Vision, Visual Processing, and Digital Display II, vol. 1453, pp. 81-89, 1991.
[79]
A. Samal and P.A. Iyengar, “Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey,” Pattern Recognition, vol. 25, no. 1, pp. 65-77, 1992.
[80]
E. Simoncelli, “Distributed Representation and Analysis of Visual Motion,” PhD thesis, Massachusetts Inst. of Technology, 1993.
[81]
J. Steffens, E. Elagin, and H. Neven, “PersonSpotter—Fast and Robust System for Human Detection, Tracking, and Recognition,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 516-521, 1998.
[82]
G.M. Stephenson, K. Ayling, and D.R. Rutter, “The Role of Visual Communication in Social Exchange,” British J. Social and Clinical Psychology, vol. 15, pp. 113-120, 1976.
[83]
K.K. Sung and T. Poggio, “Example-Based Learning for View-Based Human Face Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-51, Jan. 1998.
[84]
A. Takeuchi and K. Nagao, “Communicative Facial Displays as a New Conversational Modality,” Proc. ACM INTERCHI, pp. 187-193, 1993.
[85]
J.C. Terrillon, M. David, and S. Akamatsu, “Automatic Detection of Human Faces in Natural Scene Images by Use of a Skin Color Model of Invariant Moments,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 112-117, 1998.
[86]
D. Terzopoulos and K. Waters, “Analysis and Synthesis of Facial Image Sequences Using Physical and Anatomical Models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 6, pp. 569-579, June 1993.
[87]
N.M. Thalmann and D. Thalmann, “600 Indexed References on Computer Animation,” J. Visualisation and Computer Animation, vol. 3, pp. 147-174, 1992.
[88]
N.M. Thalmann, P. Kalra, and M. Escher, “Face to Virtual Face,” Proc. IEEE, vol. 86, no. 5, pp. 870-883, 1998.
[89]
N.M. Thalmann, P. Kalra, and I.S. Pandzic, “Direct Face-to-Face Communication between Real and Virtual Humans,” Int'l J. Information Technology, vol. 1, no. 2, pp. 145-157, 1995.
[90]
N. Tosa and R. Nakatsu, “Life-Like Communication Agent—Emotion Sensing Character MIC and Feeling Session Character MUSE,” Proc. Conf. Multimedia Computing and Systems, pp. 12-19, 1996.
[91]
M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[92]
H. Ushida, T. Takagi, and T. Yamaguchi, “Recognition of Facial Expressions Using Conceptual Fuzzy Sets,” Proc. Conf. Fuzzy Systems, vol. 1, pp. 594-599, 1993.
[93]
P. Vanger, R. Honlinger, and H. Haken, “Applications of Synergetics in Decoding Facial Expression of Emotion,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 24-29, 1995.
[94]
J.M. Vincent, D.J. Myers, and R.A. Hutchinson, “Image Feature Location in Multi-Resolution Images Using a Hierarchy of Multi-Layer Perceptrons,” Neural Networks for Speech, Vision, and Natural Language, pp. 13-29, Chapman & Hall, 1992.
[95]
M. Wang, Y. Iwai, and M. Yachida, “Expression Recognition from Time-Sequential Facial Images by Use of Expression Change Model,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 324-329, 1998.
[96]
D.J. Williams and M. Shah, “A Fast Algorithm for Active Contours and Curvature Estimation,” Computer Vision and Image Processing: Image Understanding, vol. 55, no. 1, pp. 14-26, 1992.
[97]
A.D. Wilson and A.F. Bobick, “Parametric Hidden Markov Models for Gesture Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 9, pp. 884-900, Sept. 1999.
[98]
L. Wiskott, “Labelled Graphs and Dynamic Link Matching for Face Recognition and Scene Analysis,” Reihe Physik, vol. 53, Frankfurt am Main: Verlag Harri Deutsch, 1995.
[99]
H. Wu, T. Yokoyama, D. Pramadihanto, and M. Yachida, “Face and Facial Feature Extraction from Color Image,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 345-350, 1996.
[100]
Y. Yacoob and L. Davis, “Recognizing Facial Expressions by Spatio-Temporal Analysis,” Proc. Int'l Conf. Pattern Recognition, vol. 1, pp. 747-749, 1994.
[101]
Y. Yacoob and L. Davis, “Computing Spatio-Temporal Representations of Human Faces,” Proc. Computer Vision and Pattern Recognition, pp. 70-75, 1994.
[102]
H. Yamada, “Visual Information for Categorizing Facial Expressions of Emotions,” Applied Cognitive Psychology, vol. 7, pp. 257-270, 1993.
[103]
J. Yang and A. Waibel, “A Real-Time Face Tracker,” Workshop Applications of Computer Vision, pp. 142-147, 1996.
[104]
M. Yoneyama, Y. Iwano, A. Ohtake, and K. Shirai, “Facial Expressions Recognition Using Discrete Hopfield Neural Networks,” Proc. Int'l Conf. Information Processing, vol. 3, pp. 117-120, 1997.
[105]
A.L. Yuille, D.S. Cohen, and P.W. Hallinan, “Feature Extraction from Faces Using Deformable Templates,” Proc. Computer Vision and Pattern Recognition, pp. 104-109, 1989.
[106]
Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu, “Comparison between Geometry-Based and Gabor Wavelets-Based Facial Expression Recognition Using Multi-Layer Perceptron,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 454-459, 1998.
[107]
J. Zhao and G. Kearney, “Classifying Facial Emotions by Backpropagation Neural Networks with Fuzzy Inputs,” Proc. Conf. Neural Information Processing, vol. 1, pp. 454-457, 1996.



Published In

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 22, Issue 12, December 2000, 154 pages. ISSN: 0162-8828.

Publisher

IEEE Computer Society

United States


Author Tags

  1. Face detection
  2. facial action encoding
  3. facial expression emotional classification
  4. facial expression information extraction

Qualifiers

  • Research-article


Cited By

  • (2024) Cross-Task Inconsistency Based Active Learning (CTIAL) for Emotion Recognition, IEEE Transactions on Affective Computing, 15(3), pp. 1659-1668, 1 Jul 2024. doi:10.1109/TAFFC.2024.3366767
  • (2024) Real-time facial emotion recognition model based on kernel autoencoder and convolutional neural network for autism children, Soft Computing - A Fusion of Foundations, Methodologies and Applications, 28(9-10), pp. 6695-6708, 1 May 2024. doi:10.1007/s00500-023-09477-y
  • (2024) A CNN-based multi-level face alignment approach for mitigating demographic bias in clinical populations, Computational Statistics, 39(5), pp. 2557-2579, 1 Jul 2024. doi:10.1007/s00180-023-01395-9
  • (2023) Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions, Companion Publication of the 25th International Conference on Multimodal Interaction, pp. 71-75, 9 Oct 2023. doi:10.1145/3610661.3617166
  • (2023) Data-driven Communicative Behaviour Generation: A Survey, ACM Transactions on Human-Robot Interaction, 13(1), pp. 1-39, 16 Aug 2023. doi:10.1145/3609235
  • (2023) MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition, Proceedings of the 31st ACM International Conference on Multimedia, pp. 6110-6121, 26 Oct 2023. doi:10.1145/3581783.3612365
  • (2023) Optimal and Robust Category-Level Perception: Object Pose and Shape Estimation From 2-D and 3-D Semantic Keypoints, IEEE Transactions on Robotics, 39(5), pp. 4131-4151, 1 Oct 2023. doi:10.1109/TRO.2023.3277273
  • (2023) Knowledge Conditioned Variational Learning for One-Class Facial Expression Recognition, IEEE Transactions on Image Processing, 32, pp. 4010-4023, 2023. doi:10.1109/TIP.2023.3293775
  • (2023) Applying Segment-Level Attention on Bi-Modal Transformer Encoder for Audio-Visual Emotion Recognition, IEEE Transactions on Affective Computing, 14(4), pp. 3231-3243, 1 Oct 2023. doi:10.1109/TAFFC.2023.3258900
  • (2023) Emotion recognition from unimodal to multimodal analysis, Information Fusion, 99(C), 1 Nov 2023. doi:10.1016/j.inffus.2023.101847
