Abstract
Facial expression recognition is a challenging task in computer vision because of the complexity of individual facial features and social differences. Early studies classified human facial expressions into six basic categories: anger, disgust, fear, happiness, sadness, and surprise; the neutral expression is also commonly taken into account. More recently, compound emotions have been explored: facial expressions that combine more than one basic expression. A compound expression comprises at least two expression categories, one regarded as the dominant expression and the other as the complementary expression, and compound facial expressions are categorized accordingly. In this study, a novel approach to recognizing compound facial expressions is proposed. The main contribution of this paper is the fusion of deep texture features, obtained from a deep learning model, with geometric features. The iCV-MEFED dataset is employed; it contains compound facial expressions covering all combinations of the basic expressions in terms of dominant and complementary expressions, yielding 50 distinct expression classes. Previous studies on this dataset report high misclassification rates owing to the complexity of facial expressions and the correlations among compound expressions. The proposed approach achieves encouraging results and shows significant improvements in compound facial expression recognition accuracy on the iCV-MEFED dataset.
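The feature-level fusion described above, concatenating deep texture descriptors with geometric features before classification, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact pipeline: the feature dimensions, the randomly generated data, and the RBF-kernel SVM classifier are all assumptions standing in for the real CNN embeddings, action-unit features, and trained model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_texture, n_geom, n_classes = 500, 128, 17, 50

# Hypothetical stand-ins: deep texture features (e.g. CNN embeddings)
# and geometric features (e.g. facial action-unit intensities) per image.
texture = rng.normal(size=(n_samples, n_texture))
geometry = rng.normal(size=(n_samples, n_geom))
labels = rng.integers(0, n_classes, size=n_samples)

# Feature-level fusion: concatenate the two descriptors sample-wise,
# so each face is represented by one combined feature vector.
fused = np.concatenate([texture, geometry], axis=1)

# A multiclass SVM then separates the 50 compound-expression classes.
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(fused.shape)  # combined dimensionality: texture + geometry
```

With real features, the two modalities would typically be normalized to comparable scales before concatenation, since the SVM's RBF kernel is sensitive to feature magnitudes.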
Data availability
Not applicable.
Acknowledgements
Not applicable.
Funding
This research received no external funding.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Jiddah, S.M., Yurtkan, K. Feature fusion for human compound emotion recognition: a fusion of facial expression texture and action unit data. Pattern Anal Applic 27, 149 (2024). https://doi.org/10.1007/s10044-024-01369-7