
Feature fusion for human compound emotion recognition: a fusion of facial expression texture and action unit data

  • Theoretical Advances
  • Published in: Pattern Analysis and Applications

Abstract

Recognition of facial expressions is a challenging task in computer vision because of the complexity of individual facial features and social differences among subjects. Early studies classified human facial expressions into six basic categories: anger, disgust, fear, happiness, sadness and surprise, with the neutral expression often included as a seventh class. More recently, compound emotions have been explored: facial expressions that combine at least two basic expression categories, one acting as the dominant expression and the other as the complementary expression, and compound facial expressions are categorized along these dominant/complementary pairs. In this study, a novel approach is proposed to recognize compound facial expressions. The main contribution of this paper is the proposed fusion of deep texture features and geometric features (facial action unit data), where the texture features are obtained from a deep learning model. The iCV-MEFED dataset is employed; it contains compound facial expressions covering all combinations of basic facial expressions in the dominant/complementary sense, yielding 50 distinct expression classes. Previous studies on this dataset report high misclassification rates due to the complexity of facial expressions and the correlations among compound expressions. The proposed approach obtains encouraging results and shows significant improvements in the recognition accuracy of compound facial expressions on the iCV-MEFED dataset.
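
To make the proposed fusion concrete, the sketch below shows one plausible shape for such a pipeline. It is a minimal illustration under stated assumptions, not the authors' exact method: ResNet-18 stands in for the unspecified deep texture model, the action-unit intensities passed as au_intensities are assumed to be precomputed per face by an external AU estimation tool, and a multi-class SVM is an assumed choice of final classifier over the 50 compound classes.

    import numpy as np
    import torch
    from PIL import Image
    from sklearn.svm import SVC
    from torchvision import models, transforms

    # Pretrained CNN as a deep-texture extractor; the final classification
    # layer is replaced by an identity so the 512-d embedding is returned.
    cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    cnn.fc = torch.nn.Identity()
    cnn.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def fused_features(image_path, au_intensities):
        """Concatenate deep texture features with action-unit intensities.

        au_intensities: 1-D NumPy array of AU activations for the same face,
        assumed to have been exported beforehand by an AU estimation tool.
        """
        img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            texture = cnn(img).squeeze(0).numpy()         # shape: (512,)
        return np.concatenate([texture, au_intensities])  # (512 + n_AUs,)

    # Hypothetical usage over (image path, AU vector, label) triples, with
    # labels drawn from the 50 dominant/complementary compound classes:
    # X = np.stack([fused_features(p, au) for p, au, _ in samples])
    # y = np.array([label for _, _, label in samples])
    # clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)

Concatenation is the simplest fusion operator; in practice the two feature groups live on very different scales, so standardizing each block before the SVM is usually necessary.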




Data availability

Not applicable.


Acknowledgements

Not applicable.

Funding

This research received no external funding.

Author information


Corresponding author

Correspondence to Salman Mohammed Jiddah.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jiddah, S.M., Yurtkan, K. Feature fusion for human compound emotion recognition: a fusion of facial expression texture and action unit data. Pattern Anal Applic 27, 149 (2024). https://doi.org/10.1007/s10044-024-01369-7
