
MYFED: a dataset of affective face videos for investigation of emotional facial dynamics as a soft biometric for person identification

  • Research
  • Published in: Machine Vision and Applications

Abstract

Psychological studies have demonstrated that facial dynamics play a significant role in recognizing an individual's identity. This study introduces a novel database (MYFED) and an approach for person identification based on facial dynamics, which extracts the identity-related information associated with the facial expressions of the six basic emotions (happiness, sadness, surprise, anger, disgust, and fear). Our contribution includes the collection of the MYFED database, featuring facial videos that capture both spontaneous and deliberate expressions of the six basic emotions. The database is uniquely tailored for person identification using the facial dynamics of emotional expressions, providing an average of ten repetitions of each emotional expression per subject, a characteristic often absent from existing facial expression databases. Additionally, we present a novel person identification method leveraging dynamic features extracted from videos depicting the six basic emotions. Experimental results confirm that the dynamic features of all emotional expressions contain identity-related information, with the surprise, happiness, and sadness expressions carrying the most, in that order. To our knowledge, this is the first study to comprehensively analyze the facial expressions of all six basic emotions for person identification. The MYFED database is made available to researchers via the MYFED database website.
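The full text details the authors' actual feature extraction and classifier; the Python sketch below is only a rough, hedged illustration of the general idea of identification from expression dynamics. It tracks the mouth-corner distance across a video with MediaPipe Face Mesh, condenses the resulting signal into a handful of dynamic statistics, and feeds per-video feature vectors to an extremely randomized trees classifier. Every concrete choice here (the landmark indices, the six statistics, the classifier) is an assumption for illustration, not the paper's method.

```python
"""Illustrative sketch only: NOT the MYFED authors' pipeline.

Assumptions (not from the paper): MediaPipe Face Mesh for landmarks,
mouth-corner distance as the expression signal, six hand-picked summary
statistics, and scikit-learn's ExtraTreesClassifier for identification.
"""
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

mp_face_mesh = mp.solutions.face_mesh


def expression_signal(video_path, indices=(61, 291)):
    """Distance between two tracked landmarks (default: mouth corners),
    one value per frame, as a crude 1-D expression-intensity signal."""
    cap = cv2.VideoCapture(video_path)
    values = []
    with mp_face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue  # no face detected in this frame
            lm = result.multi_face_landmarks[0].landmark
            p, q = lm[indices[0]], lm[indices[1]]
            values.append(np.hypot(p.x - q.x, p.y - q.y))
    cap.release()
    return np.asarray(values)


def dynamic_features(signal):
    """Summarize one expression video with simple dynamic statistics
    (amplitude, apex timing, onset/offset speed, mean, variability)."""
    velocity = np.diff(signal)
    return np.array([
        signal.max() - signal.min(),      # amplitude of the expression
        np.argmax(signal) / len(signal),  # relative position of the apex
        velocity.max(),                   # fastest onset speed
        velocity.min(),                   # fastest offset speed
        signal.mean(),
        signal.std(),
    ])


# Identification: one feature vector per expression video, subject id as label.
# X = np.stack([dynamic_features(expression_signal(v)) for v in video_paths])
# clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)
```

Other expressions would of course need different landmark pairs (eyebrow or eyelid points for surprise, for instance); the sketch only shows that per-video dynamic statistics can feed a standard classifier for identity prediction.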


Data availability

The MYFED database is available at https://sites.google.com/view/MYFED.



Acknowledgements

This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under project EEAG-116E088. We also thank NVIDIA Corporation for the donation of a Titan V GPU.

Author information

Contributions

CRediT author statement. Zeynep Nur Saraçbasi: Conceptualization, Methodology, Software, Investigation, Writing (Original Draft). Çigdem Eroglu Erdem: Conceptualization, Methodology, Writing (Review & Editing), Supervision, Project Administration, Funding Acquisition. Murat Taskiran: Conceptualization, Methodology, Investigation, Writing (Review & Editing). Nihan Kahraman: Investigation, Supervision, Writing (Review & Editing).

Corresponding author

Correspondence to Zeynep Nur Saracbasi.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Saracbasi, Z.N., Eroglu Erdem, C., Taskiran, M. et al. MYFED: a dataset of affective face videos for investigation of emotional facial dynamics as a soft biometric for person identification. Machine Vision and Applications 36, 8 (2025). https://doi.org/10.1007/s00138-024-01625-0

