Abstract
Dictionary-learning-based methods have achieved state-of-the-art performance in conventional facial expression recognition (FER), where the distributions of training and testing data are implicitly assumed to match. In practical settings this assumption is often violated, especially when testing and training samples come from different databases, i.e., the cross-database FER problem. To address this problem, we propose unsupervised domain adaptive dictionary learning (UDADL), which handles the fully unsupervised case in which all samples in the target database are unlabeled. In UDADL, to obtain more robust representations of facial expressions and to reduce the time complexity of the training and testing phases, we introduce a dictionary pair, one synthesis dictionary and one analysis dictionary, that mutually bridges the samples and their codes. Meanwhile, to relieve the distribution disparity between source and target samples, we integrate the unlabeled testing data into the learning of UDADL so as to adaptively align the mismatched distributions in an embedded space, in which the geometric structures of both domains are also encouraged to be preserved. The UDADL model is solved by an iterative optimization strategy in which each sub-problem admits a closed-form solution. Extensive experiments on the Multi-PIE and BU-3DFE databases demonstrate that the proposed UDADL outperforms widely used domain adaptation methods on cross-database FER and achieves state-of-the-art performance.
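The sketch below is a minimal illustration of the synthesis/analysis dictionary pair and the closed-form alternating updates described above, written in the spirit of the projective dictionary pair learning of Gu et al. (see References). It omits UDADL's domain-alignment and geometry-preservation terms, and the function name dpl_sketch and the parameters tau, gamma and mu are illustrative assumptions rather than the authors' implementation.

import numpy as np

def dpl_sketch(X, n_atoms=30, tau=1.0, gamma=1e-2, mu=1e-2, n_iter=30, seed=0):
    """Minimal synthesis/analysis dictionary-pair sketch (DPL-style).

    X: (d, n) data matrix with samples as columns.
    Returns a synthesis dictionary D (d, k), an analysis dictionary P (k, d),
    and codes A (k, n). UDADL's domain-alignment term is NOT modelled here.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    P = rng.standard_normal((n_atoms, d))
    I_k = np.eye(n_atoms)

    for _ in range(n_iter):
        # Code update: A = (D^T D + tau I)^{-1} (D^T X + tau P X), closed form.
        A = np.linalg.solve(D.T @ D + tau * I_k, D.T @ X + tau * (P @ X))
        # Analysis dictionary: P = tau A X^T (tau X X^T + gamma I)^{-1}, closed form.
        P = tau * A @ X.T @ np.linalg.inv(tau * (X @ X.T) + gamma * np.eye(d))
        # Synthesis dictionary: ridge-regularised least squares, followed by column
        # normalisation (a simplification of the unit-norm-column constraint).
        D = X @ A.T @ np.linalg.inv(A @ A.T + mu * I_k)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12

    return D, P, A

if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((64, 200))  # toy features
    D, P, A = dpl_sketch(X)
    print("reconstruction error:", np.linalg.norm(X - D @ A))

Each sub-step above reduces to a single linear solve, which is what makes this kind of alternating scheme inexpensive at both training and testing time.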
References
Chu, W.S., Torre, F., Cohn, J.: Selective transfer machine for personalized facial action unit detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3515–3522 (2013)
Gross, R., Matthews, I., Cohn, J., Kanade, T., Baker, S.: Multi-PIE. Image Vis. Comput. 28(5), 807–813 (2010)
Gu, S., Zhang, L., Zuo, W., Feng, X.: Projective dictionary pair learning for pattern classification. In: Advances in Neural Information Processing Systems, pp. 793–801 (2014)
Hassan, A., Damper, R., Niranjan, M.: On acoustic emotion recognition: compensating for covariate shift. IEEE Trans. Audio Speech Lang. Process. 21(7), 1458–1468 (2013)
Huang, J., Gretton, A., Borgwardt, K.M., Schölkopf, B., Smola, A.J.: Correcting sample selection bias by unlabeled data. In: Advances in Neural Information Processing Systems, pp. 601–608 (2006)
Kan, M., Wu, J., Shan, S., Chen, X.: Domain adaptation for face recognition: targetize source domain bridged by common subspace. Int. J. Comput. Vis. 109(1–2), 94–109 (2014)
Kanamori, T., Hido, S., Sugiyama, M.: A least-squares approach to direct importance estimation. J. Mach. Learn. Res. 10, 1391–1445 (2009)
Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)
Rubinstein, R., Bruckstein, A.M., Elad, M.: Dictionaries for sparse representation modeling. Proc. IEEE 98(6), 1045–1057 (2010)
Sangineto, E., Zen, G., Ricci, E., Sebe, N.: We are not all equal: personalizing models for facial expression analysis with transductive parameter transfer. In: Proceedings of the ACM International Conference on Multimedia, pp. 357–366. ACM (2014)
Sugiyama, M., Nakajima, S., Kashima, H., Buenau, P.V., Kawanabe, M.: Direct importance estimation with model selection and its application to covariate shift adaptation. In: Advances in Neural Information Processing Systems, pp. 1433–1440 (2008)
Yin, L., Wei, X., Sun, Y., Wang, J., Rosato, M.J.: A 3D facial expression database for facial behavior research. In: 7th International Conference on Automatic Face and Gesture Recognition, FGR 2006, pp. 211–216. IEEE (2006)
Zhang, C., Liu, J., Tian, Q., Xu, C., Lu, H., Ma, S.: Image classification by non-negative sparse coding, low-rank and sparse decomposition. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1673–1680. IEEE (2011)
Zheng, W., Tang, H., Lin, Z., Huang, T.S.: A novel approach to expression recognition from non-frontal face images. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 1901–1908. IEEE (2009)
Zheng, W., Zhou, X.: Cross-pose color facial expression recognition using transductive transfer linear discriminant analysis. In: IEEE International Conference on Image Processing, pp. 1935–1939. IEEE (2015)
Acknowledgement
This work was supported in part by the National Basic Research Program of China under Grant 2015CB351704, in part by the National Natural Science Foundation of China (NSFC) under Grants 61231002 and 61572009, and in part by the Natural Science Foundation of Jiangsu Province under Grant BK20130020.
Copyright information
© 2016 Springer International Publishing AG
About this paper
Cite this paper
Yan, K., Zheng, W., Cui, Z., Zong, Y. (2016). Cross-Database Facial Expression Recognition via Unsupervised Domain Adaptive Dictionary Learning. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol. 9948. Springer, Cham. https://doi.org/10.1007/978-3-319-46672-9_48
DOI: https://doi.org/10.1007/978-3-319-46672-9_48
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-46671-2
Online ISBN: 978-3-319-46672-9