Abstract
In this paper, we focus on the calibration possibilities of a deep learning based gaze estimation process using transfer learning, comparing its performance when the pretrained model is built from a general-purpose dataset versus a gaze-specific dataset. Subject calibration has been shown to improve gaze accuracy in high-performance eye trackers. Hence, we investigate the potential of a deep learning gaze estimation model for subject calibration through fine-tuning. A pretrained ResNet-18 network, which performs well in many computer vision tasks, is fine-tuned with user-specific data in a few-shot adaptive gaze estimation approach. We study the impact of pretraining the model on a synthetic dataset, U2Eyes, before addressing gaze estimation calibration on a real dataset, I2Head. Our results show that the success of individual calibration largely depends on the balance between fine-tuning and standard supervised learning procedures, and that pretraining on a gaze-specific dataset improves accuracy when few images are available for calibration. This paper shows that calibration is feasible in low-resolution scenarios, achieving accuracies below \(1.5^\circ\) of error.
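To make the few-shot calibration idea concrete, the following is a minimal sketch of the general technique, not the authors' exact pipeline: an ImageNet-pretrained ResNet-18 (the paper additionally pretrains on the synthetic U2Eyes dataset) has its classification head replaced by a two-value gaze regressor, the early layers are frozen, and the top of the network is fine-tuned on a handful of user-specific samples. All hyperparameters, tensor shapes, and the `calibrate` helper below are illustrative assumptions.

```python
# Hedged sketch of few-shot user calibration by fine-tuning ResNet-18.
# Hyperparameters, shapes, and the freezing scheme are assumptions,
# not values taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-18 (the paper additionally
# pretrains on U2Eyes before this calibration step).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the 1000-class head with a 2-unit regressor (e.g. gaze yaw/pitch).
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze everything except the last residual stage and the new head,
# so a few calibration samples only adapt the top of the network.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.MSELoss()

def calibrate(model, calib_images, calib_gaze, epochs=20):
    """Fine-tune on a handful of user-specific (image, gaze) pairs.

    calib_images: float tensor of shape (N, 3, 224, 224)
    calib_gaze:   float tensor of shape (N, 2) with target gaze angles
    """
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(calib_images), calib_gaze)
        loss.backward()
        optimizer.step()
    return model
```

The key design point the abstract highlights is the balance between this fine-tuning step and standard supervised training: freezing more layers protects the pretrained representation when calibration images are scarce, while unfreezing more capacity helps once enough user data is available.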
References
Abass, A., et al.: Positions of ocular geometrical and visual axes in Brazilian, Chinese and Italian populations. Curr. Eye Res. 43(11), 1404–1414 (2018). https://doi.org/10.1080/02713683.2018.1500609, PMID: 30009634
Cerrolaza, J.J., Villanueva, A., Cabeza, R.: Study of polynomial mapping functions in video-oculography eye trackers. ACM Trans. Comput.-Hum. Interact. 19(2), 10:1–10:25 (2012). https://doi.org/10.1145/2240156.2240158
Chen, Z., Shi, B.E.: Geddnet: a network for gaze estimation with dilation and decomposition (2020). https://arxiv.org/abs/2001.09284
Cheng, Y., Huang, S., Wang, F., Qian, C., Lu, F.: A coarse-to-fine adaptive network for appearance-based gaze estimation (2020). https://arxiv.org/abs/2001.00187
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR09 (2009)
Guestrin, E., Eizenman, M.: General theory of remote gaze estimation using pupil center and corneal reflections. IEEE Trans. Biomed. Eng. 53(6), 1124–1133 (2006)
Guo, T., et al.: A generalized and robust method towards practical gaze estimation on smart phone. In: 2019 International Conference on Computer Vision (ICCV) Workshops, October 2019
He, J., et al.: On-device few-shot personalization for real-time gaze estimation. In: 2019 IEEE International Conference on Computer Vision (ICCV) Workshops, October 2019
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Hennessey, C., Noureddin, B., Lawrence, P.: A single camera eye-gaze tracking system with free head motion. In: Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, ETRA 2006, pp. 87–94. Association for Computing Machinery, New York (2006). https://doi.org/10.1145/1117309.1117349
Krafka, K., et al.: Eye tracking for everyone. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2176–2184 (2016)
Linden, E., Sjostrand, J., Proutiere, A.: Learning to personalize in appearance-based gaze tracking. In: 2019 IEEE International Conference on Computer Vision (ICCV) Workshops, October 2019
Martinikorena, I., Cabeza, R., Villanueva, A., Porta, S.: Introducing I2Head database. In: 7th International Workshop on Pervasive Eye Tracking and Mobile Eye Based Interaction, PETMEI 2018 (2018)
Merchant, J., Morrissette, R., Porterfield, J.: Remote measurement of eye direction allowing subject motion over one cubic foot of space. IEEE Trans. Biomed. Eng. 21(4), 309–317 (1974)
Morimoto, C.H., Amir, A., Flickner, M.: Detecting eye position and gaze from a single camera and 2 light sources. In: Proceedings International Conference on Pattern Recognition, pp. 314–317 (2002)
Porta, S., Bossavit, B., Cabeza, R., Larumbe-Bergera, A., Garde, G., Villanueva, A.: U2Eyes: a binocular dataset for eye tracking and gaze estimation. In: 2019 IEEE International Conference on Computer Vision (ICCV), October 2019
Shih, S.W., Liu, J.: A novel approach to 3-D gaze tracking using stereo cameras. IEEE Trans. Syst. Man Cybern. Part-B 34(1), 234–245 (2004)
Smith, L.N.: Cyclical learning rates for training neural networks. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) (2017)
Yu, Y., Liu, G., Odobez, J.M.: Improving few-shot user-specific gaze adaptation via gaze redirection synthesis. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Cite this paper
Garde, G., Larumbe-Bergera, A., Porta, S., Cabeza, R., Villanueva, A. (2021). Synthetic Gaze Data Augmentation for Improved User Calibration. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12663. Springer, Cham. https://doi.org/10.1007/978-3-030-68796-0_27