Driver Face Verification with Depth Maps
Figure 1. Overview of the proposed fully-convolutional Siamese architecture. The network takes a pair of facial depth maps as input and outputs a continuous value in the range [0, 1]: the probability that the input images belong to the same identity. Kernel size (k), number of filters (n), and stride (s) are reported as the main parameters of each convolutional layer. "Concat" refers to the concatenation on the feature channels.
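As a rough illustration of the architecture the caption describes, the PyTorch sketch below pairs a shared convolutional branch with a convolutional head applied after channel concatenation. The kernel sizes, filter counts, and layer depths are placeholder assumptions, not the exact configuration reported in the figure.

```python
import torch
import torch.nn as nn

class SiameseVerifier(nn.Module):
    """Hypothetical fully-convolutional Siamese verifier.

    Kernel sizes (k), filter counts (n), and strides (s) below are
    placeholders, not the values reported in the figure.
    """

    def __init__(self):
        super().__init__()
        # Shared branch: applied to both depth maps with tied weights.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2),
            nn.BatchNorm2d(128), nn.ReLU(),
        )
        # Head: applied after concatenation on the feature channels.
        self.head = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1, stride=1),
        )

    def forward(self, a, b):
        fa, fb = self.branch(a), self.branch(b)
        f = torch.cat([fa, fb], dim=1)   # concat on the channel axis
        logits = self.head(f)            # single-channel response map
        # Global average pooling keeps the model fully convolutional:
        # any input resolution collapses to one score per pair.
        return torch.sigmoid(logits.mean(dim=(2, 3)))  # in [0, 1]
```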
Figure 2. (a) Evolution of the loss during the training (blue) and validation (orange) steps; (b) accuracy (expressed as a fraction in the range [0, 1]) obtained during the training and validation phases.
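Curves like these come from a standard supervised training loop over labeled pairs. A minimal sketch of one training step is shown below, reusing the hypothetical SiameseVerifier from the previous sketch; binary cross-entropy is an assumption consistent with the sigmoid output, and the learning rate is a placeholder.

```python
import torch
import torch.nn as nn

model = SiameseVerifier()                       # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCELoss()

def train_step(depth_a, depth_b, same_identity):
    """One step; same_identity is a (B, 1) float tensor of 0/1 labels."""
    optimizer.zero_grad()
    prob = model(depth_a, depth_b)              # (B, 1), in [0, 1]
    loss = criterion(prob, same_identity)
    loss.backward()
    optimizer.step()
    # Threshold at 0.5 to track the accuracy plotted in the figure.
    acc = ((prob > 0.5).float() == same_identity).float().mean()
    return loss.item(), acc.item()
```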
Figure 3. Sample depth and RGB frames from the Pandora [17] dataset. Frames from subsets S1, S2, S3 are reported in (a), while frames from subsets S4, S5, with garments and eyeglasses, are shown in (b). See Section 4.1 for further details about the subsets.
Figure 4. Sample frames from the subsets A1 (a), A2 (b), and A3 (c) of the Pandora dataset, which contain frontal, non-frontal, and extreme head poses. See Section 4.1 for further details.
Figure 5. Sample depth images from the High-Resolution Range-based Face Database (HRRFaceD). As shown, heads were acquired under different poses.
Figure 6. RGB and depth samples from the CurtinFaces dataset. As shown, different head poses, expressions, light variations, and garment occlusions are included.
Figure 7. Sample output of the framework. The first block (top) shows correct predictions, while wrong predictions are shown at the bottom. Depth maps are contrast-stretched for better visualization. Images are taken from the Pandora dataset.
Figure 8. (a) ROC curves of the proposed network computed on different splits of the Pandora dataset (for further details, see Section 4.1); (b) ROC curves of the proposed method and the JanusNet architecture [21].
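ROC curves such as these can be derived directly from the pairwise verification scores. A minimal sketch with scikit-learn, using placeholder labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Placeholder data: 1 = same-identity pair, 0 = different identities;
# scores are the network's output probabilities for each test pair.
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.91, 0.20, 0.74, 0.88, 0.35, 0.48, 0.66, 0.12])

fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")
```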
Abstract

1. Introduction
- We propose a fully-convolutional Siamese network to tackle the face verification task. Thanks to this architecture, the framework is able to deal with input images of different sizes (as illustrated in the sketch after this list);
- The presented method relies only on depth maps and achieves state-of-the-art results without requiring any pre-processing step, such as facial landmark localization, nose tip detection, or face frontalization;
- Face verification and identification accuracy is evaluated on three different depth-based datasets, namely Pandora, the High-Resolution Range-based Face Database (HRRFaceD), and CurtinFaces;
- We design the Siamese network to have low memory requirements and real-time performance, even on embedded platforms. We conduct an extensive speed evaluation of the proposed system on three different GPUs: the NVidia 1080 Ti, the NVidia Quadro K2200, and the embedded board NVidia TX2;
- Finally, through an experimental analysis that exploits the annotations of the Pandora dataset, we investigate the capabilities of the proposed approach and the influence of the head pose and visual occlusions on the face verification task.
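As a quick illustration of the first contribution, the fully-convolutional design (no fully-connected layers) lets the same weights process crops of different resolutions. A minimal check, reusing the hypothetical SiameseVerifier sketch above (the resolutions below are arbitrary):

```python
import torch

# Both pairs go through the same weights, because no layer in the
# sketch is tied to a fixed spatial size.
model = SiameseVerifier().eval()
with torch.no_grad():
    p_small = model(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
    p_large = model(torch.randn(1, 1, 128, 96), torch.randn(1, 1, 128, 96))
print(p_small.shape, p_large.shape)  # both (1, 1): one probability per pair
```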
2. Related Work
2.1. Face Recognition on Intensity Images
2.2. Face Recognition on Depth Maps
3. Method
3.1. Siamese Architecture
3.2. Training Procedure
4. Datasets
4.1. Pandora Dataset
4.2. High-Resolution Range-Based Face Database
4.3. CurtinFaces Database
5. Results
5.1. Face Verification on Depth Maps
5.2. Face Identification on Depth Maps
5.2.1. HRRFaceD Dataset
5.2.2. CurtinFaces Dataset
5.3. How the Head Pose Influences the Face Verification Task
5.4. Execution Time
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Masi, I.; Wu, Y.; Hassner, T.; Natarajan, P. Deep Face Recognition: A Survey. In Proceedings of the Conference on Graphics, Patterns and Images (SIBGRAPI), Paraná, Brazil, 29 October–1 November 2018. [Google Scholar]
- Zafeiriou, S.; Zhang, C.; Zhang, Z. A survey on face detection in the wild: Past, present and future. Comput. Vis. Image Underst. 2015, 138, 1–24. [Google Scholar] [CrossRef] [Green Version]
- Palazzi, A.; Abati, D.; Calderara, S.; Solera, F.; Cucchiara, R. Predicting the Driver’s Focus of Attention: The DR(eye)VE Project. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1720–1733. [Google Scholar] [CrossRef] [PubMed]
- Piechulla, W.; Mayser, C.; Gehrke, H.; König, W. Reducing drivers’ mental workload by means of an adaptive man–machine interface. Transp. Res. Part F Traffic Psychol. Behav. 2003, 6, 233–248. [Google Scholar] [CrossRef]
- Izquierdo-Reyes, J.; Ramirez-Mendoza, R.A.; Bustamante-Bello, M.R.; Navarro-Tuch, S.; Avila-Vazquez, R. Advanced driver monitoring for assistance system (ADMAS). Int. J. Interact. Des. Manuf. (IJIDeM) 2018, 12, 187–197. [Google Scholar] [CrossRef]
- Borghi, G.; Fabbri, M.; Vezzani, R.; Calderara, S.; Cucchiara, R. Face-from-Depth for Head Pose Estimation on Depth Images. IEEE Trans. Pattern Anal. Mach. Intell. 2019. [Google Scholar] [CrossRef] [PubMed]
- Alioua, N.; Amine, A.; Rogozan, A.; Bensrhair, A.; Rziza, M. Driver head pose estimation using efficient descriptor fusion. EURASIP J. Image Video Process. 2016, 2016, 1–14. [Google Scholar] [CrossRef]
- Huang, G.B.; Ramesh, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments; Technical Report 07-49; University of Massachusetts: Amherst, MA, USA, 2007. [Google Scholar]
- Wolf, L.; Hassner, T.; Maoz, I. Face recognition in unconstrained videos with matched background similarity. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011. [Google Scholar]
- Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. In Proceedings of the International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Guo, Y.; Zhang, L.; Hu, Y.; He, X.; Gao, J. MS-Celeb-1M: A Dataset and Benchmark for Large Scale Face Recognition. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016. [Google Scholar]
- Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014. [Google Scholar]
- Schroff, F.; Kalenichenko, D.; Philbin, J. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep Face Recognition. In Proceedings of the British Machine Vision Conference, Swansea, UK, 7–10 September 2015. [Google Scholar]
- Mogelmose, A.; Bahnsen, C.; Moeslund, T.; Clapes, A.; Escalera, S. Tri-modal person re-identification with RGB, depth and thermal features. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013. [Google Scholar]
- Nikisins, O.; Nasrollahi, K.; Greitans, M.; Moeslund, T.B. RGB-DT based face recognition. In Proceedings of the International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014. [Google Scholar]
- Borghi, G.; Venturelli, M.; Vezzani, R.; Cucchiara, R. POSEidon: Face-from-Depth for Driver Pose Estimation. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Mantecón, T.; del Blanco, C.R.; Jaureguizar, F.; García, N. Depth-based face recognition using local quantized patterns adapted for range data. In Proceedings of the IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014. [Google Scholar]
- Frigieri, E.; Borghi, G.; Vezzani, R.; Cucchiara, R. Fast and accurate facial landmark localization in depth images for in-car applications. In International Conference on Image Analysis and Processing; Springer: Cham, Switzerland, 2017; pp. 539–549. [Google Scholar]
- Yuffa, A.J.; Gurton, K.P.; Videen, G. Three-dimensional facial recognition using passive long-wavelength infrared polarimetric imaging. Appl. Opt. 2014, 53, 8514–8521. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Borghi, G.; Pini, S.; Grazioli, F.; Vezzani, R.; Cucchiara, R. Face Verification from Depth using Privileged Information. In Proceedings of the 29th British Machine Vision Conference (BMVC), Newcastle, UK, 3–6 September 2018. [Google Scholar]
- Medioni, G.; Choi, J.; Kuo, C.H.; Fidaleo, D. Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models. IEEE Trans. Syst. Man Cybern. 2009, 39, 12–24. [Google Scholar] [CrossRef]
- Choi, J.; Sharma, A.; Medioni, G. Comparing strategies for 3D face recognition from a 3D sensor. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, 26–29 August 2013. [Google Scholar]
- Anith, S.; Vaithiyanathan, D.; Seshasayanan, R. Face recognition system based on feature extraction. In Proceedings of the IEEE International Conference on Information Communication and Embedded Systems, Chennai, India, 21–22 February 2013. [Google Scholar]
- Jun, B.; Choi, I.; Kim, D. Local transform features and hybridization for accurate face and human detection. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1423–1436. [Google Scholar] [CrossRef] [PubMed]
- Kayım, G.; Sarı, C.; Akgül, C.B. Facial feature selection for gender recognition based on random decision forests. In Proceedings of the 21st Signal Processing and Communications Applications Conference, Haspolat, Turkey, 24–26 April 2013. [Google Scholar]
- Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cognit. Neurosci. 1991, 3, 71–86. [Google Scholar] [CrossRef] [PubMed]
- Zhang, W.; Shan, S.; Gao, W.; Chen, X.; Zhang, H. Local Gabor binary pattern histogram sequence (LGBPHS): A novel non-statistical model for face representation and recognition. In Proceedings of the IEEE International Conference on Computer Vision, San Diego, CA, USA, 20–25 June 2005. [Google Scholar]
- Ahonen, T.; Rahtu, E.; Ojansivu, V.; Heikkila, J. Recognition of blurred faces using local phase quantization. In Proceedings of the International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008. [Google Scholar]
- Kannala, J.; Rahtu, E. Bsif: Binarized statistical image features. In Proceedings of the International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012. [Google Scholar]
- Sun, Y.; Liang, D.; Wang, X.; Tang, X. Deepid3: Face recognition with very deep neural networks. arXiv 2015, arXiv:1502.00873. [Google Scholar]
- Kumar, N.; Berg, A.C.; Belhumeur, P.N.; Nayar, S.K. Attribute and simile classifiers for face verification. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009. [Google Scholar]
- Chen, J.C.; Patel, V.M.; Chellappa, R. Unconstrained face verification using deep cnn features. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, USA, 7–9 March 2016. [Google Scholar]
- Ghiassirad, H.; Teshnehlab, M. Similarity measurement in convolutional space. In Proceedings of the IEEE International Conference on Intelligent Systems, Sofia, Bulgaria, 6–8 September 2012. [Google Scholar]
- Khalil-Hani, M.; Sung, L.S. A convolutional neural network approach for face verification. In Proceedings of the International Conference on High Performance Computing & Simulation, Bologna, Italy, 21–25 July 2014. [Google Scholar]
- Chopra, S.; Hadsell, R.; LeCun, Y. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005. [Google Scholar]
- Deng, J.; Guo, J.; Xue, N.; Zafeiriou, S. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019; pp. 4690–4699. [Google Scholar]
- Huang, C.; Li, Y.; Chen, C.L.; Tang, X. Deep imbalanced learning for face recognition and attribute prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2019. [Google Scholar] [CrossRef] [PubMed]
- Zhao, K.; Xu, J.; Cheng, M.M. RegularFace: Deep Face Recognition via Exclusive Regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1136–1144. [Google Scholar]
- Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus time-of-flight kinect. Comput. Vis. Image Underst. 2015, 139, 1–20. [Google Scholar] [CrossRef]
- Shalev-Shwartz, S.; Singer, Y.; Srebro, N.; Cotter, A. Pegasos: Primal estimated sub-gradient solver for svm. Math. Program. 2011, 127, 3–30. [Google Scholar] [CrossRef]
- Mantecón, T.; del Blanco, C.R.; Jaureguizar, F.; García, N. Visual face recognition using bag of dense derivative depth patterns. IEEE Signal Process. Lett. 2016, 23, 771–775. [Google Scholar] [CrossRef]
- Kim, D.; Hernandez, M.; Choi, J.; Medioni, G. Deep 3D face identification. In Proceedings of the International Joint Conference on Biometrics, Denver, CO, USA, 1–4 October 2017. [Google Scholar]
- Lee, Y.C.; Chen, J.; Tseng, C.W.; Lai, S.H. Accurate and robust face recognition from RGB-D images with a deep learning approach. In Proceedings of the British Machine Vision Conference, York, UK, 19–22 September 2016. [Google Scholar]
- Neto, J.B.C.; Marana, A.N. Utilizing Deep Learning and 3DLBP for 3D Face Recognition. In Iberoamerican Congress on Pattern Recognition; Springer: Berlin, Germany, 2017; pp. 135–142. [Google Scholar]
- Zhu, X.; Lei, Z.; Liu, X.; Shi, H.; Li, S.Z. Face alignment across large poses: A 3d solution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 146–155. [Google Scholar]
- Hassner, T.; Harel, S.; Paz, E.; Enbar, R. Effective face frontalization in unconstrained images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4295–4304. [Google Scholar]
- Ranjan, R.; Patel, V.M.; Chellappa, R. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 41, 121–135. [Google Scholar] [CrossRef] [PubMed]
- Hahnloser, R.H.; Seung, H.S. Permitted and forbidden sets in symmetric threshold-linear networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2001; pp. 217–223. [Google Scholar]
- Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Pini, S.; Grazioli, F.; Borghi, G.; Vezzani, R.; Cucchiara, R. Learning to Generate Facial Depth Maps. In Proceedings of the International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018. [Google Scholar]
- Ballotta, D.; Borghi, G.; Vezzani, R.; Cucchiara, R. Fully Convolutional Network for Head Detection with Depth Images. In Proceedings of the 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018. [Google Scholar]
- Fabbri, M.; Borghi, G.; Lanzi, F.; Vezzani, R.; Calderara, S.; Cucchiara, R. Domain Translation with Conditional GANs: From Depth to RGB Face-to-Face. In Proceedings of the 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018. [Google Scholar]
- Li, B.Y.; Mian, A.S.; Liu, W.; Krishna, A. Using kinect for face recognition under varying poses, expressions, illumination and disguise. In Proceedings of the IEEE Workshop on Applications of Computer Vision, Tampa, FL, USA, 15–17 January 2013. [Google Scholar]
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G. PyTorch, Computer Software; Version 0.3; Exxact Corporation: Fremont, CA, USA, 2017. [Google Scholar]
Name | Year | FV | FI | Data Type | Input Size | Pre-proc. | Method (Features)
---|---|---|---|---|---|---|---
[12] | 2014 | ✓ | | RGB | | FA + FF | CNN
[13] | 2015 | ✓ | ✓ | RGB | | - | Inception
[33] | 2016 | ✓ | | RGB | | FL + FA | CNN
[38] | 2019 | | ✓ | RGB | | - | CNN + CLMLE
[39] | 2019 | | ✓ | RGB | | FL + FA | ResNet20
[18] | 2014 | | ✓ | Depth | | - | Peg. SVM (DLQP)
[42] | 2016 | | ✓ | Depth | | - | Peg. SVM (Bag-D3P)
[45] | 2017 | | ✓ | Depth | | FS + FA | CNN (3DLBP)
[21] | 2018 | ✓ | | RGB + Depth | | - | Siamese CNN
Ours | 2019 | ✓ | | Depth | variable | - | Siamese CNN
Method | Input (Train) | Input (Test) | Input Size (∀ Branch) | #params (M) | #GFlops | Acc. | Weight Size | fps (CPU) | fps (TX2)
---|---|---|---|---|---|---|---|---|---
JanusNet [21] | RGB + Depth (paired) | Depth | | 4.8 | 0.59 | 81.4 | 18 MB | 43 | 48
Ours | Depth | Depth | variable | 1.8 | 0.73 | 85.3 | 7 MB | 58 | 87
Model | Data Type | Accuracy | #params (M) | #GFlops | fps (1080 Ti) | fps (K2200) | fps (TX2)
---|---|---|---|---|---|---|---
JanusNet [21] | RGB | 0.763 | 1.6 | 0.20 | 587 | 154 | 143
JanusNet [21] | Depth | 0.795 | 1.6 | 0.20 | 587 | 154 | 143
JanusNet [21] | P.I. | 0.814 | 4.8 | 0.59 | 202 | 50 | 48
Ours | Depth | 0.853 | 1.8 | 1.73 | 604 | 160 | 87
FaceNet [13] | RGB | 0.823 | 28.5 | 0.41 | 61 | 16 | 15
DenseNet [58] | - | - | 28.7 | 7.82 | 45 | 13 | 11
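For context on the fps columns above, throughput figures of this kind are typically obtained by timing repeated forward passes after a warm-up, synchronizing the GPU before reading the clock. A minimal sketch (not the authors' exact protocol; the input size and iteration counts are assumptions):

```python
import time
import torch

def measure_fps(model, input_size=(1, 1, 100, 100), iters=200):
    """Rough GPU throughput probe; the 100x100 crop size is an assumption."""
    device = "cuda"
    model = model.to(device).eval()
    a = torch.randn(*input_size, device=device)
    b = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(20):                  # warm-up iterations
            model(a, b)
        torch.cuda.synchronize()             # flush queued GPU work
        start = time.perf_counter()
        for _ in range(iters):
            model(a, b)
        torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)
```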
 | Pegasos SVM (DLQP) | Pegasos SVM (Bag-D3P) | JanusNet (avg) | Ours FI (max) | Ours FI (avg) | Ours FI (Voting) | Ours (FV)
---|---|---|---|---|---|---|---
Accuracy | 0.735 | 0.943 | 0.987 | 0.989 | 0.985 | 0.959 | 0.981
Improvement | - | +20.9 | +25.3 | +25.5 | +25.1 | +22.5 | -
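The avg, max, and Voting columns reflect how a face identification decision can be derived from pairwise verification scores: a probe image is compared against every gallery image, and the per-identity scores are aggregated. A minimal sketch of the three aggregation strategies (the details, e.g., the 0.5 voting threshold, are assumptions):

```python
import numpy as np

def identify(probe_scores, strategy="avg"):
    """probe_scores: identity -> array of verification probabilities
    between one probe image and that identity's gallery images."""
    if strategy == "avg":
        agg = {who: s.mean() for who, s in probe_scores.items()}
    elif strategy == "max":
        agg = {who: s.max() for who, s in probe_scores.items()}
    else:  # "voting": count gallery images judged a match (threshold assumed)
        agg = {who: (s > 0.5).sum() for who, s in probe_scores.items()}
    return max(agg, key=agg.get)

scores = {"id_1": np.array([0.90, 0.80]), "id_2": np.array([0.30, 0.95])}
print(identify(scores, "avg"))   # -> id_1
```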
# Train. Images: 18 | # Train. Images: 79 | SRC [56] (Depth Only) | Ours FI (max) | Ours FI (avg) | Ours FI (Voting) | Ours (FV)
---|---|---|---|---|---|---
✓ | - | 0.887 | 0.899 | 0.857 | 0.817 | 0.860
- | ✓ | - | 0.997 | 0.997 | 0.998 | 0.987
Train/Test | JanusNet [21]: A1 | A2 | A3 | avg | Ours: A1 | A2 | A3 | avg
---|---|---|---|---|---|---|---|---
A1 | 0.802 | 0.660 | 0.618 | 0.689 | 0.863 | 0.745 | 0.709 | 0.772
A2 | 0.834 | 0.786 | 0.766 | 0.795 | 0.873 | 0.841 | 0.810 | 0.851
A3 | 0.505 | 0.503 | 0.504 | 0.500 | 0.752 | 0.706 | 0.672 | 0.717
A1 ∪ A2 ∪ A3 | 0.798 | 0.751 | 0.727 | 0.762 | 0.885 | 0.842 | 0.812 | 0.853
Train/Test | JanusNet [21]: S1–S3 | S4–S5 | avg | Ours: S1–S3 | S4–S5 | avg
---|---|---|---|---|---|---
S1–S3 (no occlusions) | 0.844 | 0.746 | 0.773 | 0.888 | 0.780 | 0.817
S4–S5 (occlusions) | 0.792 | 0.713 | 0.743 | 0.868 | 0.805 | 0.829
All | 0.805 | 0.732 | 0.762 | 0.905 | 0.825 | 0.853
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).