Abstract
This paper proposes a d-vector approach for extracting robust biometrics from inertial signals recorded with wearable sensors. The d-vector approach generates identity representations using a deep learning architecture based on Convolutional Neural Networks. This architecture includes two convolutional layers that learn features from the spectrum of the inertial signals. These layers were pretrained using data from 154 subjects. Afterwards, additional fully connected layers were attached to perform user identification and verification for 36 new subjects. This paper compares the proposed d-vector approach with previously proposed algorithms using in-the-wild recordings in different scenarios. The results demonstrate the robustness of the d-vector approach under in-the-wild conditions: accuracies of 97.69% and 94.16% (user identification) and Areas Under the Curve of 99.89% and 99.67% (user verification) were obtained when using one activity (walking) or several activities (walking, jogging and stairs), respectively. These results were also verified under laboratory conditions, improving on the performance reported in previous works. All analyses were carried out using public datasets recorded at the Wireless Sensor Data Mining laboratory.
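To make the pipeline described above concrete, the following PyTorch sketch outlines the two-stage structure: a convolutional front end trained on the pretraining population, then newly attached fully connected layers that produce the d-vector and an identification head for the target subjects. The channel count, kernel sizes, pooling, the 64-dimensional embedding, the cosine-similarity scoring and the decision threshold are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative sketch of the d-vector pipeline outlined in the abstract.
# Layer sizes, kernel widths, the 64-dim embedding and the verification
# threshold are assumptions; only the overall structure follows the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvFeatureExtractor(nn.Module):
    """Two convolutional layers over the inertial-signal spectrum (pretrained on 154 subjects)."""
    def __init__(self, in_channels: int = 3):   # e.g. 3-axis accelerometer spectra
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
        )

    def forward(self, x):                        # x: (batch, channels, spectral bins)
        return self.features(x).flatten(1)       # flattened convolutional feature map

class DVectorModel(nn.Module):
    """Pretrained convolutional front end plus newly attached fully connected layers."""
    def __init__(self, extractor, feat_dim, n_subjects=36, d_dim=64):
        super().__init__()
        self.extractor = extractor                           # reused convolutional layers
        self.embedding = nn.Sequential(nn.Linear(feat_dim, d_dim), nn.ReLU())
        self.classifier = nn.Linear(d_dim, n_subjects)       # identification head

    def d_vector(self, x):
        return self.embedding(self.extractor(x))             # identity representation (d-vector)

    def forward(self, x):
        return self.classifier(self.d_vector(x))             # per-subject logits for identification

def verify(model, enroll_x, test_x, threshold=0.8):
    """Verification sketch: compare d-vectors of an enrolled and a test recording."""
    with torch.no_grad():
        score = F.cosine_similarity(model.d_vector(enroll_x), model.d_vector(test_x))
    return score.item() > threshold
```

In this sketch the identification head would be trained with cross-entropy over the target subjects, while verification discards the classifier and compares d-vectors directly.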
Funding
The work leading to these results has been supported by AMIC (MINECO, TIN2017-85854-C4-4-R), and CAVIAR (MINECO, TEC2017-84593-C2-1-R) projects partially funded by the European Union. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
Author information
Contributions
Manuel Gil-Martín: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing - original draft. Rubén San-Segundo: Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing - review & editing. Ricardo de Córdoba: Conceptualization, Funding acquisition, Project administration, Supervision, Validation, Writing - review & editing. José Manuel Pardo: Conceptualization, Funding acquisition, Project administration, Supervision, Validation, Writing - review & editing.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Gil-Martín, M., San-Segundo, R., de Córdoba, R. et al. Robust Biometrics from Motion Wearable Sensors Using a D-vector Approach. Neural Process Lett 52, 2109–2125 (2020). https://doi.org/10.1007/s11063-020-10339-z