Gait-Based Implicit Authentication Using Edge Computing and Deep Learning for Mobile Devices
Figure captions:

Figure 1. The framework of authentication.
Figure 2. The framework of authentication.
Figure 3. Comparison of gait signal before and after filtering.
Figure 4. Human gait cycle.
Figure 5. All minimal value points.
Figure 6. Minimal values after initial screening.
Figure 7. Gait cycle detection.
Figure 8. Sliding window.
Figure 9. The process of converting.
Figure 10. Gait feature images: (a–d) the feature images of u018; (e–h) the feature images of u034.
Figure 11. The structure of CNN-LSTM.
Figure 12. The structure of an LSTM cell.
Figure 13. (a) FRR and FAR curves; (b) ROC and EER curves.
Figure 14. Accuracy and loss curves of ID2 and ID5 during the training session: (a) accuracy curves; (b) loss curves.
Figure 15. Accuracy achieved on the validation set by models trained with different numbers of training samples.
Figure 16. Accuracy and loss curves achieved on the validation set after training with different numbers of training samples: (a–c) accuracy curves; (d–f) loss curves.
Figure 17. ROC curves of the three models trained with (a) 2000 and (b) 8000 samples.
Figure 18. Accuracy of CNN and CNN-LSTM models on datasets with different amounts of data.
Figure 19. Training accuracy and loss curves of CNN and CNN-LSTM on small datasets.
Abstract
1. Introduction
- We propose EDIA, an edge computing-based implicit authentication architecture designed to achieve high efficiency and optimized use of computing resources under the edge computing paradigm;
- We develop a hybrid model based on the concatenation of a CNN and an LSTM, tailored to processing gait data from built-in sensors;
- We present a data preprocessing method that extracts the features of a gait signal in the two-dimensional domain by converting the signal into an image. In this way, the influence of noise on classification results is reduced and the authentication accuracy improved;
- We implement and evaluate the authentication performance of EDIA in different situations on a dataset collected by Gadaleta et al. [11]. The experimental results show that EDIA achieves an accuracy of 97.77% with a 2% false positive rate, demonstrating its effectiveness and robustness.
2. Related Work
3. The Methodology
3.1. Data Preprocessing
3.1.1. Filter and Gait Cycle Extraction
Algorithm 1: Gait cycle detection.
Input: x-axis of the gait acceleration signal.
Output: Start and end points of each gait cycle.
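The body of Algorithm 1 did not survive extraction, so the following is a minimal sketch of the cycle-detection idea implied by Figures 5–7 (all minima → minima after initial screening → cycle boundaries). The sampling rate, amplitude threshold, and minimum cycle length are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_gait_cycles(sig, fs=100.0, min_cycle_s=0.8):
    """Split a filtered x-axis acceleration signal into gait cycles.

    Sketch of Algorithm 1: (1) locate all local minima, (2) screen out
    shallow minima with an amplitude threshold, (3) enforce a minimum
    spacing so that each surviving minimum starts a new cycle.
    """
    # Minima of sig are peaks of -sig; `distance` implements step (3).
    depth = -(sig.mean() - 0.5 * sig.std())  # screening threshold (assumed)
    starts, _ = find_peaks(-sig, height=depth, distance=int(min_cycle_s * fs))
    # Consecutive surviving minima delimit one gait cycle each.
    return list(zip(starts[:-1], starts[1:]))
```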
3.1.2. Signal-to-Image Conversion
Algorithm 2: Gait feature image generation.
Input: Gait signal.
Output: Gait feature image.
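Algorithm 2's body is also missing; the sketch below assumes the common row-stacking conversion scheme: min–max normalize a fixed-length segment of the signal to [0, 255] and reshape it row by row into a square grayscale image. The 64 × 64 image size is an assumption.

```python
import numpy as np

def signal_to_image(sig, size=64):
    """Convert a 1-D gait signal into a size x size grayscale image
    (sketch of Algorithm 2 under the row-stacking assumption)."""
    seg = np.asarray(sig[: size * size], dtype=np.float64)
    lo, hi = seg.min(), seg.max()
    # Min-max normalize to [0, 255], then fold the sequence into rows.
    img = np.round((seg - lo) / (hi - lo + 1e-12) * 255.0)
    return img.reshape(size, size).astype(np.uint8)
```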
3.2. Proposed Architecture
4. Experiments and Results
4.1. Division of the Dataset
Results and Precision Test
4.2. Impact of the Number of Datasets on the Authentication Model
4.3. Performance Comparison of Three Different Methods
- SVM: Support vector machines are a class of generalized linear classifiers that perform binary classification in a supervised learning fashion; the decision boundary is the maximum-margin hyperplane over the training samples. For the gait-based implicit authentication task with an SVM, instead of converting the gait signal to an image, we directly slice the gait signal into time windows of length 150 and feed the windowed samples to the model for training (a minimal sketch follows this list).
- CNN: When using a CNN for gait-based implicit authentication, we likewise convert the gait signal into an image and feed the image to the network for training. The difference is that the CNN uses two fully connected layers as its classifier, while the CNN-LSTM network proposed in this paper uses two LSTM layers as the classifier of the model.
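The following is a minimal sketch of the SVM baseline under the stated setup, assuming an RBF kernel, a window step of 75 samples, and synthetic signals standing in for real, preprocessed accelerometer traces; none of these choices are specified in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def make_windows(sig, width=150, step=75):
    """Slice a 1-D gait signal into fixed-length windows; the width of
    150 matches the baseline described above, the step is an assumption."""
    return np.stack([sig[i:i + width]
                     for i in range(0, len(sig) - width + 1, step)])

# Synthetic stand-ins for the device owner's trace and an impostor's.
rng = np.random.default_rng(0)
genuine = make_windows(np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.standard_normal(3000))
impostor = make_windows(0.1 * rng.standard_normal(3000))

X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
clf = SVC(kernel="rbf").fit(X, y)  # binary genuine-vs-impostor classifier
print(clf.score(X, y))
```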
4.4. Complexity Analysis
4.5. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- [1] Kim, Y.; Oh, T.; Kim, J. Analyzing user awareness of privacy data leak in mobile applications. Mob. Inf. Syst. 2015, 2015, 369489.
- [2] Li, Y.; Xue, F.; Fan, X.; Qu, Z.; Zhou, G. Pedestrian walking safety system based on smartphone built-in sensors. IET Commun. 2018, 12, 751–758.
- [3] Li, Y.; Li, X. Chaotic hash function based on circular shifts with variable parameters. Chaos Solitons Fractals 2016, 91, 639–648.
- [4] Patel, V.M.; Chellappa, R.; Chandra, D.; Barbello, B. Continuous user authentication on mobile devices: Recent progress and remaining challenges. IEEE Signal Process. Mag. 2016, 33, 49–61.
- [5] De Luca, A.; Hang, A.; Brudy, F.; Lindner, C.; Hussmann, H. Touch me once and I know it’s you! Implicit authentication based on touch screen patterns. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 987–996.
- [6] Jakobsson, M.; Shi, E.; Golle, P.; Chow, R. Implicit authentication for mobile devices. In Proceedings of the 4th USENIX Conference on Hot Topics in Security, USENIX Association, Montreal, QC, Canada, 10–14 August 2009; Volume 1, pp. 25–27.
- [7] Muaaz, M.; Mayrhofer, R. Smartphone-based gait recognition: From authentication to imitation. IEEE Trans. Mob. Comput. 2017, 16, 3209–3221.
- [8] Peinado-Contreras, A.; Munoz-Organero, M. Gait-based identification using deep recurrent neural networks and acceleration patterns. Sensors 2020, 20, 6900.
- [9] Shiraga, K.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. GEINet: View-invariant gait recognition using a convolutional neural network. In Proceedings of the 2016 International Conference on Biometrics (ICB), Halmstad, Sweden, 13–16 June 2016; pp. 1–8.
- [10] Cao, S.; Wen, L.; Li, X.; Gao, L. Application of generative adversarial networks for intelligent fault diagnosis. In Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20–24 August 2018; pp. 711–715.
- [11] Gadaleta, M.; Rossi, M. IDNet: Smartphone-based gait recognition with convolutional neural networks. Pattern Recognit. 2018, 74, 25–37.
- [12] Frank, M.; Biedert, R.; Ma, E.; Martinovic, I.; Song, D. Touchalytics: On the applicability of touchscreen input as a behavioral biometric for continuous authentication. IEEE Trans. Inf. Forensics Secur. 2013, 8, 136–148.
- [13] Li, F.; Clarke, N.; Papadaki, M.; Dowland, P. Behaviour profiling for transparent authentication for mobile devices. In Proceedings of the 10th European Conference on Information Warfare and Security (ECIW), Tallinn, Estonia, 7–8 July 2011; pp. 307–314.
- [14] Li, F.; Clarke, N.; Papadaki, M.; Dowland, P. Active authentication for mobile devices utilising behaviour profiling. Int. J. Inf. Secur. 2014, 13, 229–244.
- [15] Bassu, D.; Cochinwala, M.; Jain, A. A new mobile biometric based upon usage context. In Proceedings of the 2013 IEEE International Conference on Technologies for Homeland Security (HST), Waltham, MA, USA, 12–14 November 2013; pp. 441–446.
- [16] Eagle, N.; Pentland, A.S. Reality mining: Sensing complex social systems. Pers. Ubiquitous Comput. 2006, 10, 255–268.
- [17] Lee, H.; Hwang, J.Y.; Lee, S.; Kim, D.I.; Lee, S.H.; Lee, J.; Shin, J.S. A parameterized model to select discriminating features on keystroke dynamics authentication on smartphones. Pervasive Mob. Comput. 2019, 54, 45–57.
- [18] Peng, G.; Zhou, G.; Nguyen, D.T.; Qi, X.; Yang, Q.; Wang, S. Continuous authentication with touch behavioral biometrics and voice on wearable glasses. IEEE Trans. Hum. Mach. Syst. 2016, 47, 404–416.
- [19] Yang, L.; Guo, Y.; Ding, X.; Han, J.; Liu, Y.; Wang, C.; Hu, C. Unlocking smart phone through handwaving biometrics. IEEE Trans. Mob. Comput. 2015, 14, 1044–1055.
- [20] Mantyjarvi, J.; Lindholm, M.; Vildjiounaite, E.; Makela, S.; Ailisto, H.A. Identifying users of portable devices from gait pattern with accelerometers. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’05), Philadelphia, PA, USA, 23 March 2005; Volume 2, pp. ii/973–ii/976.
- [21] Thang, H.M.; Viet, V.Q.; Thuc, N.D.; Choi, D. Gait identification using accelerometer on mobile phone. In Proceedings of the 2012 International Conference on Control, Automation and Information Sciences (ICCAIS), Saigon, Vietnam, 26–29 November 2012; pp. 344–348.
- [22] Muaaz, M.; Mayrhofer, R. An analysis of different approaches to gait recognition using cell phone based accelerometers. In Proceedings of the International Conference on Advances in Mobile Computing & Multimedia, Vienna, Austria, 2–4 December 2013; pp. 293–300.
- [23] Nickel, C.; Busch, C.; Rangarajan, S.; Möbius, M. Using hidden Markov models for accelerometer-based biometric gait recognition. In Proceedings of the 2011 IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, Malaysia, 4–6 March 2011; pp. 58–63.
- [24] Zhong, Y.; Deng, Y.; Meltzner, G. Pace independent mobile gait biometrics. In Proceedings of the 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), Arlington, VA, USA, 8–11 September 2015; pp. 1–8.
- [25] Giorgi, G.; Saracino, A.; Martinelli, F. Using recurrent neural networks for continuous authentication through gait analysis. Pattern Recognit. Lett. 2021, 147, 157–163.
- [26] Kašys, K.; Dundulis, A.; Vasiljevas, M.; Maskeliūnas, R.; Damaševičius, R. BodyLock: Human identity recogniser app from walking activity data. In Lecture Notes in Computer Science, Proceedings of the International Conference on Computational Science and Its Applications, Cagliari, Italy, 1–4 July 2020; Springer: Cham, Switzerland, 2020; pp. 307–319.
- [27] Xu, W.; Shen, Y.; Luo, C.; Li, J.; Li, W.; Zomaya, A.Y. Gait-Watch: A gait-based context-aware authentication system for smart watch via sparse coding. Ad Hoc Netw. 2020, 107, 102218.
- [28] El-Soud, M.W.A.; Gaber, T.; AlFayez, F.; Eltoukhy, M.M. Implicit authentication method for smartphone users based on rank aggregation and random forest. Alex. Eng. J. 2021, 60, 273–283.
- [29] Papavasileiou, I.; Qiao, Z.; Zhang, C.; Zhang, W.; Bi, J.; Han, S. GaitCode: Gait-based continuous authentication using multimodal learning and wearable sensors. Smart Health 2021, 19, 100162.
- [30] Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
- [31] Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 221–231.
- [32] Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732.
- [33] Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; Volume 5, p. 6.
- [34] Molchanov, P.; Tyree, S.; Karras, T.; Aila, T.; Kautz, J. Pruning convolutional neural networks for resource efficient inference. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
Behavioral Biometric Approach | Limitations
---|---
Touchscreen interactions | (1) Requires the user to actively interact with the device’s touchscreen for authentication. (2) Holding the phone in a different orientation considerably changes the way the user touches it. (3) Users interact with the touchscreen very differently across activity states such as walking, running, standing, and sitting.
Behavioral profiling | (1) Users’ moods and emotions affect the way they interact with different applications and services. (2) Behavioral profiling data are difficult to access.
Keystroke dynamics | (1) Requires the user to actively interact with the device’s keyboard for authentication. (2) User authentication is only possible while the user is typing on the keyboard.
Hand-waving patterns | (1) Requires the user to actively interact with the device. (2) Needs specific hand-waving patterns to authenticate users. (3) Multiple users may have the same hand-waving patterns.
Research | Performance
---|---
Mäntyjärvi et al. [20] | EER 7%
Thang et al. [21] | Accuracy 79.1% (time domain); 92.7% (frequency domain)
Muaaz et al. [22] | EER 22.49–33.30%
Nickel et al. [23] | FNMR 10.42%; FMR 10.29%
Zhong et al. [24] | EER 2.88–7.22%
Damaševičius et al. [25] | EER 5.7%
Kašys et al. [26] | Accuracy 97% (correct identification); F-score 94%
Xu et al. [27] | Recognition accuracy 32%; EER 3.5%
Abo El-Soud et al. [28] | Accuracy 97.8%; EER 1.04%; FAR 2.03%; FRR 0.04%
Papavasileiou et al. [29] | EER 0.01–0.16%; FAR 0.54–1.96%
Layer Name | Kernel Size | Kernel Num | Padding | Stride |
---|---|---|---|---|
Conv1 | 11 | 48 | 2 | 4 |
Maxpooling1 | 3 | None | 0 | 2 |
Conv2 | 5 | 128 | 2 | 1 |
Maxpooling2 | 3 | None | 0 | 2 |
Conv3 | 3 | 192 | 1 | 1 |
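Only the layer table survives here, so the PyTorch sketch below reconstructs one plausible reading of the CNN-LSTM: AlexNet-style 2-D convolutions per the table, a single-channel 224 × 224 input image, ReLU activations, and the best LSTM head from the model-selection table below (two layers with 150 and 100 units). The input size, channel count, activations, and the way feature maps are turned into a sequence are all assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of the CNN-LSTM; conv settings follow the table above."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 48, kernel_size=11, stride=4, padding=2),  # Conv1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                            # Maxpooling1
            nn.Conv2d(48, 128, kernel_size=5, stride=1, padding=2),           # Conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                            # Maxpooling2
            nn.Conv2d(128, 192, kernel_size=3, stride=1, padding=1),          # Conv3
            nn.ReLU(inplace=True),
        )
        # Two LSTM layers (150 and 100 units) replace the usual FC head.
        self.lstm1 = nn.LSTM(input_size=192 * 13, hidden_size=150, batch_first=True)
        self.lstm2 = nn.LSTM(input_size=150, hidden_size=100, batch_first=True)
        self.fc = nn.Linear(100, num_classes)  # genuine vs. impostor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                    # (B, 192, 13, 13) for 224x224 input
        seq = f.permute(0, 3, 1, 2).flatten(2)  # image width as time: (B, 13, 192*13)
        out, _ = self.lstm1(seq)
        out, _ = self.lstm2(out)
        return self.fc(out[:, -1, :])           # classify from the last time step

logits = CNNLSTM()(torch.randn(2, 1, 224, 224))  # smoke test: shape (2, 2)
```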
User ID | Number | User ID | Number | User ID | Number | User ID | Number |
---|---|---|---|---|---|---|---|
u001 | 572 | u026 | 607 | u014 | 264 | u039 | 736 |
u002 | 2653 | u027 | 692 | u015 | 200 | u040 | 1221 |
u003 | 1478 | u028 | 2937 | u016 | 875 | u041 | 384 |
u004 | 1014 | u029 | 1327 | u017 | 328 | u042 | 510 |
u005 | 195 | u030 | 556 | u018 | 8889 | u043 | 1228 |
u006 | 376 | u031 | 799 | u019 | 977 | u044 | 240 |
u007 | 1165 | u032 | 141 | u020 | 1014 | u045 | 196 |
u008 | 380 | u033 | 2863 | u021 | 637 | u046 | 1174 |
u009 | 270 | u034 | 1176 | u022 | 193 | u047 | 422 |
u010 | 907 | u035 | 1097 | u023 | 6064 | u048 | 130 |
u011 | 588 | u036 | 1051 | u024 | 2569 | u049 | 304 |
u012 | 241 | u037 | 299 | u025 | 645 | u050 | 190 |
u013 | 382 | u038 | 449 |
| User 1 | User 2 | User 3 | User 4 | User 5 | User 6 | User 7 | User 8 | User 9 | User 10
---|---|---|---|---|---|---|---|---|---|---
Accuracy | 0.974 | 0.979 | 0.984 | 0.984 | 0.979 | 1 | 0.982 | 1 | 0.989 | 0.984
FRR | 0.039 | 0.013 | 0.010 | 0.020 | 0 | 0 | 0.020 | 0 | 0.020 | 0.029
FAR | 0.010 | 0.026 | 0.019 | 0.010 | 0.038 | 0 | 0.015 | 0 | 0 | 0
EER | 0.960 | 0.973 | 0.980 | 0.985 | 0.980 | 1 | 0.980 | 1 | 1 | 0.990

| User 11 | User 12 | User 13 | User 14 | User 15 | User 16 | User 17 | User 18 | User 19 | User 20
---|---|---|---|---|---|---|---|---|---|---
Accuracy | 0.974 | 0.979 | 0.949 | 0.979 | 0.979 | 0.974 | 0.959 | 0.984 | 0.982 | 0.987
FRR | 0.003 | 0 | 0.076 | 0.020 | 0 | 0.010 | 0 | 0.004 | 0.020 | 0.010
FAR | 0.010 | 0.038 | 0.021 | 0.020 | 0.038 | 0.038 | 0.074 | 0.025 | 0.015 | 0.014
EER | 0.990 | 0.980 | 0.940 | 0.980 | 0.960 | 0.970 | 0.980 | 0.980 | 0.980 | 0.980

| User 21 | User 22 | User 23 | User 24 | User 25 | User 26 | User 27 | User 28 | User 29 | User 30
---|---|---|---|---|---|---|---|---|---|---
Accuracy | 0.989 | 0.979 | 0.995 | 0.937 | 0.989 | 0.979 | 0.934 | 0.956 | 0.989 | 0.974
FRR | 0.010 | 0 | 0.002 | 0.052 | 0.010 | 0 | 0.070 | 0.017 | 0.005 | 0.030
FAR | 0.010 | 0.038 | 0.005 | 0.071 | 0.010 | 0.038 | 0.060 | 0.066 | 0.014 | 0.022
EER | 0.990 | 0.980 | 0.996 | 0.926 | 0.990 | 0.980 | 0.930 | 0.936 | 0.985 | 0.980

| User 31 | User 32 | User 33 | User 34 | User 35 | User 36 | User 37 | User 38 | User 39 | User 40
---|---|---|---|---|---|---|---|---|---|---
Accuracy | 0.969 | 0.898 | 0.959 | 0.989 | 0.977 | 0.979 | 0.939 | 0.959 | 0.984 | 0.952
FRR | 0.020 | 0.145 | 0.027 | 0.005 | 0.025 | 0.034 | 0.042 | 0.075 | 0.010 | 0.041
FAR | 0.039 | 0.045 | 0.051 | 0.014 | 0.020 | 0.005 | 0.076 | 0 | 0.019 | 0.0546
EER | 0.960 | 0.840 | 0.956 | 0.980 | 0.975 | 0.985 | 0.940 | 0.940 | 0.980 | 0.945

| User 41 | User 42 | User 43 | User 44 | User 45 | User 46 | User 47 | User 48 | User 49 | User 50
---|---|---|---|---|---|---|---|---|---|---
Accuracy | 0.959 | 0.994 | 0.987 | 0.929 | 1 | 0.989 | 0.949 | 0.838 | 1 | 0.949
FRR | 0.058 | 0.010 | 0.010 | 0.096 | 0 | 0.005 | 0.076 | 0.210 | 0 | 0.041
FAR | 0.020 | 0 | 0.014 | 0.042 | 0 | 0.014 | 0.021 | 0.095 | 0 | 0.058
EER | 0.980 | 1 | 0.990 | 0.940 | 1 | 0.990 | 0.960 | 0.80 | 1 | 0.940
Accuracy | <90% | 90–97% | 97–99% | 100%
---|---|---|---|---
Number of Users | 2 | 14 | 30 | 4
Model ID | Number of LSTM Layers | Memory Units in LSTM Layer | Accuracy | Training Time (s)
---|---|---|---|---|
1 | 1 | 50 | 0.969 | 667.246 |
2 | 1 | 100 | 0.972 | 680.387 |
3 | 1 | 150 | 0.964 | 713.444 |
4 | 2 | 150 and 50 | 0.975 | 926.419 |
5 | 2 | 150 and 100 | 0.977 | 927.310 |
6 | 2 | 150 and 150 | 0.972 | 934.193 |
Training Set Size | 100 | 500 | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 | 7000 | 8000 |
---|---|---|---|---|---|---|---|---|---|---|
Accuracy | 0.760 | 0.907 | 0.932 | 0.950 | 0.960 | 0.967 | 0.969 | 0.974 | 0.973 | 0.977 |
With 2000 training samples:

Model | Accuracy | FRR | FAR | EER
---|---|---|---|---
SVM | 0.783 | 0.23 | 0.31 | 0.864
CNN | 0.883 | 0.153 | 0.053 | 0.908
CNN-LSTM | 0.969 | 0.026 | 0.032 | 0.966

With 8000 training samples:

Model | Accuracy | FRR | FAR | EER
---|---|---|---|---
SVM | 0.862 | 0.14 | 0.15 | 0.855
CNN | 0.942 | 0.032 | 0.034 | 0.944
CNN-LSTM | 0.977 | 0.021 | 0.0274 | 0.976
Complexity Metric | Value
---|---
Total params | 346,210
Total memory | 2.31 MB
Total FLOPs | 130.35 MFLOPs
Total MemR+W (memory read + write) | 5.42 MB
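The parameter count in the table can be cross-checked against any concrete model definition with a few lines of PyTorch. The snippet below uses the hypothetical CNNLSTM sketch from above, so its total will not match the paper's 346,210 exactly (the input size and LSTM feature width there were assumptions).

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

n = count_params(CNNLSTM())  # CNNLSTM: the sketch given under Section 3.2's layer table
print(f"Total params: {n:,}")
print(f"Param memory: {n * 4 / 2**20:.2f} MB assuming float32 weights")
```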
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).