Article

User Identification from Gait Analysis Using Multi-Modal Sensors in Smart Insole

1 Department of Computer Science and Engineering, Dankook University, Yongin 16890, Korea
2 Department of Computer Engineering and Computer Science, California State University Long Beach, Long Beach, CA 90840, USA
3 Department of Internal Medicine, Chung-Ang University, Seoul 06984, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(17), 3785; https://doi.org/10.3390/s19173785
Submission received: 31 July 2019 / Revised: 21 August 2019 / Accepted: 29 August 2019 / Published: 31 August 2019
(This article belongs to the Special Issue Sensors for Gait Biometrics)

Abstract

Recent studies indicate that individuals can be identified by their gait patterns. A number of sensing modalities, including vision, acceleration, and pressure, have been used to capture human gait patterns, and a number of methods have been developed to recognize individuals from their gait pattern data. This study proposes a novel method of identifying individuals by applying null-space linear discriminant analysis to gait pattern data, which consist of time series pressure and acceleration data measured by multi-modal sensors in a smart insole worn while walking. We compare the identification accuracies of three sensing modalities: acceleration alone, pressure alone, and both in combination. Experimental results show that the proposed multi-modal features identify 14 participants from their walking data with an accuracy of over 95%.

1. Introduction

Gait patterns contain much information about human physical activity. Problems in gait can signal not only musculoskeletal disorders, such as joint deformation [1], but also mental disorders, such as intellectual disabilities [2], dementia [3], and depression [4]. Given these insights, the analysis of gait patterns has received abundant attention in various fields including health care, sports performance analysis, and behavior analysis [5,6,7].
Gait pattern analysis comprises a sensor module for acquiring data and an application module for analyzing the data [8]. Different types of sensors are utilized in gait analysis, for instance video recorders [9], electromyography sensors [10], pressure sensors [11], accelerometers [12,13], and gyroscopes [14,15]. Initially, gait pattern analyses were conducted in restricted environments because of the size of the sensors, the inconvenience of installing them, and other limitations. These days, however, such restrictions are alleviated by sensors embedded in wearable devices such as smart watches, fitness trackers, and smart insoles [16].
Several methods for analyzing gait patterns using data from diverse sensors have been proposed. In [17], straight and curved walking patterns were distinguished using a pressure sensor and a gyroscope. In [18], gait data for walking, sideways walking, and running were collected using only an accelerometer, and in-plane displacement was estimated.
Gait pattern analysis using machine learning approaches has also been investigated. In [19], inertial measurement units (IMUs) attached to the thigh and knee were used to measure kinematic data. In [20,21], spatiotemporal gait features, such as stride length, cadence, stance time, and double support time, were estimated using pressure-sensitive GaitRite walkways or foot switches; gait patterns of patients with Parkinson’s disease were then analyzed using a support vector machine, a random forest [20], or a mixture model [21]. In [22], gait types and behaviors were classified by applying a decision tree and an artificial neural network [23] to data collected by attaching different kinds of sensors, such as accelerometers, gyroscopes, and humidity sensors, to eight body parts.
Statistical and probability-based methods have been proposed to analyze walking patterns as well. In [24], gait phase classification was performed by applying a hidden Markov model to IMU data acquired from the legs and switches attached to the sole. Likewise, in [25,26], hidden Markov models were used for identifying users and determining the walking style from IMU data, respectively. Overall, the analysis of gait patterns, including the above-mentioned methods, has mainly been carried out for classifying gait types or for diagnosing diseases such as Parkinson’s disease or strokes by identifying abnormal gait patterns.
As gait patterns exhibit characteristics specific to the individual, they can also be used for user identification alongside other biometric techniques, such as face or fingerprint recognition. Existing gait analyses for biometrics have mainly been conducted using video sequences [9]. However, such approaches require the user to be the only individual in front of the camera, and their accuracy may vary depending on the relative position of the camera. Therefore, these methods provide limited user identification in real-world measurement environments. Besides video-based analysis, wearable sensors have been utilized for user identification. In [27], data were collected from five IMUs placed on the chest, lower back, right wrist, knee, and ankle of users, and identification was achieved using a predictive model based on a convolutional neural network with time- and frequency-domain data. In [28], IMU data were gathered using the sensors embedded in smartphones, which were carried by users in their front trouser pockets, and users were recognized using a mixture model based on a convolutional neural network and a support vector machine. In [29], besides IMU data from sensors within the shoes, pressure and flexion data were collected from insole sensors, and users were identified by a cascade neural network. However, these methods use few types of sensors, place sensors at multiple body parts, or require a long period of time for gathering data.
In this paper, we propose a method to identify users by using multi-modal sensor data acquired through a smart insole. For data collection, we used the pressure sensors and accelerometers of the FootLogger smart insole (Figure 1) [8]. The data acquired from each sensor in the insole during walking were transmitted to a smartphone via Bluetooth. While existing gait analyses using wearable sensors identify gait types, we attempted to perform user identification on gait data through discriminant analysis. Since the proposed method uses wearable sensors, it is applicable to any type of user environment, for instance multiple users in a public place. In addition, the wearable sensor data (i.e., pressure and acceleration) demand a low computational cost compared to video processing and thus allow real-time operation.
The proposed method consists of a preprocessing stage for extracting discriminant features and a classification stage for identifying users. During preprocessing, the measured data are converted into a form suitable for discriminant analysis. Gait patterns can vary even for the same user depending on several factors; for example, walking speed typically depends on the user’s mental and physical condition. This high variability of intrapersonal gait patterns may hinder feature extraction for user identification. Thus, during data preprocessing, we segmented the series of gait data into individual steps, which were then normalized in length to eliminate speed variability [30]. In addition, random noise was added to the normalized data to prevent rank deficiency during feature extraction.
Since the proposed method is intended to be used with wearable devices, such as a smart insole, gait pattern features were extracted using a dimensionality reduction method with low computational resource requirements such that it was applicable to mobile systems. As insoles for both feet generate 16 pressure and six acceleration signals in real time, the obtained walking data were high-dimensional. Therefore, we extracted discriminant features for user identification by using the null-space linear discriminant analysis (NLDA) method [31], which effectively handles high-dimensional data, such as images. We applied NLDA to pressure and acceleration data to construct feature spaces and obtained single-modal feature vectors for each data type. Then, we evaluated the discriminative information of each feature based on the Laplacian score [32] and constructed multi-modal features for user identification by rearranging the features according to their discriminative information. Experimental results using measurements from 14 participants during walking demonstrated the high user identification performance of the proposed method.
The remainder of this paper is organized as follows. In Section 2, we detail the smart insole for walking data acquisition and the preprocessing stage. In Section 3, we describe the extraction of the single-modal features for each sensor data type and construction of the multi-modal feature vector for identification. In Section 4, we present the experimental results regarding user identification. Finally, we draw conclusions in Section 5.

2. Data Acquisition and Preprocessing

2.1. Gait Data Acquisition

We used the FootLogger smart insole for gait data collection (Figure 1). The insole is equipped with eight pressure sensors and a triaxial accelerometer [30]. Three pressure sensors are placed on the front left side, three on the front right side, and the remaining two on the heel. Each pressure sensor outputs a value of 0, 1, or 2 depending on intensity, where 0 indicates no pressure, that is, the foot is off the ground, whereas 1 and 2 indicate increasing pressure at the location of foot contact with the ground. The sensors in both feet synchronously acquire data at a sampling rate of 100 Hz. The measured data are transmitted to a database server through a Bluetooth application on an Android smartphone.

2.2. Data Normalization and Regularization

Gait data are time series signals that exhibit characteristic repetitive patterns. Hence, we extracted the features of gait patterns from the gait cycle, which corresponds to the minimum period of repetition. In general, a gait cycle [33] spans from the moment one foot touches the ground until that foot leaves the ground and touches the ground again. The gait cycle is usually divided into two stages, namely the stance phase, where the foot touches the ground, and the swing phase, where the foot leaves the ground. More detailed models consider seven stages, namely heel strike, foot flat, mid-stance, heel off, toe off, mid-swing, and late swing.
We first detected the starting and ending points of the gait cycle according to the swing phase onset, in which all the pressure sensors on the insole of one foot retrieve a value of zero. Then, the continuously-measured gait data were divided into individual steps, each corresponding to one gait cycle. For each of the pressure sensors and accelerometers, the data points of individual steps were stored in matrix form by arranging the sensor values of both feet side by side along time axis l. Hence, each column represents a sensor, and the rows indicate time (Figure 2). As a result, pressure data from the eight sensors and triaxial acceleration data of both feet were stored in matrices with 16 and six columns, respectively.
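As an illustration, the following minimal NumPy sketch shows one way to implement this segmentation; the array shapes, the choice of the left foot as the reference for swing detection, and all function names are our assumptions rather than the authors’ implementation:

```python
import numpy as np

def segment_steps(pressure, accel):
    """Split continuous gait recordings into individual steps.

    pressure: (T, 16) array -- 8 sensors per foot, values in {0, 1, 2}
    accel:    (T, 6) array  -- triaxial accelerometer data for both feet
    A step is cut at each onset of the swing phase, i.e., the instant at
    which all pressure sensors of the reference foot start reading zero.
    """
    ref = pressure[:, :8]                               # reference (left) foot
    swing = np.all(ref == 0, axis=1)                    # foot fully off the ground
    onsets = np.where(swing[1:] & ~swing[:-1])[0] + 1   # swing-phase onset indices
    # each individual step spans from one swing onset to the next
    return [(pressure[s:e], accel[s:e]) for s, e in zip(onsets[:-1], onsets[1:])]
```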
Although walking speed may be a distinguishing characteristic of each person, it can also increase within-class data variability, because one person can walk at a varying pace under different conditions. Therefore, we normalized the gait data of each individual step to a fixed period l = 63 to eliminate the variability of gait cycle length [30]. Hence, the normalized pressure and acceleration sensor arrays per step were given by matrices of size 63 × 16 and 63 × 6, respectively.
Most statistics-based feature extraction methods define their objective functions through scatter matrices resembling the data covariance matrix. Therefore, to utilize these methods, we converted the sensor data from matrices into vectors of pressure (1008 × 1) and acceleration (378 × 1) per step using lexicographic ordering. On the other hand, as every step was divided according to the swing phase, some elements of the vector became zero in all the samples, which may lead to rank deficiency during calculation of the covariance matrix. To prevent this instability problem related to eigenvalue decomposition, we performed regularization [34] by adding random numbers between zero and 0.1 to the data values. Figure 2 shows the original and preprocessed data for gait pattern analysis.
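Under the same assumptions as the segmentation sketch above, the normalization, vectorization, and regularization steps could look as follows; the resampling by linear interpolation is our reading of the length normalization in [30]:

```python
import numpy as np

def preprocess_step(step_pressure, step_accel, l=63, eps=0.1, rng=None):
    """Normalize one step to a fixed length, vectorize, and regularize it."""
    rng = rng if rng is not None else np.random.default_rng()

    def resample(mat):          # (T, d) -> (l, d) by per-channel linear interpolation
        t_old = np.linspace(0.0, 1.0, len(mat))
        t_new = np.linspace(0.0, 1.0, l)
        return np.column_stack([np.interp(t_new, t_old, mat[:, j])
                                for j in range(mat.shape[1])])

    x_p = resample(step_pressure).ravel()   # 63 * 16 = 1008-dim pressure vector
    x_a = resample(step_accel).ravel()      # 63 *  6 =  378-dim acceleration vector
    # add small uniform noise in [0, eps) to avoid rank deficiency of scatter matrices
    return (x_p + rng.uniform(0.0, eps, x_p.size),
            x_a + rng.uniform(0.0, eps, x_a.size))
```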

3. Multi-Modal Features for Identification

3.1. Discriminant Feature Extraction

As the FootLogger sensors measure data every 0.01 s, the resulting gait data form a high-dimensional vector. Therefore, we extracted features of the gait data using NLDA, which avoids the small sample size problem [35] that occurs when dealing with high-dimensional data in supervised classification. NLDA is a variant of linear discriminant analysis (LDA) [35] and proceeds as follows. By projecting samples into the null space of the within-class scatter matrix, NLDA collapses the samples of each class into a single point and then maximizes the distance between the class means to create the feature space. NLDA handles high-dimensional data effectively because such data guarantee a large null space of the within-class scatter matrix.
Pressure and acceleration gait data exhibit different properties in terms of content and format. Besides the different physical factors being measured, the pressure sensor retrieves three discrete quantification levels, whereas the acceleration data have a continuous property (in spite of being sampled). Thus, we separately applied NLDA to the pressure and acceleration data to extract single-modal features and then evaluated the discriminative power of each feature to construct a multi-modal feature vector for identification.
Let $C$ and $n$ be the number of users to be classified and the dimension of the preprocessed data samples, respectively. The sensor data can be represented as $x^S \in \mathbb{R}^n$, with $S$ being $P$ for pressure and $A$ for acceleration. If the number of samples belonging to class $c_i$ is $N_i$, the within-class scatter matrix is $S_W^S = \sum_{i=1}^{C} \sum_{x_j^S \in c_i} (x_j^S - \mu_i^S)(x_j^S - \mu_i^S)^T$, where $x_j^S$ is the $j$th sample belonging to class $c_i$ and $\mu_i^S$ is the sample mean of class $c_i$. In addition, the between-class scatter matrix is $S_B^S = \sum_{i=1}^{C} N_i (\mu_i^S - \mu^S)(\mu_i^S - \mu^S)^T$, where $\mu^S$ is the mean of all samples. In discriminant analysis using $S_W^S$ and $S_B^S$, the null space of $S_W^S$ has very high discriminative power because projecting onto it gathers the samples belonging to the same class into one point. To maximize discrimination between classes, NLDA projects the samples into the null space and finds the feature space where the variance between the class means is maximized through the following objective function:
$$W_{Opt}^S = \underset{W^T S_W^S W = 0}{\arg\max} \left| W^T S_B^S W \right|,$$
where $W_{Opt}^S$ is a projection matrix whose columns are the projection vectors $w_k^S$, and the feature vector $y^S$ for a sample $x^S$ is obtained as:
$$y^S = [y_1^S, y_2^S, \ldots, y_{C-1}^S]^T = (W_{Opt}^S)^T x^S.$$
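To make the procedure concrete, here is a direct, unoptimized NumPy sketch of NLDA under our reading of [31]; the eigenvalue threshold and function names are our assumptions:

```python
import numpy as np

def nlda(X, y):
    """Null-space LDA: return a projection matrix W_opt of shape (n, C-1).

    X: (N, n) training samples as rows; y: (N,) class labels.
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)
    n = X.shape[1]
    Sw = np.zeros((n, n))                   # within-class scatter S_W
    Sb = np.zeros((n, n))                   # between-class scatter S_B
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)

    # null space of S_W: eigenvectors whose eigenvalues are numerically zero
    w, V = np.linalg.eigh(Sw)
    null = V[:, w < 1e-10 * w.max()]

    # inside the null space, keep the C-1 directions maximizing between-class scatter
    wb, Vb = np.linalg.eigh(null.T @ Sb @ null)
    order = np.argsort(wb)[::-1][:len(classes) - 1]
    return null @ Vb[:, order]
```

Features are then obtained as `Y = X @ W` for row-wise samples, which corresponds to $y^S = (W_{Opt}^S)^T x^S$ above.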

3.2. Multi-Modal Feature Vector Construction

The feature vector $y^S$ extracted from each sensing modality is composed of $C-1$ features, but not all of them contribute evenly to classification. The discriminative power of each feature is reflected by the eigenvalue of the corresponding projection vector, and the projection vectors in $W_{Opt}^S$ are generally ordered by decreasing eigenvalue. However, feature evaluation based on eigenvalue comparison is valid only within the same sensing modality. Therefore, we assessed the discriminative power of all the features extracted from each sensor’s data by using feature selection and constructed a multi-modal feature vector from the most representative features of each sensing modality.
There are various ways to evaluate feature contribution; we selected the Laplacian score [32], as it can measure the discriminability of features in a supervised way based on local geometric structures. We first merged all features $y_t^P$ and $y_t^A$ ($t = 1, \ldots, C-1$) from each sensor into a candidate vector $y^{candi} = [y_1^P, \ldots, y_{C-1}^P, y_1^A, \ldots, y_{C-1}^A]$ for the multi-modal feature vector and calculated the Laplacian score of each feature. To do this, we defined a nearest-neighbor graph $G$ with $N$ nodes, one per training sample, and a weight matrix $M^W$ [32]. If the two candidate vectors $y_i^{candi}$ and $y_j^{candi}$ corresponding to the $i$th and $j$th nodes belong to the same class, an edge is placed between them. For linked nodes, $M_{ij}^W = e^{-\|y_i^{candi} - y_j^{candi}\|^2 / m}$, where $m$ is a user parameter set to two; otherwise, $M_{ij}^W = 0$. Letting $f_r = [f_{r1}, f_{r2}, \ldots, f_{rN}]^T$ collect the values of the $r$th feature over the training samples, $D = \mathrm{diag}(M^W \mathbf{1})$, and $\mathbf{1} = [1, \ldots, 1]^T$, the Laplacian score $LS_r$ of the $r$th feature is calculated as [32]
$$LS_r = \frac{\tilde{f}_r^T L \tilde{f}_r}{\tilde{f}_r^T D \tilde{f}_r},$$
where $\tilde{f}_r = f_r - \frac{f_r^T D \mathbf{1}}{\mathbf{1}^T D \mathbf{1}} \mathbf{1}$ and $L = D - M^W$. Features with larger Laplacian scores were selected to construct the multi-modal feature vector $y^{Mul}$, which was used as the input to the classifier for user identification.
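A compact sketch of this scoring step, assuming the candidate feature vectors are stacked row-wise per training sample (function name and vectorization are ours), might be:

```python
import numpy as np

def laplacian_scores(F, y, m=2.0):
    """Supervised Laplacian score of each feature column, following [32].

    F: (N, d) candidate feature matrix (row i is y_i^candi); y: (N,) labels.
    """
    N = len(y)
    same = (y[:, None] == y[None, :]) & ~np.eye(N, dtype=bool)
    d2 = np.sum((F[:, None, :] - F[None, :, :]) ** 2, axis=2)
    MW = np.where(same, np.exp(-d2 / m), 0.0)    # edge weights within each class

    D = np.diag(MW.sum(axis=1))                  # degree matrix D = diag(M^W 1)
    L = D - MW                                   # graph Laplacian
    one = np.ones(N)
    scores = np.empty(F.shape[1])
    for r in range(F.shape[1]):
        f = F[:, r]
        f_t = f - (f @ D @ one) / (one @ D @ one) * one   # D-weighted centering
        scores[r] = (f_t @ L @ f_t) / (f_t @ D @ f_t)
    return scores
```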
The complete procedure of the proposed method is summarized as follows (Figure 3):
  • Data measured from pressure sensors and accelerometers corresponding to continuous walking were divided into individual steps based on the swing phase determined from pressure data.
  • Data normalization was performed for every individual step to have the same time length, and regularization was performed for discriminant analysis.
  • For each type of sensor, single-modal features were extracted using NLDA from the preprocessed data.
  • The Laplacian score of each feature was calculated to evaluate its discriminative power, and a multi-modal feature vector was constructed by sequentially selecting highly-discriminant features.
  • The resultant multi-modal features were employed for user identification.
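Tying the sketches above together, the whole feature-construction pipeline reduces to a few lines; all names, shapes, and the number of selected features are our assumptions:

```python
import numpy as np

# X_p: (N, 1008) pressure vectors, X_a: (N, 378) acceleration vectors, y: (N,) labels,
# as produced by segment_steps/preprocess_step; nlda and laplacian_scores as above.
def build_multimodal_features(X_p, X_a, y, dim=20):
    Wp, Wa = nlda(X_p, y), nlda(X_a, y)          # single-modal NLDA feature spaces
    F = np.hstack([X_p @ Wp, X_a @ Wa])          # candidate multi-modal features
    keep = np.argsort(laplacian_scores(F, y))[::-1][:dim]  # largest scores first
    return F[:, keep], (Wp, Wa, keep)            # features + transform for test data
```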

4. Experimental Results

To evaluate the performance of the proposed method, we measured gait data using the FootLogger insole from 14 adults aged between 20 and 30 years while each participant walked for three minutes. The preprocessing described in Section 2 yielded 2295 individual steps from the 14 subjects. To determine the number of steps required to obtain information that distinguishes each user, we investigated the classification rate using gait samples composed of k consecutive steps, for k ranging from one (k = 1) to three (k = 3). Table 1 presents the total number of gait data samples according to the value of k, together with the numbers of training and test samples: 2295 samples for k = 1, 1144 for k = 2, and 759 for k = 3. For each value of k, we randomly selected 700 samples, of which 42 (three per subject) were used to construct the NLDA feature space, and the remaining 658 were used for testing. To increase statistical confidence, we repeated this procedure 25 times and report the average identification rate. The one-nearest-neighbor rule under the Euclidean distance [36], applied to the single- and multi-modal features, was used as the classifier for user identification.
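Assuming the `nlda` sketch above and arrays `X` (preprocessed feature inputs) and `y` (subject labels), one repetition of this protocol could be sketched as follows:

```python
import numpy as np

def evaluate_once(X, y, n_train_per_class=3, n_total=700, rng=None):
    """One repetition: draw 700 samples, train on 3 per subject, 1-NN on the rest."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(len(y))[:n_total]
    train, test = [], []
    for c in np.unique(y[idx]):
        members = idx[y[idx] == c]                  # already in random order
        train.extend(members[:n_train_per_class])
        test.extend(members[n_train_per_class:])
    train, test = np.asarray(train), np.asarray(test)

    W = nlda(X[train], y[train])                    # feature space from 42 samples
    Ftr, Fte = X[train] @ W, X[test] @ W
    # one-nearest-neighbor rule under the Euclidean distance
    d = np.linalg.norm(Fte[:, None, :] - Ftr[None, :, :], axis=2)
    pred = y[train][d.argmin(axis=1)]
    return np.mean(pred == y[test])

# average over 25 random repetitions, as in the protocol above:
# rate = np.mean([evaluate_once(X, y) for _ in range(25)])
```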
Figure 4 shows the two-dimensional distribution of data samples from individual steps in the input data spaces ($x^P$ and $x^A$) and the multi-modal feature space ($y^{Mul}$) for the 14 subjects. To visualize the high-dimensional data in a plane, we used t-distributed stochastic neighbor embedding [37], which performs nonlinear dimensionality reduction and is widely used in machine learning applications. In the sub-figures, each color represents an individual subject, and the points represent the data samples of individuals. Comparing Figure 4a,b with Figure 4c shows that samples were clustered by subject much more clearly in the multi-modal feature space than in the input data spaces. The clustering improvement achieved by feature extraction was especially prominent for the acceleration data: in the multi-modal feature space, the variance of a subject cluster was much smaller than in the input space of the acceleration data.
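For reference, a minimal visualization sketch using scikit-learn and matplotlib, where `F` holds the multi-modal feature vectors and `y` the subject labels (both assumed from the sketches above), is:

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(F)
plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap="tab20", s=8)  # one color per subject
plt.title("t-SNE embedding of multi-modal gait features")
plt.show()
```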
Figure 5 shows the user identification performance for individual steps. The multi-modal features ($y^{Mul}$) provided better identification performance than the single-modal features ($y^P$ and $y^A$ [38]). Moreover, the identification rate using $y^{Mul}$ increased gradually with the dimension of the feature space but saturated at around 20 dimensions. Hence, sequential feature selection based on the discriminability evaluation was effective at constructing a multi-modal feature vector for user identification. On the other hand, the single-modal features obtained from the pressure data ($y^P$) yielded better identification performance than those obtained from the acceleration data ($y^A$). Hence, the individual gait patterns distinguishing persons were better represented by the distribution of contact points of the soles during walking.
To determine the minimum number of steps necessary to extract gait information for accurate user identification, we evaluated the identification performance when the gait sample was constructed with one (k = 1), two (k = 2), and three (k = 3) consecutive steps. When k = 1, the number of samples was 2295; when k = 2 and k = 3, the numbers of samples were 1144 and 759, respectively. Figure 6 shows the identification rate according to k, where the identification performance improved with k, as expected, for both single- and multi-modal features. Therefore, the more steps a single sample contained, the higher the discriminability of the features. For every k, the multi-modal features provided better identification performance than the single-modal features, reaching above 93% identification accuracy even at the lowest k = 1. This may be attributed to the complementarity between the characteristics of the gait data from different sensors, which produces a synergetic effect providing richer features for user identification even from few available data samples.

5. Discussion

Since gait patterns have characteristics that are unique to each individual, gait pattern analysis can be used as a biometric to identify a person. The contribution of our work is a method for constructing a multi-modal feature space that is effective for user identification from gait data obtained from various wearable sensors. Most existing studies on the walking patterns of individuals for the purpose of user identification have recorded gait videos with cameras and analyzed them using computer vision techniques. However, video-based analysis methods have limitations on data acquisition, such as being restricted to a specific space with an installed camera or requiring an uncrowded space to prevent occlusion. In addition, these methods require the cooperation of the user, who has to walk in front of the camera for a while. Due to these constraints, video-based gait analysis methods have limited applicability outside of specific uses. Meanwhile, gait analysis methods using wearable devices, such as IMU sensors and smart insoles, have also been proposed. However, they have attempted only a basic classification of several types of walking, and some methods still require the cooperation of the user, such as attaching sensors to specific parts of the body.
The proposed method effectively extracted individual gait characteristics from the data measured by the wearable sensors and showed excellent user identification performance with a small amount of computation. In particular, the proposed method used sensors mounted on an insole worn in everyday life; hence, it did not require special cooperation from users for data acquisition. In addition, the data could be easily measured at any time while wearing shoes, allowing the analysis of data accumulated over time. This can improve the reliability of security systems, such as door control, because the accumulated data make it difficult to deceive the system with an impersonated gait pattern in which the walking style is changed instantaneously.
Many methods for classifying data have been developed, including the deep learning-based approaches that have received much attention recently. However, although deep learning methods have shown excellent classification performance in various fields, they require massive datasets for training. In addition, although lightweight deep learning methods [39,40,41] are being studied, their computational burden is still too high for mobile and wearable devices. Therefore, in this paper, we used the NLDA method, a discriminant analysis technique that has shown good performance in the classification of high-dimensional data. NLDA is especially effective when the data dimension is large compared to the number of data samples, since a sufficiently large null space of the within-class scatter matrix is then guaranteed. As the insoles for both feet generate 16 pressure and six acceleration signals in real time, the obtained walking data were high-dimensional, and thus we extracted discriminant features for user identification using NLDA. The proposed gait classification using NLDA can even run on mobile devices without a graphics processing unit. The flexibility of the proposed method with respect to use environments and available devices is a significant advantage not only for biometrics, but also for a wide range of applications, such as behavioral analysis through long-term observation and the diagnosis of neurologic disorders and musculoskeletal diseases.

6. Conclusions

We proposed a method for user identification based on discriminant analysis of gait data measured by multi-modal sensing on a smart insole. As the proposed method uses a wearable device, it can be applied with fewer environmental constraints and a lower computational burden than methods relying on video processing. In addition, as acquiring data through insoles does not limit the activities of the users, our method has high scalability. The proposed method consists of data preprocessing, discriminant analysis for single-modal data, construction of a multi-modal feature vector, and user identification. Single-modal features were extracted using NLDA, and the multi-modal feature vector was constructed by evaluating the discriminative power of each feature based on its Laplacian score. We used a commercial smart insole, FootLogger, for data acquisition. The user identification results on walking data acquired by pressure sensors and accelerometers from 14 adults confirmed that identification using multi-modal features integrating the sensing modalities outperformed identification using single-modal features. Although deep learning methods have shown excellent classification performance in various fields, they require massive datasets for training. In future developments, we will therefore measure walking data from more people and study more advanced user identification techniques based on multi-modal deep neural networks. We will also evaluate user identification for various gait types, such as running and climbing, besides further investigating walking. Furthermore, we will aim to improve the user identification performance by considering data measured in various environments during activities of daily living and by combining our analysis with gait type classification.

Author Contributions

S.-I.C. and S.T.C. designed the experiments and drafted the manuscript. J.M. and H.-C.P. provided useful suggestions and edited the draft. All authors approved the final version of the manuscript.

Funding

The present research was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2018R1A2B6001400) and the Human Resources Program in Energy Technology of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) granted financial resource from the Ministry of Trade, Industry and Energy, Republic of Korea (20174030201740).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pirker, W.; Katzenschlager, R. Gait disorders in adults and the elderly. Wien. Klin. Wochenschr. 2017, 129, 81–95.
  2. Haynes, C.A.; Lockhart, T.E. Evaluation of gait and slip parameters for adults with intellectual disability. J. Biomech. 2012, 45, 2337–2341.
  3. Verghese, J.; Lipton, R.; Hall, C.B.; Kuslansky, G.; Katz, M.J.; Buschke, H. Abnormality of gait as a predictor of non-Alzheimer’s dementia. N. Engl. J. Med. 2002, 347, 1761–1768.
  4. Brandler, T.C.; Wang, C.; Oh-Park, M.; Holtzer, R.; Verghese, J. Depressive symptoms and gait dysfunction in the elderly. Am. J. Geriatr. Psychiatry 2012, 20, 425–432.
  5. Zhang, B.; Jiang, S.; Wei, D.; Marschollek, M.; Zhang, W. State of the art in gait analysis using wearable sensors for healthcare applications. In Proceedings of the 2012 IEEE/ACIS 11th International Conference on Computer and Information Science (ICIS), Shanghai, China, 30 May–1 June 2012; pp. 213–218.
  6. Mendes, J., Jr.; José, J.A.; Vieira, M.E.M.; Pires, M.B.; Stevan, S.L., Jr. Sensor fusion and smart sensor in sports and biomedical applications. Sensors 2016, 16, 1569.
  7. Gouwanda, D.; Senanayake, S.M.N.A. Emerging trends of body-mounted sensors in sports and human gait analysis. In Proceedings of the 4th Kuala Lumpur International Conference on Biomedical Engineering, Kuala Lumpur, Malaysia, 25–28 June 2008; Springer: New York, NY, USA, 2008; pp. 715–718.
  8. Choi, S.I.; Lee, S.S.; Park, H.C.; Kim, H. Gait type classification using smart insole sensors. In Proceedings of the TENCON 2018-2018 IEEE Region 10 Conference, Jeju, Korea, 28–31 October 2018; pp. 1903–1906.
  9. Zhang, Z.; Tran, L.; Yin, X.; Atoum, Y.; Liu, X.; Wan, J.; Wang, N. Gait recognition via disentangled representation learning. In Proceedings of the 2019 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 15–21 June 2019.
  10. Huang, H.; Kuiken, T.A.; Lipschutz, R.D. A strategy for identifying locomotion modes using surface electromyography. IEEE Trans. Biomed. Eng. 2009, 56, 65–73.
  11. Min, S.D.; Kwon, C.K. Step Counts and Posture Monitoring System using Insole Type Textile Capacitive Pressure Sensor for Smart Gait Analysis. J. Korea Soc. Comput. Inf. 2012, 17, 107–114.
  12. Kim, S.Y.; Kwon, G.I. Gravity Removal and Vector Rotation Algorithm for Step counting using a 3-axis MEMS accelerometer. J. Korea Soc. Comput. Inf. 2014, 19, 43–52.
  13. Wu, W.; Dasgupta, S.; Ramirez, E.E.; Peterson, C.; Norman, G.J. Classification accuracies of physical activities using smartphone motion sensors. J. Med. Internet Res. 2012, 14, e130.
  14. Ngo, T.T.; Makihara, Y.; Nagahara, H.; Mukaigawa, Y.; Yagi, Y. Similar gait action recognition using an inertial sensor. Pattern Recognit. 2015, 48, 1289–1301.
  15. Zhang, T.; Venture, G. Individual recognition from gait using feature value method. Cybern. Inf. Technol. 2012, 12, 86–95.
  16. El Achkar, C.M.; Lenoble-Hoskovec, C.; Paraschiv-Ionescu, A.; Major, K.; Büla, C.; Aminian, K. Instrumented shoes for activity classification in the elderly. Gait Posture 2016, 44, 12–17.
  17. Tong, K.; Granat, M.H. A practical gait analysis system using gyroscopes. Med. Eng. Phys. 1999, 21, 87–94.
  18. Yun, X.; Bachmann, E.R.; Moore, H.; Calusdian, J. Self-contained position tracking of human movement using small inertial/magnetic sensor modules. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 2526–2533.
  19. Farah, J.D.; Baddour, N.; Lemaire, E.D. Gait phase detection from thigh kinematics using machine learning techniques. In Proceedings of the 2017 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rochester, MN, USA, 7–10 May 2017; pp. 263–268.
  20. Wahid, F.; Begg, R.K.; Hass, C.J.; Halgamuge, S.; Ackland, D.C. Classification of Parkinson’s disease gait using spatial-temporal gait features. IEEE J. Biomed. Health Inform. 2015, 19, 1794–1802.
  21. Dolatabadi, E.; Mansfield, A.; Patterson, K.K.; Taati, B.; Mihailidis, A. Mixture-model clustering of pathological gait patterns. IEEE J. Biomed. Health Inform. 2016, 21, 1297–1305.
  22. Parkka, J.; Ermes, M.; Korpipaa, P.; Mantyjarvi, J.; Peltola, J.; Korhonen, I. Activity classification using realistic data from wearable sensors. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 119–128.
  23. Manap, H.H.; Tahir, N.M.; Yassin, A.I.M. Statistical analysis of Parkinson disease gait classification using Artificial Neural Network. In Proceedings of the 2011 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Bilbao, Spain, 14–17 December 2011; pp. 60–65.
  24. Taborri, J.; Rossi, S.; Palermo, E.; Patanè, F.; Cappa, P. A novel HMM distributed classifier for the detection of gait phases by means of a wearable inertial sensor network. Sensors 2014, 14, 16212–16234.
  25. Kale, A.; Rajagopalan, A.; Cuntoor, N.; Kruger, V. Gait-based recognition of humans using continuous HMMs. In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 21 May 2002; pp. 336–341.
  26. Panahandeh, G.; Mohammadiha, N.; Leijon, A.; Händel, P. Continuous hidden Markov model for pedestrian activity classification and gait analysis. IEEE Trans. Instrum. Meas. 2013, 62, 1073–1083.
  27. Dehzangi, O.; Taherisadr, M.; ChangalVala, R. IMU-based gait recognition using convolutional neural networks and multi-sensor fusion. Sensors 2017, 17, 2735.
  28. Gadaleta, M.; Rossi, M. IDNet: Smartphone-based gait recognition with convolutional neural networks. Pattern Recognit. 2018, 74, 25–37.
  29. Huang, B.; Chen, M.; Huang, P.; Xu, Y. Gait modeling for human identification. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 4833–4838.
  30. Lee, S.S.; Choi, S.T.; Choi, S.I. Classification of gait type based on deep learning using various sensors with smart insole. Sensors 2019, 19, 1757.
  31. Cevikalp, H.; Neamtu, M.; Wilkes, M.; Barkana, A. Discriminative common vectors for face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 4–13.
  32. He, X.; Cai, D.; Niyogi, P. Laplacian score for feature selection. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2006; pp. 507–514.
  33. Charalambous, C.P. Walking patterns of normal men. In Classic Papers in Orthopaedics; Springer: New York, NY, USA, 2014; pp. 393–395.
  34. Zhou, X.S.; Huang, T.S. Small sample learning during multimedia retrieval using BiasMap. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001.
  35. Fukunaga, K. Introduction to Statistical Pattern Recognition, 2nd ed.; Academic Press Professional, Inc.: San Diego, CA, USA, 1990.
  36. Choi, S.I.; Jeon, H.M.; Jeong, G.M. Data reconstruction using subspace analysis for gas classification. IEEE Sens. J. 2017, 17, 5954–5962.
  37. Maaten, L.V.D.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
  38. Lee, S.S.; Chang, S.H.; Choi, S.I. Gait type classification based on deep learning using smart insole. J. Korean Inst. Commun. Inf. Sci. 2018, 43, 1378–1381.
  39. Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA, 10–15 June 2019.
  40. Gholami, A.; Kwon, K.; Wu, B.; Tai, Z.; Yue, X.; Jin, P.; Zhao, S.; Keutzer, K. SqueezeNext: Hardware-aware neural network design. In Proceedings of the 2018 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, 18–22 June 2018.
  41. Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proceedings of the 2016 International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 2–4 May 2016.
Figure 1. Sensor structure of the smart insole, “FootLogger”.
Figure 2. Preprocessing for gait pattern analysis.
Figure 3. Procedure of the proposed method for user identification.
Figure 4. Distribution of individual step samples in each vector space: (a) input space of pressure data, (b) input space of acceleration data, and (c) multi-modal feature space.
Figure 5. Identification rates for various dimensions of the feature space.
Figure 6. Identification rates for different k.
Table 1. The total number of gait data samples according to the value of k, with the numbers of training and test samples.

k    Total Number of Gait Samples    Number of Training Samples    Number of Test Samples
1    2295                            42                            658
2    1144                            42                            658
3    759                             42                            658
