Search Results (274)

Search Parameters:
Keywords = gait recognition

21 pages, 1939 KiB  
Article
Multi-Class Classification of Human Activity and Gait Events Using Heterogeneous Sensors
by Tasmiyah Javed, Ali Raza, Hafiz Farhan Maqbool, Saqib Zafar, Juri Taborri and Stefano Rossi
J. Sens. Actuator Netw. 2024, 13(6), 85; https://doi.org/10.3390/jsan13060085 - 10 Dec 2024
Viewed by 694
Abstract
The control of active prostheses and orthoses requires the precise classification of instantaneous human activity and the detection of specific events within each activity. Such classification also helps physiotherapists, orthopedists, and neurologists in kinetic/kinematic analyses of patients' gaits. To address this need, we propose an innovative deep neural network (DNN)-based approach with a two-step hyperparameter optimization scheme for classifying human activity and gait events, specific to different motor activities, using the ENABL3S dataset. The proposed architecture sets a baseline accuracy of 93% with a single hidden layer and improves further as more layers are added; however, the corresponding number of input neurons remains a crucial hyperparameter. We employ a two-step hyperparameter-tuning strategy that first searches for an appropriate number of hidden layers and then carefully modulates the number of neurons within these layers using 10-fold cross-validation. This multi-class classifier significantly outperforms prior machine learning algorithms for both activity and gait event recognition. Notably, the proposed scheme achieves accuracy rates of 98.1% and 99.96% for human activity and gait events per activity, respectively, potentially enabling significant advances in prosthetic/orthotic control, patient care, and the design of rehabilitation programs. Full article
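The two-step tuning scheme described in the abstract can be sketched with scikit-learn. This is a hypothetical re-implementation: the layer and neuron grids, the synthetic stand-in for the ENABL3S features, and the small MLP are all illustrative, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the ENABL3S features (the real dataset is larger).
X, y = make_classification(n_samples=150, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

def cv_accuracy(hidden_layers):
    clf = MLPClassifier(hidden_layer_sizes=hidden_layers, max_iter=300,
                        random_state=0)
    return cross_val_score(clf, X, y, cv=10).mean()  # 10-fold CV, as in the paper

# Step 1: pick the number of hidden layers with a fixed width.
best_depth = max((1, 2, 3), key=lambda d: cv_accuracy((32,) * d))

# Step 2: tune the number of neurons at the chosen depth.
best_width = max((16, 32, 64), key=lambda w: cv_accuracy((w,) * best_depth))

print(best_depth, best_width)
```

Separating the depth search from the width search keeps the grid linear rather than combinatorial, which is the main appeal of the two-step strategy.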
Figures:

Figure 1: An overview of the proposed framework of the DNN-based approach, highlighting the two-phase hyperparameter optimization scheme and its application to multi-activity and multi-gait-event classification.
Figure 2: Three-hidden-layer DeMulHAC architectures for activity and gait event classification.
Figure 3: DNN accuracy for activity recognition in phase 2.
Figure 4: DNN accuracy for gait event detection.
Figure 5: Comparison of DeMulHAC with ML techniques (on the ENABL3S dataset).
11 pages, 1202 KiB  
Article
The Interplay Between Muscular Activity and Pattern Recognition of Electro-Stimulated Haptic Cues During Normal Walking: A Pilot Study
by Yoosun Kim, Sejun Park, Seungtae Yang, Alireza Nasirzadeh and Giuk Lee
Bioengineering 2024, 11(12), 1248; https://doi.org/10.3390/bioengineering11121248 - 9 Dec 2024
Viewed by 476
Abstract
This pilot study explored how muscle activation influences the pattern recognition of tactile cues delivered using electrical stimulation (ES) during each 10% window interval of the normal walking gait cycle (GC). Three healthy adults participated in the experiment. After identifying the appropriate threshold, ES was applied as the haptic cue to the gastrocnemius lateralis (GL) and biceps brachii (BB) of participants walking on a treadmill. The findings revealed variable recognition patterns across participants, with the BB showing more variability during walking due to its minimal activity compared to the actively engaged GL. Dynamic time warping (DTW) was used to assess the similarity between muscle activation and electro-stimulated haptic perception. The DTW distance between electromyography (EMG) signals and muscle recognition patterns was significantly smaller for the GL (4.87 ± 0.21, mean ± SD) than for the BB (8.65 ± 1.36), a 78.6% relative difference, indicating that higher muscle activation was generally associated with more consistent haptic perception. However, individual differences and variations in recognition patterns were observed, suggesting that personal variability influenced the perception outcomes. The study underscores the complexity of human neuromuscular responses to artificial sensory stimuli and suggests a potential link between muscle activity and haptic perception. Full article
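The DTW distance used to compare EMG envelopes with recognition patterns can be computed with a standard dynamic-programming recurrence. The signals below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 50)
emg = np.sin(2 * np.pi * t)               # stand-in EMG envelope
shifted = np.sin(2 * np.pi * (t - 0.05))  # slightly time-warped pattern
print(dtw_distance(emg, shifted))         # small for similar, alignable shapes
```

A small DTW distance means the two curves can be aligned with little residual cost, which is how the study quantifies agreement between muscle activation and perceived stimulation patterns.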
(This article belongs to the Special Issue Robotic Assisted Rehabilitation and Therapy)
Figures:

Figure 1: (Top) Instrument setup for the current study. (Bottom) Participant holding the switch while sensors are attached (left) and electrode pad locations for the biceps brachii (BB) and gastrocnemius lateralis (GL) (right).
Figure 2: (Left) Conceptual illustration of the electrical stimulation (ES) output after the controller's command. (Right) Example of the single biphasic pulse used during ES.
Figure 3: Diagram of threshold identification and pattern recognition.
Figure 4: Normalized electromyography (EMG) signals (mean ± SD) of the BB and GL during normal walking for the three participants of the current study.
Figure 5: DTW distance and distance matrix between EMG signals (purple line) and recognition patterns (green line). The orange line indicates the optimal warping path derived from the DTW algorithm.
15 pages, 1999 KiB  
Article
Multi-Biometric Feature Extraction from Multiple Pose Estimation Algorithms for Cross-View Gait Recognition
by Ausrukona Ray, Md. Zasim Uddin, Kamrul Hasan, Zinat Rahman Melody, Prodip Kumar Sarker and Md Atiqur Rahman Ahad
Sensors 2024, 24(23), 7669; https://doi.org/10.3390/s24237669 - 30 Nov 2024
Viewed by 477
Abstract
Gait recognition is a behavioral biometric technique that identifies individuals based on their unique walking patterns, enabling long-distance identification. Traditional gait recognition methods rely on appearance-based approaches that utilize background-subtracted silhouette sequences to extract gait features. While effective and easy to compute, these methods are susceptible to variations in clothing, carried objects, and illumination changes, compromising the extraction of discriminative features in real-world applications. In contrast, model-based approaches using skeletal key points offer robustness against these covariates. Advances in human pose estimation (HPE) algorithms using convolutional neural networks (CNNs) have facilitated the extraction of skeletal key points, addressing some challenges of model-based approaches. However, the performance of skeleton-based methods still lags behind that of appearance-based approaches. This paper aims to bridge this performance gap by introducing a multi-biometric framework that extracts features from multiple HPE algorithms for gait recognition, employing feature-level fusion (FLF) and decision-level fusion (DLF) through a single-source multi-sample technique. We utilized state-of-the-art HPE algorithms, OpenPose, AlphaPose, and HRNet, to generate diverse skeleton data samples from a single source video. Subsequently, we employed a residual graph convolutional network (ResGCN) to extract features from the generated skeleton data. In the FLF approach, the features extracted by ResGCN from the skeleton samples generated by the multiple HPE algorithms are aggregated point-wise for gait recognition, while in the DLF approach, the decisions of ResGCN on each skeleton sample are integrated by majority voting for the final recognition. Our proposed method demonstrated state-of-the-art skeleton-based cross-view gait recognition performance on the popular CASIA-B dataset. Full article
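The decision-level fusion step reduces to a majority vote over the per-pipeline predictions. A minimal sketch (the subject labels are illustrative):

```python
from collections import Counter

def majority_vote(decisions):
    """decisions: list of predicted subject IDs, one per HPE pipeline."""
    winner, _ = Counter(decisions).most_common(1)[0]
    return winner

# e.g. the OpenPose-, AlphaPose-, and HRNet-based ResGCN branches disagree:
print(majority_vote(["subj_07", "subj_07", "subj_12"]))  # → subj_07
```

With three pipelines a tie is impossible unless all three disagree; in that degenerate case `Counter.most_common` simply returns one of them, and a real system might fall back to the highest-confidence branch instead.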
(This article belongs to the Section Physical Sensors)
Figures:

Figure 1: Example of a gait sequence for a subject (every fifth frame): (a) RGB image sequence, (b) pose sequence with OpenPose, (c) pose sequence with AlphaPose, (d) pose sequence with HRNet. HRNet consistently generates accurate skeletons, while AlphaPose and OpenPose struggle with keypoint detection during self-occlusions, especially at the right shoulder in side views, with OpenPose occasionally losing body segments during leg swings.
Figure 2: Overview of the proposed gait recognition framework. Given a raw RGB video sequence, skeleton key points are extracted using a state-of-the-art pose estimation algorithm and preprocessed to generate input features. FC and ⊕ indicate the fully connected layer and element-wise addition, respectively.
Figure 3: Average recognition rates of DLF and FLF along with the baseline algorithms on the CASIA-B dataset for normal walking (NM), carrying bags (BG), and wearing coats (CL) sequences.
Figure 4: Comparison of the proposed DLF and FLF with the baseline pose estimator algorithms on the CASIA-B dataset: 11 subgraphs, each plotting a probe view angle against all gallery view angles for normal walking (NM) sequences.
Figure 5: The same comparison for carrying bags (BG) sequences.
Figure 6: The same comparison for wearing coats (CL) sequences.
18 pages, 1642 KiB  
Article
Crouch Gait Recognition in the Anatomical Space Using Synthetic Gait Data
by Juan-Carlos Gonzalez-Islas, Omar Arturo Dominguez-Ramirez, Omar Lopez-Ortega and Jonatan Pena Ramirez
Appl. Sci. 2024, 14(22), 10574; https://doi.org/10.3390/app142210574 - 16 Nov 2024
Viewed by 533
Abstract
Crouch gait, also referred to as flexed-knee gait, is an abnormal walking pattern characterized by excessive flexion of the knee, and sometimes also anomalous flexion of the hip and/or ankle, during the stance phase of gait. Because clinical data related to crouch gait are scarce, few studies address this problem from a data-based perspective. Consequently, in this paper we propose a gait recognition strategy using synthetic data obtained with a polynomial-based generator. Furthermore, through this study, we consider datasets that correspond to different levels of crouch gait severity. The classification of the elements of the datasets into the different levels of abnormality is achieved using algorithms such as k-nearest neighbors (KNN) and Naive Bayes (NB), among others. To evaluate the classification performance, we consider different metrics, including accuracy (Acc) and F-measure (FM). The results show that the proposed strategy is able to recognize crouch gait with an accuracy of more than 92%. We therefore believe that this recognition strategy may be useful during the diagnostic phase of crouch gait. Finally, the crouch gait recognition approach introduced here may be extended to identify other gait abnormalities. Full article
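A polynomial-based generator of the kind the abstract describes can be sketched as: fit a polynomial to a reference joint-angle curve, then perturb the coefficients to produce new synthetic samples. The reference knee-flexion curve, polynomial degree, and noise level below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
pct = np.linspace(0, 1, 101)                    # fraction of the gait cycle
knee_ref = 30 * np.sin(np.pi * pct) ** 2        # toy knee-flexion curve (deg)

coeffs = np.polyfit(pct, knee_ref, deg=8)       # polynomial "generator"

def synthetic_sample(scale=0.02):
    # Jitter each coefficient by a small relative amount.
    noisy = coeffs * (1 + scale * rng.standard_normal(coeffs.shape))
    return np.polyval(noisy, pct)

samples = np.stack([synthetic_sample() for _ in range(10)])
print(samples.shape)  # (10, 101)
```

Each severity class could use its own reference curve (e.g. more knee flexion at stance for crouch gait), after which the synthetic samples feed a conventional classifier such as KNN or Naive Bayes.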
(This article belongs to the Section Biomedical Engineering)
Figures:

Figure 1: Framework for crouch gait recognition in the anatomical space using synthetic gait data.
Figure 2: 8-DoF gait kinematics model: (right) skeletal model, (left) open kinematic chain.
Figure 3: Workflow stages of crouch gait recognition in the anatomical space.
Figure 4: Gait joint angles for the 8 movements for the 5 gaits: normal (green), crouch-1 (blue), crouch-2 (red), crouch-3 (black), and crouch-4 (magenta). The limit between the stance and swing phases of the gait cycle is shown in cyan.
Figure 5: Flowchart of the synthetic gait joint angle generator algorithm.
Figure 6: Statistical distribution of the dataset of 5 gait classes for each joint angle.
Figure 7: Joint angles of hip flexo-extension (q_3R), knee flexo-extension (q_4R), and ankle dorsi-plantar flexion (q_5R). First column: joint reference angle (black) and real joint angle (blue); second column: joint reference angle (black) and synthetic joint angle (blue); third column: joint reference angle (black) and dataset average for that joint angle (blue). The limit between the stance and swing phases of the gait cycle is shown in cyan.
15 pages, 4606 KiB  
Article
Lower Limb Motion Recognition Based on sEMG and CNN-TL Fusion Model
by Zhiwei Zhou, Qing Tao, Na Su, Jingxuan Liu, Qingzheng Chen and Bowen Li
Sensors 2024, 24(21), 7087; https://doi.org/10.3390/s24217087 - 4 Nov 2024
Viewed by 676
Abstract
To enhance the classification accuracy of lower limb movements, a fusion recognition model integrating a surface electromyography (sEMG)-based convolutional neural network, transformer encoder, and long short-term memory network (CNN-Transformer-LSTM, CNN-TL) was proposed in this study. By combining these advanced techniques, significant improvements in movement classification were achieved. Firstly, sEMG data were collected from 20 subjects as they performed four distinct gait movements: walking upstairs, walking downstairs, walking on a level surface, and squatting. Subsequently, the gathered sEMG data underwent preprocessing, with features extracted from both the time domain and frequency domain. These features were then used as inputs for the machine learning recognition model. Finally, based on the preprocessed sEMG data, the CNN-TL lower limb action recognition model was constructed. The performance of CNN-TL was then compared with that of the CNN, LSTM, and SVM models. The results demonstrated that the accuracy of the CNN-TL model in lower limb action recognition was 3.76%, 5.92%, and 14.92% higher than that of the CNN-LSTM, CNN, and SVM models, respectively, thereby proving its superior classification performance. An effective scheme for improving lower limb motor function in rehabilitation and assistance devices was thus provided. Full article
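The preprocessing step, sliding-window segmentation followed by hand-crafted features, is a standard sEMG pipeline. The window and step sizes below, and the choice of mean absolute value (MAV) and root mean square (RMS) as time-domain features, are illustrative rather than the authors' exact values.

```python
import numpy as np

def sliding_windows(signal, win=200, step=100):
    """Segment a 1-D signal into overlapping windows of length `win`."""
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def mav(w):  # mean absolute value, per window
    return np.mean(np.abs(w), axis=-1)

def rms(w):  # root mean square, per window
    return np.sqrt(np.mean(w ** 2, axis=-1))

emg = np.random.default_rng(0).standard_normal(1000)  # stand-in sEMG channel
W = sliding_windows(emg)
features = np.column_stack([mav(W), rms(W)])
print(features.shape)  # (9, 2): 9 windows, 2 features each
```

In the paper these per-window features (plus frequency-domain ones) form the input to the CNN-TL and baseline classifiers; each labeled gait movement contributes many overlapping windows.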
(This article belongs to the Section Sensor Networks)
Figures:

Figure 1: Diagram of the experimental scenario.
Figure 2: Elimination of extraneous information signals from sEMG.
Figure 3: Sliding-window segmentation diagram of a single-channel sEMG signal.
Figure 4: 1D-CNN model structure.
Figure 5: LSTM model structure.
Figure 6: Transformer encoder model structure.
Figure 7: CNN-TL overall architecture.
Figure 8: Loss curve of the CNN-TL model.
Figure 9: Performance comparison of the four classification models.
Figure 10: ROC curves for the four classification models.
Figure 11: Confusion matrices for the four models.
14 pages, 3343 KiB  
Article
Development and Assessment of Artificial Intelligence-Empowered Gait Monitoring System Using Single Inertial Sensor
by Jie Zhou, Qian Mao, Fan Yang, Jun Zhang, Menghan Shi and Zilin Hu
Sensors 2024, 24(18), 5998; https://doi.org/10.3390/s24185998 - 16 Sep 2024
Cited by 1 | Viewed by 1509
Abstract
Gait instability is critical in medicine and healthcare, as it is associated with balance disorders and physical impairment. Although numerous wearable gait detection and recognition systems have been designed to monitor users' gait patterns, extracting gait metrics from signal data commonly demands considerable time and effort. This study aims to design an artificial intelligence-empowered and cost-effective gait monitoring system. A pair of intelligent shoes with a single inertial sensor and a smartphone application were developed as a gait monitoring system to detect users' gait cycle, stance phase time, swing phase time, stride length, and foot clearance. We recruited 30 participants (24.09 ± 1.89 years) to collect gait data and used the Vicon motion capture system to verify the accuracy of the gait metrics. The results show that the gait monitoring system estimates the gait metrics accurately: the accuracy of stride length and foot clearance is 96.17% and 92.07%, respectively. The artificial intelligence-empowered gait monitoring system holds promising potential for improving gait analysis and monitoring in the medical and healthcare fields. Full article
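A common first step for IMU-based gait metrics is segmenting gait cycles from the acceleration signal. The sketch below detects cycles as peaks in a synthetic vertical-acceleration trace; it is a hypothetical simplification, the paper's actual pipeline uses learned models (ELM-based, per the figures) rather than simple peak picking.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                          # assumed IMU sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic vertical acceleration: one stride per second plus sensor noise.
acc_z = np.sin(2 * np.pi * 1.0 * t) \
        + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# One peak per stride; `distance` suppresses spurious peaks within a stride.
peaks, _ = find_peaks(acc_z, height=0.5, distance=fs * 0.5)
cycle_times = np.diff(t[peaks])     # duration of each gait cycle
print(round(float(np.mean(cycle_times)), 2))  # ≈ 1.0 s for a 1 Hz cadence
```

Stance/swing split, stride length, and foot clearance then come from further processing within each detected cycle (integration or, as here, regression models).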
(This article belongs to the Section Wearables)
Figures:

Figure 1: The design of the gait monitoring system: (a) smart shoes; (b) application interfaces.
Figure 2: The position of the markers.
Figure 3: ELM topological structure.
Figure 4: Enhanced models for gait spatial parameters.
Figure 5: A gait cycle and lower limb features.
Figure 6: Gait cycle characteristics (X: anterior-posterior direction; Y: medio-lateral direction; Z: vertical direction).
Figure 7: The trajectory of the feet in the X-Y plane.
Figure 8: Normal distribution test and difference analysis: (a) Q-Q plot of stride length; (b) Q-Q plot of foot clearance; (c) difference in stride length; (d) difference in foot clearance. (**** in (c,d) indicates a p-value below 0.0001, i.e., a very large difference between the two sets of data.)
Figure 9: The predicted value of stride length.
Figure 10: The predicted value of foot clearance.
11 pages, 547 KiB  
Article
GaitAE: A Cognitive Model-Based Autoencoding Technique for Gait Recognition
by Rui Li, Huakang Li, Yidan Qiu, Jinchang Ren, Wing W. Y. Ng and Huimin Zhao
Mathematics 2024, 12(17), 2780; https://doi.org/10.3390/math12172780 - 8 Sep 2024
Viewed by 973
Abstract
Gait recognition is a long-distance biometric technique with significant potential for applications in crime prevention, forensic identification, and criminal investigations. Existing gait recognition methods typically introduce specific feature refinement modules on designated models, increasing parameter volume and computational complexity while lacking flexibility. In response to this challenge, we propose a novel framework called GaitAE. GaitAE efficiently learns gait representations from large datasets and reconstructs gait sequences through an autoencoder mechanism, thereby enhancing recognition accuracy and robustness. In addition, we introduce a horizontal occlusion restriction (HOR) strategy, which inserts horizontal blocks into the original input sequences at random positions during training to minimize the impact of confounding factors on recognition performance. The experimental results demonstrate that our method achieves high accuracy and is effective when applied to existing gait recognition techniques. Full article
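The HOR augmentation amounts to zeroing out a random horizontal band of the silhouette frames during training. A minimal sketch, with illustrative band sizes and a single band per sequence (the paper's Figure 3 varies the number and height of occlusions):

```python
import numpy as np

rng = np.random.default_rng(0)

def hor_augment(frames, max_band=8):
    """frames: (T, H, W) binary silhouette sequence; returns an occluded copy."""
    out = frames.copy()
    h = frames.shape[1]
    top = rng.integers(0, h - max_band)       # random vertical position
    band = rng.integers(1, max_band + 1)      # random band height
    out[:, top:top + band, :] = 0             # same occlusion for every frame
    return out

seq = np.ones((30, 64, 44))                   # toy gait sequence (all-white)
occluded = hor_augment(seq)
print(seq.sum() > occluded.sum())             # → True
```

Applying the same band to every frame of a sequence mimics a persistent occluder, forcing the encoder to rely on the remaining body regions.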
(This article belongs to the Special Issue Mathematical Methods for Pattern Recognition)
Figures:

Figure 1: Common low-quality gait sequences.
Figure 2: The GaitAE framework. The components within the red dotted lines represent the primary innovations of GaitAE. X^(n) denotes the nth input sample; h is the sequence encoder extracting frame-level features; G aggregates video features across frames; and H learns a discriminative representation of the training data. G and H together refine the gait features. ⊕ denotes addition.
Figure 3: Horizontal occlusion restriction (HOR). W' and H' are the width and height of the frames in gait sequences; H_1, H_2, H_3, and H_4 control the position and size of the horizontal occlusion. (a) Complete gait silhouette. (b–e) HOR schematics with different numbers of occlusions and occlusion heights.
Figure 4: Image restoration effect of the proposed method: input on the left of the arrow, output on the right.
Figure 5: Square occlusions of side lengths 4, 8, and 16.
Figure 6: Comparison between the original model and the model with GaitAE under the three walking conditions in CASIA-B: NM, BG, and CL.
15 pages, 999 KiB  
Article
Phasor-Based Myoelectric Synergy Features: A Fast Hand-Crafted Feature Extraction Scheme for Boosting Performance in Gait Phase Recognition
by Andrea Tigrini, Rami Mobarak, Alessandro Mengarelli, Rami N. Khushaba, Ali H. Al-Timemy, Federica Verdini, Ennio Gambi, Sandro Fioretti and Laura Burattini
Sensors 2024, 24(17), 5828; https://doi.org/10.3390/s24175828 - 8 Sep 2024
Viewed by 1024
Abstract
Gait phase recognition systems based on surface electromyographic signals (EMGs) are crucial for developing advanced myoelectric control schemes that enhance the interaction between humans and lower limb assistive devices. However, machine learning models used in this context, such as Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), typically experience performance degradation when modeling the gait cycle with more than just stance and swing phases. This study introduces a generalized phasor-based feature extraction approach (PHASOR) that captures spatial myoelectric features to improve the performance of LDA and SVM in gait phase recognition. A publicly available dataset of 40 subjects was used to evaluate PHASOR against state-of-the-art feature sets in a five-phase gait recognition problem. Additionally, fully data-driven deep learning architectures, such as Rocket and Mini-Rocket, were included for comparison. The separability index (SI) and mean semi-principal axis (MSA) analyses showed mean SI and MSA metrics of 7.7 and 0.5, respectively, indicating the proposed approach’s ability to effectively decode gait phases through EMG activity. The SVM classifier demonstrated the highest accuracy of 82% using a five-fold leave-one-trial-out testing approach, outperforming Rocket and Mini-Rocket. This study confirms that in gait phase recognition based on EMG signals, novel and efficient muscle synergy information feature extraction schemes, such as PHASOR, can compete with deep learning approaches that require greater processing time for feature extraction and classification. Full article
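One simplified reading of a phasor-style spatial feature: place each EMG channel's activity (e.g. its RMS) on a unit phasor at an evenly spaced angle and summarize the activation pattern by the resultant vector. This is a hedged sketch of the general idea, not the authors' exact PHASOR formulation, and the channel values are invented.

```python
import numpy as np

def phasor_feature(channel_rms):
    """Magnitude and angle of the resultant of per-channel phasors."""
    k = len(channel_rms)
    angles = 2 * np.pi * np.arange(k) / k       # one angle per electrode
    resultant = np.sum(channel_rms * np.exp(1j * angles))
    return np.abs(resultant), np.angle(resultant)

rms_per_channel = np.array([0.8, 0.2, 0.1, 0.5])  # toy 4-channel frame
mag, ang = phasor_feature(rms_per_channel)
print(round(float(mag), 3))  # → 0.762
```

The appeal of such features is that they encode *which* muscles are active relative to each other in a couple of numbers, cheaply enough to compete with kernel-based pipelines like Rocket on processing time.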
Figures:

Figure 1: Mean SI obtained in testing for the 40 subjects analyzed. The PHASOR feature set obtained the best performance when used with SVM among all feature sets and models employed.
Figure 2: Mean MSA obtained in testing for the 40 subjects analyzed. The PHASOR feature set obtained the best performance when used with RBF-SVM among all feature sets and models employed.
Figure 3: Mean accuracy (ACC) obtained in testing for the 40 subjects analyzed. The PHASOR feature set obtained the best performance when used with SVM.
Figure 4: Mean processing time (ms) for computing the feature sets and producing the classification output in testing. PHASOR, RMS-PHASOR, and WL-PHASOR gave processing times comparable with other hand-crafted feature sets such as HTD and Du, and better computational performance than the TDPSD and TDAR sets for both LDA and SVM classifiers. Rocket and Mini-Rocket showed consistently higher computational demand than the hand-crafted features.
Figure 5: Confusion matrices for Rocket, Mini-Rocket, and PHASOR, averaged across subjects. PHASOR exhibits a dominant principal diagonal with low off-diagonal misclassification rates. Rocket shows the worst performance, while Mini-Rocket performs well but not as well as PHASOR. Overall, the confusion matrices confirm the SI and MSA analyses.
14 pages, 2122 KiB  
Article
Deep Learning-Based Obesity Identification System for Young Adults Using Smartphone Inertial Measurements
by Gou-Sung Degbey, Eunmin Hwang, Jinyoung Park and Sungchul Lee
Int. J. Environ. Res. Public Health 2024, 21(9), 1178; https://doi.org/10.3390/ijerph21091178 - 4 Sep 2024
Viewed by 973
Abstract
Obesity recognition in adolescents is a growing concern. This study presents a deep learning-based obesity identification framework that integrates smartphone inertial measurements with deep learning models. Utilizing data from accelerometers, gyroscopes, and rotation vectors collected via a mobile health application, we analyzed gait patterns for obesity indicators. Our framework employs three deep learning models: a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a hybrid CNN-LSTM model. Trained on data from 138 subjects, including both normal-weight and obese individuals, and tested on an additional 35 subjects, the hybrid model achieved the highest accuracy of 97%, followed by the LSTM model at 96.31% and the CNN model at 95.81%. Despite the promising outcomes, the study has limitations, such as a small sample size and the exclusion of individuals with distorted gait. In future work, we aim to develop more generalized models that accommodate a broader range of gait patterns, including those associated with medical conditions. Full article
Figures:

Figure 1: CDC BMI percentile calculator for children and teens (courtesy of [31]).
Figure 2: Data segmentation process. The orange frame highlights the overlapping area used in data segmentation, with the arrow indicating that this overlap is applied consistently throughout the dataset.
Figure 3: CNN model architecture diagram.
Figure 4: LSTM model architecture diagram.
Figure 5: Hybrid model architecture diagram.
Figure 6: Validation accuracy per model.
20 pages, 3825 KiB  
Article
A Lightweight Pathological Gait Recognition Approach Based on a New Gait Template in Side-View and Improved Attention Mechanism
by Congcong Li, Bin Wang, Yifan Li and Bo Liu
Sensors 2024, 24(17), 5574; https://doi.org/10.3390/s24175574 - 28 Aug 2024
Viewed by 660
Abstract
As people age, abnormal gait recognition becomes a critical problem in the field of healthcare. Currently, some algorithms can classify gaits with different pathologies, but they cannot guarantee high accuracy while keeping the model lightweight. To address these issues, this paper proposes a lightweight network (NSVGT-ICBAM-FACN) based on the new side-view gait template (NSVGT), an improved convolutional block attention module (ICBAM), and transfer learning, which fuses convolutional features containing high-level information with attention features containing semantic information of interest to achieve robust pathological gait recognition. The NSVGT contains different levels of information, such as gait shape, gait dynamics, and the energy distribution at different parts of the body, which integrates and compensates for the strengths and limitations of each feature, making gait characterization more robust. The ICBAM employs parallel concatenation and depthwise separable convolution (DSC). The former strengthens the interaction between features; the latter improves the efficiency of processing gait information. In the classification head, we employ DSC instead of global average pooling. This preserves spatial information and learns the weights of different locations, which solves the problem that corner points and center points in the feature map have the same weight. The classification accuracies of this paper's model on the self-constructed dataset and the GAIT-IST dataset are 98.43% and 98.69%, which are 0.77% and 0.59% higher than those of the SOTA model, respectively. The experiments demonstrate that the method achieves a good balance between lightweight design and performance. Full article
(This article belongs to the Special Issue Multi-Sensor Data Fusion)
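Depthwise separable convolution, which the abstract credits for the model's efficiency, factors a standard convolution into a per-channel spatial filter followed by a 1 × 1 channel mix. A numpy sketch (the shapes and valid padding are illustrative, not this paper's configuration):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C_in, H, W); dw_kernels: (C_in, k, k); pw_weights: (C_out, C_in).
    Depthwise step: one k x k filter per input channel (valid padding).
    Pointwise step: a 1 x 1 convolution mixing channels at each location."""
    c_in, h, w = x.shape
    k = dw_kernels.shape[1]
    oh, ow = h - k + 1, w - k + 1
    dw = np.zeros((c_in, oh, ow))
    for c in range(c_in):
        for i in range(oh):
            for j in range(ow):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise 1x1 conv: linear mix of the depthwise outputs
    return np.tensordot(pw_weights, dw, axes=([1], [0]))

x = np.random.randn(8, 16, 16)
out = depthwise_separable_conv(x, np.random.randn(8, 3, 3), np.random.randn(32, 8))
print(out.shape)  # (32, 14, 14)
```

For these shapes the factored form needs 8·3·3 + 32·8 = 328 weights versus 32·8·3·3 = 2304 for a standard convolution, which is where the "lightweight" benefit comes from.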
Figure 1: The overall architecture of the proposed algorithm.
Figure 2: (a) Image with detection frame; (b) cropped image; (c) binary contour image; (d) human silhouette with the minimum external rectangle.
Figure 3: Width-to-height ratio change curve of a human body contour in a gait sequence.
Figure 4: Structure of the inverted residual block.
Figure 5: Improved convolutional block attention module.
Figure 6: Experimental environment.
Figure 7: Sample image. The first row, from left to right: (a) festinating gait; (b) scissor gait; (c) hemiparetic gait. The second row, from left to right: (d) shuffling gait; (e) normal gait; (f) normal gait in the CASIA-B dataset.
Figure 8: (a) Curve of change in accuracy for each set of experiments; (b) curve of change in loss value for each set of experiments.
Figure 9: Confusion matrix of the model on the self-constructed dataset. (A: festinating gait; B: scissor gait; C: normal gait; D: hemiparetic gait; E: shuffling gait).
Figure 10: Confusion matrix of the model on the GAIT-IST dataset.
16 pages, 1816 KiB  
Article
MFCF-Gait: Small Silhouette-Sensitive Gait Recognition Algorithm Based on Multi-Scale Feature Cross-Fusion
by Chenyang Song, Lijun Yun and Ruoyu Li
Sensors 2024, 24(17), 5500; https://doi.org/10.3390/s24175500 - 24 Aug 2024
Viewed by 912
Abstract
Gait recognition based on gait silhouette profiles is currently a major approach in the field of gait recognition. In previous studies, models typically used gait silhouette images sized at 64 × 64 pixels as input data. However, in practical applications, silhouette images may be smaller than 64 × 64, leading to a loss of detail and significantly reduced model accuracy. To address these challenges, we propose a gait recognition system named Multi-scale Feature Cross-Fusion Gait (MFCF-Gait). At the input stage of the model, we employ super-resolution algorithms to preprocess the data. During this process, we observed that the choice of super-resolution algorithm applied to larger silhouette images also affects training outcomes, with improved super-resolution algorithms contributing to better model performance. In terms of model architecture, we introduce a multi-scale feature cross-fusion network. By integrating low-level feature information from higher-resolution images with high-level feature information from lower-resolution images, the model emphasizes smaller-scale details, thereby improving recognition accuracy for smaller silhouette images. The experimental results on the CASIA-B dataset demonstrate significant improvements. On 64 × 64 silhouette images, the accuracies for the NM, BG, and CL conditions reached 96.49%, 91.42%, and 78.24%, respectively. On 32 × 32 silhouette images, the accuracies were 94.23%, 87.68%, and 71.57%, respectively, showing notable enhancements. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensor-Based Gait Recognition)
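The super-resolution preprocessing this abstract compares (see Figure 6) includes classical interpolation baselines such as bilinear upscaling. A self-contained bilinear upscaler from 32 × 32 to 64 × 64, as a stand-in for those baselines rather than the paper's improved algorithm:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation of a 2-D array (e.g. a gait silhouette)."""
    in_h, in_w = img.shape
    # map each output pixel back to fractional input coordinates
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

silhouette = np.random.rand(32, 32)
upscaled = bilinear_resize(silhouette, 64, 64)
print(upscaled.shape)  # (64, 64)
```

Interpolation recovers resolution but not lost detail, which is why the article pairs the upscaling with a multi-scale fusion network rather than relying on it alone.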
Figure 1: The Takemura method workflow: (a) begin with the original image, (b) trim excess background from the top and bottom, (c) resize the height to 64 pixels and find the horizontal center of the contour image, and (d) crop the image width to 64 pixels.
Figure 2: Some images removed during the preprocessing of the CASIA-B dataset.
Figure 3: The method for calculating recognition rates in gait recognition algorithms.
Figure 4: The overall architecture of the MFCF-Gait network.
Figure 5: Detailed structure of SHPP.
Figure 6: Gait silhouette images processed by different super-resolution algorithms. The first row shows images at a resolution of 64 × 64 pixels; the second row shows images at 32 × 32 pixels. (a) Nearest-neighbor interpolation, (b) bilinear interpolation, (c) bicubic interpolation.
26 pages, 501 KiB  
Article
In-Depth Analysis of GAF-Net: Comparative Fusion Approaches in Video-Based Person Re-Identification
by Moncef Boujou, Rabah Iguernaissi, Lionel Nicod, Djamal Merad and Séverine Dubuisson
Algorithms 2024, 17(8), 352; https://doi.org/10.3390/a17080352 - 11 Aug 2024
Viewed by 1316
Abstract
This study provides an in-depth analysis of GAF-Net, a novel model for video-based person re-identification (Re-ID) that matches individuals across different video sequences. GAF-Net combines appearance-based features with gait-based features derived from skeletal data, offering a new approach that diverges from traditional silhouette-based methods. We thoroughly examine each module of GAF-Net and explore various fusion methods at both the score and feature levels, extending beyond the initial simple concatenation. Comprehensive evaluations on the iLIDS-VID and MARS datasets demonstrate GAF-Net's effectiveness across scenarios. GAF-Net achieves a state-of-the-art 93.2% rank-1 accuracy on iLIDS-VID's long sequences, while the MARS results (86.09% mAP, 89.78% rank-1) reveal challenges with shorter, variable sequences in complex real-world settings. We demonstrate that integrating skeleton-based gait features consistently improves Re-ID performance, particularly with longer, more informative sequences. This research provides crucial insights into multi-modal feature integration in Re-ID tasks, laying a foundation for the advancement of multi-modal biometric systems for diverse computer vision applications. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
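Score-level fusion with a factor λ, of the kind swept in Figure 2 of this article, typically normalises each modality's gallery similarity scores and blends them linearly. A hedged sketch in which the min-max normalisation, the toy scores, and the λ value are all illustrative assumptions:

```python
import numpy as np

def fuse_scores(app_scores, gait_scores, lam):
    """Weighted score-level fusion: min-max normalise each modality's
    similarity scores against the gallery, then blend with lam in [0, 1]."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return lam * norm(app_scores) + (1 - lam) * norm(gait_scores)

appearance = [0.9, 0.2, 0.4]   # similarity of one query to 3 gallery identities
gait       = [0.3, 0.8, 0.5]
fused = fuse_scores(appearance, gait, lam=0.7)
print(int(np.argmax(fused)))   # 0
```

Sweeping λ from 0 (gait only) to 1 (appearance only) and plotting rank-1 accuracy is exactly the kind of experiment the fusion-factor figures report.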
Figure 1: A schematic representation of Improved GAF-Net illustrating its three main modules: the appearance feature module (with various backbones), the gait feature module, and the fusion module.
Figure 2: Impact of the fusion factor (λ, varying from 0 to 1) on the rank-1 accuracy.
Figure 3: Impact of the fusion factor value (α, varying from 0 to 1) on the rank-1 accuracy.
23 pages, 1980 KiB  
Article
GaitSTAR: Spatial–Temporal Attention-Based Feature-Reweighting Architecture for Human Gait Recognition
by Muhammad Bilal, He Jianbiao, Husnain Mushtaq, Muhammad Asim, Gauhar Ali and Mohammed ElAffendi
Mathematics 2024, 12(16), 2458; https://doi.org/10.3390/math12162458 - 8 Aug 2024
Viewed by 884
Abstract
Human gait recognition (HGR) leverages unique gait patterns to identify individuals, but the effectiveness of this technique can be hindered by various factors such as carrying conditions, foot shadows, clothing variations, and changes in viewing angle. Traditional silhouette-based systems often neglect the critical role of instantaneous gait motion, which is essential for distinguishing individuals with similar features. We introduce the Enhanced Gait Feature Extraction Framework (GaitSTAR), a novel method that incorporates dynamic feature weighting through the discriminant analysis of temporal and spatial features within a channel-wise architecture. Key innovations in GaitSTAR include dynamic stride flow representation (DSFR) to address silhouette distortion, a transformer-based feature set transformation (FST) for integrating image-level features into set-level features, and dynamic feature reweighting (DFR) for capturing long-range interactions. DFR enhances contextual understanding and improves detection accuracy by computing attention distributions across channel dimensions. Empirical evaluations show that GaitSTAR achieves impressive accuracies of 98.5%, 98.0%, and 92.7% under the NM, BG, and CL conditions, respectively, with the CASIA-B dataset; 67.3% with the CASIA-C dataset; and 54.21% with the Gait3D dataset. Despite its complexity, GaitSTAR demonstrates a favorable balance between accuracy and computational efficiency, making it a powerful tool for biometric identification based on gait patterns. Full article
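The channel-wise attention idea behind DFR can be pictured as a squeeze-and-softmax reweighting: summarise each channel, convert the summaries into an attention distribution, and rescale the channels by it. This is a simplified stand-in for illustration, not GaitSTAR's exact mechanism:

```python
import numpy as np

def channel_reweight(features):
    """features: (C, T) set-level features (C channels, T frames).
    Squeeze each channel to a scalar descriptor, turn the descriptors
    into a softmax attention distribution, and rescale the channels."""
    desc = features.mean(axis=1)          # (C,) per-channel summary
    e = np.exp(desc - desc.max())         # numerically stable softmax
    attn = e / e.sum()                    # attention over channels
    return features * attn[:, None], attn

feats = np.random.randn(16, 30)
reweighted, attn = channel_reweight(feats)
print(reweighted.shape)  # (16, 30)
```

Because the attention weights sum to one across channels, informative channels are amplified at the expense of uninformative ones, which is the intuition behind computing attention distributions across channel dimensions.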
Figure 1: The pyramid framework of GaitSTAR integrates silhouette edge information and optical flow to capture shape and motion. It uses the Lucas–Kanade algorithm to handle the full dynamic range of pixel values. The framework extracts discriminative features through a scatter matrix and a temporal-attention mechanism, computing a decoding weight vector that enhances the focus on relevant temporal and spatial features. This process generates per-channel decoding weight vectors, leading to a linear projection of reweighted features into a unified channel-wise decoding vector.
Figure 2: Feature selection from instances of CASIA-B. (a) denotes the mask of the targeted area, and the (b) images depict the maximum possible feature area using a discriminative analysis approach.
Figure 3: Illustration of feature set transformation capturing contextual linkages and dependencies. These are processed through a multi-head self-attention mechanism [41] and integrated into an FFN architecture with residual connections for refined feature transformations.
Figure 4: Data flow through a multi-head attention mechanism in transformer models. It details the steps involved in dynamic feature weighting, which expands the feature decoding space and computes attention distributions across key embeddings for each channel. This process improves query–key interactions and enhances expressiveness in both temporal and spatial contexts.
Figure 5: Appearance-feature selection from instances of CASIA-B with different angles of a person carrying a bag.
Figure 6: Gait Energy Images (GEIs), presented in sequence from left to right.
Figure 7: Instances from the CASIA-B (a) and CASIA-C (b) datasets. In panel (a), the images depict the BG, NM, and CL conditions, arranged from left to right across different perspectives. Panel (b) showcases the FW, SW, and BW conditions, presented in sequence from left to right.
Figure 8: ROC curves for two models. The black dashed line represents the baseline performance of a random classifier. The blue curve represents the ROC curve for Model 1 (GaitSTAR), and the red curve represents the ROC curve for Model 2 (MSGG).
16 pages, 4044 KiB  
Article
PerFication: A Person Identifying Technique by Evaluating Gait with 2D LiDAR Data
by Mahmudul Hasan, Md. Kamal Uddin, Ryota Suzuki, Yoshinori Kuno and Yoshinori Kobayashi
Electronics 2024, 13(16), 3137; https://doi.org/10.3390/electronics13163137 - 8 Aug 2024
Viewed by 1164
Abstract
PerFication is a person identification technique that uses a 2D LiDAR sensor, evaluated on the customized dataset KoLaSu (Kobayashi Laboratory of Saitama University). Video-based recognition systems are highly effective and are now at the forefront of research, but they face bottlenecks in difficult settings, and addressing those limitations calls for complementary sensing technologies. Biometric characteristics are highly reliable and valuable for identifying individuals, although most approaches depend on close interaction with the subject. Gait is the walking pattern of an individual. Most research on identifying individuals by their walking patterns uses RGB or RGB-D cameras; only a limited number of studies have utilized LiDAR data. Working with 2D LiDAR imagery for individual tracking and identification excels in situations where video monitoring is ineffective owing to environmental challenges such as disasters, smoke, occlusion, and economic constraints. This study presents an extensive analysis of 2D LiDAR data using a meticulously created dataset and a modified residual neural network. We propose an alternative method of person identification that circumvents the capture limitations of video cameras: the system precisely identifies an individual from ankle-level 2D LiDAR data. With a painstakingly built dataset, strong results, and a break from traditional camera setups, our LiDAR-based detection system offers a unique method for person identification in modern surveillance systems, and our use of 2D sensors demonstrates the cost-effectiveness and durability of LiDAR. Full article
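Before a residual network can consume ankle-level 2D LiDAR scans, the polar returns are typically rasterised into an image (the motion history images of Figure 3 are one such representation). A minimal sketch of a single-scan rasterisation, where the grid size and range cutoff are illustrative assumptions rather than this study's parameters:

```python
import numpy as np

def scan_to_grid(angles, ranges, size=64, max_range=5.0):
    """Rasterise one ankle-level 2D LiDAR scan (polar: angle in radians,
    range in metres) into a size x size occupancy image centred on the sensor."""
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    keep = ranges < max_range  # discard out-of-range / no-return samples
    # map metres in [-max_range, max_range] to pixel indices
    px = ((x[keep] + max_range) / (2 * max_range) * (size - 1)).astype(int)
    py = ((y[keep] + max_range) / (2 * max_range) * (size - 1)).astype(int)
    grid = np.zeros((size, size), dtype=np.uint8)
    grid[py, px] = 1
    return grid

angles = np.linspace(-np.pi, np.pi, 360)        # one full sweep
ranges = np.full(360, 1.5); ranges[180] = 10.0  # one no-return sample
grid = scan_to_grid(angles, ranges)
print(grid.shape)  # (64, 64)
```

Stacking such grids over time (or accumulating them with decay, as in a motion history image) turns a stream of scans into image sequences that standard CNN backbones can classify.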
Figure 1: PerFication: an overview of 2D LiDAR-based estimation.
Figure 2: Person tracking, property estimation, and recognition using LiDAR.
Figure 3: Motion history image.
Figure 4: Person identification experimental setup.
Figure 5: KoLaSU, two persons' data: MHI on top and posture on bottom.
Figure 6: Person identification based on gait.
Figure 7: Cross-validation: gait performance test with cross-data.
Figure 8: Performance analysis of combined data.
Figure 9: Modern studies utilizing cutting-edge equipment [32,33].
16 pages, 3173 KiB  
Article
Subtype-Specific Ligand Binding and Activation Gating in Homomeric and Heteromeric P2X Receptors
by Xenia Brünings, Ralf Schmauder, Ralf Mrowka, Klaus Benndorf and Christian Sattler
Biomolecules 2024, 14(8), 942; https://doi.org/10.3390/biom14080942 - 2 Aug 2024
Viewed by 1233
Abstract
P2X receptors are ATP-activated, non-specific cation channels involved in sensory signalling, inflammation, and certain forms of pain. Investigations of agonist binding and activation are essential for comprehending the fundamental mechanisms of receptor function. This encompasses ligand recognition by the receptor, conformational changes following binding, and subsequent cellular signalling. The ATP-induced activation of P2X receptors is further influenced by the concentration of Mg2+, which forms a complex with ATP. To explore these intricate mechanisms, two new fluorescently labelled ATP derivatives have become commercially available: 2-[DY-547P1]-AHT-ATP (fATP) and 2-[DY-547P1]-AHT-α,βMe-ATP (α,βMe-fATP). We demonstrate a subtype-specific pattern of ligand potency and efficacy on human P2X2, P2X3, and P2X2/3 receptors, with distinct relations between binding and gating. Given the high in vivo concentrations of Mg2+, the complex formed by Mg2+ and ATP emerges as an adequate ligand for P2X receptors. Utilising fluorescent ligands, we observed a Mg2+-dependent reduction in P2X2 receptor activation, while binding remained surprisingly robust. In contrast, P2X3 receptors initially exhibited decreased activation at high Mg2+ concentrations, concomitant with increased binding, while the P2X2/3 heteromer showed a hybrid effect. Hence, our new fluorescent ATP derivatives are powerful tools for further unravelling the mechanisms underlying ligand binding and activation gating in P2X receptors. Full article
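Concentration–activation relationships of the kind fitted in this article (Equation (1), yielding EC50 and the Hill coefficient H) are conventionally described by the Hill equation, I/I_max = c^H / (EC50^H + c^H). A small sketch where the parameter values are illustrative, not the paper's fitted ones:

```python
import numpy as np

def hill(c, ec50, h):
    """Fraction of maximal current at agonist concentration c (same units as ec50)."""
    return c**h / (ec50**h + c**h)

conc = np.array([0.1, 1.0, 10.0, 100.0])  # e.g. µM ATP, values illustrative
print(hill(conc, ec50=1.0, h=1.5))
# at c == EC50 the response is by definition half-maximal:
print(hill(1.0, ec50=1.0, h=1.5))  # 0.5
```

Fitting this function to normalised current amplitudes (for example with a least-squares routine) recovers EC50 as the half-maximal concentration and H as the steepness of the curve, which is how subtype-specific potency differences are quantified.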
Figure 1: Cartoon illustrating the interaction of a heterotrimeric P2X2/3 receptor with the ligands (f)ATP and Mg2+. The extracellular domain contains three principally different sites between the subunit interfaces for (f)ATP binding; here, we show only one binding site as an example. ATP binding results in ion flux. Mg2+ ions can bind in complex with ATP to the orthosteric binding site and, additionally, to an allosteric binding site at the P2X receptor, resulting in activation and modulation of its function, respectively.
Figure 2: Confocal microscopy was used to characterise the binding of fluorescently tagged ligands to HEK 293 cells stably expressing human P2X2 receptors. (a) Structure of fATP and α,βMe-fATP. The dye DY547P1 is attached to the 2-position of the purine ring through an aminohexylthio-linker. (b) Representative confocal images for quantification of fATP and α,βMe-fATP binding, including approximately 200 stably transfected cells/mm2. Specific binding is proven by the lack of signal in non-induced cells and in cells expressing P2X2 receptors in the presence of 100 µM ATP. (c) To measure the binding of the fluorescently labelled ATP to P2X2 receptors in HEK cells, an automated analysis was conducted.
Figure 3: Activation of P2X receptors by different ligands. Representative current recordings for different ligands and concentration–activation relationships from human P2X2 receptors (a), P2X3 receptors (b), and P2X2/3 receptors (c). The maximum amplitude of the current signals was normalised with respect to the maximum current amplitude at 100 µM ATP. Fluorescent ATP derivatives were normalised to 100 µM ATP due to high costs. Means of n = 5–17 cells (±SEM) were fitted with Equation (1) to obtain values for EC50 and H (Table 1). Records are from HEK293 cells stably expressing the respective receptors in the whole-cell configuration at −50 mV.
Figure 4: Concentration–binding and concentration–activation relationships with fATP and α,βMe-fATP for human P2X2 receptors (a), P2X3 receptors (b), and P2X2/3 receptors (c). The figure shows the concentration–binding and concentration–activation relationships and, below, competition assays for the corresponding receptor, which demonstrate the ability of fATP (1 µM for hP2X2 and 0.3 µM for hP2X3 and hP2X2/3) to compete with both ATP and α,βMe-ATP. The reference signal of 1 µM fATP (hP2X2) and 0.3 µM fATP (hP2X3 and hP2X2/3) without competing ligands was used for normalisation. Each receptor exhibits unique binding affinities, with α,βMe-ATP being able to differentiate between hP2X3 and hP2X2 or hP2X2/3. The data points, which indicate a binding signal, are derived from 15–40 images and are presented as mean ± SEM. The concentration–activation and concentration–binding relationships are normalised to their respective maximal values, except for the small or negligible current amplitudes with α,βMe-fATP. * p < 0.05 and *** p < 0.001. ns: non-significant.
Figure 5: The current response to ATP and α,βMe-fATP at human P2X2 receptors. (a) The co-application of α,βMe-fATP (10 µM) with ATP (0.1 µM) elicits a notable increase in the current response. (b) Comparison of the current responses of ATP, α,βMe-ATP, and their co-application with 0.1 µM ATP.
Figure 6: Binding and activation of Mg2+ATP. Mg2+ATP can bind to P2X2, P2X3, and P2X2/3 and activates P2X3 and P2X2/3, but not P2X2. Each panel displays representative current traces with 0.3 µM ATP (black line) and 0.3 µM ATP containing Mg2+ (orange line) for hP2X2 (a), hP2X3 (b), and hP2X2/3 (c). Below, the relationship between binding and activation is depicted for each respective subtype as a function of the Mg2+ concentration. The data points show the binding of fATP (green line) obtained from 15–40 images (mean ± SEM). The ATP response (black line) is shown as mean ± SEM from 5–11 cells.
Figure 7: Time courses of activation (t_on), deactivation (t_off), and desensitisation (t_des) of Mg2+-modulated ATP-induced currents of P2X receptors. Mg2+ changed the time constants only at high Mg2+ levels in human P2X3 receptors (b) and human P2X2/3 receptors (c), but not in human P2X2 receptors (a). *** p < 0.001.