Exploring Gait Recognition in Wild Nighttime Scenes
<p>Figure 1. Examples of existing mainstream datasets. (<b>a</b>) CASIA-B. (<b>b</b>) OU-MVLP. (<b>c</b>) GREW. (<b>d</b>) Gait3D.</p>
<p>Figure 2. Examples of the GaitDN dataset (with key information obscured). (<b>a</b>) Daytime data. (<b>b</b>) Nighttime data.</p>
<p>Figure 3. Collected daytime and nighttime data with corresponding human silhouettes. (<b>a</b>,<b>b</b>) show data collected during the daytime, where clear human silhouettes can be successfully extracted. (<b>c</b>,<b>d</b>) show data collected during the nighttime, where the silhouettes are either unidentifiable or incomplete.</p>
<p>Figure 4. Detailed structure of (<b>a</b>) Self-Attention Graph Convolution (SA-GC) and (<b>b</b>) Multi-Scale Temporal Convolutional Network (MS-TCN).</p>
<p>Figure 5. Examples of pedestrians with varying walking speeds in GaitDN. (<b>a</b>–<b>c</b>) illustrate the process of a pedestrian taking a step, from the moment the left foot leaves the ground to the moment the right foot leaves the ground in the subsequent step. (<b>a</b>) The pedestrian takes one step in 12 frames. (<b>b</b>) The pedestrian takes one step in 16 frames. (<b>c</b>) The pedestrian takes one step in 18 frames.</p>
<p>Figure 6. The framework of GaitSAT.</p>
<p>Figure 7. Examples of pedestrian sequences in extreme conditions (with key information obscured). For pedestrian (<b>a</b>), the two sequences show notable variations in lighting direction and color, and the clothing blends closely with the background, making feature extraction more challenging. For pedestrian (<b>b</b>), the sequences reveal changes in lighting color and intensity, as well as significant differences in clarity and camera angle. For pedestrian (<b>c</b>), the background includes strong lighting variations and dynamic changes caused by moving vehicles, posing additional challenges for accurate gait recognition.</p>
Abstract
1. Introduction
- To address the limitations of existing research on nighttime gait recognition, we constructed the first outdoor gait dataset covering both daytime and nighttime data, named Gait Recognition of Day and Night (GaitDN). This dataset supports gait recognition research and applications, particularly in low-light nighttime scenarios.
- A novel gait recognition framework, GaitSAT, is introduced, which is particularly suited to gait recognition in complex nighttime scenarios. GaitSAT not only enhances the model’s adaptability to low-light outdoor scenes but also provides a flexible and generalizable framework for future research.
- Experimental results demonstrate that GaitSAT achieves state-of-the-art recognition accuracy and generalization performance among existing skeleton-based gait recognition methods in nighttime outdoor scenarios. These findings substantiate the effectiveness of the proposed method in practical applications and showcase the potential of GaitSAT in addressing more complex recognition scenarios.
2. Related Works
2.1. Gait Recognition Datasets
2.2. Gait Recognition Methods
3. GaitSAT Framework
3.1. Spatial Feature Extraction
3.2. Multi-Scale Temporal Rhythm Perception Module
3.3. Network Structure
4. The GaitDN Dataset
5. Results and Discussion
5.1. Experimental Settings
5.2. Ablation Study
5.2.1. Analysis of Input Feature Types
5.2.2. Analysis of the Spatial Feature Extraction Module
5.2.3. Analysis of Multi-Scale Temporal Rhythm Perception Module
5.3. Performance Comparison with State-of-the-Art Methods
- GaitGraph is a representative pose-based method that first introduced 2D human poses into gait recognition. It employs graph convolutional networks [38] for spatiotemporal modeling and uses supervised contrastive loss for training.
- GaitTR pioneers the incorporation of spatial Transformers in gait recognition, combining them with temporal convolutional networks to extract gait features from skeletons.
- GPGait proposes a series of human-oriented methods for preprocessing human poses, aiming to obtain unified and rich representations. It also utilizes mask design to achieve effective graph partitioning and extraction of local–global feature relationships.
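As background for these skeleton-based baselines, the spatial graph convolution they build on can be sketched in a few lines. The joint chain, edge list, and channel sizes below are illustrative assumptions, not the configuration of any of the methods above:

```python
import numpy as np

def normalized_adjacency(edges, num_joints):
    """Symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2,
    as used by ST-GCN-style spatial layers."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def graph_conv(x, A_hat, W):
    """One spatial graph convolution: aggregate each joint's neighbors,
    then mix channels. x: (T, V, C_in), W: (C_in, C_out)."""
    return np.einsum("vu,tuc->tvc", A_hat, x) @ W

# Toy 5-joint chain (hypothetical skeleton), 8 frames of 2D coordinates.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
A_hat = normalized_adjacency(edges, 5)
x = np.random.randn(8, 5, 2)
W = np.random.randn(2, 16)
y = graph_conv(x, A_hat, W)
print(y.shape)  # (8, 5, 16)
```

Stacking such layers with temporal convolutions between them yields the spatiotemporal modeling these pose-based methods share.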
5.4. Analysis of Generalization Performance
- When the target dataset is the daytime data from GaitDN, GaitSAT achieved the best performance when GREW and Gait3D were used as source datasets. The Rank-1 accuracies for GREW → GaitDN_Day (where “→” indicates the cross-domain direction from source to target dataset) and Gait3D → GaitDN_Day were 88.89% and 86.78%, respectively. However, when CASIA-B, OU-MVLP, and GaitDN were used as source datasets, the cross-domain performance was slightly inferior to GPGait.
- When the target dataset is the nighttime data from GaitDN, GaitSAT demonstrated the strongest cross-domain performance across all source datasets. Notably, the Rank-1 accuracies for GREW → GaitDN_Night and Gait3D → GaitDN_Night reached 71.67% and 70.00%, respectively. These results showcase GaitSAT’s transferability on nighttime data: regardless of the source dataset, the trained model maintains strong performance on nighttime data, indicating a robust domain adaptation capability particularly suited to nighttime gait recognition.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
SA-GC | Self-Attention Graph Convolution
MS-TCN | Multi-Scale Temporal Convolutional Network
References
- Yu, S.; Tan, D.; Tan, T. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, 20–24 August 2006; Volume 4, pp. 441–444. [Google Scholar]
- Takemura, N.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Trans. Comput. Vis. Appl. 2018, 10, 4. [Google Scholar] [CrossRef]
- Zheng, J.; Liu, X.; Liu, W.; He, L.; Yan, C.; Mei, T. Gait recognition in the wild with dense 3D representations and a benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20228–20237. [Google Scholar]
- Zhu, Z.; Guo, X.; Yang, T.; Huang, J.; Deng, J.; Huang, G.; Du, D.; Lu, J.; Zhou, J. Gait recognition in the wild: A benchmark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 14789–14799. [Google Scholar]
- Li, W.; Hou, S.; Zhang, C.; Cao, C.; Liu, X.; Huang, Y.; Zhao, Y. An in-depth exploration of person re-identification and gait recognition in cloth-changing conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 13824–13833. [Google Scholar]
- Sepas-Moghaddam, A.; Etemad, A. Deep gait recognition: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 264–284. [Google Scholar] [CrossRef] [PubMed]
- Kukreja, V.; Kumar, D.; Kaur, A. Deep learning in Human Gait Recognition: An Overview. In Proceedings of the 2021 International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 4–5 March 2021; pp. 9–13. [Google Scholar]
- Lea, C.; Flynn, M.D.; Vidal, R.; Reiter, A.; Hager, G.D. Temporal convolutional networks for action segmentation and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 156–165. [Google Scholar]
- Shiraga, K.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. Geinet: View-invariant gait recognition using a convolutional neural network. In Proceedings of the 2016 International Conference on Biometrics (ICB), Halmstad, Sweden, 13–16 June 2016; pp. 1–8. [Google Scholar]
- Wu, Z.; Huang, Y.; Wang, L.; Wang, X.; Tan, T. A comprehensive study on cross-view gait based human identification with deep cnns. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 209–226. [Google Scholar] [CrossRef] [PubMed]
- Song, Y.F.; Zhang, Z.; Shan, C.; Wang, L. Stronger, faster and more explainable: A graph convolutional baseline for skeleton-based action recognition. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1625–1633. [Google Scholar]
- Tan, D.; Huang, K.; Yu, S.; Tan, T. Efficient night gait recognition based on template matching. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 3, pp. 1000–1003. [Google Scholar]
- Iwama, H.; Okumura, M.; Makihara, Y.; Yagi, Y. The OU-ISIR gait database comprising the large population dataset and performance evaluation of gait recognition. IEEE Trans. Inf. Forensics Secur. 2012, 7, 1511–1521. [Google Scholar] [CrossRef]
- An, W.; Yu, S.; Makihara, Y.; Wu, X.; Xu, C.; Yu, Y.; Liao, R.; Yagi, Y. Performance evaluation of model-based gait on multi-view very large population database with pose sequences. IEEE Trans. Biom. Behav. Identity Sci. 2020, 2, 421–430. [Google Scholar] [CrossRef]
- Tan, D.; Huang, K.; Yu, S.; Tan, T. Recognizing night walkers based on one pseudoshape representation of gait. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
- DeCann, B.; Ross, A. Gait curves for human recognition, backpack detection, and silhouette correction in a nighttime environment. In Proceedings of the Biometric Technology for Human Identification VII, Orlando, FL, USA, 5–6 April 2010; Volume 7667, pp. 248–260. [Google Scholar]
- Liu, Z.; Malave, L.; Sarkar, S. Studies on silhouette quality and gait recognition. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; Volume 2, p. II. [Google Scholar]
- Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; Hoi, S.C. Deep learning for person re-identification: A survey and outlook. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 2872–2893. [Google Scholar] [CrossRef] [PubMed]
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep learning for computer vision: A brief review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [Google Scholar] [CrossRef] [PubMed]
- Fan, C.; Peng, Y.; Cao, C.; Liu, X.; Hou, S.; Chi, J.; Huang, Y.; Li, Q.; He, Z. Gaitpart: Temporal part-based model for gait recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14225–14233. [Google Scholar]
- Huang, X.; Zhu, D.; Wang, H.; Wang, X.; Yang, B.; He, B.; Liu, W.; Feng, B. Context-sensitive temporal feature learning for gait recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 12909–12918. [Google Scholar]
- Lin, B.; Zhang, S.; Wang, M.; Li, L.; Yu, X. Gaitgl: Learning discriminative global-local feature representations for gait recognition. arXiv 2022, arXiv:2208.01380. [Google Scholar]
- Hou, S.; Cao, C.; Liu, X.; Huang, Y. Gait lateral network: Learning discriminative and compact representations for gait recognition. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 382–398. [Google Scholar]
- Chao, H.; He, Y.; Zhang, J.; Feng, J. Gaitset: Regarding gait as a set for cross-view gait recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8126–8133. [Google Scholar]
- Liao, R.; Yu, S.; An, W.; Huang, Y. A model-based gait recognition method with body pose and human prior knowledge. Pattern Recognit. 2020, 98, 107069. [Google Scholar] [CrossRef]
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5693–5703. [Google Scholar]
- Teepe, T.; Khan, A.; Gilg, J.; Herzog, F.; Hörmann, S.; Rigoll, G. Gaitgraph: Graph convolutional network for skeleton-based gait recognition. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 2314–2318. [Google Scholar]
- Seong, S.; Choi, J. Semantic segmentation of urban buildings using a high-resolution network (HRNet) with channel and spatial attention gates. Remote Sens. 2021, 13, 3087. [Google Scholar] [CrossRef]
- Fu, Y.; Meng, S.; Hou, S.; Hu, X.; Huang, Y. Gpgait: Generalized pose-based gait recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 19595–19604. [Google Scholar]
- Pinyoanuntapong, E.; Ali, A.; Wang, P.; Lee, M.; Chen, C. Gaitmixer: Skeleton-based gait representation learning via wide-spectrum multi-axial mixer. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–9 June 2023; pp. 1–5. [Google Scholar]
- Chi, H.G.; Ha, M.H.; Chi, S.; Lee, S.W.; Huang, Q.; Ramani, K. Infogcn: Representation learning for human skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20186–20196. [Google Scholar]
- Liu, Z.; Zhang, H.; Chen, Z.; Wang, Z.; Ouyang, W. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 143–152. [Google Scholar]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- He, L.; Liao, X.; Liu, W.; Liu, X.; Cheng, P.; Mei, T. Fastreid: A pytorch toolbox for general instance re-identification. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 9664–9667. [Google Scholar]
- Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
- Cheng, K.; Zhang, Y.; He, X.; Chen, W.; Cheng, J.; Lu, H. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 183–192. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; Volume 30. [Google Scholar]
- Zhang, S.; Tong, H.; Xu, J.; Maciejewski, R. Graph convolutional networks: A comprehensive review. Comput. Soc. Netw. 2019, 6, 1–23. [Google Scholar] [CrossRef] [PubMed]
- Zhu, H.; Zheng, Z.; Nevatia, R. Gait recognition using 3-d human body shape inference. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 909–918. [Google Scholar]
- Zou, S.; Xiong, J.; Fan, C.; Shen, C.; Yu, S.; Tang, J. A multi-stage adaptive feature fusion neural network for multimodal gait recognition. IEEE Trans. Biom. Behav. Identity Sci. 2024, 6, 539–549. [Google Scholar] [CrossRef]
- Marín-Jiménez, M.J.; Castro, F.M.; Delgado-Escaño, R.; Kalogeiton, V.; Guil, N. UGaitNet: Multimodal gait recognition with missing input modalities. IEEE Trans. Inf. Forensics Secur. 2021, 16, 5452–5462. [Google Scholar] [CrossRef]
- Li, G.; Guo, L.; Zhang, R.; Qian, J.; Gao, S. Transgait: Multimodal-based gait recognition with set transformer. Appl. Intell. 2023, 53, 1535–1547. [Google Scholar] [CrossRef]
Dataset | Year | Subject | Seq | Cam | Data Type | Wild | Daytime Data | Nighttime Data |
---|---|---|---|---|---|---|---|---|
CASIA-A | 2003 | 20 | 240 | 3 | RGB, Silh. | × | × | × |
CASIA Infrared | 2006 | 153 | 1530 | 1 | Infrared, Silh. | × | × | ✓ |
CASIA-B | 2006 | 124 | 13,640 | 11 | RGB, Silh. | × | × | × |
OU-ISIR Speed | 2010 | 34 | 306 | 1 | Silh. | × | × | × |
OU-ISIR-LP | 2012 | 4007 | 31,368 | 2 | Silh. | × | × | × |
OU-LP Bag | 2018 | 62,528 | 187,584 | 1 | Silh. | × | × | × |
OU-MVLP | 2018 | 10,307 | 288,596 | 14 | Silh. | × | ✓ | × |
OU-MVLP Pose | 2020 | 10,307 | 288,596 | 14 | 2D Pose | × | ✓ | × |
GREW | 2021 | 26,345 | 128,671 | 882 | Silh., 2D/3D Pose, Flow | ✓ | ✓ | × |
Gait3D | 2022 | 4000 | 25,309 | 39 | Silh., 2D/3D Pose, 3D Mesh&SMPL | ✓ | ✓ | × |
GaitDN | - | 1009 | 3300 | 3 | 2D Pose | ✓ | ✓ | ✓ |
Dataset | Batch Size | Iterations | Spatial Feature Extractor Output Channels | Number of SAT Units | Multi-Scale Temporal Rhythm Perception Module Output Channels
---|---|---|---|---|---
GaitDN | (32, 4) | 30 k | (64, 64, 128) | 2 | (128, 256) |
CASIA-B | (4, 32) | 40 k | (64, 64, 128) | 1 | (256) |
OUMVLP | (32, 16) | 150 k | (64, 128, 128) | 3 | (128, 256, 256) |
GREW | (32, 8) | 200 k | (64, 128, 128) | 3 | (128, 256, 256) |
Gait3D | (32, 4) | 60 k | (64, 64, 128) | 2 | (128, 256) |
Joints | Bones | CASIA-B | Gait3D | GaitDN
---|---|---|---|---
✓ | | 73.56 | 19.30 | 72.13
| ✓ | 72.44 | 17.80 | 80.95
✓ | ✓ | 76.32 | 22.90 | 86.24
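The joint and bone inputs compared above are related by a simple transformation: bone features are commonly derived as coordinate differences between connected joints. A minimal sketch, assuming a hypothetical five-joint chain:

```python
import numpy as np

def bones_from_joints(joints, edges):
    """Derive bone vectors as coordinate differences between connected
    joints. joints: (T, V, C); returns (T, len(edges), C)."""
    return np.stack([joints[:, j] - joints[:, i] for i, j in edges], axis=1)

# Hypothetical 5-joint chain over 4 frames of 2D poses.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
joints = np.random.randn(4, 5, 2)
bones = bones_from_joints(joints, edges)
print(bones.shape)  # (4, 4, 2)
```

Feeding both streams gives the model complementary position and orientation cues, consistent with the Joints + Bones setting performing best.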
Block | CASIA-B | Gait3D | GaitDN |
---|---|---|---|
ResGCN [27] | 63.74 | 15.33 | 65.83 |
ST-GCN [35] | 73.60 | 19.89 | 74.33 |
Shift-GCN [36] | 71.26 | 18.44 | 74.49 |
PAGCN [29] | 74.10 | 22.56 | 84.77 |
SA-GC | 76.32 | 22.90 | 86.24 |
Block | Stride | CASIA-B | Gait3D | GaitDN
---|---|---|---|---
TCN [8] | All stride = 1 | 58.32 | 13.65 | 63.67
TCN [8] | All stride = 2 | 56.33 | 13.98 | 62.69
TCN [8] | All stride = 3 | 55.97 | 12.11 | 58.98
TCN [8] | stride = 1, 2, 3 | 59.86 | 14.92 | 54.10
Temporal Bottleneck Block [11] | All stride = 1 | 69.20 | 17.53 | 77.38
Temporal Bottleneck Block [11] | All stride = 2 | 67.56 | 17.04 | 76.52
Temporal Bottleneck Block [11] | All stride = 3 | 63.05 | 15.54 | 74.81
Temporal Bottleneck Block [11] | stride = 1, 2, 3 | 71.97 | 18.67 | 78.96
MS-TCN | All stride = 1 | 73.74 | 22.60 | 85.19
MS-TCN | All stride = 2 | 67.56 | 17.04 | 76.52
MS-TCN | All stride = 3 | 72.41 | 20.90 | 73.77
MS-TCN | stride = 1, 2, 3 | 76.32 | 22.90 | 86.24
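One plausible reading of the mixed-scale setting is a set of parallel temporal branches whose outputs are concatenated channel-wise. The sketch below uses dilations in place of strides so all branch outputs keep the same temporal length and can be concatenated; this is an illustrative assumption, not the paper's MS-TCN implementation:

```python
import numpy as np

def temporal_conv(x, w, dilation):
    """Depthwise temporal convolution with 'same' zero padding.
    x: (T, C); w: (K, C), one length-K kernel per channel."""
    K, _ = w.shape
    pad = dilation * (K - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    T = x.shape[0]
    out = np.zeros_like(x)
    for k in range(K):  # accumulate the K dilated taps
        out += w[k] * xp[k * dilation : k * dilation + T]
    return out

def multi_scale_tcn(x, kernels, dilations=(1, 2, 3)):
    """Run parallel branches at several temporal scales and concatenate
    their outputs along the channel axis."""
    return np.concatenate(
        [temporal_conv(x, w, d) for w, d in zip(kernels, dilations)], axis=1)

x = np.random.randn(16, 8)                  # 16 frames, 8 channels
kernels = [np.random.randn(3, 8) for _ in range(3)]
y = multi_scale_tcn(x, kernels)
print(y.shape)  # (16, 24)
```

Mixing scales this way lets one block respond to both fast and slow gait rhythms, which is consistent with the mixed setting outperforming any single fixed stride in the table.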
Method | CASIA-B NM | CASIA-B BG | CASIA-B CL | CASIA-B Mean | OUMVLP | Gait3D | GREW | GaitDN
---|---|---|---|---|---|---|---|---
GaitGraph | 86.37 | 76.50 | 65.24 | 76.04 | 4.24 | 8.60 | 10.18 | 69.38
GaitTR | 94.72 | 89.29 | 86.65 | 90.22 | 39.77 | 7.20 | 48.58 | 60.66
GPGait | 93.60 | 80.15 | 69.29 | 81.01 | 59.11 | 22.40 | 57.04 | 85.71
GaitSAT | 90.37 | 73.84 | 64.04 | 76.32 | 60.77 | 22.90 | 57.37 | 86.24
Method | Param. | Inference Time |
---|---|---|
GaitGraph | 0.320 | 0.0029 |
GaitTR | 0.512 | 0.0051 |
GPGait | 3.655 | 0.0428 |
GaitSAT | 2.961 | 0.0273 |
Source Dataset | Method | GaitDN_Day | GaitDN_Night
---|---|---|---
CASIA-B | GaitGraph | 21.56 | 6.52
CASIA-B | GaitTR | 21.16 | 18.33
CASIA-B | GPGait | 81.48 | 61.67
CASIA-B | GaitSAT | 78.84 | 61.96
OUMVLP | GaitGraph | 40.52 | 26.08
OUMVLP | GaitTR | 55.55 | 40.10
OUMVLP | GPGait | 80.95 | 56.37
OUMVLP | GaitSAT | 77.78 | 56.67
GREW | GaitGraph | 84.97 | 69.56
GREW | GaitTR | 64.55 | 53.33
GREW | GPGait | 87.83 | 71.56
GREW | GaitSAT | 88.89 | 71.67
Gait3D | GaitGraph | 65.39 | 43.48
Gait3D | GaitTR | 40.74 | 28.33
Gait3D | GPGait | 85.56 | 69.89
Gait3D | GaitSAT | 86.78 | 70.00
GaitDN | GaitGraph | 72.19 | 65.36
GaitDN | GaitTR | 62.93 | 57.44
GaitDN | GPGait | 87.83 | 66.07
GaitDN | GaitSAT | 86.17 | 68.01
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, H.; Gong, W.; Li, Y.; Wu, Y.; Li, K.; Gonzàlez, J. Exploring Gait Recognition in Wild Nighttime Scenes. Appl. Sci. 2025, 15, 350. https://doi.org/10.3390/app15010350