Indoor Human Action Recognition Based on Dual Kinect V2 and Improved Ensemble Learning Method
Figures

Figure 1. The joint connections in different orientations.
Figure 2. Key ensemble learning methods: (a) the bagging ensemble learning method; (b) the boosting ensemble learning method.
Figure 3. Dual Kinect V2 system deployment plan: (a) the deployment plan model; (b) the deployment in the real-world situation.
Figure 4. Improved communication process for the dual Kinect V2 system.
Figure 5. The overall framework of the system with the adaptive adjustment function.
Figure 6. The signal modulation process and the composite function: (a) the chirp signal after being processed; (b) the composite window function.
Figure 7. Kinect V2 joints.
Figure 8. The iteration process.
Figure 9. The accuracy comparisons of the four HAR tasks.
Abstract
1. Introduction
- (1) We introduce a novel dual Kinect V2 binocular system tailored for HAR in indoor flexible-orientation settings, complemented by a carefully designed identification procedure for HAR. Notably, to counteract the self-occlusion challenge endemic to flexible orientations, we integrate an indoor localization procedure and an adaptive weight adjustment mechanism. The system dynamically modifies its behavior based on real-time localization results, harnessing the dual Kinect V2 system's strengths while mitigating the adverse effects of self-occlusion (a fusion sketch follows this list).
- (2) Introducing the indoor localization module into the adaptive weight adjustment mechanism can itself bring negative impacts, so effective countermeasures are needed. Because the acoustic signals used in the indoor localization process travel along NLOS transmission paths in real-world situations, a novel method based on the fuzzy c-means algorithm is introduced to optimize the Support Vector Machine (SVM), which we treat as a weak classifier. Amalgamating multiple SVMs yields an enhanced AdaBoost strong classifier, proficient at discerning NLOS acoustic signals in dynamic settings and thereby producing a more accurate indoor localization result. The localization process is then completed based on the identification results, which assists the dual Kinect V2 system in carrying out the adaptive weight adjustment mechanism.
- (3) We present a feature extraction method based on skeleton joint data for identifying sitting, standing, raising one's hand, and falling in real settings, and we produce a HAR dataset with flexible orientations. The Random Forest (RF) model is at the crux of this methodology. Addressing its inherent susceptibility to entrapment in local minima and its suboptimal parameter optimization efficiency, both of which compromise classification performance, we propose a bat algorithm-optimized RF that greatly improves classification efficiency. Finally, with the aid of the adaptive weight adjustment mechanism, we achieve high HAR accuracy in indoor flexible-orientation scenarios.
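As a minimal illustration of the adaptive weight idea in contribution (1), the sketch below fuses the same skeleton reported by two Kinect V2 sensors with weights derived from a localization result. The function names, the inverse-distance weighting rule, and the use of the 25-joint Kinect V2 layout are our assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def orientation_weights(subject_pos, kinect_a_pos, kinect_b_pos):
    """Hypothetical weighting rule: trust the closer sensor more, since a
    smaller distance usually means fewer self-occluded joints."""
    d_a = np.linalg.norm(subject_pos - kinect_a_pos)
    d_b = np.linalg.norm(subject_pos - kinect_b_pos)
    w_a = d_b / (d_a + d_b)          # inverse-distance normalization
    return w_a, 1.0 - w_a

def fuse_skeletons(joints_a, joints_b, w_a, w_b):
    """Weighted fusion of two 25x3 Kinect V2 joint arrays."""
    return w_a * joints_a + w_b * joints_b

# Example: subject localized at (2.0, 1.5) by the acoustic module (assumed layout)
w_a, w_b = orientation_weights(np.array([2.0, 1.5]),
                               np.array([0.0, 0.0]),    # Kinect A position
                               np.array([4.0, 0.0]))    # Kinect B position
fused = fuse_skeletons(np.random.rand(25, 3), np.random.rand(25, 3), w_a, w_b)
```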
2. Related Devices and Methods
2.1. Kinect Devices
2.2. Ensemble Learning Method
3. System Setup and Framework
3.1. Dual Kinect V2 System
3.2. Preprocessing and Feature Extraction Method
4. Optimization Strategies of Improved Ensemble Learning Method
4.1. NLOS Acoustic Signal Identification Based on Fuzzy C-Means Algorithm and AdaBoost
- Stage 1: Each acoustic signal feature can be seen as a single spatiotemporal sample. If there are $N$ samples in total, we obtain the training set $\{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i$ is the feature vector and $y_i$ its LOS/NLOS label. In total, 10,000 chirp signal samples are used: 5000 LOS acoustic signals and 5000 NLOS acoustic signals (an illustrative fuzzy c-means sketch follows).
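Section 4.1's fuzzy c-means step can be illustrated with a minimal numpy sketch over such a sample set. The fuzzifier m = 2, the two-cluster setting, and the idea of feeding the resulting memberships to the SVM stage are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns membership matrix U (N x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = Um.T @ X / Um.sum(axis=0)[:, None]     # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))               # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# X stands in for the 10,000 feature vectors (5000 LOS + 5000 NLOS)
X = np.random.rand(10_000, 8)
U, centers = fuzzy_c_means(X)
# The memberships could, e.g., weight the samples of each weak SVM (assumption).
```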
- Stage 2: Before training each weak classifier SVM, the weight of the $i$-th training sample in round $t$ is recorded as $w_{t,i}$, $i = 1, 2, \ldots, N$. The initial weights are set as $w_{1,i} = 1/N$. In the $t$-th iteration, the weights of each weak classifier SVM are first normalized separately, and the adaptive weights are then adjusted based on a dynamic threshold. The algorithm pseudocode can be described as follows (Algorithm 1):
Algorithm 1. Adaptive Weight Adjustment Method for AdaBoost

```
For t = 1:T
{
    /* Normalize the weights of each weak classifier: w_{t,i} <- w_{t,i} / sum_{j=1}^{N} w_{t,j} */
    /* Record the identification result of each weak classifier on sample i as h_t(x_i) */
    /* Calculate the adaptive error rate of the t-th classifier: eps_t = sum_i w_{t,i} * |h_t(x_i) - y_i| */
    For m = 1:M
    {
        /* Find the lowest adaptive error rate among the M candidates: eps_t = min_m eps_{t,m} */
        /* Update the weights: w_{t+1,i} = w_{t,i} * beta_t^(1 - e_i), with beta_t = eps_t / (1 - eps_t) */
    }
    End For
    /* e_i is used as the label: e_i = 1 means a misclassification, e_i = 0 means a correct classification */
}
End For
```
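A compact Python sketch of the loop in Algorithm 1 is given below, using scikit-learn SVMs as the weak classifiers. The candidate pool of M regularization settings, folding the dynamic-threshold step into picking the lowest weighted error, and the label convention y ∈ {0, 1} are our assumptions; this follows the classic AdaBoost update rather than the paper's exact variant.

```python
import numpy as np
from sklearn.svm import SVC

def adaboost_svm(X, y, T=10, M=3):
    """AdaBoost with SVC weak learners; y in {0, 1}. Returns learners and betas."""
    N = len(X)
    w = np.full(N, 1.0 / N)
    learners, betas = [], []
    for t in range(T):
        w /= w.sum()                                    # normalize the weights
        best = None
        for m in range(M):                              # M candidate weak SVMs
            clf = SVC(kernel="rbf", C=10.0 ** (m - 1)).fit(X, y, sample_weight=w * N)
            err = np.sum(w * (clf.predict(X) != y))     # adaptive (weighted) error rate
            if best is None or err < best[0]:
                best = (err, clf)                       # keep the lowest error rate
        eps, clf = best
        eps = np.clip(eps, 1e-10, 0.499)
        beta = eps / (1.0 - eps)
        e = (clf.predict(X) != y).astype(float)         # e_i = 1 on misclassification
        w *= beta ** (1.0 - e)                          # shrink weights of correct samples
        learners.append(clf); betas.append(beta)
    return learners, betas
```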
- Stage 3: The weights are obtained through the previous iterative process, completing the weight determination in the ensemble learning method. The decision function of the advanced AdaBoost classifier can then be written in the classic form:

$$H(x) = \begin{cases} 1, & \sum_{t=1}^{T} \log\!\left(\dfrac{1}{\beta_t}\right) h_t(x) \ge \dfrac{1}{2} \sum_{t=1}^{T} \log\!\left(\dfrac{1}{\beta_t}\right) \\ 0, & \text{otherwise.} \end{cases}$$
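Continuing the hypothetical `adaboost_svm` sketch above, the corresponding strong-classifier vote with weights $\alpha_t = \log(1/\beta_t)$ could look like:

```python
def predict_strong(learners, betas, X):
    """Weighted vote per the decision function H(x); labels are 0/1."""
    alphas = np.log(1.0 / np.asarray(betas))
    votes = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return (votes >= 0.5 * alphas.sum()).astype(int)  # e.g., 1 = NLOS (assumed)
```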
4.2. Bat Algorithm-Optimized RF Method on HAR Task
- Stage 1: Initialization. The maximum number of iterations is set to 200. The number of bats $n$, and the initial position $x_i$ and velocity $v_i$ of each bat, are also set. The loudness $A_i$ and pulse emission rate $r_i$ describe the state of the optimization method in the subsequent iteration and updating process. (A code sketch of the full loop follows Stage 6.)
- Stage 2: Assuming that the current position of a bat is the global optimum, its fitness value is recorded as $F_{gbest}$. The candidate parameters are passed into the RF to update the individual fitness function $F$ and the out-of-bag error $E_{oob}$.
- Stage 3: Each bat compares its fitness value with the local optimum and selects the larger value as its target, recorded as $F_{lbest}$.
- Stage 4: During each iteration, the frequency, loudness, and pulse-rate updates and the iteration results are fed back into the algorithm so as to update the bats' positions and velocities.
- Stage 5: The comparisons of $F$ with $F_{gbest}$ and $F_{lbest}$ are carried out. When the maximum number of iterations is reached or $F_{gbest}$ no longer changes, the optimization process terminates and steps into Stage 6; otherwise, it returns to Stage 4.
- Stage 6: The optimal parameters and $F_{gbest}$ are output according to the stages above. The improved RF model is then built on this basis, leading to better performance on the HAR task in the dual Kinect V2 system.
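A condensed Python sketch of Stages 1–6 is shown below, optimizing two common RF hyperparameters (number of trees and maximum depth) with the out-of-bag score as the fitness. The two-dimensional search space, its bounds, the fixed loudness and pulse rate, and the simplified position/velocity updates are our assumptions for illustration; the paper's fitness definition may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bat_optimize_rf(X, y, n_bats=10, max_iter=200, seed=0):
    """Bat algorithm over (n_estimators, max_depth); fitness = OOB score."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([10.0, 2.0]), np.array([300.0, 20.0])   # bounds (assumed)
    pos = rng.uniform(lo, hi, size=(n_bats, 2))
    vel = np.zeros_like(pos)
    A, r = 0.9, 0.5                                           # loudness, pulse rate

    def fitness(p):
        clf = RandomForestClassifier(n_estimators=int(p[0]), max_depth=int(p[1]),
                                     oob_score=True, bootstrap=True,
                                     random_state=seed).fit(X, y)
        return clf.oob_score_                                 # 1 - OOB error

    fit = np.array([fitness(p) for p in pos])
    best = pos[fit.argmax()].copy()                           # global best position
    for _ in range(max_iter):
        freq = rng.uniform(0.0, 1.0, size=(n_bats, 1))        # frequency update
        vel += (pos - best) * freq
        cand = np.clip(pos + vel, lo, hi)
        walk = rng.random(n_bats) > r                         # local walk near best
        cand[walk] = np.clip(best + 0.01 * rng.normal(size=(walk.sum(), 2)), lo, hi)
        for i in range(n_bats):
            f = fitness(cand[i])
            if f > fit[i] and rng.random() < A:               # accept improvements
                pos[i], fit[i] = cand[i], f
        best = pos[fit.argmax()].copy()
    return best, fit.max()

# Usage sketch: params, score = bat_optimize_rf(X_train, y_train)
```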
5. Experiment and Analysis
6. Conclusions and Future Works
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
NLOS | Non-Line of Sight
TCP | Transmission Control Protocol
IoT | Internet of Things
SDKs | Software Development Kits
HAR | Human Action Recognition
IMU | Inertial Measurement Unit
USRP | Universal Software Radio Peripheral
EOD | Explosive Ordnance Disposal
CNN | Convolutional Neural Network
RNN | Recurrent Neural Network
GCN | Graph Convolutional Neural Network
BERT | Bidirectional Encoder Representations from Transformers
GAN | Generative Adversarial Network
NLP | Natural Language Processing
LOS | Line-of-Sight
SVM | Support Vector Machine
RF | Random Forest
VB | Visual Basic
API | Application Programming Interface
DT | Decision Tree
LFM | Linear Frequency Modulation
OSI | Open Systems Interconnection
UDP | User Datagram Protocol
MEMS | Micro-Electro-Mechanical System
TDoA | Time Difference of Arrival
GPS | Global Positioning System
LBS | Location-Based Services
AWGN | Additive White Gaussian Noise
SVD | Singular Value Decomposition
DoA | Direction of Arrival
YOLO | You Only Look Once
References
- Gao, F.; Fang, W.; Sun, X.; Wu, Z.; Zhao, G.; Li, G.; Li, R.; Fu, L.; Zhang, Q. A novel apple fruit detection and counting methodology based on deep learning and trunk tracking in modern orchard. Comput. Electron. Agric. 2022, 197, 107000. [Google Scholar] [CrossRef]
- Liu, H.; Deng, Y.; Guo, D.; Fang, B.; Sun, F.; Yang, W. An Interactive Perception Method for Warehouse Automation in Smart Cities. IEEE Trans. Ind. Inform. 2021, 17, 830–838. [Google Scholar] [CrossRef]
- Gong, L.; Wang, C. Research on Moving Target Tracking Based on FDRIG Optical Flow. Symmetry 2019, 11, 1122. [Google Scholar] [CrossRef]
- Chilo, N.O.M.; Ccari, L.F.C.; Supo, E.; Espinoza, E.S.; Vidal, Y.S.; Pari, L. Optimal Signal Processing for Steady Control of a Robotic Arm Suppressing Hand Tremors for EOD Applications. IEEE Access 2023, 11, 13163–13178. [Google Scholar] [CrossRef]
- Worrallo, A.G.; Hartley, T. Robust Optical Based Hand Interaction for Virtual Reality. IEEE Trans. Vis. Comput. Graph. 2022, 28, 4186–4197. [Google Scholar] [CrossRef] [PubMed]
- Majumder, S.; Kehtarnavaz, N. Vision and Inertial Sensing Fusion for Human Action Recognition: A Review. IEEE Sens. J. 2021, 21, 2454–2457. [Google Scholar] [CrossRef]
- Ramirez, H.; Velastin, S.A.; Aguayo, P.; Fabregas, E.; Farias, G. Human Activity Recognition by Sequences of Skeleton Features. Sensors 2022, 22, 3991. [Google Scholar] [CrossRef]
- Yu, Z.; Zahid, A.; Taha, A.; Taylor, W.; Kernec, J.L.; Heidari, H.; Imran, M.A.; Abbasi, Q.H. An Intelligent Implementation of Multi-Sensing Data Fusion with Neuromorphic Computing for Human Activity Recognition. IEEE Internet Things J. 2023, 10, 1124–1133. [Google Scholar] [CrossRef]
- Chen, J.; Sun, Y.; Sun, S. Improving Human Activity Recognition Performance by Data Fusion and Feature Engineering. Sensors 2021, 21, 692. [Google Scholar] [CrossRef]
- Ramirez, H.; Velastin, S.A.; Meza, I.; Fabregas, E.; Makris, D.; Farias, G. Fall Detection and Activity Recognition Using Human Skeleton Features. IEEE Access 2021, 9, 33532–33542. [Google Scholar] [CrossRef]
- Issa, M.E.; Helmi, A.M.; Al-Qaness, M.A.A.; Dahou, A.; Abd Elaziz, M.; Damaševičius, R. Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things. Healthcare 2022, 10, 1084. [Google Scholar] [CrossRef]
- Cao, Y.; Xie, R.; Yan, K.; Fang, S.-H.; Wu, H.-C. Novel Dynamic Segmentation for Human-Posture Learning System Using Hidden Logistic Regression. IEEE Signal Process. Lett. 2022, 29, 1487–1491. [Google Scholar] [CrossRef]
- Li, M.; Wei, F.; Li, Y.; Zhang, S.; Xu, G. Three-Dimensional Pose Estimation of Infants Lying Supine Using Data from a Kinect Sensor With Low Training Cost. IEEE Sens. J. 2020, 21, 6904–6913. [Google Scholar] [CrossRef]
- Bhiri, N.; Ameur, S.; Alouani, I.; Mahjoub, M.; Khalifa, A. Hand gesture recognition with focus on leap motion: An overview, real world challenges and future directions. Expert Syst. Appl. 2023, 226, 120125. [Google Scholar] [CrossRef]
- Yuwen, X.; Zhang, S.; Chen, L.; Zhang, H. Improved interpolation with sub-pixel relocation method for strong barrel distortion. Signal Process. 2023, 203, 108795. [Google Scholar] [CrossRef]
- Galván-Ruiz, J.; Travieso-González, C.M.; Pinan-Roescher, A.; Alonso-Hernández, J.B. Robust Identification System for Spanish Sign Language Based on Three-Dimensional Frame Information. Sensors 2023, 23, 481. [Google Scholar] [CrossRef]
- Wei, D.; Chen, L.; Zhao, L.; Zhou, H.; Huang, B. A Vision-Based Measure of Environmental Effects on Inferring Human Intention During Human Robot Interaction. IEEE Sens. J. 2022, 22, 4246–4256. [Google Scholar] [CrossRef]
- Tran, T.; Ruppert, T.; Eigner, G.; Abonyi, J. Assessing human worker performance by pattern mining of Kinect sensor skeleton data. J. Manuf. Syst. 2023, 70, 538–556. [Google Scholar] [CrossRef]
- Tölgyessy, M.; Dekan, M.; Chovanec, Ľ. Skeleton Tracking Accuracy and Precision Evaluation of Kinect V1, Kinect V2, and the Azure Kinect. Appl. Sci. 2021, 11, 5756. [Google Scholar] [CrossRef]
- Mansoor, M.; Amin, R.; Mustafa, Z.; Sengan, S.; Aldabbas, H.; Alharbi, M.T. A machine learning approach for non-invasive fall detection using Kinect. Multimed. Tools Appl. 2022, 81, 15491–15519. [Google Scholar] [CrossRef]
- Kuriakose, B.; Shrestha, R.; Sandnes, F. DeepNAVI: A deep learning based smartphone navigation assistant for people with visual impairments. Expert Syst. Appl. 2023, 212, 118720. [Google Scholar] [CrossRef]
- Moon, S.; Park, Y.; Ko, D.; Suh, L. Multiple Kinect Sensor Fusion for Human Skeleton Tracking Using Kalman Filtering. Int. J. Adv. Robot. Syst. 2016, 13, 1–10. [Google Scholar] [CrossRef]
- Chhetri, S.; Alsadoon, A.; Al-Dala’in, T.; Prasad, P.; Rashid, T.A.; Maag, A. Deep learning for vision-based fall detection system: Enhanced optical dynamic flow. Comput. Intell. 2021, 37, 578–595. [Google Scholar] [CrossRef]
- Apicella, A.; Snidaro, L. Deep Neural Networks for Real-Time Remote Fall Detection. In Proceedings of the International Conference on Pattern Recognition, Virtual, 10–15 January 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 188–201. [Google Scholar]
- Cheng, K.; Zhang, Y.; He, X.; Chen, W.; Cheng, J.; Lu, H. Skeleton-Based Action Recognition with Shift Graph Convolutional Network. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 183–192. [Google Scholar]
- Duan, H.; Wang, J.; Chen, K.; Lin, D. PYSKL: Towards Good Practices for Skeleton Action Recognition. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 7351–7354. [Google Scholar]
- Duan, H.; Wang, J.; Chen, K.; Lin, D. DG-STGCN: Dynamic Spatial-Temporal Modeling for Skeleton-based Action Recognition. arXiv 2022, arXiv:2210.05895. [Google Scholar]
- Ramirez, H.; Velastin, S.A.; Cuellar, S.; Fabregas, E.; Farias, G. BERT for Activity Recognition Using Sequences of Skeleton Features and Data Augmentation with GAN. Sensors 2023, 23, 1400. [Google Scholar] [CrossRef]
- Degardin, B.; Neves, J.; Lopes, V.; Brito, J.; Yaghoubi, E.; Proenca, H. Generative Adversarial Graph Convolutional Networks for Human Action Synthesis. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 1150–1159. [Google Scholar]
- Xu, L.; Song, Z.; Wang, D.; Su, J.; Fang, Z.; Ding, C.; Gan, W.; Yan, Y.; Jin, X.; Yang, X.; et al. ActFormer: A GAN Transformer Framework towards General Action-Conditioned 3D Human Motion Generation. arXiv 2022, arXiv:2203.07706. [Google Scholar]
- Shahroudy, A.; Liu, J.; Ng, T.; Wang, G. NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis. arXiv 2016, arXiv:1604.02808. [Google Scholar]
- Liu, J.; Shahroudy, A.; Perez, M.; Wang, G.; Duan, L.; Kot, A. NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2684–2701. [Google Scholar] [CrossRef]
- Kan, R.; Wang, M.; Zhou, Z.; Zhang, P.; Qiu, H. Acoustic Signal NLOS Identification Method Based on Swarm Intelligence Optimization SVM for Indoor Acoustic Localization. Wirel. Commun. Mob. Comput. 2022, 2022, 5210388. [Google Scholar] [CrossRef]
- Kan, R.; Wang, M.; Liu, X.; Liu, X.; Qiu, H. An Advanced Artificial Fish School Algorithm to Update Decision Tree for NLOS Acoustic Localization Signal Identification with the Dual-Receiving Method. Appl. Sci. 2023, 13, 4012. [Google Scholar] [CrossRef]
- Seifallahi, M.; Mehraban, A.; Galvin, J.; Ghoraani, B. Alzheimer’s Disease Detection Using Comprehensive Analysis of Timed Up and Go Test via Kinect V.2 Camera and Machine Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 1589–1600. [Google Scholar] [CrossRef]
- Li, B.; Zhang, C.; Han, C.; Bai, B. Gesture Recognition Based on Kinect V2 and Leap Motion Data Fusion. Int. J. Pattern Recognit. Artif. Intell. 2019, 33, 1–27. [Google Scholar] [CrossRef]
- Kwolek, B.; Kepski, M. Human fall detection on embedded platform using depth maps and wireless accelerometer. Comput. Methods Programs Biomed. 2014, 117, 489–501. [Google Scholar] [CrossRef]
- Tran, T.; Le, T.; Pham, D.; Hoang, V.; Khong, V.; Tran, Q.; Nguyen, T.; Pham, C. A Multi-Modal Multi-View Dataset for Human Fall Analysis and Preliminary Investigation on Modality. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 1947–1952. [Google Scholar]
- Adhikari, K.; Bouchachia, H.; Nait-Charif, H. Activity Recognition for Indoor Fall Detection Using Convolutional Neural Network. In Proceedings of the 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), Nagoya, Japan, 8–12 May 2017; pp. 81–84. [Google Scholar]
- Liu, C.; Hu, Y.; Li, Y.; Song, S.; Liu, J. PKU-MMD: A large scale benchmark for continuous multi-modal human action understanding. arXiv 2017, arXiv:1703.07475. [Google Scholar]
- Martínez-Villaseñor, L.; Ponce, H.; Brieva, J.; Moya-Albor, E.; Núñez-Martínez, J.; Peñafort-Asturiano, C. UP-Fall Detection Dataset: A Multimodal Approach. Sensors 2019, 19, 1988. [Google Scholar] [CrossRef] [PubMed]
- Hansen, L.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001. [Google Scholar] [CrossRef]
- Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Salim, H.; Alaziz, M.; Abdalla, T. Human Activity Recognition Using the Human Skeleton Provided by Kinect. Iraqi J. Electr. Electron. Eng. 2021, 17, 183–189. [Google Scholar] [CrossRef]
- Abobakr, A.; Hossny, M.; Nahavandi, S. A Skeleton-Free Fall Detection System from Depth Images Using Random Decision Forest. IEEE Syst. J. 2018, 12, 2994–3005. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R. Experiments with a New Boosting Algorithm. In Machine Learning, Proceedings of the Thirteenth International Conference, San Francisco, CA, USA, 3–6 July 1996; ACM: New York, NY, USA, 1996; pp. 148–156. [Google Scholar]
- Huang, X.; Li, Z.; Jin, Y.; Zhang, W. Fair-AdaBoost: Extending AdaBoost method to achieve fair classification. Expert Syst. Appl. 2022, 202, 117240. [Google Scholar] [CrossRef]
- Avidan, S. Spatialboost: Adding Spatial Reasoning to AdaBoost. In Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 386–396. [Google Scholar]
- Zhang, L.; Huang, D.; Wang, X.; Schindelhauer, C.; Wang, Z. Acoustic NLOS Identification Using Acoustic Channel Characteristics for Smartphone Indoor Localization. Sensors 2017, 17, 727. [Google Scholar] [CrossRef]
- Hazra, S.; Pratap, A.; Nandy, A. A Novel Probabilistic Network Model for Estimating Cognitive-Gait Connection Using Multimodal Interface. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 1430–1448. [Google Scholar] [CrossRef]
- Wang, Y.; Wu, Y.; Jung, S.; Hoermann, S.; Yao, S.; Lindeman, R. Enlarging the Usable Hand Tracking Area by Using Multiple Leap Motion Controllers in VR. IEEE Sens. J. 2021, 21, 17947–17961. [Google Scholar] [CrossRef]
- Wang, Y.; Chang, F.; Wu, Y.; Hu, Z.; Li, L.; Li, P.; Lang, P.; Yao, S. Multi-Kinects fusion for full-body tracking in virtual reality-aided assembly simulation. Int. J. Distrib. Sens. Netw. 2022, 18, 1–15. [Google Scholar] [CrossRef]
- Yang, X.; Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
Core Method | NLOS Acoustic Signal Identification Accuracy |
---|---|
Logistic Regression [34,50] | 47.19% |
SVM (Averaging various kernel functions) [33,50] | 61.45% |
LDA [50] | 55.12% |
Decision Tree [34] | 69.35% |
Random Forest | 73.43% |
Original AdaBoost (Consists of SVM) | 91.50% |
Fuzzy C-Means-AdaBoost (Consists of SVM) in this manuscript | 98.13% |
Methods | Average Positioning Error (m) |
---|---|
Our self-made DoA-Kinect V2-based system | 1.50 |
Method from [33] | 0.20 |
Method from [34] | 0.16 |
Method in our manuscript | 0.12 |
Method | Sitting | Standing | Raising One’s Hand | Falling |
---|---|---|---|---|
RF | 77.52% | 77.54% | 79.33% | 80.19% |
Bat-RF (part of our methods) | 88.90% | 87.93% | 80.95% | 88.02% |
AdaBoost | 85.82% | 89.79% | 82.56% | 91.80% |
c-means-AdaBoost (part of our methods) | 87.99% | 91.12% | 83.11% | 93.45% |
CNN | 86.42% | 90.12% | 74.79% | 85.43% |
GCN | 92.39% | 89.98% | 84.69% | 94.23% |
Temporal shift GCN | 95.39% | 94.19% | 88.23% | 96.16% |
GCN-CNN | 96.56% | 97.98% | 89.98% | 95.15% |
Kinetic-GAN | 90.15% | 94.18% | 80.17% | 92.15% |
BERT-GAN | 80.99% | 88.23% | 82.29% | 90.16% |
MotionBERT | 85.67% | 91.25% | 83.33% | 95.48% |
Method | Sitting | Standing | Raising One’s Hand | Falling |
---|---|---|---|---|
KNN | 61.55% | 98.15% | 59.65% | 81.25% |
SVM | 73.45% | 96.10% | 66.70% | 79.85% |
RF | 78.65% | 99.25% | 73.70% | 85.95% |
MLP | 80.60% | 90.40% | 80.90% | 82.50% |
CNN | 86.10% | 99.45% | 87.95% | 95.75% |
AdaBoost | 82.85% | 98.95% | 83.00% | 90.80% |
RCN | 85.55% | 98.20% | 82.35% | 91.80% |
Temporal shift GCN | 90.15% | 94.72% | 84.10% | 89.40% |
DG-STGCN | 92.55% | 93.65% | 83.60% | 90.15% |
GCN-CNN | 90.05% | 99.15% | 81.20% | 97.25% |
BERT-GAN | 89.95% | 95.95% | 82.65% | 96.75% |
Kinetic-GAN | 92.15% | 98.95% | 89.25% | 98.15% |
MotionBERT | 94.70% | 99.20% | 88.10% | 97.25% |
Our Method | 91.65% | 98.75% | 89.90% | 97.70% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).