The Classification of Movement in Infants for the Autonomous Monitoring of Neurological Development
Figure 1. An illustration of how the camera is positioned relative to the child.
Figure 2. Two images of the data produced using the 2D markerless pose estimation algorithm in MediaPipe. (a) shows all possible markers; (b) shows a limited set, as not all points were visible to the camera.
Figure 3. The results of the Bi-LSTM network applied to the classification of infants' dexterous movement. (a) Results from the 3-label experiment. (b) Results from the 4-label experiment. The highest values per column are in bold.
Figure 4. The results of the LSTM network applied to the classification of infants' dexterous movement. (a) Results from the 3-label experiment. (b) Results from the 4-label experiment.
Figure 5. The results of the Bi-LSTM-CNN network applied to the classification of infants' dexterous movement. (a) Results from the 3-label experiment. (b) Results from the 4-label experiment.
Figure 6. The results of the convolutional neural network (CNN) applied to the classification of infants' dexterous movement. (a) Results from the 3-label experiment. (b) Results from the 4-label experiment.
Figure 7. Confusion matrix presenting the results of the Bi-LSTM-CNN on the classification of infant position data.
Abstract
1. Introduction
- A novel data set has been created in which infants between 3 and 12 months old were recorded organically, both at rest and when interacting with toys. This captures a wider range of hand movements alongside general positional and postural orientations such as lying down and sitting.
- Multiple deep learning architectures, including LSTMs, Bi-LSTMs and convolutional neural networks (CNNs), as well as the combined Bi-LSTM-CNN, have been used to ascertain which model optimally fits the data.
- The data were labelled in a human-readable way. Therefore, the frequency of specific movements or positions could be calculated. This, for example, allowed for output showing how often an infant used both hands to interact with a toy and whether or not they were capable of independently sitting upright or changing their position.
- The work demonstrates that Google's MediaPipe is capable of accurately tracking the dexterous and positional movements of children when interacting with toys, and that these data can be classified using deep learning architectures.
2. Materials and Methods
- No control of any toy (NC).
- Limited control of a toy with a single hand (LC1H). This was typically when the infant was making contact with a toy but had not gained control of it, e.g., moving the toy by hitting it, or having one hand on the toy whilst it was on the ground.
- Full control with a single hand (FC1H), that is, when an infant was grasping the object and moving it of their own accord for a sustained period of time (approximately three seconds to differentiate from limited control).
- Full control with two hands (FC2H), when the infant had grasped the object with both hands and manipulated it.
- Limited control using two hands (LC2H). This final label was disregarded due to infrequent occurrence, meaning insufficient data were available for training examples.
- Lying on their back, which is typically the most effective way to interact with toys when an infant is unable to sit (position back (PB)).
- Lying on their front, which allowed some relative movement; however, the infants appeared to find it difficult to interact with the toys in this position (position front (PF)).
- In a sitting position, which was typically the easiest position in which to interact with and pick up multiple toys (position sitting (PS)).
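The labelling scheme above lends itself to fixed-length sequence classification. A minimal sketch of how labelled frames might be windowed into training examples follows; the window length, step size, feature layout, and the single-label rule are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical windowing of per-frame pose features into labelled
# training sequences. Frame features and labels are illustrative only.
def make_windows(frames, labels, window=30, step=15):
    """Slice a recording into fixed-length, single-label windows.

    A window is kept only if every frame in it carries the same label,
    so each training example has one unambiguous class (e.g. 'FC1H').
    """
    examples = []
    for start in range(0, len(frames) - window + 1, step):
        chunk_labels = labels[start:start + window]
        if len(set(chunk_labels)) == 1:  # discard mixed-label windows
            examples.append((frames[start:start + window], chunk_labels[0]))
    return examples

# Toy usage: 60 frames of dummy features, first half 'NC', second 'FC1H'.
frames = [[0.0, 0.0]] * 60
labels = ['NC'] * 30 + ['FC1H'] * 30
windows = make_windows(frames, labels)
```

Discarding mixed-label windows is one simple way to avoid ambiguous training examples at transitions between, say, NC and LC1H.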
Deep Learning
3. Results
3.1. Objective Performance: Dexterous Movements
3.2. Objective Performance: Position
3.3. Comparative Performance of Network Architectures
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
MediaPipe Pose Landmarks | |
---|---|---|
0. Nose | 12. Right shoulder | 24. Right hip |
1. Left eye inner | 13. Left elbow | 25. Left knee |
2. Left eye | 14. Right elbow | 26. Right knee |
3. Left eye outer | 15. Left wrist | 27. Left ankle |
4. Right eye inner | 16. Right wrist | 28. Right ankle |
5. Right eye | 17. Left pinky | 29. Left heel |
6. Right eye outer | 18. Right pinky | 30. Right heel |
7. Left ear | 19. Left index | 31. Left foot |
8. Right ear | 20. Right index | 32. Right foot |
9. Mouth left | 21. Left thumb | |
10. Mouth right | 22. Right thumb | |
11. Left shoulder | 23. Left hip |
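The 33 landmarks above correspond to MediaPipe pose indices 0-32. A small helper for turning a frame's landmark coordinates into named values; the data layout, one (x, y) pair per landmark, is an assumption for illustration:

```python
# Names follow the MediaPipe pose landmark table above (indices 0-32).
POSE_LANDMARKS = [
    "nose", "left_eye_inner", "left_eye", "left_eye_outer",
    "right_eye_inner", "right_eye", "right_eye_outer",
    "left_ear", "right_ear", "mouth_left", "mouth_right",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_pinky", "right_pinky",
    "left_index", "right_index", "left_thumb", "right_thumb",
    "left_hip", "right_hip", "left_knee", "right_knee",
    "left_ankle", "right_ankle", "left_heel", "right_heel",
    "left_foot", "right_foot",
]

def landmarks_to_dict(coords):
    """Map a frame's 33 (x, y) pairs to landmark names."""
    assert len(coords) == len(POSE_LANDMARKS)
    return dict(zip(POSE_LANDMARKS, coords))
```

Keeping the index order identical to the table matters: a classifier trained on flattened landmark vectors depends on every frame using the same ordering.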
Network Topology

Bi-LSTM | Bi-LSTM-CNN | CNN | LSTM
---|---|---|---
Sequence Input Layer | Sequence Input Layer | Sequence Input Layer | Sequence Input Layer |
Dropout Layer | Dropout Layer | Dropout Layer | Dropout Layer |
Bi-LSTM layer (200 units) | 1 × 1 Convolutional Layer | 1 × 1 Convolutional Layer | LSTM layer (200 units) |
Dropout Layer | Bi-LSTM layer (200 units) | Dropout Layer | Dropout Layer |
ReLU Layer | Dropout Layer | ReLU Layer | ReLU Layer |
Fully Connected Layer | Flatten Layer | MaxPooling layer | Fully Connected Layer |
Softmax Layer | ReLU Layer | Fully Connected Layer | Softmax Layer |
Output Layer | Fully Connected Layer | Softmax Layer | Output Layer |
 | Softmax Layer | Output Layer |
 | Output Layer | |
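The Bi-LSTM column of the topology table can be sketched in PyTorch. The 200 hidden units per direction come from the table; the framework choice, input size (66 = 33 landmarks x 2 coordinates), dropout rate, and class count are assumptions, not values stated in the paper:

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Illustrative sketch of the Bi-LSTM topology column."""

    def __init__(self, n_features=66, n_classes=3, p_drop=0.2):
        super().__init__()
        self.drop_in = nn.Dropout(p_drop)        # Dropout Layer
        self.bilstm = nn.LSTM(n_features, 200,   # Bi-LSTM layer (200 units)
                              batch_first=True, bidirectional=True)
        self.drop_hidden = nn.Dropout(p_drop)    # Dropout Layer
        self.fc = nn.Linear(2 * 200, n_classes)  # Fully Connected (2x: both directions)

    def forward(self, x):  # x: (batch, time, features)
        x = self.drop_in(x)
        seq, _ = self.bilstm(x)
        h = torch.relu(self.drop_hidden(seq[:, -1, :]))  # ReLU on last time step
        return torch.softmax(self.fc(h), dim=1)          # Softmax Layer

# A batch of 4 sequences, 30 frames each, 66 features per frame.
probs = BiLSTMClassifier()(torch.zeros(4, 30, 66))
```

The other columns differ mainly in swapping the recurrent layer for (or combining it with) a 1x1 convolution, as the table shows.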
Network | Precision 3-Class (%) | Recall 3-Class (%) | F1 Score (%)
---|---|---|---
Bi-LSTM | 68.5 | 67.9 | 68.2
LSTM | 53.4 | 53.6 | 53.5
Bi-LSTM-CNN | 64.9 | 64.2 | 64.5
CNN | 55.3 | 53.1 | 54.2
Network | Precision 4-Class (%) | Recall 4-Class (%) | F1 Score (%)
---|---|---|---
Bi-LSTM | 55.2 | 55.1 | 55.1
LSTM | 53.0 | 53.3 | 53.1
Bi-LSTM-CNN | 50.1 | 51.4 | 50.7
CNN | 46.3 | 45.9 | 46.1
Network | Precision (%) | Recall (%) | F1 Score (%)
---|---|---|---
Bi-LSTM | 83.2 | 82.7 | 82.9
LSTM | 78.2 | 77.9 | 78.0
Bi-LSTM-CNN | 84.6 | 84.5 | 84.5
CNN | 69.0 | 67.8 | 68.4
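As a quick sanity check on the tables above, the reported F1 scores are consistent with the harmonic mean of the reported precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# Examples from the tables above:
#   Bi-LSTM, 3-class dexterous movement: precision 68.5, recall 67.9
#   Bi-LSTM-CNN, position:               precision 84.6, recall 84.5
```

All twelve F1 entries reproduce to one decimal place this way, which suggests the tables report macro-averaged per-class metrics combined per network.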
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Turner, A.; Hayes, S.; Sharkey, D. The Classification of Movement in Infants for the Autonomous Monitoring of Neurological Development. Sensors 2023, 23, 4800. https://doi.org/10.3390/s23104800