Abstract
Determining and tracking the location of specific points on real objects in space is not an easy task. Today it is typically performed by deep neural networks, and various methods and techniques have competed with one another in recent years, most of them delivering effective and satisfactory results. The success of such solutions comes at the cost of long training and large amounts of training material. This article addresses the question of whether, and how, the choice of tool affects the results obtained. Several approaches to the problem, based on different technologies, are presented, and the material is verified on a selected group of photographs of children in their first weeks of life.
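Purely as an illustration of the kind of keypoint extraction the compared tools perform (and not the authors' own pipeline), a minimal sketch using one off-the-shelf estimator, MediaPipe Pose, on a single photograph might look as follows; the input file name and settings are assumptions.

```python
# Illustrative sketch only: 2D body keypoint extraction from one photograph
# with an off-the-shelf pose estimator (MediaPipe Pose). This is NOT the
# method evaluated in the paper; the file name and settings are hypothetical.
import cv2
import mediapipe as mp

image = cv2.imread("infant_photo.jpg")  # hypothetical input image
if image is None:
    raise FileNotFoundError("infant_photo.jpg not found")

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks is None:
    print("No person detected")
else:
    h, w = image.shape[:2]
    for i, lm in enumerate(results.pose_landmarks.landmark):
        # Landmarks are normalized to [0, 1]; convert to pixel coordinates.
        print(f"keypoint {i}: x={lm.x * w:.1f}, y={lm.y * h:.1f}, "
              f"visibility={lm.visibility:.2f}")
```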
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Mrozek, A. et al. (2022). Comparative Analysis of Selected Methods of Identifying the Newborn’s Skeletal Model. In: Pietka, E., Badura, P., Kawa, J., Wieclawek, W. (eds) Information Technology in Biomedicine. ITIB 2022. Advances in Intelligent Systems and Computing, vol 1429. Springer, Cham. https://doi.org/10.1007/978-3-031-09135-3_28
DOI: https://doi.org/10.1007/978-3-031-09135-3_28
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09134-6
Online ISBN: 978-3-031-09135-3
eBook Packages: Intelligent Technologies and Robotics (R0)