Article

A Non-Contacted Height Measurement Method in Two-Dimensional Space

by Phu Nguyen Trung 1, Nghien Ba Nguyen 1,*, Kien Nguyen Phan 2,*, Ha Pham Van 1, Thao Hoang Van 2, Thien Nguyen 3 and Amir Gandjbakhche 3

1 Faculty of Information Technology, Hanoi University of Industry, No. 298 Cau Dien, Bac Tu Liem, Hanoi 143510, Vietnam
2 School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, No. 1 Dai Co Viet, Hai Ba Trung, Hanoi 100000, Vietnam
3 Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, 49 Convent Drive, Bethesda, MD 20892-4480, USA
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(21), 6796; https://doi.org/10.3390/s24216796
Submission received: 7 August 2024 / Revised: 18 October 2024 / Accepted: 21 October 2024 / Published: 23 October 2024

Abstract
Height is an important health parameter employed across domains, including healthcare, aesthetics, and athletics. Numerous non-contact methods for height measurement exist; however, most are limited to assessing height in an upright posture. This study presents a non-contact approach for measuring human height in 2D space across different postures. The proposed method utilizes computer vision techniques, specifically the MediaPipe library and the YOLOv8 model, to analyze images captured with a smartphone camera. The MediaPipe library identifies and marks joint points on the human body, while the YOLOv8 model facilitates the localization of these points. To determine the actual height of an individual, a multivariate linear regression model was trained using the ratios of distances between the identified joint points. Data from 166 subjects across four distinct postures (standing upright, rotated 45 degrees, rotated 90 degrees, and kneeling) were used to train and validate the model. Results indicate that the proposed method yields height measurements with a minimal error margin of approximately 1.2%. Future research will extend this approach to accommodate additional positions, such as lying down, cross-legged, and bent-legged. Furthermore, the method will be improved to account for various distances and angles of capture, thereby enhancing the flexibility and accuracy of height measurement in diverse contexts.

1. Introduction

An accurate and convenient method for measuring human height is critical: height is an important variable in healthcare for calculating Body Mass Index (BMI) and determining various treatment-related metrics [1]. BMI enables the classification of individuals as overweight, underweight, or of ideal weight [2]. Moreover, BMI is instrumental in population-based studies due to its widespread acceptance in identifying specific body mass categories that may indicate health or social issues. Recent evidence also suggests that particular BMI ranges are associated with moderate and age-related mortality risks [3].
Height measurement is typically performed in an erect standing posture. For non-critical patients, contact methods such as table scales, standing scales, or medical measuring devices are commonly employed. However, for critically ill patients in intensive care units (ICUs), requiring them to move or assume an upright position for height measurement is often impractical [4]. Additionally, severely ill patients are frequently unconscious or incapacitated, complicating accurate height assessments. Therefore, the development of a non-contact height measurement method for critically ill patients is particularly important in the ICU setting [5,6].
Currently, height measurements for ICU patients in hospitals in Vietnam are frequently conducted by nurses; however, the nurse-to-patient ratio is often insufficient. This situation introduces significant challenges in obtaining accurate measurements. Accurate height data are crucial, as they are integral to calculating treatment parameters such as creatinine indices [7,8]. Thus, the implementation of an automatic, non-contact height measurement method represents a critical step toward ensuring the highest possible accuracy for calculating treatment parameters.
Several non-contact methods have been proposed for measuring height in special populations, such as the elderly, hospitalized individuals, bedridden patients, and those with skeletal deformities. A study conducted at Jimma University demonstrated that height estimates derived from linear body measurements, including arm span, knee height, and half-arm span, serve as useful surrogate measures [9]. However, the study was limited by a narrow age range (adults aged 18 to 40 years), which may not adequately represent the broader adult population, especially given the potential decline in height at older ages. Furthermore, Haritosh has investigated the use of facial proportions to estimate body height [10]. This method calculates height from facial images by extracting facial features with convolutional neural networks and predicting height with artificial neural networks. However, the average measurement error is approximately 7.3 cm, which constitutes a significant deviation in height assessment.
A common method for estimating human height from images or videos is skeletal extraction [11]. This approach utilizes computer vision and image-processing methodologies to analyze visual data. The accuracy of this method can be affected by various factors, including camera focal length, angle, and ambient-lighting conditions. To enhance the precision of height measurements, we propose a study employing MediaPipe to extract skeletal point coordinates from images capturing both a person and a reference object, a black cardboard of fixed dimensions. These coordinates, represented in a two-dimensional space as X and Y values, are used to calculate the lengths of bone segments, thus facilitating height estimation. Following the extraction of skeletal points, a machine-learning model is trained on the resulting segment lengths to estimate human height. We hypothesize that the use of a reference object will improve the accuracy of height measurement.

2. Materials and Methods

2.1. The Proposed Method

Figure 1 illustrates a diagram of the proposed height measurement method. The block diagram consists of six primary blocks. The first block serves as the input, which is an image of a person in a vertical position. The second block identifies and marks human body landmarks (skeleton points) using the OpenCV and MediaPipe libraries. The third block calculates the length of each skeletal segment using the MediaPipe library. This step involves computing a centimeter-per-pixel (cm/pixel) ratio using a reference object, counting the number of pixels in each segment, and then converting the segment length to centimeters. In the fourth block, the lengths of the segments are fed into a multivariate linear regression model to train the model. In the fifth block, human height is predicted using the trained model. Finally, in the sixth block, human height is obtained.
OpenCV (Open Source Computer Vision) is a leading open-source library for computer vision, machine learning, and image processing. It is written in C/C++, which enables very fast computation and allows for use in real-time applications [12]. MediaPipe is a series of cross-platform machine-learning solutions used for tasks such as face detection, face mesh, and human pose estimation [13]. It consists of three main parts, namely a framework for inference from sensory data, a set of tools for performance evaluation, and a collection of reusable inference and processing components [14]. YOLOv8 is a computer vision model for object recognition and detection developed by Ultralytics in 2023; the original YOLO was introduced in 2016 [15]. Among object detection algorithms, the YOLO (You Only Look Once) framework has stood out for its remarkable balance of speed and accuracy, enabling the rapid and reliable identification of objects in images. Since its inception, the YOLO family has evolved through multiple iterations, each building upon previous versions to address limitations and enhance performance [15].
Using OpenCV and MediaPipe, a total of 501 landmarks (skeleton joints) were identified. These landmarks were then fed to a customized multiclass classification model to learn the relationship between each class and its coordinates for classifying and detecting body posture [16]. The OpenCV library was first used for image processing. After that, the MediaPipe library was applied to extract the x, y, and z coordinates, as well as the number of pixels, for each joint. Finally, the YOLOv8 model was employed to identify the black cardboard, count the number of pixels spanned by the cardboard, and determine the cm/pixel ratio according to Equation (1):
$$k = \frac{\mathrm{height\ (cm)}}{\mathrm{dis\ (pixel)}} \tag{1}$$
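As a minimal illustration of this step, the following Python sketch computes the ratio with a YOLOv8 detector; the fine-tuned weights file name is hypothetical, and using the bounding-box height as dis(pixel) is an assumption about the implementation:

```python
# Minimal sketch: estimate the cm/pixel ratio k from the reference cardboard.
# The fine-tuned weights file "cardboard.pt" is hypothetical.
from ultralytics import YOLO

CARDBOARD_HEIGHT_CM = 30.5  # known physical height of the reference object

def cm_per_pixel(image_path: str) -> float:
    model = YOLO("cardboard.pt")              # hypothetical fine-tuned detector
    box = model(image_path)[0].boxes.xyxy[0]  # first detected box: (x1, y1, x2, y2)
    dis_pixel = float(box[3] - box[1])        # box height in pixels
    return CARDBOARD_HEIGHT_CM / dis_pixel    # Equation (1)
```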
The human body was divided into six segments: $h_1$ is the distance from the shoulder to the hip, $h_2$ from the hip to the knee, $h_3$ from the knee to the ankle, $h_4$ from the ankle to the sole of the foot, $h_5$ from the middle of the shoulder to the middle of the mouth, and $h_6$ from the middle of the mouth to the nose. The distance between two points $A(x_a, y_a)$ and $B(x_b, y_b)$ was calculated with the Euclidean formula. In this project, our calculations were based on the normalized coordinates $(x_i, y_i)$ obtained from MediaPipe. These coordinates were then converted to a pixel coordinate system using Equations (2) and (3).
$$X_i = \mathrm{image\_width} \times x_i \tag{2}$$
$$Y_i = \mathrm{image\_height} \times y_i \tag{3}$$
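A minimal sketch of Equations (2) and (3), converting MediaPipe's normalized pose landmarks to pixel coordinates (this assumes the single-person, static-image mode of MediaPipe Pose and that a person is detected in the image):

```python
# Minimal sketch: extract pose landmarks and convert the normalized
# (x_i, y_i) values to pixel coordinates (X_i, Y_i) per Equations (2)-(3).
import cv2
import mediapipe as mp

def pixel_landmarks(image_path: str):
    image = cv2.imread(image_path)
    h, w = image.shape[:2]  # image_height, image_width
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    # Assumes result.pose_landmarks is not None (a person was detected).
    return [(lm.x * w, lm.y * h) for lm in result.pose_landmarks.landmark]
```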
The pixel coordinates were then used to calculate the distances between landmarks and the lengths of the skeletal segments in the human body. The coordinates of the midpoint of the shoulder and of the midpoint of the hip were used to calculate the distance $h_1$ (Equation (4)):
$$h_1 = k \times \sqrt{\left(\frac{X_{23}+X_{24}}{2} - \frac{X_{11}+X_{12}}{2}\right)^2 + \left(\frac{Y_{23}+Y_{24}}{2} - \frac{Y_{11}+Y_{12}}{2}\right)^2} \tag{4}$$
The skeletal segment $h_2$ was calculated as the distance between points 23 and 25 (Equation (5)):
$$h_2 = k \times \sqrt{(X_{25}-X_{23})^2 + (Y_{25}-Y_{23})^2} \tag{5}$$
Similarly, the distance $h_3$ was calculated as the distance between points 27 and 25 (Equation (6)):
$$h_3 = k \times \sqrt{(X_{27}-X_{25})^2 + (Y_{27}-Y_{25})^2} \tag{6}$$
The distance $h_4$ from the ankle to the sole of the left foot was calculated as the perpendicular distance from the ankle (point 27) to the line through the heel (point 29) and the foot tip (point 31) (Equation (7)):
$$h_4 = k \times \frac{\left|(Y_{29}-Y_{31})X_{27} + (X_{31}-X_{29})Y_{27} + (Y_{31}-Y_{29})X_{29} - (X_{31}-X_{29})Y_{29}\right|}{\sqrt{(Y_{29}-Y_{31})^2 + (X_{31}-X_{29})^2}} \tag{7}$$
The distance $h_5$ was calculated as the distance from the midpoint of the shoulder to the midpoint of the mouth (Equation (8)):
$$h_5 = k \times \sqrt{\left(\frac{X_{11}+X_{12}}{2} - \frac{X_{9}+X_{10}}{2}\right)^2 + \left(\frac{Y_{11}+Y_{12}}{2} - \frac{Y_{9}+Y_{10}}{2}\right)^2} \tag{8}$$
Finally, the distance $h_6$ from the midpoint of the mouth to the nose was calculated as follows (Equation (9)):
$$h_6 = k \times \sqrt{\left(X_0 - \frac{X_9+X_{10}}{2}\right)^2 + \left(Y_0 - \frac{Y_9+Y_{10}}{2}\right)^2} \tag{9}$$
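Equations (4)–(9) reduce to midpoint, Euclidean-distance, and point-to-line-distance operations. A sketch under the MediaPipe Pose landmark numbering used above (P is a list of pixel coordinates such as the one returned by the previous sketch; the helper names are illustrative):

```python
import math

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def point_to_line(p, a, b):
    # Perpendicular distance from p to the line through a and b (Equation (7)).
    num = abs((a[1] - b[1]) * p[0] + (b[0] - a[0]) * p[1]
              + (b[1] - a[1]) * a[0] - (b[0] - a[0]) * a[1])
    return num / math.hypot(a[1] - b[1], b[0] - a[0])

def segment_lengths(P, k):
    # P: pixel coordinates indexed by MediaPipe Pose landmark number.
    h1 = k * dist(midpoint(P[23], P[24]), midpoint(P[11], P[12]))  # hip-shoulder
    h2 = k * dist(P[25], P[23])                                    # knee-hip
    h3 = k * dist(P[27], P[25])                                    # ankle-knee
    h4 = k * point_to_line(P[27], P[29], P[31])                    # ankle-sole
    h5 = k * dist(midpoint(P[11], P[12]), midpoint(P[9], P[10]))   # shoulder-mouth
    h6 = k * dist(P[0], midpoint(P[9], P[10]))                     # mouth-nose
    return [h1, h2, h3, h4, h5, h6]
```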

2.2. Predicting Result of Height Measurement

A multivariable linear regression, which is an extension of a single-variable linear regression algorithm, was used to train and predict body height. This algorithm has proven to be highly effective in predicting outcomes based on two or more independent variables.
The multivariate linear regression [17] equation takes the following form (Equation (10)):
$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n + \varepsilon \tag{10}$$
where $Y$ is the dependent variable to be predicted, $X_1, X_2, \dots, X_n$ are the independent variables, and $\beta_0, \beta_1, \dots, \beta_n$ are the regression coefficients.
After calculating the length of each skeletal segment, we applied the multivariate linear regression equation to predict human height. Equation (10) then becomes Equation (11):
$$h = \beta_0 + \beta_1 h_1 + \beta_2 h_2 + \beta_3 h_3 + \beta_4 h_4 + \beta_5 h_5 + \beta_6 h_6 + \varepsilon \tag{11}$$
where $h$ is the predicted height; $h_1, h_2, \dots, h_6$ are the calculated lengths of the skeletal segments; and $\beta_0, \beta_1, \dots, \beta_6$ are the coefficients obtained by training the multivariate linear regression model.
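A minimal training sketch with scikit-learn (the file names and array layout are assumptions: X holds one row of [h1, …, h6] per subject and y the measured heights in cm):

```python
# Minimal sketch: fit the multivariate linear regression of Equation (11).
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.load("segments.npy")  # hypothetical array of segment lengths, shape (n, 6)
y = np.load("heights.npy")   # hypothetical array of measured heights, shape (n,)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)  # beta_0 and beta_1..beta_6
h_pred = model.predict(X[:1])         # predicted height for the first subject
```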
The multivariate linear regression model is an important tool for investigating relationships between response variables and multiple predictor variables, with the primary focus on inference about the unknown regression coefficient matrices. Multivariate bootstrap techniques have been proposed for drawing inferences about these matrices, with real and simulated data examples providing finite-sample verification of the theoretical results [18,19].

2.3. Data Collection

This study was approved by the Hanoi University of Science and Technology. Data were collected from 166 adult subjects who agreed to participate in the study. Photographs of the subjects were taken with a smartphone camera. The smartphone was fixed on a tripod at a height of approximately 115 cm above the ground (Figure 2) and positioned at distances of 200 cm and 300 cm from the subject. A 20.5 cm × 30.5 cm black cardboard was placed on the wall next to the subject, with its center approximately 115 cm above the ground. A wall height chart was attached on the opposite side of the subject.
Subjects were guided to perform four different postures during the experiment. Firstly, in the standing-upright position (Figure 3a), subjects stood straight and looked directly ahead. This position simulates the body in a natural state, with no tilt or rotation. Secondly, in the 45-degree rotation position (Figure 3b), subjects turned their bodies 45 degrees away from the camera while looking straight ahead. This pose simulates the body at a slight angle, which can affect how bone segments appear in the image. Thirdly, in the horizontal 90-degree rotation position (Figure 3c), subjects turned their bodies 90 degrees from the camera and looked straight ahead. This position simulates the body at a greater angle and approximates a patient's position in a hospital bed, which helps in understanding the differences in bone-segment measurements when the body is in a horizontal state, an important consideration in medical applications. Finally, in the kneeling position (Figure 3d), subjects turned their bodies 90 degrees and bent their knees. This position is especially important for understanding changes in bone segments when the body is in a bent-knee state, simulating situations where the body is not completely upright. In each pose, subjects remained in position throughout the image-capturing process to ensure the accuracy of the measurements; staying steady and immobile during each capture is crucial for body landmarks to be identified accurately and consistently.

2.4. Data Processing

The obtained images were processed using MediaPipe and YOLOv8 to extract the X and Y coordinates of the body landmarks, as well as the parameters of the reference object, which were used to calculate the lengths of the skeletal segments. The mean and standard deviation (SD) [19] were then used to remove outliers and increase model accuracy: data outside the ±3 SD range were discarded. The remaining valid values were used as input for the training model. Finally, a multivariate linear regression model was applied to the skeletal segments to estimate subjects' height. The collected data consisted of 166 samples for each posture, with heights ranging from 148 cm to 184 cm. After eliminating outliers, the resulting dataset of 162 samples was divided into 80% for training and 20% for testing.
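A sketch of this preprocessing, continuing the array conventions of the previous sketch (applying the ±3 SD rule per segment column and dropping any row that fails it is our reading of the procedure):

```python
# Minimal sketch: +/-3 SD outlier removal followed by an 80/20 split.
import numpy as np
from sklearn.model_selection import train_test_split

def remove_outliers(X, y, n_sd=3.0):
    # Keep rows whose every segment length lies within mean +/- n_sd * SD.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    mask = (np.abs(X - mu) <= n_sd * sd).all(axis=1)
    return X[mask], y[mask]

X_clean, y_clean = remove_outliers(X, y)  # X, y as in the previous sketch
X_train, X_test, y_train, y_test = train_test_split(
    X_clean, y_clean, test_size=0.2, random_state=0)
```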

3. Results

3.1. Standing Upright Position

After training the model on the training samples and evaluating it on the test samples, we obtained Equation (12) for estimating height from bone segment lengths. Table 1 provides the evaluation results for the standing-upright position. The method has an average error of 1.94 cm (1.14%) across the test samples. The error is mainly due to the lack of camera calibration, inaccuracies in the coordinates extracted by MediaPipe, and varying lighting conditions during data collection.
$$H = 1.07533865\,h_1 + 1.2476316\,h_2 + 0.59605108\,h_3 + 0.6496244\,h_4 + 0.76927537\,h_5 - 2.13930107\,h_6 + 58.4779461 \tag{12}$$
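As a worked example, Equation (12) can be evaluated directly on a set of segment lengths; the input values below are illustrative only, not taken from the study data:

```python
# Illustrative only: evaluate Equation (12) on hypothetical segment lengths (cm).
coef = [1.07533865, 1.2476316, 0.59605108, 0.6496244, 0.76927537, -2.13930107]
intercept = 58.4779461
h = [40.0, 38.0, 36.0, 6.0, 12.0, 4.0]  # hypothetical h1..h6
H = intercept + sum(c * hi for c, hi in zip(coef, h))
print(f"Predicted height: {H:.1f} cm")  # ~174.9 cm for these inputs
```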

3.2. 45-Degree Rotation Position

Similarly, Equation (13) was developed to estimate body height for the 45-degree rotation position. Evaluation results for this position are presented in Table 2. The average error is 1.91 cm (1.12%).
$$H = 0.70003005\,h_1 + 0.98866088\,h_2 + 0.76985497\,h_3 + 0.35090296\,h_4 + 0.68119476\,h_5 - 0.40682656\,h_6 + 72.229882 \tag{13}$$

3.3. Horizontal 90-Degree Rotation Position

After training with the dataset for posture 3, Equation (14) was derived to estimate height for the 90-degree turned posture. Table 3 provides the evaluation results for the horizontal 90-degree rotation position. The average error is 2.62 cm (1.54%).
$$H = 0.08162038\,h_1 + 0.70345667\,h_2 + 0.67353882\,h_3 + 0.55258515\,h_4 + 0.42507086\,h_5 - 0.20956018\,h_6 + 73.384567 \tag{14}$$

3.4. Kneeling Position

For the 90-degree sideways bent-knee position, we derived Equation (15) to calculate body height. Results are presented in Table 4. The evaluation results for this position show an average error of 2.45 cm (1.43%).
$$H = 0.36422493\,h_1 + 0.81095132\,h_2 + 0.58705451\,h_3 + 0.58410078\,h_4 + 0.35394516\,h_5 - 0.21066217\,h_6 + 73.779571 \tag{15}$$

4. Discussion

The experimental results demonstrate that the proposed height estimation method can estimate human height relatively accurately, with an average error ranging from 1.91 cm to 2.62 cm (1.12–1.54%). Among the four postures, the height estimation model for the 45-degree rotation position yields the best results, with an average error of 1.91 cm (1.12%). For the other postures, the achieved results are less accurate. The standing-upright position has a result nearly equal to that of the 45-degree rotation posture, with an average error of 1.94 cm (1.14%). However, for the horizontal 90-degree rotation position and the kneeling position, the errors are significantly larger, with average errors of 2.62 cm (1.54%) and 2.45 cm (1.43%), respectively. This is mainly because the MediaPipe model does not perform as effectively when estimating height in more complex postures compared to the standing-upright posture. Additionally, other factors, such as lighting conditions and camera angles, also affect the accuracy of the measurement.

5. Conclusions

This study presents a non-contact height measurement method utilizing the MediaPipe library in conjunction with the YOLOv8 model to extract joint coordinates and calculate bone lengths, employing a multivariate linear regression function for predicting human height from images. Experimental results indicate that the average errors between the estimated and actual heights range from 1.91 cm to 2.62 cm (1.12% to 1.54%). This level of accuracy is deemed acceptable for a variety of applications. Future research will focus on expanding the methodology to determine the height of individuals in various standing and lying positions. The goal is to develop a flexible and efficient software application capable of measuring height across diverse real-world contexts. The integration of technologies such as MediaPipe and YOLOv8 demonstrates significant potential for applications in fields such as medicine, sports, and health monitoring, where reliable and precise height measurements from images are essential.

Author Contributions

Conceptualization, P.N.T., N.B.N. and K.N.P.; methodology, P.N.T., N.B.N., K.N.P., H.P.V. and T.H.V.; software, P.N.T., N.B.N., T.H.V. and K.N.P.; validation, P.N.T., N.B.N., K.N.P., H.P.V., T.H.V. and T.N.; formal analysis, P.N.T., N.B.N., T.H.V. and K.N.P.; investigation, P.N.T., N.B.N., T.H.V. and K.N.P.; resources, N.B.N. and K.N.P.; data curation, P.N.T. and T.H.V.; writing, P.N.T., N.B.N., K.N.P. and T.N.; writing—review and editing, P.N.T., N.B.N., K.N.P., H.P.V., T.H.V., T.N. and A.G.; visualization, P.N.T., T.H.V. and T.N.; supervision, N.B.N. and K.N.P.; project administration, N.B.N. and K.N.P.; funding acquisition, N.B.N. and K.N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the Hanoi University of Science and Technology on 28 July 2023.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient to publish this paper.

Data Availability Statement

Data will be available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Deaton, A. Height, health, and development. Proc. Natl. Acad. Sci. USA 2007, 104, 13232–13237. [Google Scholar] [CrossRef] [PubMed]
  2. Obese, H.J.O.R. Body mass index (BMI). Obes. Res. 1998, 6, 51S–209S. [Google Scholar]
  3. Nuttall, F.Q. Body mass index: Obesity, BMI, and health: A critical review. Nutr. Today 2015, 50, 117–128. [Google Scholar] [CrossRef] [PubMed]
  4. Phan, K.N.; Anh, V.T.; Manh, H.P.; Thu, H.N.; Thuy, N.T.; Thi, H.N.; Thuy, A.N.; Trung, P.N. The Non-Contact Height Measurement Method Using MediaPipe and OpenCV in a 2D Space. In Proceedings of the 2023 1st International Conference on Health Science and Technology (ICHST), Hanoi, Vietnam, 28–29 December 2023; pp. 1–6. [Google Scholar] [CrossRef]
  5. Dennis, D.M.; Hunt, E.E.; Budgeon, C.A. Measuring height in recumbent critical care patients. Am. J. Crit. Care 2015, 24, 41–47. [Google Scholar] [CrossRef] [PubMed]
  6. L’her, E.; Martin-Babau, J.; Lellouche, F. Accuracy of height estimation and tidal volume setting using anthropometric formulas in an ICU Caucasian population. Ann. Intensive Care 2016, 6, 55. [Google Scholar] [CrossRef] [PubMed]
  7. Duc, T.T.; Phan, K.N.; Lan, P.N.; Mai, P.B.T.; Ngoc, D.C.; Manh, H.N.; Le Hoang, O.; Trung, P.N. Design and Development of Bedside Scale with Embedded Software to Calculate Treatment Parameters for Resuscitated Patients. In Proceedings of the 2023 1st International Conference on Health Science and Technology (ICHST), Hanoi, Vietnam, 28–29 December 2023; pp. 1–6. [Google Scholar] [CrossRef]
  8. Nguyễn, P.K.; Đoàn, B.T.; Lê, H.O. Software supporting the calculation of treatment parameters for intensive care patients, integrated with a patient scale (in Vietnamese). Tạp Chí Y Học Việt Nam 2023, 529, 179–183. [Google Scholar] [CrossRef]
  9. Digssie, A.; Argaw, A.; Belachew, T. Developing an equation for estimating body height from linear body measurements of Ethiopian adults. J. Physiol. Anthropol. 2018, 37, 26. [Google Scholar] [CrossRef] [PubMed]
  10. Haritosh, A. A novel method to estimate Height, Weight and Body Mass Index from face images. In Proceedings of the Twelfth International Conference on Contemporary Computing (IC3), Noida, India, 8–10 August 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  11. Lee, D.S.; Kim, J.S.; Jeong, S.C.; Kwon, S.K. Human height estimation by color deep learning and depth 3D conversion. Appl. Sci. 2020, 10, 5531. [Google Scholar] [CrossRef]
  12. Bradski, G. The opencv library. Dr. Dobb’s J. Softw. Tools Prof. Program. 2000, 25, 120–123. [Google Scholar]
  13. Lugaresi, C.; Tang, J.; Nash, H.; McClanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.L.; Yong, M.G.; Lee, J.; et al. Mediapipe: A framework for building perception pipelines. arXiv 2019, arXiv:1906.08172. [Google Scholar]
  14. Lugaresi, C.; Tang, J.; Nash, H.; McClanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.L.; Yong, M.; Lee, J.; et al. Mediapipe: A framework for perceiving and processing reality. In Proceedings of the Third Workshop on Computer Vision for AR/VR at IEEE Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 17 June 2019; Volume 2019. [Google Scholar]
  15. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  16. Singh, A.K.; Kumbhare, V.A.; Arthi, K. Real-time human pose detection and recognition using mediapipe. In Proceedings of the International Conference on Soft Computing and Signal Processing, Hyderabad, India, 18–19 June 2021; Springer Nature: Singapore, 2021; pp. 145–154. [Google Scholar]
  17. Tranmer, M.; Elliot, M. Multiple linear regression. Cathie Marsh Cent. Census Surv. Res. (CCSR) 2008, 5, 1–5. [Google Scholar]
  18. Eck, D.J. Bootstrapping for multivariate linear regression models. Stat. Probab. Lett. 2018, 134, 141–149. [Google Scholar] [CrossRef]
  19. Livingston, E.H. The mean and standard deviation: What does it all mean? J. Surg. Res. 2004, 119, 117–123. [Google Scholar] [CrossRef] [PubMed]
Figure 1. System diagram.
Figure 2. Tripod and camera setup.
Figure 3. Height measurement in different postures: (a) standing-upright position; (b) 45-degree rotation position; (c) horizontal 90-degree rotation position; (d) kneeling position. Lines and points in each figure represent the segments and joints determined with the OpenCV and MediaPipe libraries.
Table 1. Prediction results for 17 subjects in the standing-upright posture.

Sample   Actual Height (cm)   Predicted Height (cm)   Error (cm)   Error Rate (%)
1        174                  170.9514                3.048575     1.752055
2        177                  177.4226                0.422636     0.238778
3        162                  167.6994                5.699382     3.518137
4        168                  168.3282                0.328223     0.195371
5        172                  169.1883                2.811732     1.634728
6        169                  172.8558                3.855773     2.281522
7        170                  169.2551                0.744909     0.438182
8        170                  173.3402                3.340176     1.964809
9        180                  180.4499                0.449862     0.249923
10       175                  173.0343                1.965713     1.123264
11       169                  171.7793                2.779289     1.64455
12       175                  175.5055                0.505452     0.28883
13       173                  175.8168                2.816793     1.628204
14       168                  170.0952                2.095163     1.247121
15       171                  170.7831                0.216859     0.126818
16       168                  167.1057                0.894323     0.532335
17       163                  163.9993                0.999287     0.613059
Average  170.8                171.6241                1.939656     1.145746
Table 2. Prediction results for 17 subjects in the 45-degree tilted-standing posture.

Sample   Actual Height (cm)   Predicted Height (cm)   Error (cm)   Error Rate (%)
1        174                  170.3426                3.657426     2.101969
2        177                  178.7488                1.748834     0.988042
3        162                  167.0036                5.003625     3.088657
4        168                  167.3131                0.686861     0.408846
5        172                  174.8213                2.821337     1.640312
6        169                  166.2978                2.702194     1.598931
7        170                  167.9283                2.071712     1.218654
8        170                  168.6685                1.331542     0.78326
9        180                  178.4254                1.574556     0.874753
10       175                  174.1332                0.866795     0.495312
11       169                  171.0258                2.025811     1.198705
12       175                  175.7314                0.731434     0.417962
13       173                  174.0368                1.03684      0.599329
14       168                  164.0762                3.923809     2.335601
15       171                  172.5887                1.588712     0.929071
16       168                  167.4733                0.526727     0.313528
17       163                  162.7955                0.20453      0.125479
Average  170.8                170.6712                1.911926     1.124612
Table 3. Prediction results for 17 subjects in the 90-degree tilted-standing posture.

Sample   Actual Height (cm)   Predicted Height (cm)   Error (cm)   Error Rate (%)
1        177                  177.8865                0.886495     0.500845
2        162                  169.2808                7.2808       4.494321
3        168                  168.0163                0.016319     0.009714
4        172                  169.4318                2.568204     1.493142
5        169                  168.173                 0.826959     0.489325
6        170                  176.0739                6.073928     3.572899
7        170                  168.528                 1.47199      0.865877
8        180                  183.2167                3.216669     1.787039
9        175                  171.7336                3.266446     1.86654
10       169                  174.7692                5.769226     3.413743
11       169                  168.7802                0.219849     0.130088
12       175                  172.8844                2.115569     1.208896
13       173                  175.1489                2.148942     1.242163
14       168                  165.562                 2.438041     1.451215
15       171                  173.9013                2.901262     1.696644
16       168                  167.8982                0.10178      0.060583
17       163                  166.2274                3.22743      1.980018
Average  170.5                171.6184                2.619406     1.544885
Table 4. Prediction results for 17 subjects in the 90-degree tilted-standing posture with bent knees.

Sample   Actual Height (cm)   Predicted Height (cm)   Error (cm)   Error Rate (%)
1        174                  169.3661                4.633906     2.663164
2        177                  178.5004                1.50038      0.847672
3        162                  163.4223                1.422282     0.877952
4        168                  168.3369                0.336886     0.200527
5        172                  170.5549                1.445134     0.840194
6        169                  168.7444                0.255635     0.151263
7        170                  176.0042                6.004188     3.531876
8        170                  175.1093                5.109334     3.005491
9        180                  182.6475                2.647498     1.470832
10       175                  178.3741                3.374108     1.928062
11       169                  172.6141                3.614117     2.138531
12       175                  172.9836                2.01644      1.152252
13       173                  175.8976                2.897555     1.674887
14       168                  166.5528                1.44725      0.861458
15       171                  169.4226                1.577427     0.922472
16       168                  167.0025                0.997496     0.593748
17       163                  165.3323                2.332332     1.430879
Average  170.8                171.8156                2.447763     1.428898
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
