JumpMetric: Assessment of Fiducial Positions for Vertical Jump Height Estimation from Depth Cameras and Wearable Sensors

Published: 02 December 2024 · DOI: 10.1145/3701571.3701607

Abstract

While the training of vertical jumps offers benefits for agility and performance across various amateur sports, the objective measurement of jump height remains a challenge compared to simpler assessments like the broad jump distance in a sand pit. We present and compare two approaches for the estimation of vertical jump height from an off-the-shelf depth camera and cost-efficient wearable motion sensors with an onboard accelerometer. With these, we assess the accuracy achievable at diverse fiducial positions, namely 7 skeletal joints and 10 wearing positions, respectively. A user study was conducted with 44 subjects (33 male, 11 female, 23.1 ± 2.2 years) performing countermovement jumps. From the simultaneous recordings of the two modalities, provided as a public dataset, the subjects’ vertical jump height is estimated and compared to the manually determined ground truth. The most accurate values from the depth camera were obtained from the pelvis and thoracic spine joints with an average error of -15.8 ± 23.3 mm and 24.2 ± 35.1 mm, respectively. The best estimates from the wearable motion sensor data were obtained from the neck position and the ankles with an average error of 18.8 ± 29.0 mm and -4.8 ± 35.2 mm, respectively. The results demonstrate that, by selecting suitable and reliable fiducial positions, both modalities can successfully be used to estimate athletes’ vertical jump height with easy-to-use and cost-efficient tools.

1 Introduction

Many sports disciplines like volleyball and basketball involve vertical jumping, and the ability to jump high is essential for the athlete’s competitive advantage [24, 34]. In both the professional and amateur sectors, training the vertical jump is beneficial for improving overall agility and performance. Therefore, measuring and keeping track of athletes’ progress and improvements is crucial [1, 22]. Moreover, vertical jump height is an indicator of lower body strength and, thus, can serve as a tangible goal to keep patients and athletes motivated to work toward their rehabilitation and general fitness. There exists a variety of off-the-shelf apparatuses and systems for this purpose. The widely used Vertec is a commercial product that consists of a frame with horizontal vanes that are rotated out of their initial position to indicate the height reached by the athlete. While this mechanical tool is limited by its vanes to a resolution of ½ inch (12.7 mm), there are also diverse contact mats and force plates that allow for measurements with a finer granularity. As all of these utilities tend to be bulky, difficult to set up, and expensive, they are not easily accessible to amateurs and semi-professional sports clubs. Therefore, the objective and, most importantly, simple measurement of vertical jump height remains a challenge compared to simpler assessments, such as the broad jump distance in a sand pit.
In this paper, we present two easy-to-use and cost-efficient approaches to accurately estimate athletes’ vertical jump height. In the user study, each of the 44 participants (33 male, 11 female), aged 23.1 ± 2.2 years, performed 5 countermovement jumps (CMJ), resulting in a total of 220 jumps. These were simultaneously recorded with the commercial Microsoft Azure Kinect depth camera and M5Stack M5StickC Plus off-the-shelf wearable devices with an onboard 3-axis accelerometer. In order to gather ground truth information, a conventional digital camera was used to document the jumps and the vertical hip displacement along a measuring tape. Thales’ theorem on proportionality was then applied to correct the perspective displacement of the manual readings from the video footage. While the depth camera measurements allow the jump height to be obtained directly from the vertical joint displacement, the flight time (FT) method [19, 21] was applied to the acceleration signals from the wearable motion sensors. Crossing the baseline at about 1 g turned out to be a simple yet very effective feature for identifying the take-off and landing points in the time series of the acceleration along the y-axis.
In this paper, we make the following contributions:
We present two approaches for accurate vertical jump height estimation using a) a conventional depth camera and b) cost-efficient wearable motion sensors with onboard 3-axis accelerometers.
We evaluate the accuracy of these modalities and, furthermore, assess the accuracy achievable at diverse fiducial positions, namely 7 skeletal joints and 10 wearing positions, respectively.
The dataset [29] is available through the university’s research database: https://doi.org/10.48436/c0584-yqb91

2 Related Work

Both depth cameras and wearable sensing devices, typically equipped with accelerometer sensors, have widely been used to track human motion and recognize activities. While depth cameras capture motion from an external point of view, body-worn motion sensors adopt an internal, egocentric perspective.

2.1 Depth Camera

Initially released for gaming applications, commercial, off-the-shelf depth cameras, such as the Microsoft Azure Kinect used in this research, have proven themselves useful for tracking the users’ body skeletal model and analyzing their motion in real time. Since the sensor observes the user from an external perspective, it requires a free line of sight and a suitable view angle to track the body parts without occlusion [16, 17, 27, 30]. However, it was demonstrated that even partially occluded body parts can be modeled and simulated for signal reconstruction [18].
Previous research evaluated the agreement of depth cameras and marker-based motion capturing systems, the gold standard, for the monitoring of “functional movements”, such as squats [17]. The results showed a moderate to high validity and that, in general, the impact of the view angle was insignificant. However, the achieved accuracy decreased significantly for occluded body parts, e.g., a hidden leg in the side view. Similarly, the joint angles have successfully been estimated during diverse physical activities, including vertical jumps [7, 26]. With a focus on the take-off phase, the trajectories of knee and shoulder joints during vertical jumping have been tracked with a depth camera and compared to those of a conventional camera-based 2d motion analysis system [15]. With a focus on the landing phase, instead, other research found the joint angles “relatively consistent” between a depth camera and a camera-based motion analysis system [11]. While most research focuses on ergonomics, injury prevention, and rehabilitation, the actual vertical jump height estimated from depth camera data has, to our knowledge, not yet been the focus of interest.

2.2 Wearable Motion Sensors

In general, accelerometers have become very popular in the field of human activity recognition (HAR) [2]. The highly integrated sensors measure the rate of change in velocity of a body in its own instantaneous rest frame. Due to gravity, a body resting on the Earth’s surface experiences an acceleration of 1 g, or about 9.81 m/s², toward the Earth’s center of mass. Therefore, jumping causes an acceleration directed contrary to the force of gravity, which makes it possible to estimate the vertical jump height from accelerometer readings. Attached to the hip, lower back, and waist level, accelerometers have already successfully been used to estimate vertical jump height [5, 6, 14, 23, 31]. According to Dias et al. [10], there are primarily two approaches: the double integration of the vertical reaction force (DIF), which can also utilize the acceleration and neglect the person’s mass, and the determination of the flight time (FT) to estimate the reached jump height, according to Kibele [19] and Moir [21].
On the one hand, since the DIF method calculates the displacement of the center of mass (COM), it is important that the subjects start a jump in the same position as they land in. On the other hand, the FT method relies on the determination of the time difference between take-off and landing. Since this time is squared, as apparent from equation (2), the estimate is very sensitive to measurement inaccuracies. The accurate and reliable identification of take-off and landing is, therefore, critical to obtain precise timestamps. Typically, the absolute value of acceleration rapidly increases at take-off, which makes it possible to determine the point in time when an athlete leaves the ground. Similarly, it increases again at landing and usually exceeds the previous peak of the take-off. In previous research, the FT method showed a slight offset compared to professional equipment, but consistent estimates within the system suggest that it can still be useful for tracking an athlete’s individual progress and improvement over time [5, 21].
The vertical jump height has previously been estimated with only a single accelerometer placed at the subjects’ hips [5, 8]. According to Casartelli et al. [5], the estimation of the jump height using the FT method overestimated the jump height with a systematic error of about 70 mm but showed a relatively good precision of ±  27 mm. In contrast, the DIF method was less reliable, showed a “poor validity”, and a low precision with a standard deviation of more than ±  120 mm. Similarly, Conceição et al. [8] describe that the estimation of the jump height with the FT method resulted in an accuracy of 6.1  ±  17.1 mm while the DIF method showed a low precision of ±  122.7 mm.
Additionally, gyroscope sensors have been used to correct the sensors’ rotation during the jump [14, 23]. In comparison to stereophotogrammetry measurements, Picerno et al. [23] evaluated two IMU-based methods that achieved a significantly biased accuracy of −127 ± 68 mm (r = 0.31) without and a marginally biased accuracy of 6 ± 55 mm (r = 0.87) with a gyroscope-based compensation method. Similarly, Heredia-Jimenez and Orantes-Gonzalez [14] used a force plate, which is “considered the gold standard method”, as reference and achieved an even better accuracy of 3 ± 33 mm.
In another approach, Althouse [3] investigated the validity of seven IMU sensor locations and evaluated 15 different models that weighted the sensors’ contributions according to the mass of the limbs and combined their estimates into one. With a root mean square error (RMSE) of 2.0 (1.2 m/s²) between the acceleration derived from the model and the force plate, the best performance was achieved with IMU signals from the trunk, thighs, and feet.
In general, various factors must be taken into account to find the most suitable and sensitive body positions for wearable sensors. According to Zeagler [33], the weight of the sensors should not affect the wearer or their balance. The sternum, lower back, and waist are the most suitable positions for tracking full-body movements. In a standing posture, the body’s COM is typically about 100 mm lower than the navel [9]. While being perceived as a comfortable wearing position, the waist is also very close to the COM [32], which allows for the best full-body acceleration estimates. Diverse sensors are available in commercial wearable devices such as wristbands or smartwatches, but they primarily capture limb movements that are independent of the body as a whole. Gemperle et al. [12] formulated guidelines for the placement of wearable devices which highlight that the sensors should be placed with as little skin, soft tissue, and muscle movement as possible to be both reliable and comfortable to wear. Furthermore, the social acceptance should be considered when selecting body positions for the attachment of wearable devices [4].

3 Sensing Concept

Based on two different sensing modalities, two approaches are implemented and evaluated regarding their feasibility in estimating vertical jump height. While the first one uses an external depth camera to track a selection of 7 skeletal joints, the second one employs 10 wearable sensing devices to monitor the subjects’ motion from an internal, egocentric perspective. In order to determine the achieved accuracy, ground truth is manually obtained by capturing the jump along a measuring tape with a conventional video camera.
Figure 1:
Figure 1: Front view of the Microsoft Azure Kinect depth camera with the automatically fitted skeleton model of 32 joint positions: (a) subject standing still before a jump; (b) subject in a squatting position during a jump; (c) subject at the highest point of their jump.

3.1 Depth Camera

The commercially available, off-the-shelf Microsoft Azure Kinect depth camera is used to record the subjects’ limb and skeletal joint positions. It applies the time-of-flight principle, which involves emitting infrared light with a projector and measuring the time until the reflected light returns from the objects and environment. The time difference between these two events is then used to calculate the distance to an object [20]. The Kinect camera allows for the simultaneous tracking of a total of 32 skeletal joint positions at a frame rate of 30 fps (frames per second). Fig. 1 illustrates the obtained skeletal model for a subject performing a CMJ, which involves standing, squatting, and reaching the peak of the jump. The orientation of each skeletal joint is represented by a normalized unit quaternion, a complex and continuous representation of orientation in 3d-space that avoids gimbal lock [13]. The joint positions are recorded relative to the camera’s global reference frame and measured in millimeters (mm).
As detailed in equation (1) and visible in the exemplary time series of Fig. 6, the vertical displacement of the joints, and hence the jump height hd, is calculated by taking the mean \(\overline{y}_{25\%}\) of the first \(25 \,\%\) of the y values as a baseline and subtracting it from the detected peak position ymax:
\begin{equation} \begin{split} h_{d} & = \; y_{\text{max}} - \overline{y}_{25\%}\\ & = \; \max \left(y_{i}\right)_{i=\left\lfloor n/4 \right\rfloor }^{n-1} \, - \, \frac{1}{\left\lfloor n/4 \right\rfloor } \sum _{j=0}^{\left\lfloor n/4 \right\rfloor -1} y_{j} \end{split} \tag{1} \end{equation}
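As a minimal illustration of equation (1), the following Python sketch (our own, not the authors’ released code; function and parameter names are illustrative) averages the first quarter of the recording as the standing baseline and subtracts it from the peak:

```python
import numpy as np

def jump_height_from_joint(y, baseline_fraction=0.25):
    """Jump height h_d (eq. 1) from a joint's vertical trajectory y (mm, 30 fps)."""
    y = np.asarray(y, dtype=float)
    k = int(len(y) * baseline_fraction)   # number of baseline samples (first 25 %)
    baseline = y[:k].mean()               # subject standing still before the jump
    peak = y[k:].max()                    # largest vertical displacement afterwards
    return peak - baseline                # h_d in mm

# Note: the Azure Kinect's internal coordinate system is inverted (see Section 5.1),
# so in practice the recorded series would be negated first, e.g.
# jump_height_from_joint(-kinect_y) for a raw Kinect series `kinect_y`.
```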
Figure 2:
Figure 2: Overview of the attachment positions of the wearable motion sensors on the subjects: lower neck, chest (sternum), hips (left and right), thighs (left and right), ankles (left and right), and wrists (left and right).
Figure 3:
Figure 3: Photos of the body-worn setup: (a) attachment positions of the wearable motion sensors with onboard 3-axis accelerometers and the y-axis pointing upward; (b) an orange tracking dot affixed to a participant’s hip as reference in the video footage.

3.2 Wearable Motion Sensors

The commercially available M5Stack M5StickC Plus is used as a cost-efficient, off-the-shelf wearable device to monitor the wearer’s motion. Built around the popular ESP32 microcontroller, it is used to record the measurements from the onboard 3-axis accelerometer sensor InvenSense MPU6886. As illustrated in Fig. 2, a total of 10 devices are attached to specific body positions, which are the participants’ lower neck, chest (sternum), hips (left and right), thighs (left and right), ankles (left and right), and wrists (left and right). The sensor nodes are attached to these positions using 3d-printed mountings and elastic Velcro straps, on the clothing layer, as presented in Fig. 3a. Only the sensor at the lower neck was directly taped to the participants’ skin with skin-friendly, non-irritating tape. All devices were attached to the body with the accelerometers’ y-axis pointing up toward the ceiling.
The acceleration signals are used to identify the moments of take-off and landing. All 10 devices simultaneously record the three sensor channels at a sampling rate of 100 Hz and assign a specific timestamp to each data point. The participant’s flight time, termed “time in air” by Moir [21], can be derived from the timestamps associated with take-off t0 and landing t1. In combination with the gravitational acceleration g of approximately 9.81 m/s², the flight time then allows the approximation of the vertical jump height hw by applying the following equation (2) [19, 21]:
\begin{equation} h_{w} = \frac{1}{2} \cdot g \cdot \left(\frac{t_{1} - t_{0}}{2}\right)^{2} \tag{2} \end{equation}
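Equation (2) translates directly into code; the following Python sketch (illustrative, with our own function names) converts a flight time into a jump height:

```python
G = 9.81  # gravitational acceleration in m/s^2

def jump_height_from_flight_time(t_takeoff, t_landing, g=G):
    """Jump height h_w in metres from the flight time via equation (2)."""
    flight_time = t_landing - t_takeoff            # time in the air in seconds
    return 0.5 * g * (flight_time / 2.0) ** 2      # h_w = 1/2 * g * (FT / 2)^2

# A flight time of 0.55 s, for example, corresponds to roughly 0.37 m (370 mm).
print(jump_height_from_flight_time(0.00, 0.55))
```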
Figure 4:
Figure 4: Illustration of the camera setup for ground truth measurement and the application of Thales’ theorem on proportionality to correct the perspective displacement error ε in the manual readings, caused by the different distances from the camera to the reference frame / hip, dcp (5.65 m), and from the camera to the measuring tape, dcm (6.45 m). The subject’s actual trajectory \(\overline{P_{s}P_{j}}\) (green) during the jump and its projection onto the measuring tape \(\overline{P^{\prime }_{s}P^{\prime }_{j}}\) (red). Grayed out: illustration of the effect of a shorter distance dcp, which widens the angle α and therefore increases the perspective displacement error ε.

3.3 Ground Truth

Ground truth information is essential to evaluate the measurements obtained from the two sensing modalities by comparing their values to a reliable and valid reference. Therefore, the subjects’ jumps are additionally recorded with a conventional Sony Alpha 6000 camera. As shown in Fig. 3b, every participant is provided with an orange tracking dot that is affixed to their hip. Placed to the side of the participants, a measuring tape enables the visual quantification of their hip displacement in the recorded video footage. To do that, first, a frame is selected from the period of the subject standing in a stable position and the tracking dot’s position is read from the tape as \(P^{\prime }_{s}\). Then, a second frame is identified with the subject reaching the peak of the jump \(P^{\prime }_{j}\), the largest displacement in the video. The vertical jump height \(h^{\prime }_{g} = \overline{P^{\prime }_{s}P^{\prime }_{j}}\) is then determined by subtracting the standing position \(P^{\prime }_{s}\) from the peak position \(P^{\prime }_{j}\) as \(h^{\prime }_{g} = \left| P^{\prime }_{j} - P^{\prime }_{s} \right|\).
Unfortunately, the participants’ hip trajectory is closer to the camera than the measuring tape and not running in one line, which inevitably causes a non-negligible perspective displacement error in the manual readings. As illustrated in Fig. 4, the error, caused by the different distances of camera to hip dcp and camera to measuring tape dcm, is corrected by applying Thales’ theorem on proportionality, as provided in the following equation (3):
\begin{equation} h_{g} = \frac{d_{cp}}{d_{cm}} \cdot h^{\prime }_{g} \tag{3} \end{equation}
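As a small worked example of equation (3) with the distances given in Fig. 4 (the function name and the example tape reading are our own, purely for illustration):

```python
def correct_perspective(h_prime_mm, d_cp=5.65, d_cm=6.45):
    """Correct a jump height read off the measuring tape (eq. 3).

    d_cp: distance from the camera to the hip / jump reference frame in metres.
    d_cm: distance from the camera to the measuring tape in metres.
    """
    return (d_cp / d_cm) * h_prime_mm

# A reading of 450 mm on the tape corresponds to about 394 mm of actual hip displacement.
print(round(correct_perspective(450.0), 1))
```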
Depending on the position of the camera Pc, the actual hip positions Ps of the standing and Pj of the jumping participant are projected onto the measuring tape at the positions \(P^{\prime }_{s}\) and \(P^{\prime }_{j}\). The shorter the distance between the trajectory of the subject’s hip displacement \(\overline{P_{s}P_{j}}\) and the parallel intercept of its projection on the measuring tape \(\overline{P^{\prime }_{s}P^{\prime }_{j}}\), the smaller the perspective displacement error. Likewise, the larger the distance from the camera position Pc to the subject’s trajectory \(\overline{P_{s}P_{j}}\), the narrower the spanned angle α and the smaller the perspective displacement error. As presented in Fig. 5, the camera was consequently placed at a larger distance of 5.65 m from the jump reference frame, at which the measuring tape’s scale was still readable in the video frames.
Figure 5:
Figure 5: Photos of the experimental setup: a) view from the video camera for ground truth determination; b) the participants’ jump area with the reference frame and the measuring tape as reference at the participants’ right; c) view from the depth camera’s position in front of the subjects.

4 User Study

The CMJ was identified as interesting, relevant, and suitable for the user study. To perform a CMJ, the subject first lowers their hips into a squatting position and then propels the entire body upward. This allows the athlete to store elastic energy in their muscles and tendons, which is then released explosively. At the lower reversal point of the movement, the muscles involved in the jump generate a force that is greater than the force exerted by the subject’s own mass due to the squatting movement, which results in pretension according to the principle of the initial force. The achieved jump height is determined by the take-off speed and the depth of flexion.

4.1 Participants

The study was conducted according to the university’s ethics guidelines and did not require dedicated approval. A total of 44 subjects (33 male and 11 female) were recruited through word-of-mouth (13) and a lecture (31) that required the students to participate in a user study of their choice. Therefore, aside from a single non-student, the subjects were primarily young undergraduate (36) and graduate (7) students aged 23.0 ± 2.2 years, with a height of 178.3 ± 9.0 cm and a weight of 73.2 ± 13.3 kg, resulting in a characteristic body mass index (BMI) of 22.9 ± 3.4. Before participating in the study, all subjects were asked about any medical contraindications and gave written consent that they could perform the required jumps without any risk of physical injury. Subjects were informed that they could terminate the study at any time without giving reasons and that quitting would not have any negative consequences, e.g., on the associated lecture. To ensure minimal movement of the wearable sensors, participants were instructed to wear tight-fitting clothing. This also ensured the least possible movement of the orange tracking dot (Fig. 3b). The subjects were asked to wear low-cut shoes that do not cover the ankles so that the devices could be attached correctly.

4.2 Study Design

During the user study, each participant performed a total of 5 CMJs. Before these were recorded, the correct performance of a CMJ was explained verbally and demonstrated by the instructor. The subjects were also given the chance to try it once to get feedback on their execution. For each of the consecutively performed jumps, the subjects had a time window of 30 s and about one minute of rest between the individual jumps. Between jumps, the correct location of all body-worn sensing devices was checked. If a wearable device slipped out of place or did not send any data during the recording, the jump was repeated after the rest time. If a subject did not land within the predefined jump reference frame, the attempt was likewise marked as invalid and repeated. For the recording, the subjects were encouraged to jump as high as they could.

4.3 Measurement Setup

The user study took place during the semester break in a quiet part of the university building without passers-by. The room had a pleasant temperature of around 20 °C. The subjects were given a separate room to change clothes and were able to perform the jumps unobserved by third parties with the door closed.
The subjects were asked to perform the CMJ within a predefined 600 × 800 mm jump reference frame, which was marked with colored tape on the floor, as shown in Fig. 5. This way, it was ensured that the subjects’ body parts would not leave the depth camera’s field of view during the performed jumps.
The depth camera was placed at a distance of 2.90 m to the front of the subjects and in a horizontal orientation 1.40 m above the floor. The subjects wore 10 sensing devices at specific body positions. On the right-hand side of the jump frame, and thus of the participants, a measuring tape was spanned from the floor to the ceiling of the room. The video camera was placed on the left-hand side at a distance of 5.65 m from the jump reference frame and recorded the side view of the jumping subjects with the orange tracking dot placed on their hip facing toward the camera. As previously explained, the camera was placed at a large distance to minimize the perspective displacement error.

5 Evaluation

Vertical jump estimates were obtained with two different modalities and, thus, two different approaches were used to evaluate the data. For the depth camera, the offset between a standing position and the highest point during the jump is used to determine the jump height. To evaluate the data from the wearable sensors, the feature of crossing the baseline was used to determine the time of take-off and landing. This enables the estimation of the vertical jump height from the flight time (FT). The results from both modalities are compared to the manually determined and corrected ground truth.

5.1 Depth Camera

The software of the Microsoft Azure Kinect depth camera automatically fits a skeleton model to the identified human body and exports up to 32 body joints, which serve as fiducial positions. Although the dataset contains the recordings of all available joints, the assessment covers only the subset of the 7 most relevant positions: neck, thoracic spine, pelvis, left and right ankle, as well as left and right wrist. The vertical jump height is then determined from the difference between the baseline and the peak of the jump. The baseline is determined by averaging the vertical position prior to the jump (i.e., the first 25 % of the recording), with the subject standing in a calm and stable pose. The peak of the jump is then simply identified as the maximum vertical displacement along the y-axis. Please note that, due to the reversed internal coordinate system of the Microsoft Azure Kinect, the maximum displacement is here the minimum value.
Two exemplary time series of the vertical joint displacement are showcased in Fig. 6. For the recordings of a well-performed CMJ (Fig. 6a), it is apparent that the subject smoothly lowered their entire body in a squatting motion before jumping upward. The subject then performed the jump with their legs stretched out and hands resting on the hips. In contrast, for the recordings of an invalid CMJ (Fig. 6b), the subject had their knees pulled up during the jump, which makes the accurate jump height estimation from the ankle joints difficult. Moreover, the subject was swinging their arms excessively, which in turn made it significantly more difficult to determine the jump height from the wrists.
Figure 6:
Figure 6: Illustration of the joints’ vertical displacement (y-axis) during a CMJ, recorded with the depth camera over time (x-axis). Two examples: (a) example of a well-performed jump with the legs stretched out and arms close to the body, resulting in great agreement of the curves and accurately aligned peak positions; (b) example of an invalid jump with the knees tucked in and excessive arm swing. Baseline range of \(25 \,\%\) (red area) and marks of the maximum displacement peaks (black). Please note the inverse y-axis orientation.
Figure 7:
Figure 7: Illustration of the accelerometer signals (y-axis) during a CMJ over time (x-axis), recorded with wearable motion sensors attached to the lower neck, chest (sternum), hip (left), thigh (left), ankle (left), and wrist (left) positions. Two examples: (a) example of a well-performed jump with the legs stretched out and arms close to the body, resulting in clear and precise characteristic features for take-off and landing; (b) example of an invalid jump with the knees tucked in and excessive arm swing, resulting in ambiguous and vague characteristics. Additional marks: take-off point (arrow up) and landing (arrow down). Time series were aligned manually.

5.2 Wearable Motion Sensors

The ten wearable devices simultaneously transmitted the sensor readings to a computer. In most attachment positions, a successive increase of the vertical acceleration is observable until the take-off takes place. At the moment of the jump, the acceleration shows a sudden change as the subjects leap into the air. About halfway through the jump, the acceleration reaches the zero line of 0 g as the body is in free-falling motion. The moment the feet touch the ground for landing, the acceleration again increases abruptly, typically exceeding the take-off peak due to a much greater force being exerted on the body.
Fig. 7 showcases recordings from different sensor positions for (7a) a well-performed and (7b) an invalid CMJ. Based on these observations and the manual analysis of the time series, the characteristic of crossing the baseline of about 1 g was identified as a simple yet suitable feature to automatically determine take-off and landing in the time series, and thus to calculate FT and to estimate the vertical jump height.
For the lower neck, sternum, and hip positions, only the signal in the y-direction (upward) has been considered. After an initial increase of the acceleration during the squatting motion, the acceleration drops below 1 g. This point marks the take-off of the subject jumping upward. The landing is reached when the signal again exceeds the baseline of about 1 g. At the ankle, a small dip within the initial rise of the signal marks the starting point of the jump, as most participants raise their heels slightly before take-off. During the landing, two spikes can be observed in the signal, caused by participants landing on the ball of their foot. The landing is assumed to happen between the two spikes. For the thigh, the same landing feature as for the ankle was used, while the take-off was determined with the feature described for the lower neck, sternum, and hip positions.
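To make the baseline-crossing feature for the lower neck, sternum, and hip signals concrete, the following Python sketch gives one possible implementation; the peak-search heuristic, the threshold, and the absence of filtering are our assumptions, not the authors’ implementation:

```python
import numpy as np

FS = 100          # sampling rate of the wearable motion sensors in Hz
BASELINE = 1.0    # resting acceleration along the y-axis in g

def flight_time_from_acceleration(ay, fs=FS, baseline=BASELINE):
    """Flight time in seconds from the vertical acceleration ay (in g).

    Take-off is the first sample after the push-off peak that drops below ~1 g,
    landing the next sample that exceeds ~1 g again. The landing impact is
    assumed to be the global maximum of the signal.
    """
    ay = np.asarray(ay, dtype=float)
    landing_spike = int(np.argmax(ay))                               # landing impact, usually the largest peak
    push_off = int(np.argmax(ay[:landing_spike]))                    # push-off peak before the landing spike
    takeoff = push_off + int(np.argmax(ay[push_off:] < baseline))    # first sample below 1 g after push-off
    landing = takeoff + int(np.argmax(ay[takeoff:] > baseline))      # first sample back above 1 g
    return (landing - takeoff) / fs

# The resulting flight time can then be converted to a jump height via equation (2).
```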
Because the wrists can move independently of the COM’s jumping trajectory, it was not always possible to reliably and accurately estimate the jump height with any simple feature similar to the ones described. Fig. 7b shows an example in which no clear features can be identified to determine the FT accurately.

6 Dataset Description

The collected data will be made available for further research as a comprehensive dataset. The file subjects.csv provides an overview of the individual subjects’ demographic information: gender of female or male, age in years, height in cm, weight in kg, if they were conveyed by the lecture, if they were a student and, if so, of which degree. The file groundtruth.csv comprises the manually determined and the corrected ground truth information associated with the individual subject IDs: jump number, jump height, and whether the jump is considered an overall well-executed jump. The two folders depthcamera and wearables contain 44 sub-folders associated with subject IDs, which in turn contain the five files j1.csv to j5.csv, each containing the recordings of one CMJ. The files in the depthcamera folder are generated by the depth camera and provide the 3d-coordinates for all 32 recorded joint positions, along with the accompanied timestamp. The files in the wearables folder incorporate the time series of all the 3-axis acceleration data (x, y, and z) from the 10 wearable motion sensors, along with a timestamp for each sample.
The dataset [29] is available through the university’s research database: https://doi.org/10.48436/c0584-yqb91
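To illustrate the layout described above, a minimal loading sketch in Python might look as follows; the local folder name, the sub-folder naming, and the column names inside the files are assumptions rather than details taken from the paper:

```python
from pathlib import Path
import pandas as pd

root = Path("jumpmetric-dataset")   # assumed name of the downloaded dataset folder

subjects = pd.read_csv(root / "subjects.csv")        # demographic information per subject
groundtruth = pd.read_csv(root / "groundtruth.csv")  # corrected reference jump heights

# Recordings of the first CMJ of one subject; the sub-folder name "01" is an assumption.
kinect_jump = pd.read_csv(root / "depthcamera" / "01" / "j1.csv")
imu_jump = pd.read_csv(root / "wearables" / "01" / "j1.csv")

print(subjects.head())
print(kinect_jump.columns, imu_jump.columns)
```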

7 Results and Discussion

Both modalities proved themselves useful for estimating the vertical jump height. Table 1 provides an overview of the achieved error, correlation, and significance in relation to the manually determined ground truth. The results from the wearable motion sensors turned out to be more robust with a larger number of useful body positions. In contrast, from the depth camera, only a limited number of skeletal joints can be used to reliably determine the displacement. For both modalities, it is important that the fiducial positions are close to the COM, or at least to the core of the body, and that they do not move too independently, as is the case with the limbs and especially the wrists.

7.1 Depth Camera

Using the pelvis joint as a fiducial position to determine the jump height showed both the best accuracy and precision with a mean error of \(\overline{\varepsilon _{d}}\) = -15.8 ± 23.3 mm. The thoracic spine at the approximate height of the sternum and the neck resulted in comparable mean errors of \(\overline{\varepsilon _{d}}\) = -24.2 ± 35.1 mm and \(\overline{\varepsilon _{d}}\) = -24.2 ± 49.1 mm, respectively, with slightly better precision for the thoracic spine. For body parts that undergo more independent movement during the jump, in particular the extremities, the results are considerably worse. With a mean error of \(\overline{\varepsilon _{d}}\) = -30.8 ± 160.2 mm, the ankles show acceptable accuracy but poor precision. For the wrists, with a mean error of \(\overline{\varepsilon _{d}}\) = -329.4 ± 341.5 mm, neither accuracy nor precision is acceptable. This is also reflected by the correlation, with r2 rapidly decreasing from 0.945 (r = 0.972, p < 0.000001) for the pelvis to 0.080 (r = 0.284, p < 0.000001) for the wrists. The Bland-Altman plots are provided in Fig. 8.
For the depth camera, it is important to choose joints whose trajectory resembles the COM’s displacement. As visible in Fig. 6a, although offset by some distance individual to the subject’s body, the trajectories of pelvis, thoracic spine, and neck are usually quite similar. If the jump is performed well, the wrists’ trajectory can also resemble that of the pelvis. Since the ankles do not lower during the squat, there is no downslope visible, but the difference between baseline and peak can still represent the jump height well if the legs are stretched out properly. Unfortunately, several participants tucked in their feet mid-jump and thereby artificially increased their estimated jump height, as showcased in Fig. 6b, which in turn increases the error observed at this fiducial position.

7.2 Wearable Motion Sensors

The sensor positioned on the lower neck showed the best precision with a mean error of \(\overline{\varepsilon _{w}}\) = 18.8 ± 29.0 mm. Unexpectedly, the best accuracy was obtained from the ankles and thighs with mean errors of \(\overline{\varepsilon _{w}}\) = -4.8 ± 35.2 mm and \(\overline{\varepsilon _{w}}\) = 5.8 ± 42.5 mm, respectively. The sternum (chest) position shows similar accuracy and precision with a mean error of \(\overline{\varepsilon _{w}}\) = -6.1 ± 41.1 mm. Although close to the COM, the hips showed a comparatively poor mean error of \(\overline{\varepsilon _{w}}\) = 19.6 ± 30.3 mm. As previously noticed for the depth camera, the wrists again show the worst performance with an unacceptable mean error of \(\overline{\varepsilon _{w}}\) = -431.1 ± 2685.9 mm that is caused by unrealistic jump height estimates from erroneously determined flight times. The good agreement with ground truth, except for the wrists, is also reflected by the correlation with r2 ranging from 0.914 (r = 0.956, p < 0.000001) for the neck to 0.826 (r = 0.909, p < 0.000001) for the thighs. The wrists do not show any correlation with ground truth (r2 = 0.004, r = −0.064, p = 0.220939). Fig. 9 shows the Bland-Altman plots.
The results align with Conceição et al. [8], who observed a mean error of 53 mm for accelerometer sensors attached at hip level. In general, sensors placed at positions with less soft tissue [33] tend to show better accuracy because they move less independently due to inertia. In particular, the sensor on the lower neck, which was taped directly onto the skin of the subjects, had virtually no room for independent movement and hence showed the best performance. Because the quadriceps is a large muscle that is responsible for hip flexion and knee extension [25], which are both movements essential to jumping [28], the sensors placed on the thigh naturally have to undergo a lot of movement. This is apparent not only in the lowest precision and r2 (apart from the wrists) but also in the time series of the acceleration (Fig. 7b). A special case are the sensors placed on the participants’ limbs (ankles and wrists), as they can move relatively independently of the jump motion and the trajectory of the COM. Movements performed during the flight time, however, can be ignored as the FT method solely relies on the take-off and landing time. This means that the signals obtained from the ankles can still yield good results because, regardless of the individual subjects’ movements, they always have to start and finish their jumps with their feet on the ground. However, this is not true for the wrists as they do not necessarily need to be in a certain position at the start or end of a jump. Because of this initial uncertainty, it was not possible to yield any reliable jump height estimate from the wrist-worn sensors.

7.3 Limitations

In general, obtaining ground truth measurements of vertical jumps is not easy. While the mechanical Vertec offers only a limited resolution of typically 12.7 mm, other solutions such as contact mats and force plates allow for finer measurements but come with other limitations and disadvantages. Therefore, we decided to record the participants’ countermovement jumps with a video camera and manually read the ground truth or reference information from a measuring tape in the background of the captured video footage. A semi-transparent overlay of a frame with the uncovered measuring tape allowed for an accurate reading. We also minimized the perspective displacement by increasing the distance of the camera for a narrower view and then corrected the remaining error by applying Thales’ theorem on proportionality. Nevertheless, the participants’ hips were not all of the same width and not necessarily aligned with the outer contour of the jump reference frame. However, the positional errors within the reference frame are marginal and, for a subject with a typical jump height of 400 mm, a hip displacement of 100 mm within the reference frame would result in an error of less than 6 mm. This error inevitably affects the accuracy of all fiducial positions but can be disregarded in consideration of the other sources of inaccuracy, such as the modalities themselves. The manual reading of the vertical displacement from the video frames is also prone to inaccuracies due to the limited resolution of the measuring tape, the background being slightly blurred, and the subjective interpretation of the tracking dot’s position. Moreover, although requested in the study announcement, the clothing of several participants was not as tight-fitting as desired, and the tracking dot might have moved slightly due to the movement of the textile on the skin.
Although we refer to the reference measurements as ‘ground truth’ for a better understanding of the study design, this term is not very precise. The previously mentioned inaccuracies of the reference measurements indicate that the reliability of the values is not absolute. Therefore, we decided to use Bland-Altman plots which are common in this research field [23] and intended to analyze the agreement between two modalities with some inherent error.
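Since Bland-Altman analysis is central to the reported agreement, the following Python sketch shows how such a plot is typically constructed from paired estimates and reference values; it is a generic construction under our own assumptions, not the authors’ plotting code:

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman_plot(estimate_mm, reference_mm, label="fiducial position"):
    """Agreement between jump height estimates and the reference measurements."""
    estimate = np.asarray(estimate_mm, dtype=float)
    reference = np.asarray(reference_mm, dtype=float)
    mean = (estimate + reference) / 2        # average of the two measurements per jump
    diff = estimate - reference              # estimation error per jump
    bias = diff.mean()                       # systematic offset
    loa = 1.96 * diff.std(ddof=1)            # 95 % limits of agreement

    plt.scatter(mean, diff, s=10)
    plt.axhline(bias, color="black")
    plt.axhline(bias + loa, color="black", linestyle="--")
    plt.axhline(bias - loa, color="black", linestyle="--")
    plt.xlabel("mean of estimate and reference (mm)")
    plt.ylabel("difference (mm)")
    plt.title(f"Bland-Altman: {label}")
    plt.show()
```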
Although the wearable motion sensors also provide an onboard gyroscope, only the accelerometer data was recorded and used in this study. In future experiments, the gyroscope signals might be helpful to correct the alignment of the sensors [14, 23] and to further improve the accuracy. Moreover, gyroscopes and the advanced analysis of the subjects’ motion sequences might even allow for obtaining accurate estimates from sensor positions that failed in this study.

8 Conclusion

Objectively measuring vertical jump height is important for athletes to monitor and improve their overall agility and performance, but it can also serve as an indicator of lower body strength and, thus, as a tangible goal in rehabilitation. The two sensor modalities assessed, a commercial depth camera and wearable motion sensors with onboard accelerometers, proved themselves as suitable and easy-to-use tools for cost-efficient jump height estimation. For the depth camera, the most accurate estimates were obtained from the pelvis (\(\overline{\varepsilon _{d}}\) = 15.8  ±  23.3 mm, r = 0.97, r2 = 0.95, p < 0.000001) and the thoracic spine (\(\overline{\varepsilon _{d}}\) = 24.2  ±  35.1 mm, r = 0.93, r2 = 0.87, p < 0.000001) joints. For the wearable motion sensors, the best estimates were obtained from the precise neck position (\(\overline{\varepsilon _{w}}\) = 18.8  ±  29.0 mm, r = 0.96, r2 = 0.91, p < 0.000001) and the more accurate ankles (\(\overline{\varepsilon _{w}}\) = -4.8  ±  35.2 mm, r = 0.94, r2 = 0.87, p < 0.000001). However, the agreement of the estimates with ground truth is generally better for the wearables (r2 between 0.83 and 0.91) while for the depth camera only the pelvis joint reaches a high agreement (r2 = 0.95) and the others are comparatively low (r2 between 0.27 and 0.87).
Placing sensors on body parts that can move independently of the center of mass (COM) or the body core turned out to be particularly challenging, and it was not possible to obtain reliable vertical jump height estimates from them. Additionally, sensor positions on soft tissue, such as the thigh, are not well suited, as they capture not only the acceleration of the jump itself but also the movement of the tissue.
We observed better results with body positions that reflect the movement of the jump as closely as possible. Especially the take-off and landing are important features that need to be accurately detectable when using the flight time as a measure to estimate the vertical jump height. We found the sensors placed on the lower neck and the hip to fulfill these requirements the best. For future work, using a gyroscope in addition to an accelerometer could lead to more accurate results as this allows for the correction of the sensors’ inclination.
For certain joint positions, the depth camera yielded similar results to the best-performing accelerometer positions. However, due to the simpler calculation, this method is more error-prone, as a crooked standing position can more easily distort the calculated vertical jump height. Additionally, contrary to working with acceleration data, body parts that are tucked in independently of the body core, such as the ankles, cannot be used to properly estimate the jump height. Perhaps the flight time method could also be applied to the depth camera data and, thus, provide more accurate estimates that are independent of the joint displacement.
Table 1:
modality | position | n | error \(\overline{\varepsilon }\) in mm | σ in mm | r | r² | p
depth camera | neck | 190 | 24.236 | 49.099 | 0.866 | 0.750 | < 0.000001
depth camera | thoracic spine | 190 | 24.181 | 35.118 | 0.933 | 0.870 | < 0.000001
depth camera | pelvis | 190 | 15.845 | 23.279 | 0.972 | 0.945 | < 0.000001
depth camera | ankles | 380 | -30.752 | 160.245 | 0.523 | 0.273 | < 0.000001
depth camera | ankle (left) | 190 | -31.686 | 162.025 | 0.517 | 0.268 | < 0.000001
depth camera | ankle (right) | 190 | -29.819 | 158.440 | 0.528 | 0.279 | < 0.000001
depth camera | wrists | 380 | -329.403 | 341.547 | 0.284 | 0.080 | < 0.000001
depth camera | wrist (left) | 190 | -327.050 | 335.406 | 0.284 | 0.081 | 0.000072
depth camera | wrist (right) | 190 | -331.757 | 347.563 | 0.283 | 0.080 | 0.000074
wearable motion sensors | lower neck | 182 | 18.832 | 28.960 | 0.956 | 0.914 | < 0.000001
wearable motion sensors | sternum | 182 | -6.055 | 41.123 | 0.913 | 0.833 | < 0.000001
wearable motion sensors | hips | 354 | 19.592 | 30.311 | 0.946 | 0.894 | < 0.000001
wearable motion sensors | hip (left) | 177 | 16.944 | 28.425 | 0.952 | 0.907 | < 0.000001
wearable motion sensors | hip (right) | 177 | 22.240 | 31.866 | 0.940 | 0.884 | < 0.000001
wearable motion sensors | thighs | 354 | 5.801 | 42.538 | 0.909 | 0.826 | < 0.000001
wearable motion sensors | thigh (left) | 177 | 6.833 | 40.798 | 0.912 | 0.833 | < 0.000001
wearable motion sensors | thigh (right) | 177 | 4.769 | 44.187 | 0.905 | 0.820 | < 0.000001
wearable motion sensors | ankles | 364 | -4.772 | 35.192 | 0.935 | 0.874 | < 0.000001
wearable motion sensors | ankle (left) | 182 | -4.471 | 32.928 | 0.942 | 0.887 | < 0.000001
wearable motion sensors | ankle (right) | 182 | -5.073 | 37.316 | 0.929 | 0.864 | < 0.000001
wearable motion sensors | wrists | 364 | -431.086 | 2685.866 | -0.064 | 0.004 | 0.220939
wearable motion sensors | wrist (left) | 182 | -273.169 | 2118.516 | -0.104 | 0.011 | 0.164091
wearable motion sensors | wrist (right) | 182 | -589.003 | 3144.801 | -0.040 | 0.002 | 0.589787
Table 1: Overview of the results for the two modalities: the depth camera, with 7 joints as fiducial positions, and the wearable motion sensors, with 10 attachment locations as fiducial positions. Reported are the number of valid measurements n out of 220, the mean error \(\overline{\varepsilon }\), the standard deviation of the error σ, the Pearson correlation coefficient r, the squared coefficient r², and the p-value p. The best results of both modalities are highlighted in gray. Correlations below the threshold p < 0.000001 are considered highly significant.
Figure 8:
Figure 8: Bland-Altman plots for the depth camera in comparison to the manually determined ground truth. Assessment of 5 fiducial positions using 7 out of the available 32 joints from the subjects’ body skeletal model: (a) neck, (b) chest, (c) pelvis, (d) ankles (left and right), and (e) wrists (left and right).
Figure 9:
Figure 9: Bland-Altman plots for the wearable motion sensors in comparison to the manually determined ground truth. Assessment of 6 fiducial positions using 10 wearable motion sensors with onboard 3-axis accelerometers: (a) neck, (b) chest, (c) hips (left and right), (d) thighs (left and right), (e) ankles (left and right), and (f) wrists (left and right). The plot (f) is zoomed to clip outliers at the bottom right and hence provide better insights into the main cluster of measurements.

References

[1]
Kent Adams, John P. O’Shea, Katie L. O’Shea, and Mike Climstein. 1992. The Effect of Six Weeks of Squat, Plyometric and Squat-Plyometric Training on Power Production. The Journal of Strength & Conditioning Research 6, 1 (Feb. 1992), 36. https://journals.lww.com/nsca-jscr/abstract/1992/02000/The_Effect_of_Six_Weeks_of_Squat,_Plyometric_and.6.aspx
[2]
Umran Alrazzak and Bassem Alhalabi. 2019. A Survey on Human Activity Recognition Using Accelerometer Sensor. In 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR). 152–159.
[3]
Dianne Althouse. 2022. Effects of IMU Sensor Location and Number on the Validity of Vertical Acceleration Time-Series Data in Countermovement Jumping. (2022).
[4]
Kamiar Aminian and Bijan Najafi. 2004. Capturing human motion using body-fixed sensors: outdoor measurement and clinical applications. Computer Animation and Virtual Worlds 15, 2 (2004), 79–94.
[5]
Nicola Casartelli, Roland Müller, and Nicola A. Maffiuletti. 2010. Validity and Reliability of the Myotest Accelerometric System for the Assessment of Vertical Jump Height. The Journal of Strength & Conditioning Research 24, 11 (Nov. 2010), 3186.
[6]
Carlo Castagna, Marco Ganzetti, Massimiliano Ditroilo, Marco Giovannelli, Alessandro Rocchetti, and Vincenzo Manzi. 2013. Concurrent Validity of Vertical Jump Performance Assessment Systems. The Journal of Strength & Conditioning Research 27, 3 (March 2013), 761.
[7]
Tyler J. Collings, Daniel Devaprakash, Claudio Pizzolato, David G. Lloyd, Rod S. Barrett, Gavin K. Lenton, Lucas T. Thomeer, and Matthew N. Bourne. 2024. Inclusion of a skeletal model partly improves the reliability of lower limb joint angles derived from a markerless depth camera. Journal of Biomechanics 170 (June 2024), 112160.
[8]
Filipe Conceição, Martin Lewis, Hernâni Lopes, and Elza M. M. Fonseca. 2022. An Evaluation of the Accuracy and Precision of Jump Height Measurements Using Different Technologies and Analytical Methods. Applied Sciences 12, 1 (Jan. 2022), 511. Number: 1 Publisher: Multidisciplinary Digital Publishing Institute.
[9]
Paul Davidovits. 2019. Static Forces. Physics in Biology and Medicine (2019), 1–20.
[10]
Jonathan Ache Dias, Juliano Dal Pupo, Diogo C. Reis, Lucas Borges, Saray G. Santos, Antônio RP Moro, and Noé G. Jr Borges. 2011. Validity of Two Methods for Estimation of Vertical Jump Height. The Journal of Strength & Conditioning Research 25, 7 (July 2011), 2034.
[11]
Moataz Eltoukhy, Adam Kelly, Chang-Young Kim, Hyung-Pil Jun, Richard Campbell, and Christopher Kuenze. 2016. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics. Sports Biomechanics 15, 1 (Jan. 2016), 89–102. https://doi.org/10.1080/14763141.2015.1123766
[12]
F. Gemperle, C. Kasabach, J. Stivoric, M. Bauer, and R. Martin. 1998. Design for wearability. In Digest of Papers. Second International Symposium on Wearable Computers (Cat. No.98EX215). 116–122.
[13]
Andrew J. Hanson. 2006. Visualizing quaternions. Morgan Kaufmann ; Elsevier Science [distributor], San Francisco, CA : Amsterdam ; Boston.
[14]
Jose Heredia-Jimenez and Eva Orantes-Gonzalez. 2020. Comparison of Three Different Measurement Systems to Assess the Vertical Jump Height. Revista Brasileira de Medicina do Esporte 26 (April 2020), 143–146. Publisher: Sociedade Brasileira de Medicina do Exercício e do Esporte.
[15]
Shariman Ismadi Ismail, Effirah Osman, Norasrudin Sulaiman, and Rahmat Adnan. 2016. Comparison between Marker-less Kinect-based and Conventional 2D Motion Analysis System on Vertical Jump Kinematic Properties Measured from Sagittal View. In Proceedings of the 10th International Symposium on Computer Science in Sports (ISCSS), Paul Chung, Andrea Soltoggio, Christian W. Dawson, Qinggang Meng, and Matthew Pain (Eds.). Springer International Publishing, Cham, 11–17.
[16]
Eugenio Ivorra, Mario Ortega Pérez, and Mariano Luis Alcañiz Raya. 2021. Azure Kinect body tracking under review for the specific case of upper limb exercises. MM Science Journal (Online) 2021 (2021), 4333–4341. ISSN: 1805-0476.
[17]
Sungbae Jo, Sunmi Song, Junesun Kim, and Changho Song. 2022. Agreement between Azure Kinect and Marker-Based Motion Analysis during Functional Movements: A Feasibility Study. Sensors 22, 24 (Jan. 2022), 9819. Number: 24 Publisher: Multidisciplinary Digital Publishing Institute.
[18]
Jochen Kempfle and Kristof Van Laerhoven. 2018. Respiration Rate Estimation with Depth Cameras: An Evaluation of Parameters. In Proceedings of the 5th International Workshop on Sensor-Based Activity Recognition and Interaction (Berlin, Germany) (iWOAR ’18). Association for Computing Machinery, New York, NY, USA, Article 4, 10 pages.
[19]
Armin Kibele. 1998. Possibilities and Limitations in the Biomechanical Analysis of Countermovement Jumps: A Methodological Study. Journal of Applied Biomechanics 14, 1 (Feb. 1998), 105–117.
[20]
Elise Lachat, Hélène Macher, Tania Landes, and Pierre Grussenmeyer. 2015. Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling. Remote Sensing 7, 10 (Oct. 2015), 13070–13097. Number: 10 Publisher: Multidisciplinary Digital Publishing Institute.
[21]
Gavin L. Moir. 2008. Three Different Methods of Calculating Vertical Jump Height from Force Platform Data in Men and Women. Measurement in Physical Education and Exercise Science 12, 4 (Oct. 2008), 207–218.
[22]
Gregory D. Myer, Kevin R. Ford, Joseph P. Palumbo, and Timothy E. Hewett. 2005. Neuromuscular Training Improves Performance and Lower-Extremity Biomechanics in Female Athletes. The Journal of Strength & Conditioning Research 19, 1 (Feb. 2005), 51. https://journals.lww.com/nsca-jscr/abstract/2005/02000/neuromuscular_training_improves_performance_and.10.aspx
[23]
Pietro Picerno, Valentina Camomilla, and Laura Capranica. 2011. Countermovement jump performance assessment using a wearable 3D inertial measurement unit. Journal of Sports Sciences 29, 2 (Jan. 2011), 139–146. https://doi.org/10.1080/02640414.2010.523089
[24]
Tine Sattler, Damir Sekulic, Vedran Hadzic, Ognjen Uljevic, and Edvin Dervisevic. 2012. Vertical Jumping Tests in Volleyball: Reliability, Validity, and Playing-Position Specifics. The Journal of Strength & Conditioning Research 26, 6 (June 2012), 1532.
[25]
Michael Schünke, Erik Schulte, and Udo Schumacher. 2007. Prometheus: LernAtlas der Anatomie: Allgemeine Anatomie und Bewegungssystem: 182 Tabellen.... Vol. 1. Georg Thieme Verlag.
[26]
Jose Sulla-Torres, Bruno Andre Santos Pamo, and Fabrizzio Jorge Cárdenas Rodríguez. 2023. Evaluation of Physical Activity by Computer Vision Using Azure Kinect in University Students. In 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). 1–6.
[27]
Michal Tölgyessy, Martin Dekan, and Ľuboš Chovanec. 2021. Skeleton Tracking Accuracy and Precision Evaluation of Kinect V1, Kinect V2, and the Azure Kinect. Applied Sciences 11, 12 (Jan. 2021), 5756. Number: 12 Publisher: Multidisciplinary Digital Publishing Institute.
[28]
Thomas J. Withrow, Laura J. Huston, Edward M. Wojtys, and James A. Ashton-Miller. 2006. The Relationship between Quadriceps Muscle Force, Knee Flexion, and Anterior Cruciate Ligament Strain in an in Vitro Simulated Jump Landing. The American Journal of Sports Medicine 34, 2 (Feb. 2006), 269–274. Publisher: SAGE Publications Inc STM.
[29]
Florian Wolling, Christoff Kügler, and Patrick Trollmann. 2024. Dataset for the Vertical Jump Height Estimation from Depth Camera and Wearable Accelerometer Sensor Data.
[30]
Qing-Jun Xing, Yuan-Yuan Shen, Run Cao, Shou-Xin Zong, Shu-Xiang Zhao, and Yan-Fei Shen. 2022. Functional movement screen dataset collected with two Azure Kinect depth sensors. Scientific Data 9, 1 (March 2022), 104. Publisher: Nature Publishing Group.
[31]
Jiaqing Xu, Anthony Turner, Paul Comfort, John R. Harry, John J. McMahon, Shyam Chavda, and Chris Bishop. 2023. A Systematic Review of the Different Calculation Methods for Measuring Jump Height During the Countermovement and Drop Jump Tests. Sports Medicine 53, 5 (May 2023), 1055–1072.
[32]
Che-Chang Yang and Yeh-Liang Hsu. 2010. A Review of Accelerometry-Based Wearable Motion Detectors for Physical Activity Monitoring. Sensors 10, 8 (Aug. 2010), 7772–7788. Number: 8 Publisher: Molecular Diversity Preservation International.
[33]
Clint Zeagler. 2017. Where to wear it: functional, technical, and social considerations in on-body location for wearable technology 20 years of designing for wearability. In Proceedings of the 2017 ACM International Symposium on Wearable Computers(ISWC ’17). Association for Computing Machinery, New York, NY, USA, 150–157.
[34]
G. Ziv and R. Lidor. 2010. Vertical jump in female and male volleyball players: a review of observational and experimental studies. Scandinavian Journal of Medicine & Science in Sports 20, 4 (2010), 556–567.
