Search Results (157)

Search Parameters:
Keywords = MediaPipe

17 pages, 3748 KiB  
Article
Sudden Fall Detection of Human Body Using Transformer Model
by Duncan Kibet, Min Seop So, Hahyeon Kang, Yongsu Han and Jong-Ho Shin
Sensors 2024, 24(24), 8051; https://doi.org/10.3390/s24248051 - 17 Dec 2024
Viewed by 373
Abstract
In human activity recognition, accurate and timely fall detection is essential in healthcare, particularly for monitoring the elderly, where quick responses can prevent severe consequences. This study presents a new fall detection model built on a transformer architecture, which focuses on the movement speeds of key body points tracked using the MediaPipe library. By continuously monitoring these key points in video data, the model calculates real-time speed changes that signal potential falls. The transformer’s attention mechanism enables it to catch even slight shifts in movement, achieving an accuracy of 97.6% while significantly reducing false alarms compared to traditional methods. This approach has practical applications in settings like elderly care facilities and home monitoring systems, where reliable fall detection can support faster intervention. By homing in on the dynamics of movement, this model improves both accuracy and reliability, making it suitable for various real-world situations. Overall, it offers a promising solution for enhancing safety and care for vulnerable populations in diverse environments.
(This article belongs to the Section Intelligent Sensors)
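The abstract above hinges on per-landmark movement speeds computed from MediaPipe Pose tracks. The sketch below shows one plausible way to derive such speeds from a video; it is not the authors' pipeline, and the landmark selection, fallback frame rate, and use of normalized image coordinates are illustrative assumptions.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def landmark_speeds(video_path, indices=(0, 11, 12, 23, 24)):
    """Estimate per-frame speeds (normalized units/s) for selected pose landmarks."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back to 30 fps if metadata is missing
    prev, speeds = None, []
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.pose_landmarks:
                prev = None
                continue
            pts = np.array([[res.pose_landmarks.landmark[i].x,
                             res.pose_landmarks.landmark[i].y] for i in indices])
            if prev is not None:
                # Euclidean displacement per landmark, scaled by frame rate -> speed
                speeds.append(np.linalg.norm(pts - prev, axis=1) * fps)
            prev = pts
    cap.release()
    return np.array(speeds)  # shape: (frames - 1, len(indices))

# speeds = landmark_speeds("clip.mp4")  # hypothetical file path
```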
Figure 1. (a) MediaPipe skeletal framework [3]; (b) Experimental environment and key points by MediaPipe Pose.
Figure 2. Video clips for MediaPipe Pose on each posture class.
Figure 3. Transformer model structure [54].
Figure 4. (a) ROC; (b) confusion matrix.
18 pages, 2211 KiB  
Article
Accuracy Evaluation of 3D Pose Reconstruction Algorithms Through Stereo Camera Information Fusion for Physical Exercises with MediaPipe Pose
by Sebastian Dill, Arjang Ahmadi, Martin Grimmer, Dennis Haufe, Maurice Rohr, Yanhua Zhao, Maziar Sharbafi and Christoph Hoog Antink
Sensors 2024, 24(23), 7772; https://doi.org/10.3390/s24237772 - 4 Dec 2024
Viewed by 617
Abstract
In recent years, significant research has been conducted on video-based human pose estimation (HPE). While monocular two-dimensional (2D) HPE has been shown to achieve high performance, monocular three-dimensional (3D) HPE poses a more challenging problem. However, since human motion happens in a 3D space, 3D HPE offers a more accurate representation of the human, granting increased usability for complex tasks like analysis of physical exercise. We propose a method based on MediaPipe Pose, 2D HPE on stereo cameras and a fusion algorithm without prior stereo calibration to reconstruct 3D poses, combining the advantages of high accuracy in 2D HPE with the increased usability of 3D coordinates. We evaluate this method on a self-recorded database focused on physical exercise to research what accuracy can be achieved and whether this accuracy is sufficient to recognize errors in exercise performance. We find that our method achieves significantly improved performance compared to monocular 3D HPE (median RMSE of 30.1 compared to 56.3, p-value below 10^−6) and can show that the performance is sufficient for error recognition.
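The abstract summarizes performance as a median RMSE over recordings. As a quick reference, here is a minimal sketch of that kind of summary metric, assuming reconstructed and ground-truth poses are available as aligned (frames, joints, 3) arrays; the exact joint set, temporal alignment, and units used by the authors are not specified in this listing.

```python
import numpy as np

def pose_rmse(pred, gt):
    """RMSE between two pose sequences of shape (frames, joints, 3), one common definition:
    root mean square of the per-joint Euclidean errors."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1))))

def median_rmse(recordings_pred, recordings_gt):
    """Per-recording RMSEs summarized by their median, as in the abstract's comparison.
    The two arguments are hypothetical lists of aligned pose arrays."""
    return float(np.median([pose_rmse(p, g) for p, g in zip(recordings_pred, recordings_gt)]))
```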
Figure 1. A flowchart depicting the approach to evaluate the 3D pose reconstruction through stereo camera information fusion by performing a least squares optimization to fit the reconstructed 3D pose data to the GT data.
Figure 2. Schematic of the experimental setup. The test subject is wearing markers represented by white dots. The exact marker positions on the subject can be seen in Figure 3a. The subject is recorded by 11 active infrared cameras (IRCs) for the MoCap system as well as a frontal RGB camera (CF) and lateral RGB camera (CL).
Figure 3. General visualization of the 31 marker positions of the MoCap system (left) and the MediaPipe Pose output (right). (a) Not all markers are visible from the back of the person. The ones positioned at the subject’s front are color-coded in red. In this work, we only use markers A19 and A21 relating to the right knee, A18 and A20 relating to the left knee, A27 and A29 relating to the right ankle, A26 and A28 relating to the left ankle, A13 and A17 relating to the right hip, A12 and A16 relating to the left hip, A5 for the right shoulder, A4 for the left shoulder, A7 and A9 relating to the right elbow, A6 and A8 relating to the left elbow, A15 for the right wrist and A14 for the left wrist. Explanations for the marker labels can be found in Table A1. (b) MediaPipe’s output consists of x-y-z coordinates of 33 different landmarks. The coordinate system’s origin is in the upper left corner of the image, with x increasing from left to right and y increasing from top to bottom. The z axis is pointed perpendicularly away from the image plane. In this work, we only consider joints B11 to B16 for the upper body and B23 to B28 for the lower body. Explanations for the joint labels can be found in Table A2.
Figure 4. RMSE over all 81 recordings, for all reconstruction methods. The suffix _f denotes whether the signals were filtered by a 4th-order Butterworth low-pass with cut-off frequency of 2 Hz before fusion. All other filter methods performed worse and were therefore excluded from the graph. For visual clarity, three singular outliers above 225 were excluded from the graph.
Figure 5. RMSE over all 81 recordings, for the unfiltered epipolar reconstruction, over the nine included subjects. For visual clarity, one singular outlier above 200 was excluded from the graph.
Figure 6. RMSE for all subjects, for the unfiltered epipolar reconstruction, over experiments.
Figure 7. Visualization of the reconstruction for subject 7, experiment 1. On the left, frames from the frontal and lateral video are shown, with the MediaPipe 2D output drawn on top. On the right, the 2D projections onto the x-y-plane and y-z-plane of the reconstructed 3D pose (top) and the GT 3D pose (bottom) are shown.
Figure 8. Angle RMSE of the left and right knee over all 81 recordings, for the different unfiltered reconstruction methods.
Figure 9. Example angle of the left knee (left) and hip (right) for the first three repetitions of the first sets of each experiment of subject 7. The red and green dotted lines show the mean value of the first three peaks for the unfiltered epipolar reconstruction and GT, respectively. (a) Left knee angle for E1 (top), E2 (middle) and E3 (bottom). (b) Left hip angle for E1 (top), E2 (middle) and E3 (bottom).
14 pages, 1241 KiB  
Article
Quantifying Arm and Leg Movements in 3-Month-Old Infants Using Pose Estimation: Proof of Concept
by Marcelo R. Rosales, Janet Simsic, Tondi Kneeland and Jill Heathcock
Sensors 2024, 24(23), 7586; https://doi.org/10.3390/s24237586 - 27 Nov 2024
Viewed by 549
Abstract
Background: Pose estimation (PE) has the promise to measure pediatric movement from a video recording. The purpose of this study was to quantify the accuracy of a PE model to detect arm and leg movements in 3-month-old infants with and without (TD, for typical development) complex congenital heart disease (CCHD). Methods: Data from 12 3-month-old infants (N = 6 TD and N = 6 CCHD) were used to assess MediaPipe’s full-body model. Positive predictive value (PPV) and sensitivity assessed the model’s accuracy with behavioral coding. Results: Overall, 499 leg and arm movements were identified, and the model had a PPV of 85% and a sensitivity of 94%. The model’s PPV in TD was 84% and the sensitivity was 93%. The model’s PPV in CCHD was 87% and the sensitivity was 98%. Movements per hour ranged from 399 to 4211 for legs and 236 to 3767 for arms for all participants, similar ranges to the literature on wearables. No group differences were detected. Conclusions: There is a strong promise for PE and models to describe infant movements with accessible and affordable resources—like a cell phone and curated video repositories. These models can be used to further improve developmental assessments of limb function, movement, and changes over time.
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
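For readers unfamiliar with the reported metrics, positive predictive value and sensitivity are simple functions of true-positive, false-positive, and false-negative counts against the behavioral coding. The snippet below is a generic illustration; the example counts are made up to land near the abstract's 85%/94% figures and are not the study's data.

```python
def ppv_and_sensitivity(tp, fp, fn):
    """Positive predictive value (precision) and sensitivity (recall) from counts."""
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return ppv, sensitivity

# Made-up counts: 470 detections confirmed by coding, 83 false detections,
# 29 coded movements missed -> PPV ~0.85, sensitivity ~0.94.
print(ppv_and_sensitivity(470, 83, 29))
```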
Figure 1. Sample image of all 33 anatomical landmarks on a child. The added number indicates the following virtual markers used for the study: (1) right shoulder, (2) left shoulder, (3) right wrist, (4) left wrist, (5) right ankle, and (6) left ankle.
Figure 2. Example of movement count for a single limb. In this case, we display the movement of the left ankle. Triangles represent each occurrence of a movement.
Figure 3. Box plots for leg (A) and arm (B) movement rate between infants with (CCHD) and without (TD) complex congenital heart disease. Dots on each plot represent average individual movement rate data. Edges of the box represent the 25th and 75th percentile, the middle line in the box is the median, and whiskers represent maximum and minimum values.
Figure 4. Box plots of sample entropy for leg and arm movements for infants with (CCHD) and without (TD) complex congenital heart disease. The side and limb for each plot is as follows: (A) left leg, (B) right leg, (C) left arm, (D) right arm. Dots on each plot represent individual data. Edges of the box represent the 25th and 75th percentile, the middle line in the box is the median, and whiskers represent maximum and minimum values. A plus represents a potential outlier according to the MATLAB function.
9 pages, 877 KiB  
Proceeding Paper
Gait-Driven Pose Tracking and Movement Captioning Using OpenCV and MediaPipe Machine Learning Framework
by Malathi Janapati, Leela Priya Allamsetty, Tarun Teja Potluri and Kavya Vijay Mogili
Eng. Proc. 2024, 82(1), 4; https://doi.org/10.3390/ecsa-11-20470 - 26 Nov 2024
Viewed by 45
Abstract
Pose tracking and captioning are extensively employed for motion capturing and activity description in daylight vision scenarios. Activity detection through camera systems presents a complex challenge, necessitating the refinement of numerous algorithms to ensure accurate functionality. Although IP cameras offer notable capabilities, they lack integrated models for effective human activity detection. With this motivation, this paper presents a gait-driven OpenCV and MediaPipe machine learning framework for human pose and movement captioning. This is implemented by incorporating the Generative 3D Human Shape (GHUM 3D) model, which can classify human bones, while Python classifies the human movements as either usual or unusual. This model is fed into a website equipped with camera input, activity detection, and gait posture analysis for pose tracking and movement captioning. The proposed approach comprises four modules, two for pose tracking and the remaining two for generating natural language descriptions of movements. The implementation is carried out on two publicly available datasets, CASIA-A and CASIA-B. The proposed methodology emphasizes the diagnostic ability of video analysis by dividing video data available in the datasets into 15-frame segments for detailed examination, where each segment represents a time frame with detailed scrutiny of human movement. Features such as spatial-temporal descriptors, motion characteristics, or key point coordinates are derived from each frame to detect key pose landmarks, focusing on the left shoulder, elbow, and wrist. By calculating the angle between these landmarks, the proposed method classifies the activities as “Walking” (angle between −45 and 45 degrees), “Clapping” (angles below −120 or above 120 degrees), and “Running” (angles below −150 or above 150 degrees). Angles outside these ranges are categorized as “Abnormal”, indicating abnormal activities. The experimental results show that the proposed method is robust for individual activity recognition.
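The abstract quotes explicit angle thresholds for labeling activities from the left shoulder, elbow, and wrist landmarks. The sketch below is one plausible reading, assuming the angle is the signed elbow angle between the shoulder-to-elbow and wrist-to-elbow directions; because the quoted "Running" range is a subset of the "Clapping" range, the sketch checks the more extreme range first.

```python
import math

def joint_angle(shoulder, elbow, wrist):
    """Signed angle (degrees) at the elbow between the shoulder->elbow and wrist->elbow vectors."""
    a = math.atan2(shoulder[1] - elbow[1], shoulder[0] - elbow[0])
    b = math.atan2(wrist[1] - elbow[1], wrist[0] - elbow[0])
    deg = math.degrees(a - b)
    return deg - 360 if deg > 180 else deg + 360 if deg < -180 else deg

def classify(angle_deg):
    # Thresholds quoted from the abstract; "Running" is checked before "Clapping"
    # because its range is the more extreme subset.
    if -45 <= angle_deg <= 45:
        return "Walking"
    if angle_deg <= -150 or angle_deg >= 150:
        return "Running"
    if angle_deg <= -120 or angle_deg >= 120:
        return "Clapping"
    return "Abnormal"

# A nearly straight arm gives an angle near ±180 degrees, "Running" under these thresholds:
print(classify(joint_angle((300, 200), (320, 300), (340, 400))))
```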
Figure 1. Workflow of the proposed methodology.
Figure 2. Human activity recognition using gait.
Figure 3. Process of feature extraction.
Figure 4. Crawling recognized.
Figure 5. Walking recognized.
Figure 6. Real-time activity analysis.
Figure 7. Graph of clapping.
Figure 8. Graph of running.
Figure 9. Graph of crawling.
Figure 10. Graph of walking.
20 pages, 8072 KiB  
Article
Using a Webcam to Assess Upper Extremity Proprioception: Experimental Validation and Application to Persons Post Stroke
by Guillem Cornella-Barba, Andria J. Farrens, Christopher A. Johnson, Luis Garcia-Fernandez, Vicky Chan and David J. Reinkensmeyer
Sensors 2024, 24(23), 7434; https://doi.org/10.3390/s24237434 - 21 Nov 2024
Viewed by 679
Abstract
Many medical conditions impair proprioception but there are few easy-to-deploy technologies for assessing proprioceptive deficits. Here, we developed a method—called “OpenPoint”—to quantify upper extremity (UE) proprioception using only a webcam as the sensor. OpenPoint automates a classic neurological test: the ability of a person to use one hand to point to a finger on their other hand with vision obscured. Proprioception ability is quantified with pointing error in the frontal plane measured by a deep-learning-based, computer vision library (MediaPipe). In a first experiment with 40 unimpaired adults, pointing error significantly increased when we replaced the target hand with a fake hand, verifying that this task depends on the availability of proprioceptive information from the target hand, and that we can reliably detect this dependence with computer vision. In a second experiment, we quantified UE proprioceptive ability in 16 post-stroke participants. Individuals post stroke exhibited increased pointing error (p < 0.001) that was correlated with finger proprioceptive error measured with an independent, robotic assessment (r = 0.62, p = 0.02). These results validate a novel method to assess UE proprioception ability using affordable computer technology, which provides a potential means to democratize quantitative proprioception testing in clinical and telemedicine environments.
(This article belongs to the Special Issue Advanced Sensors in Biomechanics and Rehabilitation)
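OpenPoint's pointing error is described (in the abstract and in the Figure 2 caption below) as a frontal-plane fingertip distance normalized by hand size and scaled to centimeters from a calibration photo. A minimal sketch of that arithmetic, with fingertip pixel coordinates assumed to come from a MediaPipe hand/pose model:

```python
import numpy as np

def pointing_error_cm(tip_a_px, tip_b_px, hand_size_px, hand_size_cm):
    """Frontal-plane distance between two fingertips, normalized by hand size.

    tip_a_px, tip_b_px: (x, y) pixel coordinates of the two fingertips.
    hand_size_px / hand_size_cm: the same hand dimension measured in pixels and in
    centimeters (from a calibration photo), so the ratio converts pixels to cm.
    """
    err_px = float(np.linalg.norm(np.asarray(tip_a_px) - np.asarray(tip_b_px)))
    return err_px / hand_size_px * hand_size_cm

# e.g. fingertips 40 px apart, hand spanning 200 px that measures 18 cm on graph paper:
print(pointing_error_cm((400, 300), (440, 300), hand_size_px=200, hand_size_cm=18))  # -> 3.6
```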
Figure 1. The visual display of the OpenPoint proprioception assessment, as implemented with a webcam. (A): The start position for the pointing task. Note that the image displayed to the participant is mirrored, so the user’s left hand appears on the left side of the screen. The assessment requires users to touch the fingertip of one hand with the fingertip of the other hand. The hand on the torso is the “target hand”, which is normally obscured using a graphically overlaid polygon, as shown on the left. (B): We removed the polygon to illustrate the accuracy of the finger tracking algorithm. The user is instructed to raise their pointing finger to a start target indicated by the green circle. The software then shows a target on the tip of one of the fingers of the cartoon hand (red circle). Following a three second countdown, the user is given an instruction to point and tries to touch the fingertip on their target hand, which is hidden by the polygon. Participants were instructed to refrain from directly looking at their own target hand. The tracking algorithm robustly tracks both fingertips and determines when the pointing finger stops moving, measuring the pointing error to assess proprioceptive ability.
Figure 2. Pointing error calculation. (A) Example output from MediaPipe. The orange lines connect the landmarks returned by MediaPipe when the fingers are fully extended. We defined pointing error as the distance between the fingertips in the frontal plane (blue line). (B) Results from a simple experiment where the participants kept the distance between their fingers constant but moved their hands away from the camera by sliding backward on a rolling chair. The pixel-based pointing error (blue) decreased as the individual rolled back from the camera, as did apparent hand size, measured in pixels (orange line). The pixel-based pointing error (blue) has been multiplied by six to better show the decrease in distance. Dividing pixel-based pointing error by hand size_px produced a constant pointing error (green) that can be scaled to centimeters based on the calibration photos in (C). (C) An example calibration photo of a participant’s hand lying on top of graph paper in order to calculate hand size_cm.
Figure 3. Graphical summary of the different tasks tested in Experiment 1.
Figure 4. Examples of persons post stroke performing the pointing task. In Experiment 2, participants who had had a stroke sometimes could not extend the fingers of their target (hemiparetic) hand and were instructed to point to different landmarks on their hand depending on their capability. (A) Participant pointing to the fingertips while holding a foam pillow against the chest. (B) Participant pointing to the PIP joint while using an arm sling to hold his arm in a fixed position during the duration of the experiment. (C) Participant pointing to the MCP joint and using an arm sling.
Figure 5. (A) Experimental setup for measuring finger proprioceptive error using the Crisscross assessments. For Crisscross, the FINGER robot moved the index and middle fingers in a crossing movement and participants were instructed to press a button with their other hand when they perceived them to be overlapped. The gray rectangle indicates the location of the opaque plastic divider used during the assessment to block the hand from view. (B) Example trajectories for the metacarpophalangeal (MCP) joint of the index (blue) and middle (black) fingers during Crisscross.
Figure 6. Experiment 1 results. In this experiment we evaluated the pointing error of unimpaired young (n = 22) and older (n = 18) individuals in different tasks. (A) Two-dimensional representation of the target hand (in black) showing the mean and standard deviation across participants of the pointing endpoint (in colors). The plotted data are from the young group. (B) Pointing error for each task (black: mean and SD for younger participants, dark red: mean and SD for older participants). Colored points show the pointing error for individual users.
Figure 7. Pointing error as a function of different factors in Experiment 1. (A) Visual condition (ANOVA, p < 0.001). (B) Real or fake target hand (p < 0.001). (C) Age (p = 0.005). (D) Distance from the target hand to the body (p < 0.001). The error bars represent the standard deviation (SD) of the pointing errors.
Figure 8. Further analysis of pointing error from Experiment 1. (A) The effect of target hand conditions (real and fake) and visual condition (full, partial, and blindfolded), p < 0.001. (B) The effect of visual condition (full, partial, and blindfolded) and age (young and older), p = 0.05. (C) The effect of target hand (real and fake) and age (young and older), p = 0.002. (D) The effect of visual condition (full, partial, and blindfolded) and age (young and older) for the real hand, p = 0.59. (E) The effect of visual condition (full, partial, and blindfolded) and age (young and older) for the fake hand, p = 0.09, with additional lines showing the effects of task order. (F) The effect of distance (target hand close to the body vs. target hand extended out from the body) and age (older and young), p < 0.001. The error bars represent the standard deviation (SD) of the pointing errors.
Figure 9. Results from Experiment 2. Proprioceptive pointing error was higher in persons who had experienced a stroke and was correlated with an independent, robot-based measure of their finger proprioception. (A) The pointing errors from Task 2 comparing the older and stroke groups. The stroke group had a significantly larger pointing error compared to the older group (p < 0.001). The error bars represent the standard deviation (SD) of the pointing errors. (B) OpenPoint pointing error was moderately correlated with the Crisscross finger proprioception angular error. Each scatter point represents a participant.
26 pages, 4018 KiB  
Article
A MediaPipe Holistic Behavior Classification Model as a Potential Model for Predicting Aggressive Behavior in Individuals with Dementia
by Ioannis Galanakis, Rigas Filippos Soldatos, Nikitas Karanikolas, Athanasios Voulodimos, Ioannis Voyiatzis and Maria Samarakou
Appl. Sci. 2024, 14(22), 10266; https://doi.org/10.3390/app142210266 - 7 Nov 2024
Viewed by 892
Abstract
This paper introduces a classification model that detects and classifies argumentative behaviors between two individuals by utilizing a machine learning application, based on the MediaPipe Holistic model. The approach involves the distinction between two different classes based on the behavior of two individuals, argumentative and non-argumentative behaviors, corresponding to verbal argumentative behavior. By using a dataset extracted from video frames of hand gestures, body stance and facial expression, and by using their corresponding landmarks, three different classification models were trained and evaluated. The results indicate that the Random Forest Classifier outperformed the other two, classifying argumentative behaviors with 68.07% accuracy and non-argumentative behaviors with 94.18% accuracy. Thus, there is future scope for advancing this classification model into a prediction model, with the aim of predicting aggressive behavior in patients suffering from dementia before its onset.
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
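The abstract describes training tree-based classifiers on MediaPipe Holistic landmark features for a two-class (argumentative vs. non-argumentative) problem. Below is a generic scikit-learn sketch of that setup; the feature dimensionality, random placeholder data, and hyperparameters are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# X: one row per video frame of flattened MediaPipe Holistic landmark coordinates
# (pose + hands + face); y: 0 = non-argumentative, 1 = argumentative.
# Random data stands in for the real landmark features here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
y = rng.integers(0, 2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```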
Figure 1. Argumentative image dataset sample.
Figure 2. Non-argumentative image dataset sample.
Figure 3. Cross-validation metrics for the three models.
Figure 4. AUC scores of the three trained models. A model that makes random guesses (practically a model with no discriminative power) is represented by the diagonal dashed blue line that extends from the bottom left (0, 0) to the top right (1, 1). The ROC curve for any model that outperforms the random one will be above this diagonal line.
Figure 5. Confusion matrix of the Random Forest Classifier after training.
Figure 6. Confusion matrix of Gradient Boosting after training.
Figure 7. Confusion matrix of the Ridge Classifier after training.
Figure 8. Learning curve of the Random Forest Classifier after training.
Figure 9. Learning curve for Gradient Boosting after training.
Figure 10. Learning curve of the Ridge Classifier after training.
Figure 11. Paired t-test statistic results across all models and metrics.
Figure 12. Confusion matrix of the Random Forest Classifier after testing.
Figure 13. ROC AUC score of the Random Forest Classifier after testing.
Figure 14. Final model evaluation metrics.
Figure 15. Probability range/count of correct argumentative and non-argumentative predictions per 0.1 accuracy range, with 1.0 being the perfect accuracy score.
Full article ">
25 pages, 5540 KiB  
Article
IMITASD: Imitation Assessment Model for Children with Autism Based on Human Pose Estimation
by Hany Said, Khaled Mahar, Shaymaa E. Sorour, Ahmed Elsheshai, Ramy Shaaban, Mohamed Hesham, Mustafa Khadr, Youssef A. Mehanna, Ammar Basha and Fahima A. Maghraby
Mathematics 2024, 12(21), 3438; https://doi.org/10.3390/math12213438 - 3 Nov 2024
Viewed by 855
Abstract
Autism is a challenging brain disorder affecting children at global and national scales. Applied behavior analysis is commonly conducted as an efficient medical therapy for children. This paper focused on one paradigm of applied behavior analysis, imitation, where children mimic certain lessons to enhance their social behavior and play skills. This paper introduces IMITASD, a practical monitoring assessment model designed to evaluate autistic children’s behaviors efficiently. The proposed model provides an efficient solution for clinics and homes equipped with mid-specification computers attached to webcams. IMITASD automates the scoring of autistic children’s videos while they imitate a series of lessons. The model integrates two core modules: attention estimation and imitation assessment. The attention module monitors the child’s position by tracking the child’s face and determining the head pose. The imitation module extracts a set of crucial key points from both the child’s head and arms to measure the similarity with a reference imitation lesson using dynamic time warping. The model was validated using a refined dataset of 268 videos collected from 11 Egyptian autistic children while conducting six imitation lessons. The analysis demonstrated that IMITASD provides fast scoring, taking less than three seconds, and is a robust measure, showing a high correlation (about 0.9) with scores given by medical therapists, highlighting its effectiveness for children’s training applications.
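IMITASD scores imitation by comparing a child's keypoint sequence against a reference lesson using dynamic time warping. A textbook DTW distance over per-frame keypoint vectors looks roughly like the sketch below; the feature layout and any normalization the authors apply are not specified in this listing.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two keypoint sequences.

    seq_a, seq_b: arrays of shape (frames, features), e.g. flattened head/arm
    keypoints per frame; a lower distance means a closer match to the reference.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```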
Figure 1. List of imitation tasks.
Figure 2. Landmarks from the MediaPipe Hand and Body Pose Tracking module [69,70].
Figure 3. Room setting inside the medical clinic.
Figure 4. GUI control available for the admin.
Figure 5. GUI interface, where the left part (child preview) is visible on the child’s screen.
Figure 6. Child attention module.
Figure 7. Imitation Assessment Block module.
Figure 8. Feature extraction flowchart.
Figure 9. Comparison between IMITASD score and medical evaluation.
Figure 10. Detailed comparison of distance metrics and expert evaluation scores.
Figure 11. Comparison of distance metrics and expert evaluation scores for each imitation task.
Figure 12. Running time to process a video segment.
Figure 13. Number of videos that could not be processed by MediaPipe, grouped by participant.
Figure 14. Number of videos that could not be processed by MediaPipe, grouped by participant and task.
Figure 15. Number of videos that could not be processed by MediaPipe, grouped by task.
10 pages, 1248 KiB  
Article
A Non-Contacted Height Measurement Method in Two-Dimensional Space
by Phu Nguyen Trung, Nghien Ba Nguyen, Kien Nguyen Phan, Ha Pham Van, Thao Hoang Van, Thien Nguyen and Amir Gandjbakhche
Sensors 2024, 24(21), 6796; https://doi.org/10.3390/s24216796 - 23 Oct 2024
Viewed by 779
Abstract
Height is an important health parameter employed across domains, including healthcare, aesthetics, and athletics. Numerous non-contact methods for height measurement exist; however, most are limited to assessing height in an upright posture. This study presents a non-contact approach for measuring human height in 2D space across different postures. The proposed method utilizes computer vision techniques, specifically the MediaPipe library and the YOLOv8 model, to analyze images captured with a smartphone camera. The MediaPipe library identifies and marks joint points on the human body, while the YOLOv8 model facilitates the localization of these points. To determine the actual height of an individual, a multivariate linear regression model was trained using the ratios of distances between the identified joint points. Data from 166 subjects across four distinct postures (standing upright, rotated 45 degrees, rotated 90 degrees, and kneeling) were used to train and validate the model. Results indicate that the proposed method yields height measurements with a minimal error margin of approximately 1.2%. Future research will extend this approach to accommodate additional positions, such as lying down, cross-legged, and bent-legged. Furthermore, the method will be improved to account for various distances and angles of capture, thereby enhancing the flexibility and accuracy of height measurement in diverse contexts.
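The height estimator described above is a multivariate linear regression over ratios of distances between detected joint points. The sketch below illustrates the idea with scikit-learn; the specific joints, ratios, and synthetic training data are placeholders, not the paper's feature set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def segment_ratios(landmarks_px):
    """Toy feature builder: ratios of distances between selected joint points.
    landmarks_px: dict of (x, y) pixel coordinates. The joints and ratios chosen
    here are illustrative, not the paper's exact feature set."""
    def d(a, b):
        return float(np.linalg.norm(np.subtract(landmarks_px[a], landmarks_px[b])))
    torso = d("shoulder", "hip")
    return [d("hip", "knee") / torso, d("knee", "ankle") / torso, d("shoulder", "ankle") / torso]

# Train on ratio features vs. measured height; the data below are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=0.1, size=(166, 3))          # one ratio vector per subject/posture
y = 165 + 20 * X[:, 2] + rng.normal(scale=2.0, size=166)   # measured heights in cm (synthetic)
model = LinearRegression().fit(X, y)
print("mean absolute error (%):", 100 * np.mean(np.abs(model.predict(X) - y) / y))
```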
Figure 1. System diagram.
Figure 2. Tripod setup and camera.
Figure 3. Height measurement in different postures: (a) standing-upright position; (b) 45-degree rotation position; (c) horizontal 90-degree rotation position; and (d) kneeling position. Lines and points in each figure represent segments and joints determined from the OpenCV and MediaPipe libraries.
Full article ">
20 pages, 896 KiB  
Article
SWL-LSE: A Dataset of Health-Related Signs in Spanish Sign Language with an ISLR Baseline Method
by Manuel Vázquez-Enríquez, José Luis Alba-Castro, Laura Docío-Fernández and Eduardo Rodríguez-Banga
Technologies 2024, 12(10), 205; https://doi.org/10.3390/technologies12100205 - 18 Oct 2024
Viewed by 1661
Abstract
Progress in automatic sign language recognition and translation has been hindered by the scarcity of datasets available for the training of machine learning algorithms, a challenge that is even more acute for languages with smaller signing communities, such as Spanish. In this paper, we introduce a dataset of 300 isolated signs in Spanish Sign Language, collected online via a web application with contributions from 124 participants, resulting in a total of 8000 instances. This dataset, which is openly available, includes keypoints extracted using MediaPipe Holistic. The goal of this paper is to describe the construction and characteristics of the dataset and to provide a baseline classification method using a spatial–temporal graph convolutional network (ST-GCN) model, encouraging the scientific community to improve upon it. The experimental section offers a comparative analysis of the method’s performance on the new dataset, as well as on two other well-known datasets. The dataset, code, and web app used for data collection are freely available, and the web app can also be used to test classifier performance on-line in real-time.
(This article belongs to the Section Information and Communication Technologies)
Figure 1. Deployment of the SignaMed platform.
Figure 2. Iterative process for the construction of the SignaMed dictionary.
Figure 3. Subset of 19 out of 33 MediaPipe Pose keypoints used in the experimental framework.
Figure 4. Isolated sign language recognition pipeline.
Full article ">
20 pages, 3585 KiB  
Article
A Study of Exergame System Using Hand Gestures for Wrist Flexibility Improvement for Tenosynovitis Prevention
by Yanqi Xiao, Nobuo Funabiki, Irin Tri Anggraini, Cheng-Liang Shih and Chih-Peng Fan
Information 2024, 15(10), 622; https://doi.org/10.3390/info15100622 - 10 Oct 2024
Viewed by 656
Abstract
Currently, as an increasing number of people have become addicted to using cellular phones, smartphone tenosynovitis has become common from long-term use of the fingers to operate them. Hand exercise while playing video games, known as exergame, can be a good solution that provides enjoyable daily exercise opportunities for its prevention, particularly for young people. In this paper, we implemented a simple exergame system with a hand gesture recognition program made in Python using the MediaPipe library. We designed three sets of hand gestures to control the key operations to play the games as different exercises useful for tenosynovitis prevention. For evaluations, we prepared five video games running on a web browser and asked 10 students from Okayama and Hiroshima Universities, Japan, to play them and answer 10 questions in the questionnaire. Their playing results and System Usability Scale (SUS) scores confirmed the usability of the proposal, although we improved one gesture set to reduce its complexity. Moreover, by measuring the angles for maximum wrist movements, we found that the wrist flexibility was improved by playing the games, which verifies the effectiveness of the proposal.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
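The exergame maps recognized hand gestures to the key operations of browser games. As a rough illustration of how MediaPipe-style hand landmarks can drive such a mapping, the sketch below uses a crude extended-finger count; the gesture definitions and key bindings are illustrative assumptions, not the authors' three exercise sets.

```python
TIPS, PIPS = [8, 12, 16, 20], [6, 10, 14, 18]  # MediaPipe Hands landmark indices

def extended_fingers(landmarks_xy):
    """landmarks_xy: list of 21 (x, y) points in MediaPipe's normalized image
    coordinates (y grows downward). A finger counts as extended if its tip lies
    above its PIP joint (a deliberately crude heuristic)."""
    return sum(landmarks_xy[t][1] < landmarks_xy[p][1] for t, p in zip(TIPS, PIPS))

def gesture_to_key(landmarks_xy):
    """Map a crude open-palm / fist distinction to game key operations
    (the bindings here are illustrative, not the paper's gesture sets)."""
    n = extended_fingers(landmarks_xy)
    if n >= 4:
        return "space"   # e.g. jump
    if n == 0:
        return "left"    # e.g. move left
    return None

# Example with a toy "open" hand (all tips above their PIP joints):
open_hand = [(0.5, 0.9 - 0.04 * i) for i in range(21)]
print(gesture_to_key(open_hand))  # -> "space"
```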
Figure 1. Overview of the exergame system with hand gestures.
Figure 2. Effective hand exercises for preventing tenosynovitis.
Figure 3. Hand gestures in the wrist exercise set.
Figure 4. Hand gestures in the thumb exercise set.
Figure 5. Hand gestures in the finger exercise set.
Figure 6. Twenty-one key points of one hand by MediaPipe.
Figure 7. User view with frame rate.
Figure 8. Flowchart for operation procedure of exergame system.
Figure 9. Flowchart for hand gesture recognition using MediaPipe.
Figure 10. Improved space key gesture for wrist exercise set.
Figure 11. Four gestures for wrist bending angle measurement.
Full article ">
16 pages, 6813 KiB  
Article
Study on the Wear Performance of Surface Alloy Coating of Inner Lining Pipe under Different Load and Mineralization Conditions
by Yuntao Xi, Yucong Bi, Yang Wang, Lan Wang, Shikai Su, Lei Wang, Liqin Ding, Shanna Xu, Haitao Liu, Xinke Xiao, Ruifan Liu and Jiangtao Ji
Coatings 2024, 14(10), 1274; https://doi.org/10.3390/coatings14101274 - 4 Oct 2024
Viewed by 873
Abstract
Testing was carried out in this study to evaluate the friction and wear performance of 45# steel inner liner pipes with cladding, along with four different types of centralizing materials (45# steel, nylon, polytetrafluoroethylene (PTFE), and surface alloy coating) in oil field conditions. Under dry-friction conditions, the coefficients of friction and rates of wear are significantly higher than their counterparts in aqueous solutions. This is attributed to the lubricating effect provided by the aqueous solution, which reduces direct friction between contact surfaces, thereby lowering wear. As the degree of mineralization in the aqueous solution increases, the coefficient of friction tends to decrease, indicating that an elevated level of mineralization enhances the lubricating properties of the aqueous solution. The wear pattern in an aqueous solution is similar to that in dry-friction conditions under different loads, but with a lower friction coefficient and wear rate. The coating has played an important role in protecting the wear process of 45# steel, and the friction coefficient and wear rate of tubing materials under various environmental media have been significantly reduced. In terms of test load, taking into account the friction coefficient and wear rate, the suggested order for centralizing materials for lining oil pipes with the surface alloy coating is as follows: (i) surface alloy coating, (ii) nylon, (iii) PTFE, and (iv) 45# steel.
Figure 1. Pin disc friction and wear experimental device: (a) schematic diagram, (b) physical image, and (c) control interface; mineralization degree aqueous solution environmental device: (d) schematic diagram and (e) physical image.
Figure 2. Friction coefficient of surface alloy coating of inner lining tubing material under different mineralization degrees: (a) cladded 45# steel inner liner pipes (disc)–45# steel (pin), (b) cladded 45# steel inner liner pipes (disc)–nylon (pin), (c) cladded 45# steel inner liner pipes (disc)–PTFE (pin), and (d) cladded 45# steel inner liner pipes (disc)–surface alloy coating (pin); (e) variation in friction coefficient of cladded 45# steel inner liner pipes with mineralization degree; variation in wear rate with different degrees of mineralization: (f) oil pipe material and (g) centralizing material.
Figure 3. Friction coefficient of surface-alloy-coating-lined oil pipe material under different test loads (dry friction): (a) cladded 45# steel inner liner pipes (disc)–45# steel (pin), (b) cladded 45# steel inner liner pipes (disc)–nylon (pin), (c) cladded 45# steel inner liner pipes (disc)–PTFE (pin), and (d) cladded 45# steel inner liner pipes (disc)–surface alloy coating (pin); (e) variation in friction coefficient of cladded 45# steel inner liner pipes with applied load (dry friction); variation in wear rate with applied load (dry friction): (f) oil pipe material and (g) centralizing material.
Figure 4. Friction coefficient of surface-alloy-coating-lined oil pipe material under different test loads (aqueous solution): (a) cladded 45# steel inner liner pipes (disc)–45# steel (pin), (b) cladded 45# steel inner liner pipes (disc)–nylon (pin), (c) cladded 45# steel inner liner pipes (disc)–PTFE (pin), and (d) cladded 45# steel inner liner pipes (disc)–surface alloy coating (pin); (e) variation in friction coefficient of cladded 45# steel inner liner pipes with applied load; variation in wear rate with applied load (30,000 mg/L mineralization-degree aqueous solution): (f) oil pipe material and (g) centralizing material.
Figure 5. SEM images of the worn surface of cladded 45# steel inner liner pipes (disc)–PTFE (pin) under different loading conditions in a 30,000 mg/L mineralization-degree aqueous solution: (a) cladded 45# steel inner liner pipes under 50 N, (b) PTFE under 50 N, (c) cladded 45# steel inner liner pipes under 500 N, (d) PTFE under 500 N, (e) cladded 45# steel inner liner pipes under 1000 N, (f) PTFE under 1000 N, (g) cladded 45# steel inner liner pipes under 2000 N, and (h) PTFE under 2000 N.
Figure 6. Three-dimensional confocal microscopic images and height contour of cladded 45# steel inner liner pipes (disc)–PTFE (pin) under different loading conditions in a 30,000 mg/L mineralization-degree aqueous solution: (a) cladded 45# steel inner liner pipes under 50 N, (b) cladded 45# steel inner liner pipes under 500 N, (c) cladded 45# steel inner liner pipes under 1000 N, (d) cladded 45# steel inner liner pipes under 2000 N, (e) PTFE under 50 N, (f) PTFE under 500 N, (g) PTFE under 500 N, (h) PTFE under 1000 N.
15 pages, 1892 KiB  
Article
Smart Physiotherapy: Advancing Arm-Based Exercise Classification with PoseNet and Ensemble Models
by Shahzad Hussain, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Muhammad Amjad Raza, Josep Alemany-Iturriaga, Álvaro Velarde-Sotres, Isabel De la Torre Díez and Sandra Dudley
Sensors 2024, 24(19), 6325; https://doi.org/10.3390/s24196325 - 29 Sep 2024
Cited by 1 | Viewed by 1572
Abstract
Telephysiotherapy has emerged as a vital solution for delivering remote healthcare, particularly in response to global challenges such as the COVID-19 pandemic. This study seeks to enhance telephysiotherapy by developing a system capable of accurately classifying physiotherapeutic exercises using PoseNet, a state-of-the-art pose estimation model. A dataset was collected from 49 participants (35 males, 14 females) performing seven distinct exercises, with twelve anatomical landmarks then extracted using the Google MediaPipe library. Each landmark was represented by four features, which were used for classification. The core challenge addressed in this research involves ensuring accurate and real-time exercise classification across diverse body morphologies and exercise types. Several tree-based classifiers, including Random Forest, Extra Tree Classifier, XGBoost, LightGBM, and Hist Gradient Boosting, were employed. Furthermore, two novel ensemble models called RandomLightHist Fusion and StackedXLightRF are proposed to enhance classification accuracy. The RandomLightHist Fusion model achieved superior accuracy of 99.6%, demonstrating the system’s robustness and effectiveness. This innovation offers a practical solution for providing real-time feedback in telephysiotherapy, with potential to improve patient outcomes through accurate monitoring and assessment of exercise performance.
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
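The abstract names two fusion ensembles built from the listed tree-based learners, but this listing does not say whether "RandomLightHist Fusion" is a soft-voting combination, a stack, or something else. The snippet below only sketches one plausible fusion of Random Forest, LightGBM, and HistGradientBoosting with scikit-learn's VotingClassifier, assuming the lightgbm package is installed and that X_train/y_train hold landmark features.

```python
from lightgbm import LGBMClassifier
from sklearn.ensemble import (HistGradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)

# One plausible reading of a Random Forest + LightGBM + HistGradientBoosting fusion:
# soft voting over predicted class probabilities.
fusion = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("lgbm", LGBMClassifier(random_state=0)),
        ("hist", HistGradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
# fusion.fit(X_train, y_train); fusion.score(X_test, y_test)   # hypothetical landmark data
```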
Figure 1. Diagram showing the proposed methodology for exercise classification.
Figure 2. Subjects performing exercises.
Figure 3. Frame distributions in the training and testing sets.
Figure 4. Comparison with existing studies.
Full article ">
16 pages, 4954 KiB  
Article
Real-Time Hand Gesture Monitoring Model Based on MediaPipe’s Registerable System
by Yuting Meng, Haibo Jiang, Nengquan Duan and Haijun Wen
Sensors 2024, 24(19), 6262; https://doi.org/10.3390/s24196262 - 27 Sep 2024
Viewed by 1707
Abstract
Hand gesture recognition plays a significant role in human-to-human and human-to-machine interactions. Currently, most hand gesture detection methods rely on fixed hand gesture recognition. However, with the diversity and variability of hand gestures in daily life, this paper proposes a registerable hand gesture recognition approach based on Triple Loss. By learning the differences between different hand gestures, it can cluster them and identify newly added gestures. This paper constructs a registerable gesture dataset (RGDS) for training registerable hand gesture recognition models. Additionally, it proposes a normalization method for transforming hand gesture data and a FingerComb block for combining and extracting hand gesture data to enhance features and accelerate model convergence. It also improves ResNet and introduces FingerNet for registerable single-hand gesture recognition. The proposed model performs well on the RGDS dataset. The system is registerable, allowing users to flexibly register their own hand gestures for personalized gesture recognition.
(This article belongs to the Section Sensing and Imaging)
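The registerable recognition described above rests on metric learning with a triplet ("Triple") loss over gesture embeddings, so new gestures can be registered by storing their embeddings and matching queries by distance. A standard PyTorch-style formulation (the margin value and L2 distance are assumptions, not the paper's settings) looks like this:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on batches of gesture embeddings: pull same-gesture
    pairs together and push different-gesture pairs apart by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Toy check with random 64-D embeddings (batch of 8):
a, p, n = (torch.randn(8, 64) for _ in range(3))
print(triplet_loss(a, p, n))
```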
Figure 1. Gesture classification implementation process.
Figure 2. Gesture data. (1) and (2) represent gesture photographs for two different gestures.
Figure 3. MediaPipe finger landmarks. The red dots are the 21 key points selected for the hand, which are connected by a green line to form a complete line of identification of the hand.
Figure 4. FingerComb block.
Figure 5. Structure of FingerNet.
Figure 6. Training process.
Figure 7. Test results box diagram. The parts marked in red are the gestures in this class that have the smallest L2 distance compared to the other gestures.
Figure 8. Real-time gesture detection.
Full article ">
14 pages, 3505 KiB  
Article
Enhancing Capillary Pressure of Porous Aluminum Wicks by Controlling Bi-Porous Structure Using Different-Sized NaCl Space Holders
by Hongfei Shen, Asuka Suzuki, Naoki Takata and Makoto Kobashi
Materials 2024, 17(19), 4729; https://doi.org/10.3390/ma17194729 - 26 Sep 2024
Viewed by 538
Abstract
Capillary pressure and permeability of porous media are important for heat transfer devices, including loop heat pipes. In general, smaller pore sizes enhance capillary pressure but decrease permeability. Introducing a bi-porous structure is promising for solving this trade-off relation. In this study, the bi-porous aluminum was fabricated by the space holder method using two different-sized NaCl particles (approximately 400 and 40 μm). The capillary pressure and permeability of the bi-porous Al were evaluated and compared with those of mono-porous Al fabricated by the space holder method. Increasing the porosity of the mono-porous Al improved the permeability but reduced the capillary pressure because of better-connected pores and increased effective pore size. The fraction of large and small pores in the bi-porous Al was successfully controlled under a constant porosity of 70%. The capillary pressure of the bi-porous Al with 40% large and 30% small pores was higher than the mono-porous Al with 70% porosity without sacrificing the permeability. However, the bi-porous Al with other fractions of large and small pores did not exhibit properties superior to the mono-porous Al. Thus, accurately controlling the fractions of large and small pores is required to enhance the capillary performance by introducing the bi-porous structure.
Figure 1. SEM images of raw materials and fabrication process of bi-porous Al using space holder method.
Figure 2. Schematic illustration of setup for measuring the capillary performance of porous samples in this study [27].
Figure 3. Representative SEM images of mono-porous Al: (a) S50, (b) S60, and (c) S70.
Figure 4. (a) Time evolution of the capillary rising height of mono-porous Al. (b) Relationship between capillary rising rate and reciprocal height.
Figure 5. SEM images of bi-porous Al: (a) L60S10, (b) L50S20, (c) L40S30, and (d) L30S40.
Figure 6. (a) Time evolution of the capillary rising height of bi-porous Al. (b) Relationship between capillary rising rate and reciprocal height.
Figure 7. Change in (a) permeability, capillary pressure, and (b) their product with the volume fraction of small NaCl particles.
Figure 8. Plot of permeability and capillary pressure of mono-porous and bi-porous Al.
Figure 9. Changes in the average sizes of large, small, and overall pores as a function of the volume fraction of small NaCl particles. The flow channel size calculated from Equation (1) is also shown in this figure.
Figure 10. Hypothetical schematic illustration of capillary rising behaviors in (a) L60S10, L50S20, (b) L40S30, and (c) L30S40. The complex porous structures are simplified into straight channel models in (d–f).
Full article ">
19 pages, 5886 KiB  
Article
Innovative Chair and System Designs to Enhance Resistance Training Outcomes for the Elderly
by Teng Qi, Miyuki Iwamoto, Dongeun Choi, Siriaraya Panote and Noriaki Kuwahara
Healthcare 2024, 12(19), 1926; https://doi.org/10.3390/healthcare12191926 - 26 Sep 2024
Viewed by 1175
Abstract
Introduction: This study aims to provide a safe, effective, and sustainable resistance training environment for the elderly by modifying the chairs and movement systems used during training, particularly under unsupervised conditions. Materials and Methods: The research investigated the effect of modified chair designs on enhancing physical stability during resistance training, involving 19 elderly participants (mean age 72.1 years, SD 4.7). The study measured changes in the body’s acceleration during movements to compare the effectiveness of the modified chairs with those commonly used in chair-based exercise (CBE) training in maintaining physical stability. A system was developed based on experimental video data, which leverages MediaPipe to analyze the videos and compute joint angles, identifying whether the actions are executed correctly. Results and Conclusions: Comparisons revealed that the modified chairs offered better stability during sitting (p < 0.001) and stand-up (p < 0.001) resistance training. According to the questionnaire survey results, compared to a regular chair without an armrest, the modified chair provided a greater sense of security and a better user experience for the elderly. Video observations indicated that the correct completion rate for most exercises, except stand-up resistance training, was only 59.75%, highlighting the insufficiency of modified chairs alone in ensuring accurate movement execution. Consequently, the introduction of an automatic system to verify proper exercise performance is essential. The model developed in this study for recognizing the correctness of movements achieved an accuracy rate of 97.68%. This study proposes a new chair design that enhances physical stability during resistance training and opens new avenues for utilizing advanced technology to assist the elderly in their training.
Figure 1. The figure shows the use of a modified chair while performing seated resistance training.
Figure 2. The figure shows the use of a modified chair while performing standing resistance training.
Figure 3. The figure shows the process of standing up in the correct way (a) and an inadequate center of gravity transfer to stand up (b).
Figure 4. The figure shows the use of a modified chair while performing stand-up resistance training.
Figure 5. This figure shows the research process.
Figure 6. This figure shows a modified chair (a) and a regular chair without an armrest (b).
Figure 7. The figure shows how joint angle data can be extracted from a video.
Figure 8. The figure shows the statistical results of the acceleration RMS values.
Figure 9. The figure shows the average scores for each movement using each type of chair, based on body stability during the training.
Figure 10. The figure shows the average scores for each movement using each type of chair, based on the comfort of use.
Figure 11. This figure displays a PCA analysis aimed at extracting key features that effectively differentiate between accurate and inaccurate movements from complex motion data. For the left side, the data include left shoulder-hip angle metrics such as Mean, Min, Max, Std, Median, and Power Spectrum Statistics; similar metrics for the left hip-knee angle are denoted as Mean.1 through Power_Median.1; left knee-ankle angle data extend from Mean.2 through Power_Median.2. On the right side, the statistics shown are for the right shoulder-hip angle (Mean.3 through Power_Median.3), right hip-knee angle (Mean.4 through Power_Median.4), and right knee-ankle angle (Mean.5 through Power_Median.5).