Search Results (208)

Search Parameters:
Keywords = data glove

51 pages, 26899 KiB  
Review
Robotic Systems for Hand Rehabilitation—Past, Present and Future
by Bogdan Gherman, Ionut Zima, Calin Vaida, Paul Tucan, Adrian Pisla, Iosif Birlescu, Jose Machado and Doina Pisla
Technologies 2025, 13(1), 37; https://doi.org/10.3390/technologies13010037 - 16 Jan 2025
Viewed by 639
Abstract
Background: Cerebrovascular accident, commonly known as stroke, Parkinson’s disease, and multiple sclerosis represent significant neurological conditions affecting millions globally. Stroke remains the third leading cause of death worldwide and significantly impacts patients’ hand functionality, making hand rehabilitation crucial for improving quality of life. Methods: A comprehensive literature review was conducted, analyzing over 300 papers and categorizing them by mechanical design, mobility, and actuation system. To evaluate each device, a database with 45 distinct criteria was developed to systematically assess their characteristics. Results: The analysis revealed three main categories of devices: rigid exoskeletons, soft exoskeletons, and hybrid devices. Electric actuation is the most common power source. Dorsal placement of the mechanism predominates, followed by glove-based, lateral, and palmar configurations. A correlation between mass and functionality was observed: increasing the number of actuated fingers or the functionality of a device increases its mass. The research shows significant technological evolution with considerable variation in design complexity: 29.4% of devices use five or more actuators, while 24.8% employ one or two. Conclusions: While substantial progress has been made in recent years, several challenges persist, including missing or incomplete data in source papers and a limited number of clinical studies evaluating device effectiveness. Significant opportunities remain to improve device functionality, usability, and therapeutic effectiveness, and to implement advanced power systems for portable devices.
Figures:
Figure 1: Skeletal model of the human hand.
Figure 2: Finger motions: (a) hyperextension-extension-flexion, (b) abduction, (c) adduction, (d) circumduction.
Figure 3: Classification framework.
Figure 4: Flow diagram of the literature search and selection process.
Figure 5: Schematic representation of the three classes of robotic hand rehabilitation exoskeletons: (a) rigid, (b) soft, and (c) hybrid.
Figure 6: Classification of rigid exoskeletons based on linkage type.
Figure 7: Schematic representation of the main types of linkages: (a) remote center of motion, (b) coinciding joint axes, (c) redundant links, (d) underactuated device, (e) coupled linkage device, and (f) fingertip device.
Figure 8: Exoskeleton types: (a) underactuated device [83], (b) coinciding joint axes [84], and (c) fingertip linkage device [85].
Figure 9: Different configurations based on hand mobility.
Figure 10: Exoskeleton classification based on mechanism placement.
Figure 11: Classification by mechanism placement: (a) palmar, (b) lateral, (c) dorsal, (d) glove.
Figure 12: Classification by mechanism placement: (a) palmar [146], (b) lateral [132], (c) dorsal [144], (d) glove [358].
Figure 13: Classification by actuator type.
Figure 14: Classification by type of actuator: (a) DC motor [194], (b) linear actuator [354], (c) pneumatic [134], (d) servomotor [144].
Figure 15: Classification by transmission system.
Figure 16: Classification by type of transmission: (a) linkage [144], (b) silicone-rubber [364], (c) cable [285], (d) tendon [178].
Figure 17: Distribution of publication types: (a) temporal distribution of publications related to hand exoskeleton rehabilitation devices, (b) distribution of publication types.
Figure 18: Country contribution based on publication output.
Figure 19: Overall distribution of publication types across all years.
Figure 20: The distribution of hand exoskeleton applications across categories.
Figure 21: The distribution of actuation types.
Figure 22: The distribution of electric actuator types.
Figure 23: Transmission types: (a) distribution of main transmission, (b) combined transmission systems.
Figure 24: Distribution of the design topologies.
Figure 25: Distribution of actuator numbers.
Figure 26: Distribution of mechanism placement.
Figure 27: Type of motions: (a) finger-assisted motion, (b) independent or coupled.
Figure 28: Distribution of finger coverage.
Figure 29: Range of motion (ROM) for finger joints.
Figure 30: Range of motion (ROM) for thumb joints.
Figure 31: Distribution of total DoF.
Figure 32: Distribution of safety features across studied hand exoskeletons.
Figure 33: Distribution of adaptability features across studied hand exoskeletons.
Figure 34: Distribution of weights by number of fingers assisted.
Figure 35: Distribution of hand exoskeleton weights by number of fingers assisted and type.
18 pages, 11743 KiB  
Article
The Design and Validation of an Open-Palm Data Glove for Precision Finger and Wrist Tracking
by Olivia Hosie, Mats Isaksson, John McCormick, Oren Tirosh and Chrys Hensman
Sensors 2025, 25(2), 367; https://doi.org/10.3390/s25020367 - 9 Jan 2025
Viewed by 440
Abstract
Wearable motion capture gloves enable the precise analysis of hand and finger movements for a variety of uses, including robotic surgery, rehabilitation, and, most commonly, virtual augmentation. However, many motion capture gloves restrict natural hand movement with a closed-palm design, including fabric over the palm and fingers. To alleviate slippage, improve comfort, reduce sizing issues, and eliminate movement restrictions, this paper presents a new low-cost data glove with an innovative open-palm and finger-free design. The new design improves usability and overall functionality by addressing the limitations of traditional closed-palm designs; it is especially beneficial for capturing movements in fields such as physical therapy and robotic surgery. The glove incorporates resistive flex sensors (RFSs) at each finger and an inertial measurement unit (IMU) at the wrist joint to measure wrist flexion, extension, ulnar and radial deviation, and rotation. Initially, the sensors were tested individually for drift, synchronisation delays, and linearity. The results show a drift of 6.60°/h in the IMU and no drift in the RFSs. There was a 0.06 s delay in the data captured by the IMU compared to the RFSs. The glove’s performance was tested with a collaborative robot testing setup. In static conditions, the IMU had a worst-case error across three trials of 7.01° and a mean absolute error (MAE) of 4.85° averaged over three trials, while the RFSs had a worst-case error of 3.77° and an MAE of 1.25° averaged over all five RFSs. There was no clear correlation between measurement error and speed. Overall, the new glove design proved to measure joint angles accurately.
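The linear resistance-to-angle mapping and the MAE metric described above are straightforward to reproduce. Below is a minimal Python sketch; the calibration endpoints and sensor readings are hypothetical, not values from the paper.

```python
import numpy as np

def rfs_to_angle(resistance_kohm, r_flat, r_bent, angle_bent=90.0):
    """Linear map from RFS resistance to deflection angle.

    r_flat and r_bent are per-sensor calibration resistances; the values
    used below are hypothetical, not the paper's calibration data.
    """
    frac = (resistance_kohm - r_flat) / (r_bent - r_flat)
    return frac * angle_bent

def mean_absolute_error(measured_deg, reference_deg):
    """MAE between glove-measured angles and robot-reference angles."""
    return float(np.mean(np.abs(np.asarray(measured_deg) - np.asarray(reference_deg))))

readings_kohm = np.array([25.0, 47.0, 68.0, 90.0])   # one finger's RFS readings (invented)
angles = rfs_to_angle(readings_kohm, r_flat=25.0, r_bent=90.0)
print(angles)                                        # approx. [0. 30.46 59.54 90.]
print(mean_absolute_error(angles, [0, 30, 60, 90]))  # small MAE for a good linear fit
```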
Figures:
Figure 1: Images of the glove.
Figure 2: Block diagram of the system electronics.
Figure 3: Robotic testing setup.
Figure 4: Hand in testing setup showing the glove placement. The red line shows the alignment of the fourth robot axis with the wrist joint.
Figure 5: Start and end angles of linear mapping of RFSs.
Figure 6: Calibration positions of hand.
Figure 7: Mapping of the resistance (kΩ) to the angle of deflection (°). Each curve represents a different RFS used in the glove: dotted for thumb, solid for index, dashed for middle, dash-dotted for ring, and solid with stars for little finger. The graph shows the calibration of resistance to deflection angles, essential for accurate motion tracking.
Figure 8: Comparison of measured wrist angle and real wrist angle as measured by IMU. The data points (o) represent the average value at each angle, with standard deviations across each trial shown by error bars. The dotted trendline shows the linearity of the relationship between measured and real angles. The close alignment of the data points with the trendline suggests minimal deviation, reinforcing the reliability of the IMU for wrist angle measurements.
Figure 9: The comparison of the measured finger angle and real finger angle as measured by RFS. Data points (o) represent the average value at each angle, with standard deviations across each trial shown by error bars. The coloured dotted trendline shows the linearity of the relationship between measured and real angles for all the fingers: blue for index, yellow for middle, green for ring, and red for little finger.
Figure 10: The comparison of measured thumb angle and real thumb angle as measured by RFS. Data points (o) represent the average value at each angle, with standard deviations across each trial shown by error bars. The dotted trendline shows the linearity of the relationship between measured and real angles.
Figure 11: Progression of IMU angles (°) over a 6 s period, showing the change in angle as a function of time. The plot highlights the sensor’s responsiveness and consistency during continuous motion. Non-continuous motion at the start and end of the trial is included in this graph to demonstrate why it is excluded from further calculations.
12 pages, 1863 KiB  
Article
Machine Learning-Assisted Prediction of Ambient-Processed Perovskite Solar Cells’ Performances
by Dowon Pyun, Seungtae Lee, Solhee Lee, Seok-Hyun Jeong, Jae-Keun Hwang, Kyunghwan Kim, Youngmin Kim, Jiyeon Nam, Sujin Cho, Ji-Seong Hwang, Wonkyu Lee, Sangwon Lee, Hae-Seok Lee, Donghwan Kim and Yoonmook Kang
Energies 2024, 17(23), 5998; https://doi.org/10.3390/en17235998 - 28 Nov 2024
Viewed by 657
Abstract
As we move towards the commercialization and upscaling of perovskite solar cells, it is essential to fabricate them in an ambient environment rather than in the conventional glove box environment. The efficiency of ambient-processed perovskite solar cells lags behind that of cells fabricated in controlled environments, primarily owing to external environmental factors such as humidity and temperature. When fabricating devices in ambient environments, relying on a single parameter, such as temperature or humidity, is insufficient to characterize the environmental conditions accurately. Therefore, the dew point is introduced as a parameter that accounts for both temperature and humidity. In this study, a machine learning model was developed to predict the efficiency of ambient-processed perovskite solar cells based on meteorological data, particularly the dew point. A total of 238 perovskite solar cells were fabricated, and their photovoltaic parameters and dew points were collected from March to December 2023. The collected data were used to train various tree-based machine learning models, with the random forest model achieving the highest accuracy. The efficiencies of the perovskite solar cells fabricated in January and February 2024 were predicted with a MAPE of 4.44%. An additional Shapley Additive exPlanations (SHAP) analysis confirmed the significance of the dew point for the performance of perovskite solar cells.
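Since the approach hinges on the dew point as a single feature combining temperature and humidity, a sketch may help. The Magnus approximation below is a standard meteorological formula (not taken from the paper), and the fabrication data are invented purely for illustration; the paper trains on 238 real devices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dew_point(temp_c, rel_humidity_pct):
    """Magnus approximation of the dew point (°C) from temperature and RH."""
    a, b = 17.62, 243.12
    gamma = np.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Invented fabrication log: ambient conditions vs. measured cell efficiency (%).
temps = np.array([24.0, 26.5, 22.0, 28.0, 25.0, 23.5])
hums = np.array([35.0, 60.0, 45.0, 70.0, 50.0, 40.0])
eff = np.array([20.1, 17.8, 19.6, 16.9, 18.7, 19.9])

X = dew_point(temps, hums).reshape(-1, 1)   # dew point as the lone feature
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, eff)

mape = np.mean(np.abs((eff - model.predict(X)) / eff)) * 100   # MAPE, as reported
new_x = dew_point(np.array([25.5]), np.array([55.0])).reshape(-1, 1)
print(f"predicted efficiency: {model.predict(new_x)[0]:.2f}%, train MAPE: {mape:.2f}%")
```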
Figures:
Figure 1: Entire flowchart for dew-point-based efficiency prediction in this study.
Figure 2: Photovoltaic parameters of perovskite solar cells fabricated under various dew points. A total of 238 devices were collected and measured. (a) V_OC, (b) J_SC, (c) FF, and (d) efficiency.
Figure 3: Effects of bulk defect density in MAPbI3 simulated with SCAPS-1D. (a) J-V curve, (b) V_OC, (c) J_SC, (d) FF, and (e) efficiency. The red arrow in (a) indicates an increase of bulk defect density in MAPbI3.
Figure 4: Prediction results using the trained random forest model. (a) Graph showing the actual values on the x-axis and the predicted values on the y-axis; closer alignment to the y = x line indicates more accurate predictions. The light red dots represent training dataset predictions, while dark red dots represent test dataset predictions. (b) The efficiency distribution (box chart) of the fabricated devices in January and February 2024, with predicted results (red stars) obtained using our model.
Figure 5: (a) Feature importance in the model. Dew point shows the highest feature importance, suggesting that dew point is the crucial factor with a substantial impact on efficiency. (b) Relationship between SHAP values and dew points. The blue background region indicates the trend of the obtained data points. The point where SHAP is zero is indicated with a red line, and the dashed line indicates the dew-point criterion suggested in this work that exhibits a positive SHAP value.
16 pages, 5893 KiB  
Article
Development of Rehabilitation Glove: Soft Robot Approach
by Tomislav Bazina, Marko Kladarić, Ervin Kamenar and Goran Gregov
Actuators 2024, 13(12), 472; https://doi.org/10.3390/act13120472 - 22 Nov 2024
Viewed by 683
Abstract
This study describes the design, simulation, and development of a rehabilitation glove driven by soft pneumatic actuators. A new, innovative soft finger actuator design was developed through detailed kinematic and workspace analysis of anatomical fingers and their actuators. The actuator design combines cylindrical and ribbed geometries with a reinforcing element (a thicker, less extensible structure), resulting in an asymmetric cylindrical bellows actuator driven by positive pressure. The performance of the newly designed actuator was validated through numerical simulation in open-source software. The simulation results indicate the actuator’s compatibility with human finger trajectories. Additionally, a rehabilitation glove was 3D-printed from soft materials, and the actuator’s flexibility and airtightness were analyzed across different wall thicknesses. A 0.8 mm wall thickness and thermoplastic polyurethane (TPU) were chosen for the final design. Experiments confirmed a strong linear relationship between bending angle and pressure, as well as between joint elongation and pressure. Next, pseudo-rigid kinematic models were developed for the index and little finger soft actuators, based solely on pressure and link lengths. The workspace of the soft actuator, derived through forward kinematics, was visually compared to that of the anatomical finger and to experimentally recorded data. Finally, an ergonomic assessment of the complete rehabilitation glove in interaction with the human hand was conducted.
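A pseudo-rigid model of this kind reduces to planar forward kinematics in which each joint angle and link elongation is linear in pressure. The sketch below illustrates the idea under that assumption; all coefficients and segment lengths are illustrative, not the paper’s fitted values.

```python
import numpy as np

def soft_finger_fk(pressure_bar, link_lengths_mm, k_theta_deg_per_bar, k_elong_mm_per_bar):
    """Planar forward kinematics of a pseudo-rigid soft-finger model.

    Each revolute joint angle and each prismatic elongation is assumed
    linear in pressure, mirroring the linear fits reported in the paper;
    the coefficients passed in below are illustrative only.
    """
    x = y = 0.0
    heading = 0.0  # cumulative bending angle, radians
    for L0, k_th, k_el in zip(link_lengths_mm, k_theta_deg_per_bar, k_elong_mm_per_bar):
        heading += np.radians(k_th * pressure_bar)   # revolute joint, linear in p
        L = L0 + k_el * pressure_bar                 # prismatic elongation, linear in p
        x += L * np.cos(heading)
        y += L * np.sin(heading)
    return x, y

# Three segments of a hypothetical index-finger actuator, swept over test pressures.
for p in (0.0, 2.0, 4.0, 7.0):
    print(p, soft_finger_fk(p, [45, 25, 20], [4.0, 5.5, 6.0], [0.30, 0.20, 0.15]))
```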
(This article belongs to the Special Issue Modelling and Motion Control of Soft Robots)
Figures:
Figure 1: The process of design, development, and experimental assessment of the rehabilitation glove: (A) circular grasping example; (B) finger ROM; (C) finger kinematics; (D) tuning of construction parameters in the design process; (E) final 3D model; (F) SOFA simulation; (G) 3D-printed segments made from TPU, featuring varying dimensions and wall thicknesses for design analysis; (H) experimental assessment and validation of the soft robot’s ROM; and (I) the developed rehabilitation glove fitted onto the user’s hand.
Figure 2: Kinematic analysis: (a) workspace of the index finger in the FE plane with finger joint (MCP, PIP, DIP, TIP) trajectories during circular grasping according to [16] and (b) soft finger actuator kinematic chain with the modified DH approach. The diagram displays revolute and prismatic joints along the robot’s segments, with symbols indicating points of rotation (POP), revolute joints, and prismatic joints. Each joint is labeled with corresponding DH parameters, including joint angle (θ_i) and elongation (Δd_i).
Figure 3: A 3D model of the rehabilitation glove: (a) cross-sectional view of a single actuating element; (b) finger actuator composed of three segments; (c) cross-sectional view of cylindrical channels for compressed air supply; and (d) assembly of the 3D model of the rehabilitation glove.
Figure 4: Soft actuator simulation: (a) volumetric mesh, (b) index finger simulation at 0 bar pressure (initial position), and (c) index finger simulation at 8 bar pressure.
Figure 5: Soft-robotic glove fitted to the user’s hand: (a) all soft actuators in initial position and (b) all soft actuators activated.
Figure 6: Laboratory experiments demonstrating angular motion of soft actuators under varying pressure levels (0, 2, 4, and 7 bar): (a) soft actuators for the index finger with overlaid kinematic representation for p = 0 bar and (b) soft actuators for the little finger.
Figure 7: Experimentally obtained linear joint constraints for index and little finger depending on pressure (95% confidence intervals colored in gray): (a) revolute joint angle and (b) link offset vs. pressure.
Figure 8: The workspace in the FE plane for (a) the index-finger soft actuator and (b) the little-finger soft actuator. Eight kinematic positions corresponding to the experimental pressures are additionally indicated.
19 pages, 5019 KiB  
Article
Fusion Text Representations to Enhance Contextual Meaning in Sentiment Classification
by Komang Wahyu Trisna, Jinjie Huang, Hengyu Liang and Eddy Muntina Dharma
Appl. Sci. 2024, 14(22), 10420; https://doi.org/10.3390/app142210420 - 12 Nov 2024
Viewed by 1436
Abstract
Sentiment classification plays a crucial role in evaluating user feedback. Today, online media users can freely provide their reviews with few restrictions. User reviews on social media are often disorganized and challenging to classify as positive or negative comments. This task becomes even more difficult when dealing with large amounts of data, making automated sentiment classification necessary. Automating sentiment classification involves text classification processes, commonly performed using deep learning methods. The classification process using deep learning models is closely tied to text representation. This step is critical, as it affects the quality of the data processed by the deep learning model. Traditional text representation methods often overlook the contextual meaning of sentences, leading to potential misclassification by the model. In this study, we propose a novel fusion text representation model, GloWord_biGRU, designed to enhance the contextual understanding of sentences for sentiment classification. First, we combine the advantages of GloVe and Word2Vec to obtain richer and more meaningful word representations. GloVe provides word representations based on global frequency statistics within a large corpus, while Word2Vec generates word vectors that capture local contextual relationships. By integrating these two approaches, we enhance the quality of the word representations used in our model. During the classification stage, we employ a biGRU because it uses fewer parameters, which reduces computational requirements. We evaluate the proposed model using the IMDB dataset. Several scenarios demonstrate that our proposed model achieves superior performance, with an F1 score of 90.21%.
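A minimal sketch of the fusion idea: concatenate the GloVe and Word2Vec vectors for each vocabulary word and feed the fused embedding to a biGRU. Random matrices stand in for the pretrained embeddings here, and the layer sizes are assumptions, not the paper’s configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, glove_dim, w2v_dim, seq_len = 10000, 100, 100, 200

# Random stand-ins: in practice each row would come from the pretrained
# GloVe and Word2Vec lookup tables for the same vocabulary.
glove_matrix = np.random.rand(vocab_size, glove_dim).astype("float32")
w2v_matrix = np.random.rand(vocab_size, w2v_dim).astype("float32")
fused = np.concatenate([glove_matrix, w2v_matrix], axis=1)  # one fused vector per word

inputs = layers.Input(shape=(seq_len,), dtype="int32")
embed = layers.Embedding(vocab_size, glove_dim + w2v_dim, trainable=False)
x = embed(inputs)
x = layers.Bidirectional(layers.GRU(64))(x)          # biGRU: fewer parameters than biLSTM
outputs = layers.Dense(1, activation="sigmoid")(x)   # positive vs. negative review

model = tf.keras.Model(inputs, outputs)
embed.set_weights([fused])                           # load the fused embedding matrix
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```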
(This article belongs to the Section Computing and Artificial Intelligence)
Figures:
Figure 1: Proposed GloWord_biGRU architecture.
Figure 2: Accuracy on training and validation data among various deep learning models.
Figure 3: Training and validation loss compared with other deep learning models.
Figure 4: Accuracy compared with single word embeddings.
12 pages, 1634 KiB  
Article
A Highly Sensitive Strain Sensor with Self-Assembled MXene/Multi-Walled Carbon Nanotube Sliding Networks for Gesture Recognition
by Fei Wang, Hongchen Yu, Xingyu Ma, Xue Lv, Yijian Liu, Hanning Wang, Zhicheng Wang and Da Chen
Micromachines 2024, 15(11), 1301; https://doi.org/10.3390/mi15111301 - 25 Oct 2024
Cited by 1 | Viewed by 989
Abstract
Flexible electronics is pursuing a new generation of electronic skin and human–computer interaction. However, effectively detecting human movements over a large dynamic range with high sensitivity remains a challenge. In this study, flexible strain sensors with a self-assembled PDMS/MXene/MWCNT structure are fabricated, in which MXene particles are wrapped and bridged by dense MWCNTs, forming complex sliding conductive networks. As a result, the strain sensor possesses an impressive sensitivity (gauge factor = 646) and a 40% response range. Moreover, a fast response time of 280 ms and a detection limit of 0.05% are achieved. This performance makes the sensor well suited to human monitoring, such as detecting body movement and pulse signals for healthcare. It is also applied to a wearable smart data glove, in which a CNN algorithm is used to identify 15 gestures, with a final recognition rate of up to 95%. This high-performance strain sensor is designed for a wide array of human body detection applications and wearable intelligent systems.
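The quoted sensitivity is the gauge factor, defined as GF = (ΔR/R0)/ε, the relative resistance change per unit strain. A one-liner makes the definition concrete; the resistance values below are made up for illustration, not measurements from the paper.

```python
def gauge_factor(r0_ohm, r_ohm, strain):
    """GF = (ΔR/R0) / ε: relative resistance change per unit strain."""
    return (r_ohm - r0_ohm) / r0_ohm / strain

# Illustrative numbers: resistance rising from 1.00 kΩ to 3.58 kΩ at 1% strain
# gives GF = (2.58) / 0.01 = 258; the paper reports GF up to 646 within 40% strain.
print(gauge_factor(1000.0, 3580.0, 0.01))  # 258.0
```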
(This article belongs to the Special Issue 2D-Materials Based Fabrication and Devices)
Figures:
Figure 1: The preparation method of the PDMS/MXene/MWCNT strain sensor. (a) The preparation of MXene and MWCNT solutions. (b) The PDMS films prepared by plasma treatment. (c) The procedure of the self-assembling method to prepare the conducting layers. (d) An actual image of the PDMS/MXene/MWCNT strain sensor and images of the sensor stretched, twisted, and folded.
Figure 2: (a–c) Top-view SEM images of the PDMS/MXene/MWCNT strain sensor. (d) Schematic representation of the MXene/MWCNT structure. (e) Raman spectra and (f) X-ray diffraction (XRD) results for MWCNTs, MXene, and the conductive layers composed of MXene/MWCNTs.
Figure 3: (a) The real-time response and (b) sensitivity under gradually increasing micro-strain steps of 0–5%. (c) The real-time response and (d) sensitivity of the strain sensors under a gradually increasing load, exhibiting a strain range from 0% to 40%. (e) The real-time response and (f) sensitivity of the strain sensors with varying ratios of MXene to MWCNTs. The only variable in (a–d) is the number of self-assembled layers; (e,f) are based on sensors with 12-cycle self-assembled layers, whose variable is the ratio of materials.
Figure 4: (a) The response time under a strain of 1%. (b) The relative resistance change as a function of time under a minimal strain of 0.05%. (c) The real-time relative resistance response curve of the strain sensor during stretching and releasing. (d) The cyclic variation in relative resistance of the strain sensors subjected to different strains. (e) The long-term durability test with 1800 stretch-and-release cycles under a 10% strain.
Figure 5: Relative changes in resistance of the PDMS/MXene/MWCNT strain sensor measured on (a) the finger, (b) the leg, (c) the muscle, and (d) the throat. (e) The sensing performance recorded while speaking “S D U S T”. (f) The pulse signal in the strain sensor.
Figure 6: (a) The conceptual diagram of the designed data glove. (b) The actual image of the data glove displaying 15 different gestures. (c) The evolution of accuracy and training loss during 100 epochs. (d) The confusion matrix illustrating the prediction outcomes generated by the CNN model.
16 pages, 5805 KiB  
Article
Numerical and Experimental Study of a Wearable Exo-Glove for Telerehabilitation Application Using Shape Memory Alloy Actuators
by Mohammad Sadeghi, Alireza Abbasimoshaei, Jose Pedro Kitajima Borges and Thorsten Alexander Kern
Actuators 2024, 13(10), 409; https://doi.org/10.3390/act13100409 - 11 Oct 2024
Viewed by 1331
Abstract
Hand paralysis, caused by conditions such as spinal cord injuries, strokes, and arthritis, significantly hinders daily activities. Wearable exo-gloves and telerehabilitation offer effective hand training solutions to aid the recovery process. This study presents the development of a lightweight wearable exo-glove designed for finger telerehabilitation. The prototype uses NiTi shape memory alloy (SMA) actuators to control five fingers. Specialized end effectors target the metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) joints, mimicking the action of human finger tendons. A variable structure controller, managed through a web-based Human–Machine Interface (HMI), allows remote adjustments. Thermal behavior, dynamics, and overall performance were modeled in MATLAB Simulink, with experimental validation confirming the model’s efficacy. The phase transformation characteristics of the NiTi shape memory wire were studied using the Souza–Auricchio model within COMSOL Multiphysics 6.2. Comparing the simulation to trial data showed an average error of 2.76°. The range of motion for the MCP, PIP, and DIP joints was 21°, 65°, and 60.3°, respectively. Additionally, a minimum torque of 0.2 Nm at each finger joint was observed, which is sufficient to overcome resistance and meet the torque requirements. The results demonstrate that integrating SMA actuators with telerehabilitation addresses the need for compact and efficient wearable devices, potentially improving patient outcomes through remote therapy.
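The abstract names a variable structure controller; the sketch below shows one minimal switching law (sliding-mode style) for a single SMA-driven joint. The surface definition and gain are assumptions for illustration, not the paper’s controller design.

```python
import numpy as np

def vsc_duty(angle_ref_deg, angle_deg, angle_rate_dps, lam=2.0):
    """One step of a minimal variable-structure law for an SMA-actuated joint.

    The sliding surface s combines the tracking error and its derivative;
    the PWM heating duty cycle switches with sign(s) and is saturated to
    [0, 1], since an SMA wire can only be heated, not actively cooled.
    Surface and gain are illustrative assumptions, not the paper's design.
    """
    e = angle_ref_deg - angle_deg
    s = lam * e - angle_rate_dps                   # sliding surface
    return float(np.clip(np.sign(s), 0.0, 1.0))    # PWM heating duty cycle

print(vsc_duty(30.0, 12.0, 1.5))   # far below target -> heat (1.0)
print(vsc_duty(30.0, 31.0, 0.0))   # overshoot -> stop heating (0.0)
```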
(This article belongs to the Special Issue Shape Memory Alloy (SMA) Actuators and Their Applications)
Figures:
Figure 1: Illustration of the human finger movement mechanism and various joint structures.
Figure 2: (a) Fabricated exoskeleton glove. (b) Control and power system. (c–e) Various end effectors designed for the treatment of the MCP, PIP, and DIP joints, respectively.
Figure 3: Linkage mechanism: (a) side view, (b) four-bar model, (c) hollow disks friction model.
Figure 4: Schematic representation of the Simulink system model.
Figure 5: Measurement apparatus for evaluating dynamic finger movements.
Figure 6: (a) Schematic depiction of the grip sensor and test objects. (b) Calibration results.
Figure 7: Comparison of simulation and experimental test for a profile input.
Figure 8: Stress–temperature phase diagrams for NiTi shape memory alloy wire: (a) under different constant DC voltage stimulation, (b) under PWM stimulation signals. The color legend indicates the martensite volume fraction.
Figure 9: Experimental results of finger movement measurements at different input speeds, with transparent margins indicating the measurement error bands.
Figure 10: Experimental results of the joint displacements for all fingers: (a) metacarpophalangeal (MCP) joint, (b) proximal interphalangeal (PIP) joint, and (c) distal interphalangeal/interphalangeal (DIP/IP) joint.
Figure 11: Experimental results of the torque measurement for all fingers: (a) metacarpophalangeal (MCP) joint, (b) proximal interphalangeal (PIP) joint, and (c) distal interphalangeal/interphalangeal (DIP/IP) joint.
26 pages, 4673 KiB  
Article
Utilizing IoMT-Based Smart Gloves for Continuous Vital Sign Monitoring to Safeguard Athlete Health and Optimize Training Protocols
by Mustafa Hikmet Bilgehan Ucar, Arsene Adjevi, Faruk Aktaş and Serdar Solak
Sensors 2024, 24(20), 6500; https://doi.org/10.3390/s24206500 - 10 Oct 2024
Viewed by 1505
Abstract
This paper presents the development of a vital sign monitoring system designed specifically for professional athletes, with a focus on runners. The system aims to enhance athletic performance and mitigate the health risks associated with intense training regimens. It comprises a wearable glove that monitors key physiological parameters such as heart rate, blood oxygen saturation (SpO2), and body temperature, along with gyroscope data used to calculate linear speed, among other relevant metrics. Additionally, environmental variables, including ambient temperature, are tracked. To ensure accuracy, the system incorporates an onboard filtering algorithm to minimize false positives, allowing for timely intervention during instances of physiological abnormality. The study demonstrates the system’s potential to optimize performance and protect athlete well-being by facilitating real-time adjustments to training intensity and duration. The experimental results show that the system adheres to the classical “220-age” formula for calculating maximum heart rate, responds promptly to predefined thresholds, and outperforms a moving average filter in noise reduction, with the Gaussian filter delivering superior performance.
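The alerting logic rests on the classical “220-age” maximum heart rate estimate and on Gaussian smoothing of the raw stream. A sketch follows, with an invented heart-rate trace and an assumed alert threshold of 90% of maximum HR (the paper’s actual thresholds are not stated in the abstract).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def max_heart_rate(age_years):
    """Classical '220 - age' estimate used for alert thresholds."""
    return 220 - age_years

# Invented noisy heart-rate stream (bpm), sampled once per second.
rng = np.random.default_rng(0)
hr_raw = 150 + 10 * np.sin(np.linspace(0, 6, 120)) + rng.normal(0, 5, 120)

hr_gauss = gaussian_filter1d(hr_raw, sigma=3)                  # Gaussian filter
hr_moving = np.convolve(hr_raw, np.ones(7) / 7, mode="same")   # moving average, for comparison

threshold = 0.9 * max_heart_rate(age_years=28)   # assumed 90%-of-max alert level
alerts = hr_gauss > threshold
print(f"threshold = {threshold:.0f} bpm, alert samples: {alerts.sum()}")
print(f"std raw/moving/gaussian: {hr_raw.std():.1f}/{hr_moving.std():.1f}/{hr_gauss.std():.1f}")
```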
(This article belongs to the Section Internet of Things)
Figures:
Figure 1: Sports devices and wearables with integrated sensors.
Figure 2: The proposed IoMT-empowered athlete health monitoring and alert system.
Figure 3: The front (a) and back (b) views of the prototype IoMT-based athlete health monitoring and alert system.
Figure 4: Web interface for real-time data visualization.
Figure 5: Acceleration coordinate systems used to calculate the linear speed. (a) Gyroscope rotation. (b) Athlete movement illustration.
Figure 6: Heart rate values during different phases.
Figure 7: SpO2 values during different phases.
Figure 8: Body temperature values during different phases.
Figure 9: Speed values during different phases.
Figure 10: Alert signal during different phases.
Figure 11: Heart rate values during different phases (moving average).
Figure 12: SpO2 values during different phases (moving average).
Figure 13: Speed values during different phases (moving average).
Figure 14: Heart rate values during different phases (Gaussian filter).
Figure 15: SpO2 values during different phases (Gaussian filter).
Figure 16: Speed values during different phases (Gaussian filter).
Figure 17: Heart rate values during the resting phase.
Figure 18: Speed values during the resting phase.
Figure 19: SpO2 values during the resting phase.
Figure 20: Body temperature values during the resting phase.
Figure 21: Heart rate values during the walking phase.
Figure 22: Speed values during the walking phase.
Figure 23: SpO2 values during the walking phase.
Figure 24: Body temperature values during the walking phase.
Figure 25: Heart rate values during the running phase.
Figure 26: Speed values during the running phase.
Figure 27: SpO2 values during the running phase.
Figure 28: Body temperature values during the running phase.
24 pages, 3036 KiB  
Article
Comparing Machine Learning Models for Sentiment Analysis and Rating Prediction of Vegan and Vegetarian Restaurant Reviews
by Sanja Hanić, Marina Bagić Babac, Gordan Gledec and Marko Horvat
Computers 2024, 13(10), 248; https://doi.org/10.3390/computers13100248 - 1 Oct 2024
Viewed by 1307
Abstract
The paper investigates the relationship between written reviews and numerical ratings of vegan and vegetarian restaurants, aiming to develop a predictive model that accurately determines numerical ratings based on review content. The dataset was obtained by scraping reviews from November 2022 until January 2023 from the TripAdvisor website. The study applies multidimensional scaling and clustering using the KNN algorithm to visually represent the textual data. Sentiment analysis and rating predictions are conducted using neural networks, support vector machines (SVM), random forest, Naïve Bayes, and BERT models. Text vectorization is accomplished through term frequency-inverse document frequency (TF-IDF) and global vectors (GloVe). The analysis identified three main topics related to vegan and vegetarian restaurant experiences: (1) restaurant ambiance, (2) personal feelings towards the experience, and (3) the food itself. The study processed a total of 33,439 reviews, identifying key aspects of the dining experience and testing various machine learning methods for sentiment and rating predictions. Among the models tested, BERT outperformed the others, and TF-IDF proved slightly more effective than GloVe for word representation.
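A compact way to reproduce this comparison workflow is an sklearn pipeline that pairs TF-IDF with each classifier in turn. The toy reviews below are invented stand-ins; the paper’s dataset is scraped from TripAdvisor, and its BERT model lies outside this sketch.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for scraped restaurant reviews and their star ratings.
reviews = ["lovely ambiance and great vegan burger", "cold food, rude staff",
           "the tofu curry was delicious", "overpriced and disappointing",
           "cozy place, friendly service", "bland dishes, would not return"]
ratings = [5, 1, 5, 2, 4, 2]

for clf in (LinearSVC(), RandomForestClassifier(random_state=0), MultinomialNB()):
    pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF word representation
    pipe.fit(reviews, ratings)
    print(type(clf).__name__, pipe.predict(["delicious vegan food, great vibe"]))
```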
Figures:
Figure 1: Diagram describing data collection and preprocessing steps.
Figure 2: Distribution of the star ratings in the dataset.
Figure 3: The implementation of the neural network for rating prediction with TF-IDF.
Figure 4: The elbow method.
Figure 5: The silhouette method for k = 3 clusters.
Figure 6: The silhouette method for k = 4 clusters.
Figure 7: The silhouette method for k = 5 clusters.
Figure 8: Silhouette score based on the number of clusters k.
Figure 9: Visualization of three topics within the top 200 words.
22 pages, 5199 KiB  
Article
Machine Learning-Based Gesture Recognition Glove: Design and Implementation
by Anna Filipowska, Wojciech Filipowski, Paweł Raif, Marcin Pieniążek, Julia Bodak, Piotr Ferst, Kamil Pilarski, Szymon Sieciński, Rafał Jan Doniec, Julia Mieszczanin, Emilia Skwarek, Katarzyna Bryzik, Maciej Henkel and Marcin Grzegorzek
Sensors 2024, 24(18), 6157; https://doi.org/10.3390/s24186157 - 23 Sep 2024
Cited by 1 | Viewed by 3034
Abstract
In the evolving field of human–computer interaction (HCI), gesture recognition has emerged as a critical focus, with sensor-equipped smart gloves playing one of the most important roles. Despite the significance of dynamic gesture recognition, most research on data gloves has concentrated on static gestures, with only a small percentage addressing dynamic gestures or both. This study explores the development of a low-cost smart glove prototype designed to capture and classify dynamic hand gestures for game control and presents a data glove prototype equipped with five flex sensors, five force sensors, and one inertial measurement unit (IMU) sensor. To classify dynamic gestures, we developed a neural network-based classifier: a convolutional neural network (CNN) with three two-dimensional convolutional layers and rectified linear unit (ReLU) activation, achieving an accuracy of 90%. The developed glove effectively captures dynamic gestures for game control, achieving high classification accuracy, precision, and recall, as evidenced by the confusion matrix and training metrics. Despite limitations in the number of gestures and participants, the solution offers a cost-effective and accurate approach to gesture recognition, with potential applications in VR/AR environments.
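A sketch of a three-layer 2D CNN classifier of the kind described, in Keras. The window size, channel count (assuming 5 flex + 5 force + 6 IMU axes), and filter sizes are assumptions, not the paper’s architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input: a gesture window of 100 time steps x 16 sensor channels,
# treated as a single-channel "image" for 2D convolution.
model = models.Sequential([
    layers.Input(shape=(100, 16, 1)),
    layers.Conv2D(16, (5, 3), activation="relu"),
    layers.MaxPooling2D((2, 1)),
    layers.Conv2D(32, (5, 3), activation="relu"),
    layers.MaxPooling2D((2, 1)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # fist, double tap, spread, wave left/right
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```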
(This article belongs to the Special Issue Wearable Sensors for Human Activity Monitoring)
Figures:
Figure 1: Electrical schematic of the constructed glove.
Figure 2: Gesture recognition glove with sensor placement.
Figure 3: Flowchart of the data acquisition and processing.
Figure 4: Game control gestures: (a) fist; (b) double tap; (c) finger spread; (d) wave left; (e) wave right.
Figure 5: Structure of the deep neural network.
Figure 6: Confusion matrix.
Figure 7: Loss and accuracy of the deep neural network.
23 pages, 3964 KiB  
Article
Geometry of Textual Data Augmentation: Insights from Large Language Models
by Sherry J. H. Feng, Edmund M-K. Lai and Weihua Li
Electronics 2024, 13(18), 3781; https://doi.org/10.3390/electronics13183781 - 23 Sep 2024
Cited by 1 | Viewed by 1518
Abstract
Data augmentation is crucial for enhancing the performance of text classification models when labelled training data are scarce. For natural language processing (NLP) tasks, large language models (LLMs) can generate high-quality augmented data, but a fundamental understanding of the reasons for their effectiveness remains limited. This paper presents a geometric and topological perspective on textual data augmentation using LLMs. We compare the augmentation data generated by GPT-J with those generated through cosine similarity from Word2Vec and GloVe embeddings. Topological data analysis reveals that GPT-J-generated data maintain label coherence. Convex hull analysis of such data, represented by their two principal components, shows that they lie within the spatial boundaries of the original training data. Delaunay triangulation reveals that increasing the number of augmented data points connected within these boundaries correlates with improved classification accuracy. These findings provide insight into the superior performance of LLMs in data augmentation, and a framework for predicting the usefulness of augmentation data based on geometric properties could be built on these techniques.
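Both geometric tests are directly available in scipy.spatial: point-in-hull membership via Delaunay.find_simplex, and edge counting over the triangulation. The 2D point clouds below are random stand-ins for the PCA-projected embeddings the paper analyzes.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(1)
train_2d = rng.normal(size=(200, 2))             # stand-in: two principal components
augmented = rng.normal(scale=0.8, size=(50, 2))  # stand-in: augmented points

hull = ConvexHull(train_2d)
tri = Delaunay(train_2d)
print(f"hull area = {hull.volume:.2f}")          # for 2D, .volume is the area

# Fraction of augmented points inside the original data's convex hull ...
inside = tri.find_simplex(augmented) >= 0
print(f"{inside.mean():.0%} of augmented points fall inside the hull")

# ... and the edge count of a triangulation over original + in-hull augmented points.
combined = np.vstack([train_2d, augmented[inside]])
tri_combined = Delaunay(combined)
edges = {tuple(sorted(e)) for simplex in tri_combined.simplices
         for e in zip(simplex, np.roll(simplex, 1))}
print(f"{len(edges)} Delaunay edges")
```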
(This article belongs to the Special Issue Emerging Theory and Applications in Natural Language Processing)
Figures:
Figure 1: Scree plots of post-augmentation PCA components of the SST2 and TREC datasets.
Figure 2: Persistence diagrams of SST2: (a) base model using Word2Vec embeddings; (b) base model using GloVe embeddings; (c) base model using GPT-J embeddings; (d) augmented data using Word2Vec embeddings; (e) augmented data using GloVe embeddings; and (f) augmented data visualization using GPT-J embeddings.
Figure 3: Persistence diagrams of TREC: (a) base model using Word2Vec embeddings; (b) base model using GloVe embeddings; (c) base model using GPT-J embeddings; (d) augmented data using Word2Vec embeddings; (e) augmented data using GloVe embeddings; and (f) augmented data using GPT-J embeddings.
Figure 4: H1 values for the SST2 and TREC datasets.
Figure 5: Bottleneck distance analysis for different models and datasets. Red lines indicate the largest bottleneck distance between pairs of matched points, suggesting more significant topological changes due to augmentation. Green indicates that the distance between matched points is smaller than the bottleneck.
Figure 6: Comparison of augmented word shapes used for the Word2Vec CNN embedding. The lines indicate the convex hull.
Figure 7: Comparison of augmented word shapes used for the GloVe embedding. The lines indicate the convex hull.
Figure 8: Delaunay triangulation (DT) visualization of word embeddings’ data points with GPT-J augmented data.
Figure 9: Relation between the number of edges in the triangulation and the classifier accuracy.
14 pages, 1507 KiB  
Article
Enhanced 2D Hand Pose Estimation for Gloved Medical Applications: A Preliminary Model
by Adam W. Kiefer, Dominic Willoughby, Ryan P. MacPherson, Robert Hubal and Stephen F. Eckel
Sensors 2024, 24(18), 6005; https://doi.org/10.3390/s24186005 - 17 Sep 2024
Viewed by 1076
Abstract
(1) Background: As digital health technology evolves, the role of accurate medical-gloved hand tracking is becoming more important for the assessment and training of practitioners to reduce procedural errors in clinical settings. (2) Method: This study utilized computer vision for hand pose estimation to model skeletal hand movements during in situ aseptic drug compounding procedures. High-definition video cameras recorded hand movements while practitioners wore medical gloves of different colors. Hand poses were manually annotated, and machine learning models were developed and trained using the DeepLabCut interface via an 80/20 training/testing split. (3) Results: The developed model achieved an average root mean square error (RMSE) of 5.89 pixels across the training data set and 10.06 pixels across the test set. When excluding keypoints with a confidence value below 60%, the test set RMSE improved to 7.48 pixels, reflecting high accuracy in hand pose tracking. (4) Conclusions: The developed hand pose estimation model effectively tracks hand movements across both controlled and in situ drug compounding contexts, offering a first-of-its-kind medical glove hand tracking method. This model holds potential for enhancing clinical training and ensuring procedural safety, particularly in tasks requiring high precision such as drug compounding.
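The reported metric is pixel RMSE over annotated keypoints, optionally excluding low-confidence predictions. A sketch with toy keypoints follows; the 60% threshold mirrors the paper, but the coordinates and confidences are invented.

```python
import numpy as np

def keypoint_rmse(pred_xy, true_xy, confidence=None, min_conf=None):
    """RMSE in pixels over hand keypoints, optionally dropping low-confidence ones.

    Mirrors the paper's evaluation, where excluding keypoints below 60%
    confidence reduced the test RMSE from 10.06 to 7.48 pixels.
    """
    pred, true = np.asarray(pred_xy, float), np.asarray(true_xy, float)
    err = np.linalg.norm(pred - true, axis=-1)   # per-keypoint pixel error
    if confidence is not None and min_conf is not None:
        err = err[np.asarray(confidence) >= min_conf]
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: 4 keypoints, one badly localized at low confidence.
pred = [[100, 200], [150, 210], [170, 260], [400, 300]]
true = [[102, 198], [149, 213], [168, 262], [310, 295]]
conf = [0.95, 0.90, 0.80, 0.30]
print(keypoint_rmse(pred, true, conf))                 # high RMSE, all keypoints
print(keypoint_rmse(pred, true, conf, min_conf=0.6))   # lower RMSE after filtering
```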
(This article belongs to the Special Issue Wearable Sensors for Continuous Health Monitoring and Analysis)
Figures:
Figure 1: A schematic of the camera placement across both data collections of the in situ data collection. For the initial training data collection, the left (L) and overhead (O) camera views were used, while only the L camera was used for the in situ testing data collection. The right (R) camera was used for the initial training data collection; however, no frames were coded for inclusion in the training data set from this perspective.
Figure 2: Comparisons between camera views: (A) camera position in the lower right corner of the LAFW, (B) camera position in the upper left of the LAFW, (C) overhead camera position within the LAFW.
Figure 3: Twenty-two keypoints identified and labeled on the right hand. Labels are mirrored on the contralateral hand and follow an identical naming convention.
Figure 4: Representation of average inference error in pixels. The center of each black cross indicates the ground-truth marker location, while the white circle area indicates average RMSE in pixels. Note, while the image is magnified for visibility, the circles are to scale relative to an RMSE based on the full 1920 × 1080 pixel image.
Figure 5: Error in manual labeling shown by overlaying two discrete images of the right hand from two different video frames, with the lighter and darker circles indicating independent labeling efforts of the same keypoint by the same human coder.
Figure 6: Video frame with annotations when the left hand occludes the right.
20 pages, 5140 KiB  
Article
MOVING: A Multi-Modal Dataset of EEG Signals and Virtual Glove Hand Tracking
by Enrico Mattei, Daniele Lozzi, Alessandro Di Matteo, Alessia Cipriani, Costanzo Manes and Giuseppe Placidi
Sensors 2024, 24(16), 5207; https://doi.org/10.3390/s24165207 - 11 Aug 2024
Viewed by 2203
Abstract
Brain–computer interfaces (BCIs) are pivotal in translating neural activity into control commands for external assistive devices. Non-invasive techniques like electroencephalography (EEG) offer a balance of sensitivity and spatial-temporal resolution for capturing brain signals associated with motor activities. This work introduces MOVING, a Multi-Modal dataset of EEG signals and Virtual Glove Hand Tracking. The dataset comprises neural EEG signals and kinematic data associated with three hand movements (open/close, finger tapping, and wrist rotation) along with a rest period. Obtained from 11 subjects using a 32-channel dry wireless EEG system, it also includes synchronized kinematic data captured by a Virtual Glove (VG) system equipped with two orthogonal Leap Motion Controllers. These two devices allow for fast assembly (∼1 min), although they introduce more noise than gold-standard acquisition devices. The study investigates which frequency bands in EEG signals are the most informative for motor task classification and the impact of baseline reduction on gesture recognition. Deep learning techniques, particularly EEGnetV4, are applied to analyze and classify movements based on the EEG data. The dataset aims to facilitate advances in BCI research and in the development of assistive devices for people with impaired hand mobility; it is continuously growing with data from additional subjects and is hoped to serve as a benchmark for new BCI approaches and applications.
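The cleaning step shown in the dataset’s Figure 1 is a 1–45 Hz band-pass. A zero-phase Butterworth version in scipy is sketched below; the sampling rate and synthetic signal are assumptions for illustration, not properties of the MOVING recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(data, fs, low=1.0, high=45.0, order=4):
    """Zero-phase 1-45 Hz band-pass, as in the dataset's cleaning step.

    data: array of shape (n_channels, n_samples); fs: sampling rate in Hz.
    """
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, data, axis=-1)

# Synthetic 32-channel recording at an assumed 500 Hz sampling rate,
# contaminated by a large slow drift at 0.2 Hz.
fs = 500
t = np.arange(0, 4, 1 / fs)
raw = np.random.default_rng(2).normal(size=(32, t.size)) + 50 * np.sin(2 * np.pi * 0.2 * t)
clean = bandpass_eeg(raw, fs)
print(raw.std(), clean.std())   # the 0.2 Hz drift is removed by the filter
```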
Figures:
Figure 1: EEG raw data before (a) and after (b) a cleaning band-pass [1–45 Hz] filtering. In (a), the high noise level makes it difficult to visualize high-amplitude brain signals. Vertical lines represent the triggers for rest, fixation cross, and open/close movement, respectively.
Figure 2: VG system while tracking the hand movements. The hand positioning system is united with that of the VG.
Figure 3: Model based on data collected by the VG. The left part shows the vertical view; the right part shows the horizontal view. The Cartesian reference system is shown in the center.
Figure 4: Acquisition environment scheme (a) and real acquisition environment (b) used in this work.
Figure 5: The analyzed movements. The dotted line represents the movement acquired but not analyzed in this work. The class “movement” is created by merging the open/close and wrist rotation classes.
Figure 6: The hand movement protocol used.
Figure 7: Instructions shown to participants for both MI and ME. The flow stops when 8 repetitions are reached.
Figure 8: Preprocessing for sample generation and training. The colored boxes represent the parameters that change to explore different frequency bands (orange) and the baseline reduction method (pink) for each combination of movement/rest. The movement class (violet box) represents the merging of the open/close and wrist rotation classes. The lower part of the scheme describes the training process of the model.
Figure 9: x (a), y (b), and z (c) components of the fingertip trajectory for the horizontal LMC. All fingertips are reported in a single plot.
Figure 10: x (a), y (b), and z (c) components of the fingertip trajectory for the vertical LMC. All fingertips are reported in a single plot.
Figure 11: x (a), y (b), and z (c) components of the fingertip velocity for the horizontal LMC. All fingertips are reported in a single plot.
Figure 12: x (a), y (b), and z (c) components of the fingertip velocity for the vertical LMC. All fingertips are reported in a single plot.
Figure 13: Spatial distribution of PSD for each task in a single subject. The timeline is the same as used in Figures 9–12.
20 pages, 4716 KiB  
Article
Novel Wearable System to Recognize Sign Language in Real Time
by İlhan Umut and Ümit Can Kumdereli
Sensors 2024, 24(14), 4613; https://doi.org/10.3390/s24144613 - 16 Jul 2024
Viewed by 1653
Abstract
The aim of this study is to develop a practical software solution for real-time recognition of sign language words using two arms. This will facilitate communication between hearing-impaired individuals and those who can hear. We are aware of several sign language recognition systems [...] Read more.
The aim of this study is to develop a practical software solution for real-time recognition of sign language words performed with two arms, facilitating communication between hearing-impaired individuals and those who can hear. We are aware of several sign language recognition systems developed using different technologies, including cameras, armbands, and gloves. However, the system we propose in this study stands out for its practicality, utilizing surface electromyography (muscle activity) and inertial measurement unit (motion dynamics) data from both arms. It addresses the drawbacks of other methods, such as high cost, low accuracy caused by ambient light and obstacles, and complex hardware requirements, which have limited their practical application. Our software runs on different operating systems and uses digital signal processing and machine learning methods developed specifically for this study. For testing, we created a dataset of 80 words selected by their frequency of use in daily life and performed a thorough feature extraction process. We evaluated recognition performance using various classifiers and parameters and compared the results. The random forest algorithm emerged as the most successful, achieving a remarkable 99.875% accuracy, while the naïve Bayes algorithm had the lowest success rate at 87.625%. The new system promises to significantly improve communication for people with hearing disabilities while integrating seamlessly into daily life without compromising user comfort or lifestyle quality. Full article
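As a rough illustration of the classification stage, the sketch below trains a random forest on pre-extracted feature vectors with scikit-learn. The feature extraction is condensed into two common time-domain sEMG features (root mean square and waveform length); these choices, the array shapes, and all names are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def emg_features(window):
    """Two common time-domain sEMG features per channel (illustrative)."""
    rms = np.sqrt(np.mean(window ** 2, axis=-1))          # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=-1)), axis=-1)  # waveform length
    return np.concatenate([rms, wl])

# windows: (n_samples, n_channels, n_timesteps); stand-in for two
# 8-channel armbands, 5 repetitions of each of the 80 words
windows = np.random.randn(400, 16, 200)
labels = np.repeat(np.arange(80), 5)

X = np.array([emg_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```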
Show Figures

Figure 1: Myo armband.
Figure 2: Flow chart of the software: (a) test and (b) train.
Figure 3: Examples of the graphical user interfaces: (a) Windows main user interface; (b) Android main user interface; (c) settings user interface; (d) records user interface; (e) train user interface; (f) dictionary user interface.
Figure 4: The five gestures recognized by the Myo armband [43].
Figure 5: Testing the system with a sign language instructor.
Figure 6: An example Weka training graphical user interface (Windows only).
18 pages, 5076 KiB  
Article
Gesture-Controlled Robotic Arm for Agricultural Harvesting Using a Data Glove with Bending Sensor and OptiTrack Systems
by Zeping Yu, Chenghong Lu, Yunhao Zhang and Lei Jing
Micromachines 2024, 15(7), 918; https://doi.org/10.3390/mi15070918 - 16 Jul 2024
Cited by 2 | Viewed by 1533
Abstract
This paper presents a gesture-controlled robotic arm system designed for agricultural harvesting, utilizing a data glove equipped with bending sensors and OptiTrack systems. The system aims to address the challenges of labor-intensive fruit harvesting by providing a user-friendly and efficient solution. The data glove captures hand gestures and movements using bending sensors and reflective markers, while the OptiTrack system ensures high-precision spatial tracking. Machine learning algorithms, specifically a CNN+BiLSTM model, are employed to accurately recognize hand gestures and control the robotic arm. Experimental results demonstrate the system's high precision in replicating hand movements, with a Euclidean distance of 0.0131 m and a root mean square error (RMSE) of 0.0095 m, as well as robust gesture recognition, with an overall accuracy of 96.43%. This hybrid approach combines the adaptability and speed of semi-automated systems with the precision and usability of fully automated systems, offering a promising solution for sustainable and labor-efficient agricultural practices. Full article
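For readers curious about the CNN+BiLSTM classifier, a minimal PyTorch sketch of such a hybrid is shown below: a 1-D convolution extracts local features from the glove's ten bending-sensor channels, and a bidirectional LSTM models their temporal dynamics before a linear layer scores the seven gestures. Layer sizes and hyperparameters are assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_channels=10, n_classes=7, hidden=64):
        super().__init__()
        # 1-D convolution over time; input shape (batch, channels, timesteps)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        x = self.conv(x)            # (batch, 32, timesteps / 2)
        x = x.transpose(1, 2)       # LSTM expects (batch, time, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])  # classify from the last time step

model = CNNBiLSTM()
logits = model(torch.randn(8, 10, 100))  # 8 windows of 100 samples each
print(logits.shape)                      # torch.Size([8, 7])
```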
Show Figures

Figure 1: System block diagram. Solid arrows indicate data transmitted through a wire, dashed arrows indicate data transmitted wirelessly, and dotted arrows indicate data transmitted within the system.
Figure 2: BS-65 bending sensor response curve. When bent in the direction that stretches the sensor surface, the resistance increases; when bent in the direction that compresses the sensor surface, the resistance decreases. (a) Tensile strength of the sensor surface. (b) Compressive strength of the sensor surface.
Figure 3: The custom-designed ESP32 board. The board features an ESP32S3-WROOM1 module (left) for Wi-Fi and ADC functionalities, operational amplifiers (U7, U8, and U9), a real-time clock (U5), and a battery management system (U3).
Figure 4: Bending sensor data glove, consisting of ten bending sensors fixed on cloth.
Figure 5: The resistance measurement circuits, used to measure resistance changes in the bending sensor and convert them into voltage changes for easy measurement. (a) The inverting signal amplifier circuit. (b) The reference voltage circuit.
Figure 6: The output voltage (V_out) versus the resistance of the bending sensor (R_bs) for different feedback resistor values (R_fb) in the inverting signal amplifier circuit. The graph shows the response curves for R_fb values of 90 kΩ (black), 110 kΩ (red), and 130 kΩ (blue). (A worked example of this relation appears after this figure list.)
Figure 7: Schematic diagram of the robotic arm showing its dimensions and range of motion. (a) The lengths of each segment of the robotic arm and their respective joints (J1 to J6), with measurements in millimeters. (b) The range of motion.
Figure 8: System architecture and data flow for the gesture-controlled robotic arm. The architecture comprises four main programs: the ESP32 board program, the data processing program, the deep learning program, and the robotic arm program.
Figure 9: CNN+BiLSTM network architecture.
Figure 10: Experimental setup showing a hand wearing the data glove and the robotic arm.
Figure 11: Data comparison before and after eliminating systematic errors. (a) Discrepancies between the robotic arm and human hand coordinates. (b) Corrected data demonstrating improved alignment after addressing systematic errors.
Figure 12: Gestures: ① rest, ② show 1, ③ show 2, ④ claw, ⑤ fist, ⑥ pinch with index finger and thumb, and ⑦ all-finger pinch.
Figure 13: CNN+BiLSTM confusion matrix (the darker the blue, the higher the recognition rate).
Figure 14: CNN+BiLSTM loss and accuracy.
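If the bending sensor acts as the input resistor of the inverting amplifier in Figure 5a, the curves in Figure 6 follow the textbook relation V_out = -(R_fb / R_bs) · V_ref. This topology and the reference voltage below are assumptions inferred from the captions, not details confirmed by the paper; the snippet simply tabulates the relation for the three plotted R_fb values.

```python
# Hypothetical worked example of the inverting-amplifier relation
# V_out = -(R_fb / R_bs) * V_ref, assuming the bending sensor is the
# input resistor and V_ref = -1.0 V (sign chosen so V_out is positive).
V_REF = -1.0
for r_fb in (90e3, 110e3, 130e3):       # feedback resistors from Figure 6
    for r_bs in (50e3, 100e3, 200e3):   # example sensor resistances
        v_out = -(r_fb / r_bs) * V_REF
        print(f"R_fb={r_fb/1e3:.0f} kΩ, R_bs={r_bs/1e3:.0f} kΩ -> {v_out:.2f} V")
```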