Sensors, Volume 24, Issue 11 (June-1 2024) – 440 articles

Cover Story: Soft robotic grippers mimic the dexterity of human hands. To improve their performance, integrated sensors built with flexible electronics enable real-time measurements for shape sensing, gesture recognition, and pressure mapping, allowing the grippers to adjust their grip to an object's shape. This paper presents the design, fabrication, and characterization of piezoresistive sensors arranged in a Wheatstone bridge configuration on a flexible polyimide substrate to detect curvature and bending. The sensors are embedded in PDMS with SMA foil (to mimic the temperature-controlled finger movements of a human hand) and connected to a five-finger-shaped PCB, providing a voltage output while the resulting finger shapes are measured, for future applications in soft robotics and prosthetics.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 4031 KiB  
Article
Self-Calibration for Star Sensors
by Jingneng Fu, Ling Lin and Qiang Li
Sensors 2024, 24(11), 3698; https://doi.org/10.3390/s24113698 - 6 Jun 2024
Viewed by 1295
Abstract
Aiming to address the chicken-and-egg problem in star identification and the intrinsic parameter determination processes of on-orbit star sensors, this study proposes an on-orbit self-calibration method for star sensors that does not depend on star identification. First, the self-calibration equations of a star sensor are derived based on the invariance of the interstar angle of a star pair between image frames, without any requirements for the true value of the interstar angle of the star pair. Then, a constant constraint of the optical path from the star spot to the center of the star sensor optical system is defined to reduce the biased estimation in self-calibration. Finally, a scaled nonlinear least square method is developed to solve the self-calibration equations, thus accelerating iteration convergence. Our simulation and analysis results show that the bias of the focal length estimation in on-orbit self-calibration with a constraint is two orders of magnitude smaller than that in on-orbit self-calibration without a constraint. In addition, it is shown that convergence can be achieved in 10 iterations when the scaled nonlinear least square method is used to solve the self-calibration equations. The calibrated intrinsic parameters obtained by the proposed method can be directly used in traditional star map identification methods. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
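The core quantity behind this self-calibration scheme can be illustrated with a short sketch. The snippet below is not the authors' code; the focal length, principal point, pixel pitch, and sample coordinates are made-up values. It computes the interstar angle of a star pair from image coordinates with a pinhole model; because that angle is the same in every frame for a rigid star pair, the frame-to-frame residual can drive a least-squares estimate of the intrinsic parameters without star identification.

```python
# Minimal sketch of the interstar-angle invariance exploited by the self-calibration
# (illustrative parameter values, not the paper's data or implementation).
import numpy as np

def star_direction(u, v, f_mm, cx, cy, pixel_mm):
    """Unit vector from the optical center toward a star spot at pixel (u, v)."""
    x = (u - cx) * pixel_mm
    y = (v - cy) * pixel_mm
    d = np.array([x, y, f_mm])
    return d / np.linalg.norm(d)

def interstar_angle(p1, p2, f_mm, cx, cy, pixel_mm):
    """Angle (rad) between two star spots; invariant across frames for a rigid star pair."""
    w1 = star_direction(*p1, f_mm, cx, cy, pixel_mm)
    w2 = star_direction(*p2, f_mm, cx, cy, pixel_mm)
    return np.arccos(np.clip(np.dot(w1, w2), -1.0, 1.0))

# The angle computed in frame k should match the one in frame k+1; the residual
# drives the least-squares estimate of (f, cx, cy) without star identification.
a_k  = interstar_angle((812.3, 1040.1), (1210.7, 905.4), 20.0, 1000, 1000, 0.0055)
a_k1 = interstar_angle((830.9, 1052.6), (1228.2, 918.0), 20.0, 1000, 1000, 0.0055)
residual = a_k - a_k1
```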
Figures:
Figure 1. The pinhole imaging model of a star sensor.
Figure 2. (a) Invariance of the interstar angle of a star pair in the object and image spaces, which represents the basis for traditional on-orbit calibration; (b) invariance of the interstar angle of a star pair in the image space at different times, which is the basis for on-orbit self-calibration.
Figure 3. Illustration of the motion decomposition process of a star spot on the image plane: (a) rotation around the optical axis; (b) motion in the radial direction.
Figure 4. The convergence curves obtained by different calculation methods; the number of iterations and the value of the scale parameter λ were used to calculate intrinsic parameters using a typical set of simulation data (the parameter settings were as follows: the number of image frames was N_F = 100, the star spot extraction error was σ_S = 0.10 pixel, the number of star spots was N_S = 20, the initial position of the principal point of the optical system was (1000, 1000) pixel, and the focal length was 20 mm): (a–c) TCOC and TCWC; (d–f) SCOC and SCWC.
Figure 5. The intrinsic parameter biases calculated by different calibration methods under the conditions of a number of image frames of N_F = 100, a star spot extraction error of σ_S = 0.01–0.50 pixel, and a number of star spots of N_S = 20–100: (a–c) TCOC; (d–f) TCWC; (g–i) SCOC; (j–l) SCWC.
Figure 6. The intrinsic parameter total errors of different calibration methods under the conditions of a number of image frames of N_F = 100, a star spot extraction error of σ_S = 0.01–0.50 pixel, and a number of star spots of N_S = 20–100: (a–c) TCOC; (d–f) TCWC; (g–i) SCOC; (j–l) SCWC.
Figure 7. The intrinsic parameter biases obtained by different calibration methods under the conditions of a star spot extraction error of σ_S = 0.10 pixel, a number of image frames of N_F = 10–100, and a number of star spots of N_S = 20–100: (a–c) TCOC; (d–f) TCWC; (g–i) SCOC; (j–l) SCWC.
Figure 8. The results of the intrinsic parameter biases calculated by different calibration methods under the conditions of a star spot extraction error of σ_S = 0.10 pixel, a number of image frames of N_F = 10–100, and a number of star spots of N_S = 20–100: (a–c) TCOC; (d–f) TCWC; (g–i) SCOC; (j–l) SCWC.
22 pages, 5274 KiB  
Article
Development of a Personalized Multiclass Classification Model to Detect Blood Pressure Variations Associated with Physical or Cognitive Workload
by Andrea Valerio, Danilo Demarchi, Brendan O’Flynn, Paolo Motto Ros and Salvatore Tedesco
Sensors 2024, 24(11), 3697; https://doi.org/10.3390/s24113697 - 6 Jun 2024
Viewed by 922
Abstract
Comprehending the regulatory mechanisms influencing blood pressure control is pivotal for continuous monitoring of this parameter. Implementing a personalized machine learning model, utilizing data-driven features, presents an opportunity to facilitate tracking blood pressure fluctuations in various conditions. In this work, data-driven photoplethysmograph features extracted from the brachial and digital arteries of 28 healthy subjects were used to feed a random forest classifier in an attempt to develop a system capable of tracking blood pressure. We evaluated the behavior of this latter classifier according to the different sizes of the training set and degrees of personalization used. Aggregated accuracy, precision, recall, and F1-score were equal to 95.1%, 95.2%, 95%, and 95.4% when 30% of a target subject’s pulse waveforms were combined with five randomly selected source subjects available in the dataset. Experimental findings illustrated that incorporating a pre-training stage with data from different subjects made it viable to discern morphological distinctions in beat-to-beat pulse waveforms under conditions of cognitive or physical workload. Full article
(This article belongs to the Special Issue Wearable Technologies and Sensors for Healthcare and Wellbeing)
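As a rough illustration of the personalization scheme described in the abstract, the sketch below trains a random forest on a few source subjects plus 30% of a target subject's labeled pulse waveforms and scores it on the target's remaining beats. The data layout, feature extraction, and hyperparameters are assumptions, not the authors' pipeline.

```python
# Sketch of a personalized random-forest classifier: pool N source subjects with a
# fraction of the target subject's data, evaluate on the target's held-out beats.
# X_by_subject is an assumed dict: subject id -> (feature matrix X, label vector y).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def personalized_rf(X_by_subject, target_id, source_ids, target_fraction=0.3, seed=0):
    X_t, y_t = X_by_subject[target_id]
    X_pers, X_test, y_pers, y_test = train_test_split(
        X_t, y_t, train_size=target_fraction, stratify=y_t, random_state=seed)
    # Pool the source subjects' data with the personalization split of the target.
    X_train = np.vstack([X_by_subject[s][0] for s in source_ids] + [X_pers])
    y_train = np.hstack([X_by_subject[s][1] for s in source_ids] + [y_pers])
    clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_train, y_train)
    return f1_score(y_test, clf.predict(X_test), average="weighted")
```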
Figures:
Figure 1. System employed to collect PPG raw data from the selected sites, elbow (brachial artery) and thumb (digital artery).
Figure 2. Data collection protocol followed in this study, along with the evolution of the averaged pulse waveform morphology according to each section of the data capture.
Figure 3. (a) Features extracted from a PPG waveform. (b) Maximum of the first derivative (ms) detected on the velocity plethysmography (VPG). (c) Fiducial points detected on the acceleration plethysmography (APG).
Figure 4. Overview of the tested training strategies. (Left) Workflow employed for the generalized approach (PIM). (Center) Tested combinations for person-specific strategies (PSMs). (Right) Workflow adopted by every PSM_i,j.
Figure 5. Definition of true positive (TP), false positive (FP), false negative (FN), and true negative (TN) instances in a multiclass problem. (a) REST class. (b) Cognitive task (CT) class. (c) After-exercise (AE) class.
Figure 6. Values of evaluation metrics (accuracy, precision, recall, and F1-score) according to the training strategy denoted as PSM_SD. A 90% threshold (indicated by the gray dashed line) is used to identify subjects whose performance drops by more than 10% compared to the training phase.
Figure 7. Averaged values of the accuracy score according to different combinations of the number of source subjects and diverse fractions of data employed to personalize the RF model.
Figure 8. Evaluation metrics computed for each individual employing a fraction of the target subject data set equal to 30% and a diverse number of source subjects (N). A 90% threshold (indicated by the gray dashed line) is used to identify subjects whose performance drops by more than 10% compared to the training phase. (a) N = 5. (b) N = 10. (c) N = 15.
Figure 9. Evaluation metrics computed for each individual employing a fraction of the target subject data set equal to 50% and a diverse number of source subjects (N). A 90% threshold (indicated by the gray dashed line) is used to identify subjects whose performance drops by more than 10% compared to the training phase. (a) N = 5. (b) N = 10. (c) N = 15.
15 pages, 6188 KiB  
Article
Investigation of Appropriate Scaling of Networks and Images for Convolutional Neural Network-Based Nerve Detection in Ultrasound-Guided Nerve Blocks
by Takaaki Sugino, Shinya Onogi, Rieko Oishi, Chie Hanayama, Satoki Inoue, Shinjiro Ishida, Yuhang Yao, Nobuhiro Ogasawara, Masahiro Murakawa and Yoshikazu Nakajima
Sensors 2024, 24(11), 3696; https://doi.org/10.3390/s24113696 - 6 Jun 2024
Viewed by 1479
Abstract
Ultrasound imaging is an essential tool in anesthesiology, particularly for ultrasound-guided peripheral nerve blocks (US-PNBs). However, challenges such as speckle noise, acoustic shadows, and variability in nerve appearance complicate the accurate localization of nerve tissues. To address this issue, this study introduces a deep convolutional neural network (DCNN), specifically Scaled-YOLOv4, and investigates an appropriate network model and input image scaling for nerve detection on ultrasound images. Utilizing two datasets, a public dataset and an original dataset, we evaluated the effects of model scale and input image size on detection performance. Our findings reveal that smaller input images and larger model scales significantly improve detection accuracy. The optimal configuration of model size and input image size not only achieved high detection accuracy but also demonstrated real-time processing capabilities. Full article
(This article belongs to the Collection Biomedical Imaging and Sensing)
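Detection performance in this kind of study is typically summarized by counting true positives, false positives, and false negatives against ground-truth boxes at an IoU threshold and reporting the F1-measure. The sketch below illustrates that bookkeeping only; the box format, greedy matching rule, and threshold are assumptions rather than the paper's exact procedure.

```python
# Simplified IoU-based scoring of detections against ground-truth boxes (x1, y1, x2, y2).
def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def f1_from_boxes(detections, ground_truth, thr=0.5):
    matched, tp = set(), 0
    for det in detections:
        # Greedily match each detection to its best-overlapping, still-unmatched ground truth.
        best = max(range(len(ground_truth)),
                   key=lambda i: iou(det, ground_truth[i]), default=None)
        if best is not None and best not in matched and iou(det, ground_truth[best]) >= thr:
            matched.add(best)
            tp += 1
    fp, fn = len(detections) - tp, len(ground_truth) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```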
Figures:
Figure 1. Examples of ultrasound images and their corresponding labeled images in (a) the Public dataset and (b) the Original dataset.
Figure 2. An overview of the Scaled-YOLOv4 models used in this study. The pink and blue dashed arrows indicate replacing the corresponding CSPUp block with a CSPSPP block for YOLOv4-P5 and -P6, respectively.
Figure 3. Computation blocks in the Scaled-YOLOv4 models.
Figure 4. Examples of how to count true positives (TP), false positives (FP), and false negatives (FN) in the Original dataset.
Figure 5. Detection results (F1-measure [%]) for anatomical structures identified with YOLOv4-CSP, -P5, -P6, and -P7 trained on input images of 384 × 384, 640 × 640, 896 × 896, and 1152 × 1152 pixels on the Public and Original datasets.
Figure 6. Visual comparison of detection results among YOLOv4-CSP, -P5, -P6, and -P7 trained on 384 × 384-pixel input images in the Public and Original datasets.
Figure 7. Visual comparison of detection results among YOLO-P7 models trained on input images of 384 × 384, 640 × 640, 896 × 896, and 1152 × 1152 pixels in the Public and Original datasets.
12 pages, 1527 KiB  
Article
A Flexible Ammonia Gas Sensor Based on a Grafted Polyaniline Grown on a Polyethylene Terephthalate Film
by Masanobu Matsuguchi, Kaito Horio, Atsuya Uchida, Rui Kakunaka and Shunsuke Shiba
Sensors 2024, 24(11), 3695; https://doi.org/10.3390/s24113695 - 6 Jun 2024
Cited by 1 | Viewed by 1283
Abstract
A novel NH3 gas sensor is introduced, employing polyaniline (PANI) with a unique structure called a graft film. The preparation method was simple: polydopamine (PD) was coated on a flexible polyethylene terephthalate (PET) film, and PANI graft chains were grown on its surface. This distinctive three-layer sensor showed a response value of 12 for 50 ppm NH3 in a dry atmosphere at 50 °C. This value surpasses those of previously reported sensors using structurally controlled PANI films. Additionally, it is on par with sensors that combine PANI with metal oxide semiconductors or carbon materials, whose high sensitivity has been reported. To confirm the film's potential as a flexible sensor, the effect of bending on its characteristics was investigated. This revealed that although bending decreased the response value, it had no effect on the response time or recovery, indicating that the sensor film itself was not broken by bending and had sufficient mechanical strength. Full article
(This article belongs to the Special Issue Nano/Micro-Structured Materials for Gas Sensor)
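For chemiresistive gas sensors, the reported "response value" is commonly a resistance ratio or relative resistance change between the target gas and clean air, and the response/recovery times are taken as the time to reach a set fraction of the total change. The sketch below shows one such illustrative calculation; the exact definitions used for this sensor are given in the paper, so treat these helpers as assumptions.

```python
# Illustrative chemiresistive-sensor metrics (one common convention, not necessarily
# the definition used by the authors).
def sensor_response(r_gas_ohm, r_air_ohm):
    """Response as the ratio of resistance in gas to baseline resistance in air."""
    return r_gas_ohm / r_air_ohm

def response_time(t_s, r_s, fraction=0.9):
    """Time to reach the given fraction of the total resistance change."""
    r0, r_end = r_s[0], r_s[-1]
    target = r0 + fraction * (r_end - r0)
    for t, r in zip(t_s, r_s):
        # Works for both rising and falling resistance transients.
        if (r - target) * (r_end - r0) >= 0:
            return t
    return None
```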
Figures:
Figure 1. Illustrations of (a) graft polymer conformations, (b) grafted PANI chains, and (c) the diffusion of ammonia molecules within the PANI graft chains.
Figure 2. Three-layered structure of the flexible polyaniline graft film sensor.
Figure 3. Illustration of PANI graft film sensor preparation: (a) polyaniline graft film; (b) sensor structure.
Figure 4. SEM images of the surfaces of (a) PET film, (b) PD-coated PET film, (c) PANI grafted on PET film, and (d) cross-section of PANI grafted film.
Figure 5. The characteristics of the PANI graft film sensor: (a) response–recovery curves to 250 ppm of NH3 measured at 30 °C and 50 °C, (b) response–recovery curve to 50–250 ppm NH3 at 50 °C, (c) calibration curve measured at 50 °C, and (d) comparison of response–recovery curves of sensors with different PANI forms.
Figure 6. Illustration showing how the sensor is bent into shape.
Figure 7. Effects of the bending angle on the response–recovery curves of the PANI grafted film sensor to 250 ppm NH3 at 50 °C: (a) sensor bent 60° outward; (b) sensor bent 60° inward.
17 pages, 570 KiB  
Article
Localization Performance Analysis and Algorithm Design of Reconfigurable Intelligent Surface-Assisted D2D Systems
by Mengke Wang, Tiejun Lv, Pingmu Huang and Zhipeng Lin
Sensors 2024, 24(11), 3694; https://doi.org/10.3390/s24113694 - 6 Jun 2024
Viewed by 692
Abstract
The research on high-precision and all-scenario localization using the millimeter-wave (mmWave) band is of great urgency. Due to the characteristics of mmWave, blockages make the localization task more complex. This paper proposes a cooperative localization system among user equipment (UEs) assisted by reconfigurable intelligent surfaces (RISs), which considers device-to-device (D2D) communication. RISs are used as anchor points, and position estimation is achieved through signal exchanges between UEs. Firstly, we establish a localization model based on this system and derive the UEs’ positioning error bound (PEB) as a performance metric. Then, a UE-RIS joint beamforming design is proposed to optimize channel state information (CSI) with the objective of achieving the minimum PEB. Finally, simulation analysis demonstrates the advantages of the proposed scheme over RIS-assisted base station positioning, achieving centimeter-level accuracy with a 10 dBm lower transmission power. Full article
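The positioning error bound (PEB) used as the performance metric here is, in general, derived from a Fisher information matrix (FIM): the PEB is the square root of the trace of the position block of the inverse FIM. The snippet below shows only that generic last step; the FIM itself depends on the paper's signal model and UE-RIS beamforming design and is treated here as an input.

```python
# Generic PEB computation from a Fisher information matrix (FIM is an assumed input).
import numpy as np

def peb_from_fim(fim, position_indices=(0, 1, 2)):
    crlb = np.linalg.inv(fim)                    # Cramér-Rao lower bound matrix
    idx = np.ix_(position_indices, position_indices)
    return float(np.sqrt(np.trace(crlb[idx])))   # in metres if the FIM is in SI units
```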
Figures:
Figure 1. Localization system with one L-antenna RIS and two N_u-antenna UEs.
Figure 2. PEB versus SNRs for different BF designs.
Figure 3. PEB versus different Q_r for different Q_f.
Figure 4. PEB versus different SNRs for different N.
Figure 5. PEB versus different N_u for different L.
Figure 6. PEB versus different d_q,p between UE1 and UE2 for different P.
Figure 7. Scenario difference between ours and the comparison scheme.
Figure 8. PEB versus different SNRs for different schemes.
11 pages, 2304 KiB  
Article
The Accuracy of Evaluation of the Requirements of the Standards IEC 61000-3-2(12) with the Application of the Wideband Current Transducer
by Ernest Stano and Slawomir Wiak
Sensors 2024, 24(11), 3693; https://doi.org/10.3390/s24113693 - 6 Jun 2024
Cited by 1 | Viewed by 695
Abstract
The aim of this paper is to determine the conversion accuracy of the Danisense DC200IF (Danisense A/S, Taastrup, Denmark) wideband current transducer for its possible application in testing the electromagnetic compatibility requirements of the standards IEC 61000-3-2 and IEC 61000-3-12 with the digital power meter Yokogawa WT5000 (Yokogawa Electric Corporation, Tokyo, Japan). To achieve this goal, the transducer's amplitude error and phase shift are evaluated for distorted current with a main frequency of 50 Hz and higher harmonics in the frequency range from 100 Hz to 2500 Hz. Moreover, the level of higher harmonics measurable within the rated accuracy of the precision power analyzer used is also investigated. Finally, the measuring system is applied to determine the RMS values of the current harmonics produced by an audio power amplifier in order to assess its compliance with the standard IEC 61000-3-12. Full article
(This article belongs to the Special Issue Innovative Devices and MEMS for Sensing Applications)
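A per-harmonic accuracy assessment of a current transducer boils down to comparing the secondary-side measurement, scaled by the rated ratio, against the primary reference for each harmonic. The illustrative helpers below are not the authors' procedure; the variable names and the ±180° wrapping convention are assumptions.

```python
# Illustrative per-harmonic transducer accuracy metrics.
def amplitude_error_percent(i_secondary_rms, i_primary_rms, rated_ratio):
    """Ratio (amplitude) error in percent for one harmonic."""
    return 100.0 * (rated_ratio * i_secondary_rms - i_primary_rms) / i_primary_rms

def phase_shift_deg(phase_secondary_deg, phase_primary_deg):
    """Phase displacement of the secondary relative to the primary, wrapped to ±180°."""
    d = phase_secondary_deg - phase_primary_deg
    return (d + 180.0) % 360.0 - 180.0
```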
Figures:
Figure 1. The measuring setup for evaluating the accuracy of the tested current transducer. The following notations are used: DPM is the digital power meter/analyzer (V: voltage terminals, CS: current sense terminal, A: current terminals), CT is the current transducer, DC is the DC power supply for the CT, PPS is the programmable power source, R_L is the load resistor of the PPS, and IT is the insulation transformer.
Figure 2. The amplitude errors of the transducer determined for the distorted current harmonics presented in Table 1.
Figure 3. The phase shift of the transducer determined for the distorted current harmonics presented in Table 1.
Figure 4. The values of the amplitude errors of the transducer determined for distorted current with each harmonic equal to 5% of the main component of frequency 50 Hz.
Figure 5. The values of the phase shift of the transducer determined for distorted current with each harmonic equal to 5% of the main component of frequency 50 Hz.
Figure 6. The measuring system for evaluation of the RMS values of the harmonic currents produced by the audio power amplifier. The same abbreviations are used as in Figure 1; in addition, APA is the audio power amplifier, R_L is the load of the APA, and PG is the power grid.
Figure 7. The waveform of the distorted current recorded during the emission test of the power amplifier.
Figure 8. The percentage level of harmonic current emission of the tested power amplifier.
13 pages, 3206 KiB  
Article
Dependence of Magnetic Properties of As-Prepared Nanocrystalline Ni2MnGa Glass-Coated Microwires on the Geometrical Aspect Ratio
by Mohamed Salaheldeen, Valentina Zhukova, Ricardo Lopez Anton and Arcady Zhukov
Sensors 2024, 24(11), 3692; https://doi.org/10.3390/s24113692 - 6 Jun 2024
Cited by 3 | Viewed by 871
Abstract
We have prepared NiMnGa glass-coated microwires with different geometrical aspect ratios, ρ = dmetal/Dtotal (dmetal—diameter of metallic nucleus, and Dtotal—total diameter). The structure and magnetic properties are investigated in a wide range of temperatures and magnetic fields. The XRD analysis illustrates stable microstructure in the range of ρ from 0.25 to 0.60. The estimations of average grain size and crystalline phase content evidence a remarkable variation as the ρ-ratio sweeps from 0.25 to 0.60. Thus, the microwires with the lowest aspect ratio, i.e., ρ = 0.25, show the smallest average grain size and the highest crystalline phase content. This change in the microstructural properties correlates with dramatic changes in the magnetic properties. Hence, the sample with the lowest ρ-ratio exhibits an extremely high value of the coercivity, Hc, compared to the value for the sample with the largest ρ-ratio (2989 Oe and 10 Oe, respectively, i.e., almost 300 times higher). In addition, a similar trend is observed for the spontaneous exchange bias phenomena, with an exchange bias field, Hex, of 120 Oe for the sample with ρ = 0.25 compared to a Hex = 12.5 Oe for the sample with ρ = 0.60. However, the thermomagnetic curves (field-cooled—FC and field-heating—FH) show similar magnetic behavior for all the samples. Meanwhile, FC and FH curves measured at low magnetic fields show negative values for ρ = 0.25, whereas positive values are found for the other samples. The obtained results illustrate the substantial effect of the internal stresses on microstructure and magnetic properties, which leads to magnetic hardening of samples with low aspect ratio. Full article
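As a small numerical aside on the quantities discussed above, the sketch below computes the geometrical aspect ratio ρ = d_metal/D_total and extracts the coercivity and exchange-bias shift from the two field-axis crossings of a hysteresis loop; the loop intercepts are hypothetical numbers chosen only so the results land near the values quoted in the abstract.

```python
# Hypothetical worked example: aspect ratio and loop-derived fields (not measured data).
d_metal_um, d_total_um = 5.0, 20.0
rho = d_metal_um / d_total_um               # 0.25, the lowest aspect ratio studied

H_left_oe, H_right_oe = -3109.0, 2869.0     # illustrative field-axis crossings, in Oe
H_c  = (H_right_oe - H_left_oe) / 2.0       # coercivity, here 2989 Oe
H_ex = (H_right_oe + H_left_oe) / 2.0       # exchange-bias shift, here -120 Oe
```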
Figures:
Figure 1. (a) Cross-sections of selected Ni2MnGa microwires with an aspect ratio of 0.25. (b,c) SEM images of a single Ni2MnGa microwire at different magnifications. (d–f) Chemical composition mapping obtained using EDX analysis in one of the microwires.
Figure 2. (a) X-ray diffraction (XRD) patterns obtained at room temperature for Ni2MnGa glass-coated microwires with varying aspect ratios. (b) Detail of the Bragg peak at 2θ = 45°, shown as a yellow area.
Figure 3. The average grain size, D_g, of the two crystalline phases Ga4Ni3-BCC and Ni2MnGa-FCC of as-prepared Ni2MnGa-based glass-coated microwires with different aspect ratios. The red lines indicate the error bars.
Figure 4. Hysteresis loops measured in a magnetic field applied parallel to the axis of as-prepared Ni2MnGa glass-coated microwires with different ρ-ratios: (a) at 5 K and (b) at 300 K.
Figure 5. (a–d) The variation of coercivity with temperature for Ni2MnGa-based glass-coated microwires with aspect ratios varied from 0.25 to 0.60. Red lines indicate the error bars.
Figure 6. Temperature dependence of the spontaneous exchange bias for Ni2MnGa glass-coated microwires with different aspect ratios (lines are guides for the eye).
Figure 7. (a–d) Field cooling (FC) of Ni2MnGa glass-coated microwires in the temperature range 400 K to 5 K with different applied magnetic fields H = 10 kOe to 20 kOe. The green area points out the region where T_M is expected to be observed.
25 pages, 26872 KiB  
Article
Lightweight Ghost Enhanced Feature Attention Network: An Efficient Intelligent Fault Diagnosis Method under Various Working Conditions
by Huaihao Dong, Kai Zheng, Siguo Wen, Zheng Zhang, Yuyang Li and Bobin Zhu
Sensors 2024, 24(11), 3691; https://doi.org/10.3390/s24113691 - 6 Jun 2024
Viewed by 1077
Abstract
Recent advancements in applications of deep neural networks for bearing fault diagnosis under variable operating conditions have shown promising outcomes. However, these approaches are limited in practical applications due to the complexity of neural networks, which require substantial computational resources, thereby hindering the advancement of automated diagnostic tools. To overcome these limitations, this study introduces a new fault diagnosis framework that incorporates a tri-channel preprocessing module (FFE) for multidimensional feature extraction, coupled with an innovative diagnostic architecture known as the Lightweight Ghost Enhanced Feature Attention Network (GEFA-Net). This system is adept at identifying rolling bearing faults across diverse operational conditions. The FFE module utilizes advanced techniques such as Fast Fourier Transform (FFT), Frequency Weighted Energy Operator (FWEO), and Signal Envelope Analysis to refine signal processing in complex environments. Concurrently, GEFA-Net employs the Ghost Module and the Efficient Pyramid Squared Attention (EPSA) mechanism, which enhances feature representation and generates additional feature maps through linear operations, thereby reducing computational demands. This methodology not only significantly lowers the parameter count of the model, promoting a more streamlined architectural framework, but also improves diagnostic speed. Additionally, the model exhibits enhanced diagnostic accuracy in challenging conditions through the effective synthesis of local and global data contexts. Experimental validation using datasets from the University of Ottawa and our dataset confirms that the framework not only achieves superior diagnostic accuracy but also reduces computational complexity and accelerates detection processes. These findings highlight the robustness of the framework for bearing fault diagnosis under varying operational conditions, showcasing its broad applicational potential in industrial settings. The parameter count was decreased by 63.74% compared to MobileVit, and the recorded diagnostic accuracies were 98.53% and 99.98% for the respective datasets. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
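The tri-channel preprocessing idea can be sketched in a few lines. In the stand-in below, one channel comes from the FFT magnitude, one from an energy-operator track (the Teager-Kaiser operator is used here as a generic substitute for the paper's FWEO), and one from the Hilbert envelope; how the three tracks are scaled and stacked into the network input is an assumption.

```python
# Rough stand-in for a tri-channel vibration-signal preprocessing step.
import numpy as np
from scipy.signal import hilbert

def ffe_channels(x):
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x))              # channel 1: FFT magnitude
    teager = x[1:-1] ** 2 - x[:-2] * x[2:]         # channel 2: energy-operator track
    envelope = np.abs(hilbert(x))                  # channel 3: signal envelope
    return spectrum, teager, envelope
```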
Figures:
Figure 1. Adaptive bearing fault diagnosis framework with the preprocessing module FFE and Lightweight GEFA-Net.
Figure 2. Workflow diagram of FFE.
Figure 3. Lightweight GEFA-Net.
Figure 4. The Ghost module's linear procedure.
Figure 5. The structure of the EPSA module.
Figure 6. Context broadcasting module.
Figure 7. Structure of CB-TransformerEncoder.
Figure 8. Test rig of Ottawa.
Figure 9. Our test rig.
Figure 10. Our experimental datasets.
Figure 11. RGB image after three-channel feature extraction: (a) images transformed via FFT, (b) images transformed by the FWEO, and (c) images processed through envelope detection.
Figure 12. Experimental comparison of three preprocessing methods (original signal, CWT, FFE) across different models.
Figure 13. Comparison of model accuracy with the Ottawa datasets. (a) Validation accuracy for 50 training epochs per model. (b) Training loss for 50 training epochs per model.
Figure 14. Confusion matrices for the Ottawa datasets: (a) GEFA-Net, (b) MobileNetV3, (c) ResNet, (d) Vision transformer, (e) MobileVit, (f) Swin transformer.
Figure 15. Experimental comparison of three preprocessing methods (original signal, CWT, FFE) using our dataset across different models.
Figure 16. Comparison of model accuracy with our datasets. (a) Validation accuracy for 50 training epochs per model. (b) Training loss for 50 training epochs per model.
Figure 17. Confusion matrices for our datasets: (a) GEFA-Net, (b) MobileNetV3, (c) ResNet, (d) Vision transformer, (e) MobileVit, (f) Swin transformer.
Figure 18. Comparison results of ablation experimental models. (a) Ottawa datasets. (b) Our datasets.
19 pages, 4527 KiB  
Tutorial
A Tutorial on Mechanical Sensors in the 70th Anniversary of the Piezoresistive Effect
by Ferran Reverter
Sensors 2024, 24(11), 3690; https://doi.org/10.3390/s24113690 - 6 Jun 2024
Viewed by 3837
Abstract
An outstanding event related to the understanding of the physics of mechanical sensors occurred and was announced in 1954, exactly seventy years ago. This event was the discovery of the piezoresistive effect, which led to the development of semiconductor strain gauges with a sensitivity much higher than that obtained before in conventional metallic strain gauges. In turn, this motivated the subsequent development of the earliest micromachined silicon devices and the corresponding MEMS devices. The science and technology related to sensors has experienced noteworthy advances in the last decades, but the piezoresistive effect is still the main physical phenomenon behind many mechanical sensors, both commercial and in research models. On this 70th anniversary, this tutorial aims to explain the operating principle, subtypes, input–output characteristics, and limitations of the three main types of mechanical sensor: strain gauges, capacitive sensors, and piezoelectric sensors. These three sensor technologies are also compared with each other, highlighting the main advantages and disadvantages of each one. Full article
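As a worked micro-example of the strain-gauge relations such a tutorial covers, the snippet below evaluates the resistance change predicted by the gauge factor and the small-signal output of a quarter-bridge Wheatstone configuration; all numerical values are illustrative.

```python
# Worked micro-example of dR/R = GF * strain and the quarter-bridge output
# V_out ≈ V_exc * GF * strain / 4 (illustrative values only).
R0 = 350.0          # unstrained gauge resistance, ohm
GF = 2.0            # gauge factor (~2 for metallic gauges; semiconductor gauges are far higher)
strain = 500e-6     # applied strain, 500 microstrain
V_exc = 5.0         # bridge excitation voltage, V

dR = GF * strain * R0                  # resistance change: 0.35 ohm
V_out = V_exc * GF * strain / 4.0      # quarter-bridge output: about 1.25 mV
```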
Figures:
Figure 1. A sensor acquiring information from different energy domains and converting it to the electrical domain.
Figure 2. Historic scientific events related to strain gauges (in red), capacitive sensors (in green), and piezoelectric sensors (in blue).
Figure 3. Bar exposed to an external force generating a longitudinal and a transverse strain.
Figure 4. Typical commercial metallic strain gauge with a serpentine shape.
Figure 5. Piezoresistive pressure sensor based on a membrane including four piezoresistors. (a) Top view. (b) Cross-section A-A'.
Figure 6. Piezoresistive acceleration sensor based on a flexure beam–seismic mass structure.
Figure 7. Bar subjected to a longitudinal force including two strain gauges: R_L in a longitudinal direction, and R_T in a transverse direction.
Figure 8. Cantilever beam subjected to a bending force including two strain gauges, one at the top and the other at the bottom at the root of the flexure.
Figure 9. Capacitance with a parallel plate topology.
Figure 10. Capacitive sensors with electrodes A and B in (a) co-planar, (b) cylindrical, and (c) interdigital topologies.
Figure 11. Single-element capacitive sensor where the displacement to be measured causes a variation in (a) the overlap area, (b) the distance between electrodes, and (c) the properties of the intermediate dielectric.
Figure 12. Differential capacitive sensor where the displacement to be measured causes a variation in (a) the overlap area, and (b) the distance between electrodes.
Figure 13. Acceleration sensor based on a capacitive MEMS with a differential topology.
Figure 14. Two-dimensional example of the lattice of a piezoelectric material (a) without mechanical stress, (b) under a transversal piezoelectric effect, and (c) under a longitudinal piezoelectric effect.
Figure 15. Piezoelectric material subjected to a (a) compression, (b) shear, and (c) bending force. Topologies for the acceleration measurement using a (d) compression, (e) shear, and (f) bending force; the arrow indicates the sensitive axes of the accelerometer.
Figure 16. Typical frequency response of a piezoelectric sensor.
Figure 17. (a) Typical X-Y-Z orthogonal system. (b) Orthogonal system adapted to the analysis of the piezoelectric effect.
19 pages, 14168 KiB  
Article
Evaluation of Depth Size Based on Layered Magnetization by Double-Sided Scanning for Internal Defects
by Zhiyang Deng, Dingkun Qian, Haifei Hong, Xiaochun Song and Yihua Kang
Sensors 2024, 24(11), 3689; https://doi.org/10.3390/s24113689 - 6 Jun 2024
Cited by 1 | Viewed by 751
Abstract
The quantitative evaluation of defects is extremely important, as it can avoid harm caused by underevaluation or losses caused by overestimation, especially for internal defects. The magnetic permeability perturbation testing (MPPT) method performs well for thick-walled steel pipes, but the burial depth of the defect is difficult to access directly from a single time-domain signal, which is not conducive to the evaluation of defects. In this paper, the phenomenon of layering of magnetization that occurs in ferromagnetic materials under an unsaturated magnetizing field is described. Different magnetization depths are achieved by applying step magnetization. The relationship curves between the magnetization characteristic currents and the magnetization depths are established by finite element simulations. The spatial properties of each layering can be detected by different magnetization layering. The upper and back boundaries of the defect are then localized by a double-sided scan to finally arrive at the depth size of the defect. Defects with depth size of 2 mm are evaluated experimentally. The maximum relative error is 5%. Full article
(This article belongs to the Special Issue Sensors in Nondestructive Testing)
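A hedged sketch of the two steps implied above: (1) map a measured characteristic current to a magnetization/burial depth by interpolating the simulated current-depth curve, and (2) combine the boundary depths found from a double-sided scan into a defect depth size. The curve values and the exact combination rule are placeholders, not the paper's data.

```python
# Placeholder depth estimation from a characteristic-current curve (illustrative values).
import numpy as np

current_A = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # characteristic current (A), illustrative
depth_mm  = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # magnetization depth (mm), illustrative

def depth_from_current(i_char):
    """Interpolate the burial depth corresponding to a measured characteristic current."""
    return float(np.interp(i_char, current_A, depth_mm))

wall_thickness_mm = 15.0
d_outer = depth_from_current(2.4)   # burial depth of the upper boundary (outer-side scan)
d_inner = depth_from_current(3.1)   # burial depth of the back boundary (inner-side scan)
defect_depth_mm = wall_thickness_mm - d_outer - d_inner   # assumed combination rule
```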
Figures:
Figure 1. Magnetization model.
Figure 2. Ferromagnetic member B–H and μ–H curves.
Figure 3. Schematic diagram of the layered magnetization evaluation principle.
Figure 4. Simulation model.
Figure 5. Effect of magnetizing current on magnetization depth.
Figure 6. Perturbation graph of magnetic permeability for defects of different depth sizes.
Figure 7. Extraction path.
Figure 8. Signals of defects buried at 10 mm depth under different magnetizing currents.
Figure 9. The relationship between signal peak and magnetization current.
Figure 10. Curve of characteristic current versus depth of defect burial.
Figure 11. Double-sided scanning method.
Figure 12. Detection signal for defects with a burial depth of 2 mm.
Figure 13. Detection signal for defects with a burial depth of 5 mm.
Figure 14. Detection signal for defects with a burial depth of 10 mm.
Figure 15. Depth size measurement charts for different burial depths.
Figure 16. Experimental platform.
Figure 17. Probe and filter circuitry.
Figure 18. Experimental specimens.
Figure 19. Detection signals for different depth sizes (burial depth = 3 mm).
Figure 20. Depth size measurement charts for different burial depths.
Figure 21. Different defect widths.
Figure 22. Different width detection signals.
Figure 23. Curves of magnetization depth as a function of the characteristic current for different defect widths.
Figure 24. Different length detection signals.
Figure 25. Curves of magnetization depth as a function of the characteristic current for different defect lengths.
20 pages, 9178 KiB  
Article
Factors Affecting the Situational Awareness of Armored Vehicle Occupants
by Zihan Pei, Wenyu Zhao, Long Hu, Ziye Zhang, Yang Luo, Yixiang Wu and Xiaoping Jin
Sensors 2024, 24(11), 3688; https://doi.org/10.3390/s24113688 - 6 Jun 2024
Viewed by 938
Abstract
In the field of armored vehicles, up to 70% of accidents are associated with low levels of situational awareness among the occupants, highlighting the importance of situational awareness in improving task performance. In this study, we explored the mechanisms influencing situational awareness by simulating an armored vehicle driving platform with 14 levels of experimentation in terms of five factors: experience, expectations, attention, the cueing channel, and automation. The experimental data included SART and SAGAT questionnaire scores, eye movement indicators, and electrocardiographic and electrodermal signals. Data processing and analysis revealed the following conclusions: (1) Experienced operators have higher levels of situational awareness. (2) Operators with certain expectations have lower levels of situational awareness. (3) Situational awareness levels are negatively correlated with information importance affiliations and the frequency of anomalous information in non-primary tasks. (4) Dual-channel cues lead to higher levels of situational awareness than single-channel cues. (5) Operators’ situational awareness is lower at high automation levels. Full article
(This article belongs to the Section Vehicular Sensing)
Figures:
Figure 1. Test instruments and equipment.
Figure 2. Test design.
Figure 3. Interface area division.
Figure 4. Different fonts.
Figure 5. Expectations—different scenes.
Figure 6. Experimental flowchart.
Figure 7. Experience comparison.
Figure 8. Experience comparison—eye movement.
Figure 9. Information anomaly probability.
Figure 10. Information anomaly probability—eye movement.
Figure 11. Heat map—different information anomaly probability.
Figure 12. Information salience—questionnaire score and performance.
Figure 13. Information salience—eye movement.
Figure 14. Expectations questionnaire—score and performance.
Figure 15. Expectations—eye movement.
Figure 16. Prompt channel—questionnaire score and performance.
Figure 17. Cue channel—eye movement.
Figure 18. Cue channel—interest area gaze proportion.
Figure 19. Heat map—interest area gaze proportion.
Figure 20. Cue channel electrodermal signal—visual cue (left), audiovisual cue (middle), and auditory cue (right).
Figure 21. Automation levels—eye movement.
Figure 22. Automation levels—electrodermal signal for (a) low automation levels and (b) high automation levels.
13 pages, 1765 KiB  
Article
5G AI-IoT System for Bird Species Monitoring and Song Classification
by Jaume Segura-Garcia, Sean Sturley, Miguel Arevalillo-Herraez, Jose M. Alcaraz-Calero, Santiago Felici-Castell and Enrique A. Navarro-Camba
Sensors 2024, 24(11), 3687; https://doi.org/10.3390/s24113687 - 6 Jun 2024
Cited by 1 | Viewed by 1450
Abstract
Identification of different animal species has become an important issue in biology and ecology. Ornithology has formed alliances with other disciplines in order to establish a set of methods that play an important role in bird protection and in evaluating the environmental quality of different ecosystems. Here, machine learning and deep learning techniques have produced great progress in birdsong identification. To approach this from an AI-IoT perspective, we have used different methods based on image feature comparison (through CNNs pretrained with ImageNet weights, such as EfficientNet or MobileNet) applied to the birdsong feature spectrogram; deep CNNs (DCNNs) have also shown good performance for birdsong classification while reducing the model size. A 5G IoT-based system for raw audio gathering has been developed, and different CNNs have been tested for bird identification from audio recordings. This comparison shows that the ImageNet-weighted CNN achieves relatively high performance for most species, reaching 75% accuracy. However, this network contains a large number of parameters, leading to less energy-efficient inference. We have therefore designed two DCNNs to reduce the number of parameters, keep the accuracy at a comparable level, and allow their integration into a single-board computer (SBC) or a microcontroller unit (MCU). Full article
(This article belongs to the Section Internet of Things)
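A compact network of the kind described (four convolutional layers with MaxPool and 128/64/32/16 filters) can be sketched as follows. The input spectrogram shape, number of classes, pooling head, and training settings are assumptions, not the authors' exact architecture, but a model of this size is straightforward to convert to TensorFlow-Lite for an SBC or MCU.

```python
# Sketch of a small spectrogram-classification CNN with four Conv2D+MaxPool stages.
import tensorflow as tf

def small_birdsong_cnn(input_shape=(128, 128, 1), n_species=10):
    layers = [tf.keras.Input(shape=input_shape)]
    for filters in (128, 64, 32, 16):
        layers.append(tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu"))
        layers.append(tf.keras.layers.MaxPooling2D())
    layers.append(tf.keras.layers.GlobalAveragePooling2D())
    layers.append(tf.keras.layers.Dense(n_species, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```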
Figures:
Figure 1. Schema of the AI-IoT system.
Figure 2. Photo of the ESP32-S3 MCU connected to the INMP441 microphone (Oakland, CA, USA).
Figure 3. Examples of bird species with audio augmentation techniques using the established criteria.
Figure 4. Structure of the proposed deep CNN with four convolutional layers with MaxPool (with 128, 64, 32, and 16 filters, respectively).
Figure 5. Structure of the proposed deep CNN with four convolutional layers with MaxPool (with 64, 32, 16, and 8 filters, respectively).
Figure 6. Votation algorithm schema for post-processing predictions to increase global accuracy for different predictions.
Figure 7. Comparison of the global accuracy in the test for the different networks selected and designed, and the weight of the TensorFlow-Lite file.
27 pages, 1416 KiB  
Systematic Review
Accuracy, Validity, and Reliability of Markerless Camera-Based 3D Motion Capture Systems versus Marker-Based 3D Motion Capture Systems in Gait Analysis: A Systematic Review and Meta-Analysis
by Sofia Scataglini, Eveline Abts, Cas Van Bocxlaer, Maxime Van den Bussche, Sara Meletani and Steven Truijen
Sensors 2024, 24(11), 3686; https://doi.org/10.3390/s24113686 - 6 Jun 2024
Cited by 5 | Viewed by 4025
Abstract
(1) Background: Marker-based 3D motion capture systems (MBS) are considered the gold standard in gait analysis. However, they have limitations for which markerless camera-based 3D motion capture systems (MCBS) could provide a solution. The aim of this systematic review and meta-analysis is to compare the accuracy, validity, and reliability of MCBS and MBS. (2) Methods: A total of 2047 papers were systematically searched according to PRISMA guidelines on 7 February 2024, in two different databases: Pubmed (1339) and WoS (708). The COSMIN-tool and EBRO guidelines were used to assess risk of bias and level of evidence. (3) Results: After full text screening, 22 papers were included. Spatiotemporal parameters showed overall good to excellent accuracy, validity, and reliability. For kinematic variables, hip and knee showed moderate to excellent agreement between the systems, while for the ankle joint, poor concurrent validity and reliability were measured. The accuracy and concurrent validity of walking speed were considered excellent in all cases, with only a small bias. The meta-analysis of the inter-rater reliability and concurrent validity of walking speed, step time, and step length resulted in a good-to-excellent intraclass correlation coefficient (ICC) (0.81; 0.98). (4) Discussion and conclusions: MCBS are comparable in terms of accuracy, concurrent validity, and reliability to MBS in spatiotemporal parameters. Additionally, kinematic parameters for hip and knee in the sagittal plane are considered most valid and reliable but lack valid and accurate measurement outcomes in transverse and frontal planes. Customization and standardization of methodological procedures are necessary for future research to adequately compare protocols in clinical settings, with more attention to patient populations. Full article
(This article belongs to the Section Environmental Sensing)
Figures:
Figure 1. PRISMA flow chart.
Figure 2. Meta-analysis data for inter-rater reliability and concurrent validity (ICC) [31,33,34,35].
20 pages, 11084 KiB  
Article
Computer Vision and Augmented Reality for Human-Centered Fatigue Crack Inspection
by Rushil Mojidra, Jian Li, Ali Mohammadkhorasani, Fernando Moreu, Caroline Bennett and William Collins
Sensors 2024, 24(11), 3685; https://doi.org/10.3390/s24113685 - 6 Jun 2024
Viewed by 1342
Abstract
A significant percentage of bridges in the United States are serving beyond their 50-year design life, and many of them are in poor condition, making them vulnerable to fatigue cracks that can result in catastrophic failure. However, current fatigue crack inspection practice based on human vision is time-consuming, labor intensive, and prone to error. We present a novel human-centered bridge inspection methodology to enhance the efficiency and accuracy of fatigue crack detection by employing advanced technologies including computer vision and augmented reality (AR). In particular, a computer vision-based algorithm is developed to enable near-real-time fatigue crack detection by analyzing structural surface motion in a short video recorded by a moving camera of the AR headset. The approach monitors structural surfaces by tracking feature points and measuring variations in distances between feature point pairs to recognize the motion pattern associated with the crack opening and closing. Measuring distance changes between feature points, as opposed to their displacement changes before this improvement, eliminates the need of camera motion compensation and enables reliable and computationally efficient fatigue crack detection using the nonstationary AR headset. In addition, an AR environment is created and integrated with the computer vision algorithm. The crack detection results are transmitted to the AR headset worn by the bridge inspector, where they are converted into holograms and anchored on the bridge surface in the 3D real-world environment. The AR environment also provides virtual menus to support human-in-the-loop decision-making to determine optimal crack detection parameters. This human-centered approach with improved visualization and human–machine collaboration aids the inspector in making well-informed decisions in the field in a near-real-time fashion. The proposed crack detection method is comprehensively assessed using two laboratory test setups for both in-plane and out-of-plane fatigue cracks. Finally, using the integrated AR environment, a human-centered bridge inspection is conducted to demonstrate the efficacy and potential of the proposed methodology. Full article
(This article belongs to the Special Issue Non-destructive Inspection with Sensors)
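The distance-tracking idea described in the abstract can be illustrated with a short sketch: track Shi–Tomasi corners through the video with Lucas–Kanade optical flow and flag feature-point pairs whose mutual distance oscillates strongly, since rigid camera motion leaves pairwise distances nearly unchanged while a breathing crack does not. This is not the authors' implementation; the detector parameters and the oscillation threshold are illustrative assumptions, and status filtering of lost points is omitted for brevity.

```python
# Minimal sketch of distance-based crack screening from a short video.
# Not the authors' implementation; parameters and threshold are illustrative.
import itertools
import cv2
import numpy as np

def screen_for_crack(video_path, oscillation_threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=60,
                                  qualityLevel=0.01, minDistance=10)
    tracks = [pts.reshape(-1, 2)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts, prev_gray = nxt, gray
        tracks.append(nxt.reshape(-1, 2))
    cap.release()

    tracks = np.stack(tracks)                      # (frames, points, 2)
    flagged = []
    for i, j in itertools.combinations(range(tracks.shape[1]), 2):
        d = np.linalg.norm(tracks[:, i] - tracks[:, j], axis=1)
        # Pairs spanning a breathing crack show a large distance oscillation,
        # while rigid-body camera motion leaves pairwise distances unchanged.
        if d.std() > oscillation_threshold:
            flagged.append((i, j, float(d.std())))
    return flagged
```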
Show Figures

Figure 1: The proposed human-centered bridge inspection process empowered by AR and computer vision.
Figure 2: Illustration of the proposed crack detection algorithm based on surface distance tracking.
Figure 3: The main virtual menu of the developed AR environment.
Figure 4: Virtual menu with threshold options for human-in-the-loop decision-making.
Figure 5: (a) Test setup for in-plane fatigue crack detection in a C(T) specimen, and (b) close-up view of the test setup.
Figure 6: (a) Test setup for out-of-plane fatigue crack detection in a bridge girder specimen, and (b) close-up view of web-gap region and the fatigue crack.
Figure 7: (a) Crack detection outcome using a low threshold value; (b) clustering of the detected feature points on a C(T) specimen.
Figure 8: Ground truth labeling: (a) C(T) specimen; (b) bridge girder specimen.
Figure 9: Illustration of different values of IOU.
Figure 10: Crack detection using a 2D video: (a) the initial frame of the 2D video with the selected ROI; (b) all feature points detected by the Shi–Tomasi algorithm; (c–f) in-plane fatigue crack detection results under different threshold values; (g,h) locations and distance–time histories of feature point pairs for cracked and uncracked regions.
Figure 11: Quantification of crack detection in the C(T) specimen using the previous displacement-based method [12]: (a) detected crack by feature points; (b) clustering result; (c) ground truth and the clustering result; and quantification of the detected crack using the proposed distance-based method (this study): (d) detected crack by feature points; (e) clustering result; (f) ground truth and clustering result.
Figure 12: Crack detection using a 3D video: (a) the initial frame of the 3D video with ROI and all feature points detected by the Shi–Tomasi algorithm; (b–f) out-of-plane fatigue crack detection results of the bridge girder specimen under various threshold values; (g,h) location and distance–time histories of feature point pairs for cracked and uncracked regions. Note that the brightness of the images in (b–f) is enhanced to highlight the feature points.
Figure 13: Quantification of crack detection in the bridge girder specimen using the previous displacement-based method [12]: (a) detected crack by feature points; (b) clustering result; (c) ground truth and the clustering result; and quantification of the detected crack using the distance-based method (this study): (d) detected crack by feature points; (e) clustering result; (f) ground truth and clustering result.
Figure 14: Experimental setup and hardware used in AR-based fatigue crack inspection.
Figure 15: Demonstration of the integrated AR environment for human-centered bridge inspection: (a) inspector starting the AR software, (b) inspector examining results for zero threshold, and (c) inspector examining the detected crack with the final threshold value.
14 pages, 4273 KiB  
Article
Highly Sensitive Balloon-like Fiber Interferometer Based on Ethanol Coated for Temperature Measurement
by Xin Ding, Qiao Lin, Shen Liu, Lianzhen Zhang, Nan Chen, Yuping Zhang and Yiping Wang
Sensors 2024, 24(11), 3684; https://doi.org/10.3390/s24113684 - 6 Jun 2024
Viewed by 813
Abstract
A highly sensitive balloon-like fiber interferometer based on ethanol coating is presented in this paper. The Mach–Zehnder interferometer is formed by bending a single-mode fiber into a balloon-like structure nested in a Teflon tube. Then, an ethanol solution was drawn into the [...] Read more.
A highly sensitive balloon-like fiber interferometer based on ethanol coating is presented in this paper. The Mach–Zehnder interferometer is formed by bending a single-mode fiber into a balloon-like structure nested in a Teflon tube. Then, an ethanol solution was drawn into the tube of the balloon-like fiber interferometer by the capillary effect. Due to the high sensitivity of the refractive index (RI) of ethanol solutions to temperature, the optical path difference changes when the external temperature varies. The change in temperature can be detected by the shift in the interference spectrum. Limited by the size of the balloon-like structure, three structures with different sensitive lengths were prepared to select the best parameters. The sensitive lengths were 10, 15, and 20 mm, and the RI detection performance of each structure in 10~26% NaCl solutions was investigated experimentally. The results show that when the sensitive length is 20 mm, the RI sensitivity of the sensor is the highest, at 212.88 nm/RIU. Ultimately, a sensitive length of 20 mm was selected for ethanol filling. The experimental results show that the temperature sensitivity of the structure is 1.145 nm/°C in the range of 28.1 °C~35 °C, which is 10.3 times higher than that of an unfilled balloon-like structure (0.111 nm/°C). The system has the advantages of low cost and easy fabrication and can potentially be used in high-precision temperature monitoring processes. Full article
(This article belongs to the Section Optical Sensors)
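Converting a measured dip shift into a temperature change with the sensitivities quoted in the abstract is a first-order linear calculation; the sketch below uses the reported 1.145 nm/°C (ethanol-filled) and 0.111 nm/°C (unfilled) values, while the 0.2 nm shift is only an example input.

```python
# Back-of-envelope sketch: converting a measured dip-wavelength shift into a
# temperature change using the sensitivities reported in the abstract
# (1.145 nm/C for the ethanol-filled structure, 0.111 nm/C when unfilled).
# The 0.2 nm shift below is only an example input.
SENS_FILLED_NM_PER_C = 1.145
SENS_UNFILLED_NM_PER_C = 0.111

def shift_to_delta_T(shift_nm, sensitivity_nm_per_C):
    """Linear first-order conversion: dT = d(lambda) / S."""
    return shift_nm / sensitivity_nm_per_C

shift_nm = 0.2  # example spectral shift of an interference dip
print(f"filled:   dT = {shift_to_delta_T(shift_nm, SENS_FILLED_NM_PER_C):.3f} C")
print(f"unfilled: dT = {shift_to_delta_T(shift_nm, SENS_UNFILLED_NM_PER_C):.3f} C")
```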
Show Figures

Figure 1: Schematic diagram of proposed structure. (a) Schematic diagram; (b) enlarged diagram of leak point; (c) overall structure diagram; (d) enlarged diagram of recoupling points.
Figure 2: The balloon-like structure filling with ethanol.
Figure 3: Transmission spectrum of sensor with/without ethanol.
Figure 4: Experimental setup. (a) RI measurements; (b) temperature measurements.
Figure 5: RI measurements with 10 mm sensitive length. (a) Transmission spectral evolution; (b) linear fitting curve.
Figure 6: RI measurements with 15 mm sensitive length. (a) Transmission spectral evolution; (b) linear fitting curve.
Figure 7: RI measurements with 20 mm sensitive length. (a) Transmission spectral evolution; (b) linear fitting curve.
Figure 8: Temperature measurements of filled ethanol solution. (a) Transmission spectral evolution; (b) linear fitting curve.
Figure 9: Linear fitting curves of dip 2 shifts against temperature variation.
Figure 10: Temperature measurements of filled ethanol solution. (a) Transmission spectral evolution; (b) linear fitting curve.
Figure 11: Linear fitting curves of dip 2 shifts against temperature variation.
Figure 12: Stability response of proposed temperature sensor.
24 pages, 3473 KiB  
Article
Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model
by Talal H. Noor, Ayman Noor, Ahmed F. Alharbi, Ahmed Faisal, Rakan Alrashidi, Ahmed S. Alsaedi, Ghada Alharbi, Tawfeeq Alsanoosy and Abdullah Alsaeedi
Sensors 2024, 24(11), 3683; https://doi.org/10.3390/s24113683 - 6 Jun 2024
Cited by 4 | Viewed by 2612
Abstract
Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters in some languages, especially in Saudi Arabia. This shortage results in a large proportion of the hearing-impaired population being deprived [...] Read more.
Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters in some languages, especially in Saudi Arabia. This shortage results in a large proportion of the hearing-impaired population being deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. In this paper, we propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to extract spatial and temporal characteristics to handle sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created an ArSL dataset of 20 different words: 4000 images covering 10 static gesture words and 500 videos covering 10 dynamic gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired. Full article
(This article belongs to the Special Issue Deep Learning Technology and Image Sensing)
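A minimal sketch of the two sub-models described in the abstract follows: a CNN for static gesture images and an LSTM for frame sequences of dynamic gestures. Input shapes, layer widths, feature dimensions, and the 10-class outputs are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Minimal sketch of the CNN (static gestures) and LSTM (dynamic gestures)
# sub-models; shapes and widths are illustrative assumptions.
from tensorflow.keras import layers, models

def build_cnn(num_static_words=10, img_shape=(64, 64, 3)):
    return models.Sequential([
        layers.Input(shape=img_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_static_words, activation="softmax"),
    ])

def build_lstm(num_dynamic_words=10, seq_len=30, feat_dim=126):
    # feat_dim could be, e.g., flattened per-frame hand keypoints (assumption).
    return models.Sequential([
        layers.Input(shape=(seq_len, feat_dim)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_dynamic_words, activation="softmax"),
    ])

cnn, lstm = build_cnn(), build_lstm()
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
lstm.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
```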
Show Figures

Figure 1: Real-time ArSL system architecture.
Figure 2: CNN model structure.
Figure 3: LSTM model structure.
Figure 4: CNN sub-model performance evaluation. (a) CNN sub-model accuracy; (b) CNN sub-model precision; (c) CNN sub-model recall; (d) CNN sub-model F1 score.
Figure 5: CNN sub-model confusion matrix.
Figure 6: CNN sub-model validation loss.
Figure 7: LSTM sub-model performance evaluation. (a) LSTM sub-model accuracy; (b) LSTM sub-model precision; (c) LSTM sub-model recall; (d) LSTM sub-model F1 score.
Figure 8: LSTM sub-model confusion matrix.
Figure 9: LSTM sub-model validation loss.
22 pages, 14584 KiB  
Article
An Integrated Smart Pond Water Quality Monitoring and Fish Farming Recommendation Aquabot System
by Md. Moniruzzaman Hemal, Atiqur Rahman, Nurjahan, Farhana Islam, Samsuddin Ahmed, M. Shamim Kaiser and Muhammad Raisuddin Ahmed
Sensors 2024, 24(11), 3682; https://doi.org/10.3390/s24113682 - 6 Jun 2024
Cited by 2 | Viewed by 5461
Abstract
The integration of cutting-edge technologies such as the Internet of Things (IoT), robotics, and machine learning (ML) has the potential to significantly enhance the productivity and profitability of traditional fish farming. Farmers using traditional fish farming methods incur enormous economic costs owing to [...] Read more.
The integration of cutting-edge technologies such as the Internet of Things (IoT), robotics, and machine learning (ML) has the potential to significantly enhance the productivity and profitability of traditional fish farming. Farmers using traditional fish farming methods incur enormous economic costs owing to labor-intensive schedule monitoring and care, illnesses, and sudden fish deaths. Another ongoing issue is automated fish species recommendation based on water quality. On the one hand, the effective monitoring of abrupt changes in water quality may minimize the daily operating costs and boost fish productivity, while an accurate automatic fish recommender may aid the farmer in selecting profitable fish species for farming. In this paper, we present AquaBot, an IoT-based system that can automatically collect, monitor, and evaluate the water quality and recommend appropriate fish to farm depending on the values of various water quality indicators. A mobile robot has been designed to collect parameter values such as the pH, temperature, and turbidity from all around the pond. To facilitate monitoring, we have developed web and mobile interfaces. For the analysis and recommendation of suitable fish based on water quality, we have trained and tested several ML algorithms, such as the proposed custom ensemble model, random forest (RF), support vector machine (SVM), decision tree (DT), K-nearest neighbor (KNN), logistic regression (LR), bagging, boosting, and stacking, on a real-time pond water dataset. The dataset has been preprocessed with feature scaling and dataset balancing. We have evaluated the algorithms based on several performance metrics. In our experiment, our proposed ensemble model has delivered the best result, with 94% accuracy, 94% precision, 94% recall, a 94% F1-score, 93% MCC, and the best AUC score for multi-class classification. Finally, we have deployed the best-performing model in a web interface to provide cultivators with recommendations for suitable fish farming. Our proposed system is projected to not only boost production and save money but also reduce the time and intensity of the producer’s manual labor. Full article
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
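The preprocessing and ensemble steps named in the abstract (feature scaling, SMOTE balancing, soft voting over classical learners) can be sketched as below. The base learners, hyperparameters, and the synthetic stand-in data are assumptions for illustration, not the authors' tuned configuration or their pond dataset.

```python
# Minimal sketch of a scaling + SMOTE + soft-voting ensemble pipeline.
# Base learners, hyperparameters and data are illustrative assumptions.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the pond dataset (e.g., pH, temperature, turbidity -> species).
X, y = make_classification(n_samples=600, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           weights=[0.6, 0.3, 0.1], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train_bal, y_train_bal = SMOTE(random_state=0).fit_resample(
    scaler.transform(X_train), y_train)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="soft")
ensemble.fit(X_train_bal, y_train_bal)
print(classification_report(y_test, ensemble.predict(scaler.transform(X_test))))
```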
Show Figures

Figure 1: Integrated system architecture for the proposed model.
Figure 2: Flowchart of mobile robotic agent.
Figure 3: Flowchart of the proposed model.
Figure 4: Circuit diagram of the proposed system.
Figure 5: Conceptual diagram of ML model.
Figure 6: Distribution of fish species before and after applying SMOTE.
Figure 7: System prototype and portable display system of the prototype.
Figure 8: Confusion matrix of ensemble model before and after SMOTE.
Figure 9: ROC curves of different ML algorithms before applying SMOTE.
Figure 10: ROC curves of different ML algorithms after applying SMOTE.
Figure 11: Influence of the parameters on the output using SHAP.
Figure 12: Fish Recommender System.
12 pages, 3735 KiB  
Article
Detection of α-Galactosidase A Reaction in Samples Extracted from Dried Blood Spots Using Ion-Sensitive Field Effect Transistors
by Alexander Kuznetsov, Andrey Sheshil, Eugene Smolin, Vitaliy Grudtsov, Dmitriy Ryazantsev, Mark Shustinskiy, Tatiana Tikhonova, Irakli Kitiashvili, Valerii Vechorko and Natalia Komarova
Sensors 2024, 24(11), 3681; https://doi.org/10.3390/s24113681 - 6 Jun 2024
Cited by 1 | Viewed by 1024
Abstract
Fabry disease is a lysosomal storage disorder caused by a significant decrease in the activity, or the complete absence, of the enzyme α-galactosidase A. Diagnosing Fabry disease during newborn screening is reasonable, given the availability of enzyme replacement therapy. This paper presents [...] Read more.
Fabry disease is a lysosomal storage disorder caused by a significant decrease in the activity, or the complete absence, of the enzyme α-galactosidase A. Diagnosing Fabry disease during newborn screening is reasonable, given the availability of enzyme replacement therapy. This paper presents an electrochemical method using complementary metal-oxide semiconductor (CMOS)-compatible ion-sensitive field effect transistors (ISFETs) with hafnium oxide sensing surfaces for the detection of α-galactosidase A activity in dried blood spot extracts. The capability of ISFETs to detect the reaction catalyzed by α-galactosidase A was demonstrated. The buffer composition was optimized to provide suitable conditions for both enzyme and ISFET performance. The use of ISFET structures as sensor elements allowed for the label-free detection of enzymatic reactions with melibiose, a natural substrate of α-galactosidase A, instead of a synthetic fluorogenic one. ISFET chips were packaged with printed circuit boards and microfluidic reaction chambers to enable long-term signal measurement using a custom device. The packaged sensors were demonstrated to discriminate between normal and inhibited GLA activity in dried blood spot extracts. The described method offers a promising solution for making newborn screening for Fabry disease more widely available. Full article
(This article belongs to the Special Issue Advances in Electrochemical Sensors for Bioanalysis)
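The 3.3-sigma limit-of-detection estimate mentioned in the Figure 4 caption follows a standard formula, LOD = 3.3 × SD_blank / slope; the sketch below applies it with invented blank readings and a hypothetical calibration slope, purely for illustration.

```python
# Minimal sketch of a 3.3-sigma limit-of-detection estimate
# (LOD = 3.3 * SD_blank / slope). All numbers are hypothetical.
import statistics

blank_responses_mV = [1.02, 0.98, 1.05, 0.99, 1.01, 1.03]  # hypothetical
calibration_slope_mV_per_M = 2.1e10                         # hypothetical

sd_blank = statistics.stdev(blank_responses_mV)
lod_M = 3.3 * sd_blank / calibration_slope_mV_per_M
print(f"estimated LOD ~ {lod_M:.2e} M")
```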
Show Figures

Figure 1: Technological route of ISFET fabrication with post-BEOL processing. Si*—polycrystalline silicon.
Figure 2: (a) Dependence of GLA activity on buffer pH; (b) dependence of GLA activity on buffer molarity.
Figure 3: (a) Dependence of ISFET response to GLA reaction in 100 mM citrate–phosphate buffer with different pH values; (b) dependence of ISFET response to GLA reaction in citrate–phosphate buffer (pH 4.5) with different molarities.
Figure 4: (a) Dependence of ISFET response to GLA reaction with different melibiose concentrations; (b) calibration curve for GLA detection using ISFETs (20 mM citrate–phosphate buffer, pH 4.5, 30 mM melibiose). The limit of detection was calculated to be 9.56 × 10⁻¹¹ M using 3.3 standard deviations.
Figure 5: (a) Packaged chip; (b) I–V curves of the packaged ISFETs.
Figure 6: Stabilization of ISFET baseline in 20 mM citrate–phosphate buffer with a pH of 4.5.
Figure 7: Fluorescence assay of GLA activity in DBS extracts. S—substrate (4-MU-α-Gal), I—inhibitor (deoxygalactonojirimycin).
Figure 8: Stabilization of ISFETs baseline with DBS extracts.
Figure 9: ISFET measurement of GLA activity in DBS extracts. S—substrate (melibiose), I—inhibitor (deoxygalactonojirimycin), *—p < 0.05.
24 pages, 4289 KiB  
Article
Identification of a Person in a Trajectory Based on Wearable Sensor Data Analysis
by Jinzhe Yan, Masahiro Toyoura and Xiangyang Wu
Sensors 2024, 24(11), 3680; https://doi.org/10.3390/s24113680 - 6 Jun 2024
Cited by 1 | Viewed by 1153
Abstract
Human trajectories can be tracked by the internal processing of a camera as an edge device. This work aims to match people’s trajectories obtained from cameras to sensor data such as acceleration and angular velocity, obtained from wearable devices. Since human trajectory and sensor [...] Read more.
Human trajectories can be tracked by the internal processing of a camera as an edge device. This work aims to match people’s trajectories obtained from cameras to sensor data such as acceleration and angular velocity, obtained from wearable devices. Since human trajectory and sensor data differ in modality, the matching method is not straightforward. Furthermore, complete trajectory information is unavailable; it is difficult to determine which fragments belong to whom. To solve this problem, we propose a new model, SyncScore, that finds the similarity between a unit-period trajectory and the corresponding sensor data. We also propose a Likelihood Fusion algorithm that systematically updates the similarity data and integrates it over time while taking the other trajectories into account. We confirmed that the proposed method can match human trajectories and sensor data with an accuracy, sensitivity, and F1 score of 0.725. Our models achieved decent results on the UEA dataset. Full article
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)
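As a rough illustration of the matching problem (not the SyncScore network itself), one can derive an acceleration-magnitude signal from each camera trajectory by double differentiation, correlate it with the wearable accelerometer magnitude over a unit period, and treat the normalized scores as per-period likelihoods to be accumulated over time. The synthetic data and the clipped-correlation score below are assumptions chosen only to demonstrate the idea.

```python
# Illustrative correlation baseline for camera-trajectory / IMU matching
# (not the SyncScore network); data and scoring are illustrative.
import numpy as np

def accel_magnitude_from_trajectory(xy, dt):
    """xy: (T, 2) positions in metres; returns per-sample |acceleration|."""
    acc = np.gradient(np.gradient(xy, dt, axis=0), dt, axis=0)
    return np.linalg.norm(acc, axis=1)

def period_likelihoods(trajectories, imu_acc_mag, dt):
    """Score each candidate trajectory fragment against the IMU signal."""
    scores = []
    for xy in trajectories:
        cam = accel_magnitude_from_trajectory(xy, dt)
        n = min(len(cam), len(imu_acc_mag))
        a = (cam[:n] - cam[:n].mean()) / (cam[:n].std() + 1e-9)
        b = (imu_acc_mag[:n] - imu_acc_mag[:n].mean()) / (imu_acc_mag[:n].std() + 1e-9)
        scores.append(max(float(np.mean(a * b)), 0.0))  # clipped correlation
    scores = np.asarray(scores) + 1e-9
    return scores / scores.sum()    # normalise into likelihoods over candidates

# Example with synthetic data: trajectory 0 matches the IMU, trajectory 1 does not.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 200)
walk = np.c_[np.sin(1.5 * t), 0.1 * t]
other = np.c_[0.1 * t, np.cos(0.7 * t)]
imu = accel_magnitude_from_trajectory(walk, t[1] - t[0]) + 0.05 * rng.standard_normal(200)
print(period_likelihoods([walk, other], imu, t[1] - t[0]))
```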
Show Figures

Figure 1: A situation where a specific person wants to distinguish his/her trajectories from all other trajectories. An AI camera provides a mixture of all trajectories. By extracting the person’s trajectory from these trajectories, it is possible to analyze the person’s behavior. In particular, the person’s own trajectory provides a record of which room the person was moving in and how he/she was moving.
Figure 2: Overview of service and matching process. By matching the trajectories with the sensor signals of the wearable device, only the trajectory of the person who owns the wearable device is extracted. The owner of the device can extract his or her own trajectories from a large number of trajectories. The trajectories are observed as a mixture of the segmented trajectories. The proposed method computes a match score between the sensor signal and all corresponding segment trajectories, and aims to retrieve all corresponding trajectories. (a) Overview of the services envisioned. (b) Overview of the matching process.
Figure 3: Sampling data and different period likelihood.
Figure 4: Structure of the SyncScore network.
Figure 5: Fusion Feature architecture.
Figure 6: SecAttention model extended from the Transformer model.
Figure 7: Definition of moderate period by start and end times of all trajectories.
Figure 8: The overview of Update Rules 1 and 2 for segmented likelihoods of trajectories.
Figure 9: Computation of R_j by considering the lengths of the elements of Q_j.
Figure 10: Ablation results of ROC curve in Long Dataset.
Figure 11: Likelihood results for each trajectory in Sample 1. The trajectories that should be matched correctly are indicated with underlines.
Figure 12: Likelihood results for each trajectory in Sample 2. The trajectories that should be matched correctly are indicated with underlines.
Figure 13: Likelihood results for each trajectory in Sample 3. The trajectories that should be matched correctly are indicated with underlines.
Figure 14: Visualization of integration results. Each row is a different video visualization result: (left) target, (right-first) the result of Update Rule 2 processing, (right-middle) the result of Update Rule 1 processing, and (right-most) the result of the fusion of both update rules.
11 pages, 3182 KiB  
Communication
Micro-Ring Resonator Assisted Photothermal Spectroscopy of Water Vapor
by Maria V. Kotlyar, Jenitta Johnson Mapranathukaran, Gabriele Biagi, Anton Walsh, Bernhard Lendl and Liam O’Faolain
Sensors 2024, 24(11), 3679; https://doi.org/10.3390/s24113679 - 6 Jun 2024
Viewed by 983
Abstract
We demonstrated, for the first time, micro-ring resonator assisted photothermal spectroscopy measurement of a gas phase sample. The experiment used a telecoms wavelength probe laser that was coupled to a silicon nitride photonic integrated circuit using a fibre array. We excited the photothermal [...] Read more.
We demonstrated, for the first time, micro-ring resonator assisted photothermal spectroscopy measurement of a gas phase sample. The experiment used a telecoms wavelength probe laser that was coupled to a silicon nitride photonic integrated circuit using a fibre array. We excited the photothermal effect in the water vapor above the micro-ring using a 1395 nm diode laser. We measured the 1f and 2f wavelength modulation response versus excitation laser wavelength and verified the power scaling behaviour of the signal. Full article
(This article belongs to the Special Issue Photonics for Advanced Spectroscopy and Sensing)
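The 1f/2f wavelength-modulation readout mentioned in the abstract amounts to demodulating the detected signal at the modulation frequency and its second harmonic. The sketch below implements a simple software lock-in on a synthetic signal; the signal model, frequencies, and amplitudes are invented and only demonstrate the demodulation step, not the authors' measurement chain.

```python
# Illustrative software lock-in: demodulate a modulated signal at 1f and 2f.
# Signal model and numbers are synthetic.
import numpy as np

fs = 50_000.0                    # sample rate, Hz
f_mod = 500.0                    # modulation frequency, Hz
t = np.arange(0, 0.2, 1 / fs)

# Synthetic detector signal containing 1f and 2f components plus noise.
rng = np.random.default_rng(1)
signal = (0.3 * np.sin(2 * np.pi * f_mod * t)
          + 0.1 * np.sin(2 * np.pi * 2 * f_mod * t + 0.4)
          + 0.02 * rng.standard_normal(t.size))

def lock_in(sig, f_ref):
    """Return the demodulated amplitude at the reference frequency."""
    i = np.mean(sig * np.cos(2 * np.pi * f_ref * t))
    q = np.mean(sig * np.sin(2 * np.pi * f_ref * t))
    return 2 * np.hypot(i, q)

print(f"1f amplitude ~ {lock_in(signal, f_mod):.3f}")
print(f"2f amplitude ~ {lock_in(signal, 2 * f_mod):.3f}")
```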
Show Figures

Figure 1: Schematic of the experimental set-up showing the light from the telecom wavelength probe laser delivered via input fibre through the spot-size converter (SU-8) to the micro-ring resonator (MRR) and out again through the output fibre; the excitation beam at 1395 nm is delivered via the out-of-plane fibre, which is placed over the MRR. The inset shows the sensing principle, where the shift in the resonance of the MRR (original in red, shifted curve in blue) is caused by the change in the refractive index of the air above the ring, which is caused in its turn by the change in temperature. The probe laser at the telecom wavelength (black) is matched to the inflection point of the resonance.
Figure 2: (a) A SEM image of the fabricated MRR with looped access waveguides; (b) the transmission spectra of the MRR; (c) the selected resonance of the MRR with a quality factor Q of 20,000.
Figure 3: Near-IR absorption spectrum of air at normal pressure. The absorption of the main IR-active gases present at their natural abundances is shown. Over this wavenumber range, the absorption of methane and carbon dioxide at atmospheric concentrations is very small compared to water (~5 orders and 7 orders of magnitude less, respectively). The data were taken from the HITRAN database [30].
Figure 4: A schematic of the experimental setup for the investigated gas sensor system (BOA = booster optical amplifier, CD = current driver, TEC = thermo-electric cooler, IR = infra-red, FA = fibre array, MRR = micro-ring resonator).
Figure 5: Spectra of 1f-WM PTS (a) and 2f-WM PTS (b) signals recorded as a function of the excitation laser wavelength. Black lines are the theoretically predicted 1f and 2f spectra.
Figure 6: (a) 2f-WM PTS signal at different powers of the amplified excitation source. (b) Maximum 2f-WM PTS signal as a function of excitation power.
Figure 7: (a) 2f-WM PTS signal as a function of the probe wavelength, where points marked as A and B correspond to the inflection points of the resonance dip and point C is at the resonance wavelength. (b) The resonance dip of the MRR along which the 2f-WM PTS signal from part (a) was scanned.
14 pages, 4011 KiB  
Article
Electrochemical Diffusion Study in Poly(Ethylene Glycol) Dimethacrylate-Based Hydrogels
by Eva Melnik, Steffen Kurzhals, Giorgio C. Mutinati, Valerio Beni and Rainer Hainberger
Sensors 2024, 24(11), 3678; https://doi.org/10.3390/s24113678 - 6 Jun 2024
Viewed by 987
Abstract
Hydrogels are of great importance for functionalizing sensors and microfluidics, and poly(ethylene glycol) dimethacrylate (PEG-DMA) is often used as a viscosifier for printable hydrogel precursor inks. In this study, 1–10 kDa PEG-DMA based hydrogels were characterized by gravimetric and electrochemical methods to investigate [...] Read more.
Hydrogels are of great importance for functionalizing sensors and microfluidics, and poly(ethylene glycol) dimethacrylate (PEG-DMA) is often used as a viscosifier for printable hydrogel precursor inks. In this study, 1–10 kDa PEG-DMA based hydrogels were characterized by gravimetric and electrochemical methods to investigate the diffusivity of small molecules and proteins. Swelling ratios (SRs) of 14.43–9.24, as well as mesh sizes ξ of 3.58–6.91 nm were calculated, and it was found that the SR correlates with the molar concentration of PEG-DMA in the ink (MCI) (SR = 0.1127 × MCI + 8.3256, R2 = 0.9692) and ξ correlates with the molecular weight (Mw) (ξ = 0.3382 × Mw + 3.638, R2 = 0.9451). To investigate the sensing properties, methylene blue (MB) and MB-conjugated proteins were measured on electrochemical sensors with and without hydrogel coating. It was found that on sensors with 10 kDa PEG-DMA hydrogel modification, the DPV peak currents were reduced to 92 % for MB, 73 % for MB-BSA, and 23 % for MB-IgG. To investigate the diffusion properties of MB(-conjugates) in hydrogels with 1–10 kDa PEG-DMA, diffusivity was calculated from the current equation. It was found that diffusivity increases with increasing ξ. Finally, the release of MB-BSA was detected after drying the MB-BSA-containing hydrogel, which is a promising result for the development of hydrogel-based reagent reservoirs for biosensing. Full article
(This article belongs to the Special Issue Eurosensors 2023 Selected Papers)
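The two empirical fits quoted in the abstract can be evaluated directly; the sketch below simply applies them, using example inputs inside the reported 1–10 kDa range (the specific MCI and Mw values are arbitrary illustrations, not measurements from the paper).

```python
# Evaluating the empirical fits quoted in the abstract.
def swelling_ratio(mci):
    """SR = 0.1127 * MCI + 8.3256 (R^2 = 0.9692, from the abstract)."""
    return 0.1127 * mci + 8.3256

def mesh_size_nm(mw_kda):
    """xi = 0.3382 * Mw + 3.638 (R^2 = 0.9451, from the abstract)."""
    return 0.3382 * mw_kda + 3.638

for mw in (1, 5, 10):                      # kDa, example values
    print(f"Mw = {mw:2d} kDa -> mesh size ~ {mesh_size_nm(mw):.2f} nm")
print(f"MCI = 20 -> SR ~ {swelling_ratio(20):.2f}")  # example MCI value
```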
Show Figures

Graphical abstract
Figure 1: (A) Sensors covered with a hydrogel are overlayed with (B) MB(-conjugate) solution for (C) electrochemical diffusion monitoring with DPV.
Figure 2: (A) Graphite sensor, (B) screen-printed sheet, and (C) peak currents measured on nine sensors of the screen-printed sheet.
Figure 3: SEM images of vacuum-dried and lyophilized hydrogel structures.
Figure 4: (A) Concentration-dependent measurement of MB-BSA and MB-IgG on unmodified sensors. (B) Normalization of the DPV peak current I_peak with the DOL of 5.8 for MB-BSA and 3.8 for MB-IgG.
Figure 5: (A) Detection of MB with and without a hydrogel coating. (B) Detection of MB-BSA (19.9 µmol/L) and MB-IgG (10 µmol/L) with and without a hydrogel coating.
Figure 6: Plots of (A) I_norm_max versus diffusivity and (B) diffusivity versus mesh size ξ.
Figure 7: Diffusion study of MB-BSA in the wet (A), in the dry hydrogels (B), and out of the dry hydrogel (C).
21 pages, 5602 KiB  
Article
EMR-HRNet: A Multi-Scale Feature Fusion Network for Landslide Segmentation from Remote Sensing Images
by Yuanhang Jin, Xiaosheng Liu and Xiaobin Huang
Sensors 2024, 24(11), 3677; https://doi.org/10.3390/s24113677 - 6 Jun 2024
Viewed by 1003
Abstract
Landslides constitute a significant hazard to human life, safety and natural resources. Traditional landslide investigation methods demand considerable human effort and expertise. To address this issue, this study introduces an innovative landslide segmentation framework, EMR-HRNet, aimed at enhancing accuracy. Initially, a novel data [...] Read more.
Landslides constitute a significant hazard to human life, safety and natural resources. Traditional landslide investigation methods demand considerable human effort and expertise. To address this issue, this study introduces an innovative landslide segmentation framework, EMR-HRNet, aimed at enhancing accuracy. Initially, a novel data augmentation technique, CenterRep, is proposed, not only augmenting the training dataset but also enabling the model to more effectively capture the intricate features of landslides. Furthermore, this paper integrates a RefConv and Multi-Dconv Head Transposed Attention (RMA) feature pyramid structure into the HRNet model, augmenting the model’s capacity for semantic recognition and expression at various levels. Last, the incorporation of the Dilated Efficient Multi-Scale Attention (DEMA) block substantially widens the model’s receptive field, bolstering its capability to discern local features. Rigorous evaluations on the Bijie dataset and the Sichuan and surrounding area dataset demonstrate that EMR-HRNet outperforms other advanced semantic segmentation models, achieving mIoU scores of 81.70% and 71.68%, respectively. Additionally, ablation studies conducted across the comprehensive dataset further corroborate the enhancements’ efficacy. The results indicate that EMR-HRNet excels in processing satellite and UAV remote sensing imagery, showcasing its significant potential in multi-source optical remote sensing for landslide segmentation. Full article
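The mIoU scores reported above (81.70% and 71.68%) are computed with the standard per-class intersection-over-union average; the sketch below shows that metric on tiny placeholder label maps, not on the Bijie or Sichuan data.

```python
# Minimal sketch of the mean-IoU metric used to report segmentation quality.
# The tiny arrays are placeholders for real prediction/ground-truth masks.
import numpy as np

def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # ignore classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 1, 1], [0, 1, 1]])
gt   = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 1]])
print(f"mIoU = {mean_iou(pred, gt, num_classes=2):.4f}")
```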
Show Figures

Figure 1: Flowchart of landslide segmentation techniques.
Figure 2: Partial landslide data image. Row 1 is the Bijie dataset, and row 2 is the Sichuan and surrounding area dataset.
Figure 3: Raw images and labels enhanced with CenterRep data.
Figure 4: EMR-HRNet model structure.
Figure 5: RAD block.
Figure 6: RefConv structure. The * symbol indicates the convolution operator.
Figure 7: Structure of the MDTA attention mechanism.
Figure 8: DEMA block structure.
Figure 9: Structure of the EMA attention mechanism.
Figure 10: Bijie dataset visualization results.
Figure 11: Sichuan and surrounding area dataset visualization results.
Figure 12: Luding County dataset visualization results.
10 pages, 3207 KiB  
Communication
Visual Strain Sensors Based on Fabry–Perot Structures for Structural Integrity Monitoring
by Qingyuan Chen, Furong Liu, Guofeng Xu, Boshuo Yin, Ming Liu, Yifei Xiong and Feiying Wang
Sensors 2024, 24(11), 3676; https://doi.org/10.3390/s24113676 - 6 Jun 2024
Viewed by 838
Abstract
Strain sensors that can rapidly and efficiently detect strain distribution and magnitude are crucial for structural health monitoring and human–computer interactions. However, traditional electrical and optical strain sensors make access to structural health information challenging because data conversion is required, and they have [...] Read more.
Strain sensors that can rapidly and efficiently detect strain distribution and magnitude are crucial for structural health monitoring and human–computer interactions. However, traditional electrical and optical strain sensors make access to structural health information challenging because data conversion is required, and they have intricate, delicate designs. Drawing inspiration from the moisture-responsive coloration of beetle wing sheaths, we propose using Ecoflex as a flexible substrate. This substrate is coated with a Fabry–Perot (F–P) optical structure, comprising a “reflective layer/stretchable interference cavity/reflective layer”, creating a dynamic color-changing visual strain sensor. Upon the application of external stress, the flexible interference chamber of the sensor stretches and contracts, prompting a blue-shift in the structural reflection curve and displaying varying colors that correlate with the applied strain. The innovative flexible sensor can be attached to complex-shaped components, enabling the visual detection of structural integrity. This biomimetic visual strain sensor holds significant promise for real-time structural health monitoring applications. Full article
(This article belongs to the Section Optical Sensors)
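One way to see why stretching can blue-shift the reflection colour is through the Fabry–Perot resonance condition 2nL = mλ: if in-plane strain thins the interference cavity (for example through Poisson contraction of the elastomer), the resonant wavelengths move to shorter values. The sketch below illustrates this relationship; the cavity index, thickness, Poisson ratio, and strain values are assumptions, not parameters taken from the paper.

```python
# Back-of-envelope F-P sketch: 2*n*L = m*lambda, with the cavity thinning
# under in-plane strain via an assumed Poisson contraction.
def peak_wavelengths_nm(n, thickness_nm, orders=(1, 2, 3)):
    return {m: 2 * n * thickness_nm / m for m in orders}

def strained_thickness(thickness_nm, strain, poisson_ratio=0.49):
    """Approximate out-of-plane thinning of an elastomer under in-plane strain."""
    return thickness_nm * (1 - poisson_ratio * strain)

n_cavity, L0 = 1.6, 264.0        # hypothetical cavity index and thickness (nm)
for strain in (0.0, 0.1, 0.2):
    L = strained_thickness(L0, strain)
    lam = peak_wavelengths_nm(n_cavity, L)[1]
    print(f"strain = {strain:.1f} -> cavity {L:6.1f} nm, 1st-order peak {lam:6.1f} nm")
```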
Show Figures

Figure 1: Fabrication of F–P-based flexible visual strain sensor. (a) Principle of the phenomenon of hygroscopic coloration in the longhorn beetle; (b) interference effects of incident light in the F–P structure; (c) demonstration of different colors produced by SBS thickness variation; (d) preparation process of F–P-based visualized strain sensors.
Figure 2: (a) Color transition upon strain application; (b) reflection curves and colors at varying SBS thicknesses; (c) color patterns for different SBS thicknesses; (d) electric field distribution with a 1000 nm indium tin oxide layer; and (e) reflectance changes with wavelength and SBS thickness. All structures have a top GST layer of 5 nm and a bottom layer of 10 nm; (f) flowchart of color–structural parameters–degree of strain conversion.
Figure 3: (a) Reflectance spectra and colors with varied GST-B thicknesses; (b) reflectance spectra and colors with varied GST-T thicknesses; (c) electric field at peak wavelengths for different GST-B and GST-T thicknesses; and (d) colors for various SBS thicknesses.
Figure 4: (a) CIE-1931 coordinates for RGB with an upper GST layer of 5 nm and a lower layer of 10 nm, and (b) reflectance spectra and color blocks for RGB. The SBS thicknesses of the F–P structures corresponding to the R, G, and B colors exhibited in Figure 4a and Figure 4b are 172 nm, 264 nm, and 209 nm, respectively.
Figure 5: Comparison of strain color coordinate distributions of visual strain sensors designed in this and other studies in CIE-1931 color diagrams [25,26,27,28,29].
Figure 6: (a) Sensor with a 300 nm SBS cavity under strain; (b) sensor with a 180 nm SBS cavity under strain; (c) CIE-1931 coordinates for a 300 nm SBS cavity at different strains; and (d) CIE-1931 coordinates for a 180 nm SBS cavity at different strains.
Figure 7: Color distribution in simulated cracks: visual strain sensor color variations for simulated cracks.
28 pages, 9195 KiB  
Article
Transformable Quadruped Wheelchairs Capable of Autonomous Stair Ascent and Descent
by Atsuki Akamisaka and Katashi Nagao
Sensors 2024, 24(11), 3675; https://doi.org/10.3390/s24113675 - 6 Jun 2024
Viewed by 1285
Abstract
Despite advancements in creating barrier-free environments, many buildings still have stairs, making accessibility a significant concern for wheelchair users, the majority of whom check for accessibility information before venturing out. This paper focuses on developing a transformable quadruped wheelchair to address the mobility [...] Read more.
Despite advancements in creating barrier-free environments, many buildings still have stairs, making accessibility a significant concern for wheelchair users, the majority of whom check for accessibility information before venturing out. This paper focuses on developing a transformable quadruped wheelchair to address the mobility challenges posed by stairs and steps for wheelchair users. The wheelchair, inspired by the Unitree B2 quadruped robot, combines wheels for flat surfaces and robotic legs for navigating stairs and is equipped with advanced sensors and force detectors to interact with its surroundings effectively. This research utilized reinforcement learning, specifically curriculum learning, to teach the wheelchair stair-climbing skills, with progressively increasing complexity in a simulated environment crafted in the Unity game engine. The experiments demonstrated high success rates in both stair ascent and descent, showcasing the wheelchair’s potential in overcoming mobility barriers. However, the current model faces limitations in tackling various stair types, like spiral staircases, and requires further enhancements in safety and stability, particularly in the descending phase. The project illustrates a significant step towards enhancing mobility for wheelchair users, aiming to broaden their access to diverse environments. Continued improvements and testing are essential to ensure the wheelchair’s adaptability and safety across different terrains and situations, underlining the ongoing commitment to technological innovation in aiding individuals with mobility impairments. Full article
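The paper's controller builds on a policy-modulated trajectory generator (PMTG); a minimal sketch of such a phase-driven trajectory generator is shown below, with each foot following a periodic path parameterized by stride width and step height, and a trot-style phase offset per leg. The parameter values, the trot offsets, and the trajectory shape are illustrative assumptions; in a full PMTG setup a learned policy would modulate these parameters and add residual corrections.

```python
# Minimal sketch of a phase-driven trajectory generator (TG) of the kind used
# in PMTG-style controllers; values and offsets are illustrative.
import math

TROT_PHASE_OFFSETS = {"FL": 0.0, "RR": 0.0, "FR": 0.5, "RL": 0.5}

def foot_target(phase, width=0.12, height=0.06):
    """Return a (forward, vertical) foot offset for a phase in [0, 1)."""
    swing = phase < 0.5                       # first half: swing, second: stance
    x = width * math.cos(2 * math.pi * phase)
    z = height * math.sin(2 * math.pi * phase) if swing else 0.0
    return x, z

def tg_step(t, frequency=1.5):
    """Foot offsets for all four legs at time t (seconds)."""
    base_phase = (t * frequency) % 1.0
    return {leg: foot_target((base_phase + off) % 1.0)
            for leg, off in TROT_PHASE_OFFSETS.items()}

for t in (0.0, 0.17, 0.33):
    print(t, {leg: (round(x, 3), round(z, 3)) for leg, (x, z) in tg_step(t).items()})
```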
Show Figures

Graphical abstract
Figure 1: Transformable quadruped wheelchair.
Figure 2: Example of chair transformation.
Figure 3: Wheeled transportation.
Figure 4: Dimensions of Unitree B2.
Figure 5: Dimensions of transformable quadruped wheelchair (top, vertical; bottom, horizontal).
Figure 6: Weight setting of various parts.
Figure 7: Left front leg joint.
Figure 8: Chair armrest and body joints.
Figure 9: Typical gaits of a quadruped mechanism.
Figure 10: An example of trajectory generated by the TG.
Figure 11: PMTG structure.
Figure 12: Structure of PMTG used in this research.
Figure 13: Reinforcement learning for optimization of policy network.
Figure 14: Width, height, and yaw angle of TG.
Figure 15: TG for forward/backward and rotational movements.
Figure 16: The output trajectory of the final TG when the direction of travel is straight ahead (1, 0).
Figure 17: The output trajectory of the final TG when the direction of travel is backward (−1, 0).
Figure 18: Output trajectory of the final TG when the direction of travel is right (0, 0.5).
Figure 19: The output trajectory of the final TG when the direction of travel is diagonally right (0.5, 0.5).
Figure 20: Staircase kick and tread.
Figure 21: A 3D model of the staircase.
Figure 22: Tesla Bot in quadruped wheelchair.
Figure 23: Field used in stair-ascending motion acquisition experiment.
Figure 24: Curriculum changes.
Figure 25: Reward graph in the stair-ascending motion acquisition experiment.
Figure 26: Field used in stair-descending motion acquisition experiment.
Figure 27: Reward graph in stair-descending motion acquisition experiment.
Figure A1: ArticulationBody component.
Figure A2: LiDAR component provided by VTC on Unity.
14 pages, 4416 KiB  
Article
Measuring DNI with a New Radiometer Based on an Optical Fiber and Photodiode
by Alejandro Carballar, Roberto Rodríguez-Garrido, Manuel Jerez, Jonathan Vera and Joaquín Granado
Sensors 2024, 24(11), 3674; https://doi.org/10.3390/s24113674 - 6 Jun 2024
Viewed by 2631
Abstract
A new cost-effective radiometer has been designed, built, and tested to measure direct normal solar irradiance (DNI). The proposed instrument for solar irradiance measurement is based on an optical fiber as the light beam collector, a semiconductor photodiode to measure the optical power, [...] Read more.
A new cost-effective radiometer has been designed, built, and tested to measure direct normal solar irradiance (DNI). The proposed instrument for solar irradiance measurement is based on an optical fiber as the light beam collector, a semiconductor photodiode to measure the optical power, and a calibration algorithm to convert the optical power into solar irradiance. The proposed radiometer offers the advantage of separating the measurement point, where the optical fiber collects the solar irradiation, from the place where the optical power is measured. A calibration factor is mandatory because the semiconductor photodiode is only spectrally responsive to a limited part of the spectral irradiance. Experimental tests have been conducted under different conditions to evaluate the performance of the proposed device. The measurements confirm that the proposed instrument performs similarly to the expensive high-accuracy pyrheliometer used as a reference. Full article
(This article belongs to the Special Issue Recent Advance of Optical Measurement Based on Sensors)
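Because a silicon photodiode responds only over part of the solar spectrum, the band-limited optical power has to be scaled up to a broadband irradiance. One plausible formulation of such a correction factor (an assumption for illustration, not necessarily the paper's exact calibration algorithm) is the ratio of the broadband spectral integral to the in-band integral, as sketched below with toy spectra.

```python
# Illustrative sketch of a spectral correction factor; the toy spectra and
# the flat responsivity band are invented stand-ins.
import numpy as np

wavelength_nm = np.linspace(300, 2500, 2201)

# Toy stand-ins for the reference solar spectral irradiance (W m^-2 nm^-1)
# and a silicon photodiode band (roughly 350-1100 nm, flat for simplicity).
solar = np.exp(-((wavelength_nm - 800) / 600.0) ** 2)
responsive_band = (wavelength_nm > 350) & (wavelength_nm < 1100)

broadband = np.trapz(solar, wavelength_nm)
in_band = np.trapz(solar * responsive_band, wavelength_nm)
correction_factor = broadband / in_band

measured_in_band_irradiance = 620.0      # W/m^2, hypothetical in-band estimate
print(f"CF ~ {correction_factor:.2f}, "
      f"DNI estimate ~ {measured_in_band_irradiance * correction_factor:.0f} W/m^2")
```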
Show Figures

Figure 1: Block diagram for the proposed radiometer. The system is composed of an optical fiber (OF), a semiconductor photodiode (SC-PD), an optical power meter (OPM), and a module for data acquisition (DAQ).
Figure 2: Scheme of the multimode optical fiber’s end tip exposed to the solar radiation.
Figure 3: Calibration process: P(λ_i) is the estimated optical power measurement, A_eff is the fiber-optic effective area, Φ_raw(λ_i) is the raw solar irradiance, CF(λ_i) is the correction factor, and B_n is the resulting DNI measurement.
Figure 4: Solar spectral irradiance [28] (blue line) and responsivity response for the silicon photodiode Thorlabs S140C (red line) as a function of wavelength.
Figure 5: Experimental assembly for the proof of concept of the proposed instrument: (a) optical fiber and commercial pyrheliometer installed in a sun tracker; and (b) silicon photodiodes and an optical power meter, plus a computer for data acquisition.
Figure 6: DNI measurements obtained by the proposed radiometer (red line) and the commercial pyrheliometer (black line) for two sunny days, with: (a) Thorlabs FG050LGA multimode fiber; and (b) Thorlabs FP200URT multimode fiber. The green line represents the raw solar irradiance before applying the correction factor.
Figure 7: DNI measurements obtained by the commercial pyrheliometer (black line) and the proposed radiometer (red line) with a Thorlabs FG105LVA multimode fiber: (a) DNI measurements for a sunny day; absolute (b) and relative (c) deviations between the DNI measurements provided by the proposed instrument and the commercial pyrheliometer. The green line in (a) represents the raw solar irradiance before applying the correction factor.
Figure 8: Variation of the correction factor depending on the selected reference wavelength in the OPM.
Figure 9: DNI measurements obtained by the commercial pyrheliometer (black line) and the proposed radiometer (red line) on a cloudy day, with: (a) Thorlabs FG050LGA multimode fiber; and (b) Thorlabs FG105LCA multimode fiber. The green line represents the raw solar irradiance before applying the correction factor.
Figure 10: DNI measurements obtained by the commercial pyrheliometer (black line) and the proposed radiometer (red line) with the Thorlabs FG105LVA multimode fiber: (a) DNI measurements for a cloudy day; absolute (b) and relative (c) deviations between the DNI measurements provided by the proposed instrument and the commercial pyrheliometer.
Figure 11: DNI measurements obtained by the commercial pyrheliometer (black line) and the proposed radiometer (red line) for a rainy interval with the Thorlabs FG105LCA multimode fiber.
10 pages, 895 KiB  
Article
Precision Balance Assessment in Parkinson’s Disease: Utilizing Vision-Based 3D Pose Tracking for Pull Test Analysis
by Nina Ellrich, Kasimir Niermeyer, Daniela Peto, Julian Decker, Urban M. Fietzek, Sabrina Katzdobler, Günter U. Höglinger, Klaus Jahn, Andreas Zwergal and Max Wuehr
Sensors 2024, 24(11), 3673; https://doi.org/10.3390/s24113673 - 6 Jun 2024
Viewed by 1380
Abstract
Postural instability is a common complication in advanced Parkinson’s disease (PD) associated with recurrent falls and fall-related injuries. The test of retropulsion, consisting of a rapid balance perturbation by a pull in the backward direction, is regarded as the gold standard for evaluating [...] Read more.
Postural instability is a common complication in advanced Parkinson’s disease (PD) associated with recurrent falls and fall-related injuries. The test of retropulsion, consisting of a rapid balance perturbation by a pull in the backward direction, is regarded as the gold standard for evaluating postural instability in PD and is a key component of the neurological examination and clinical rating in PD (e.g., MDS-UPDRS). However, significant variability in test execution and interpretation contributes to a low intra- and inter-rater test reliability. Here, we explore the potential for objective, vision-based assessment of the pull test (vPull) using 3D pose tracking applied to single-sensor RGB-Depth recordings of clinical assessments. The initial results in a cohort of healthy individuals (n = 15) demonstrate overall excellent agreement of vPull-derived metrics with the gold standard marker-based motion capture. Subsequently, in a cohort of PD patients and controls (n = 15 each), we assessed the inter-rater reliability of vPull and analyzed PD-related impairments in postural response (including pull-to-step latency, number of steps, retropulsion angle). These quantitative metrics effectively distinguish healthy performance from and within varying degrees of postural impairment in PD. vPull shows promise for straightforward clinical implementation with the potential to enhance the sensitivity and specificity of postural instability assessment and fall risk prediction in PD. Full article
(This article belongs to the Special Issue Wearable Sensors for Monitoring Athletic and Clinical Cohorts)
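The step-detection step described in the Figure 1 caption (thresholding 3D ankle velocities) can be sketched very simply: differentiate the ankle position trace, take its magnitude, and count threshold crossings. The threshold, sampling rate, and the synthetic ankle trace below are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch of step detection by thresholding 3D ankle velocity.
# Threshold, sampling rate and synthetic trace are illustrative.
import numpy as np

def detect_steps(ankle_xyz, fs, vel_threshold=0.5):
    """Count velocity bursts in a (T, 3) ankle-position trace sampled at fs Hz."""
    vel = np.linalg.norm(np.gradient(ankle_xyz, 1.0 / fs, axis=0), axis=1)
    above = vel > vel_threshold
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return len(onsets), vel

fs = 30.0
t = np.arange(0, 3, 1 / fs)
# Synthetic forward ankle motion with two clear velocity bursts (two steps).
vel_profile = 1.2 * (np.exp(-((t - 1.0) / 0.1) ** 2) + np.exp(-((t - 2.0) / 0.1) ** 2))
ankle = np.zeros((t.size, 3))
ankle[:, 0] = np.cumsum(vel_profile) / fs      # integrate velocity to position
n_steps, _ = detect_steps(ankle, fs)
print(f"detected steps: {n_steps}")            # expect 2
```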
Show Figures

Figure 1: Experimental setup and analysis approach. (A) The pull test execution is recorded using a single RGB-Depth sensor positioned 2 m in front of the assessed patient. A 17 keypoint pose model is then estimated from the RGB frames and projected into 3D space based on the sensor’s depth frames. (B) Pull onset and magnitude are determined from 3D shoulder acceleration, utilizing thresholding relative to baseline shoulder motion. The amplitude of retropulsion and latency of balance recovery are assessed through 3D trunk bending dynamics in the backward direction. Steps are identified by thresholding 3D bilateral ankle velocities. Abbreviations: acc: acceleration; vel: velocity.
Figure 2: Group comparison of vPull test metrics between patients and controls (* indicate a significant difference between groups). (A–D) General test characteristics, including pull magnitude, percentage of individuals displaying a stepping response, percentage of individuals showing successful balance recovery, and corresponding rating of pull test performance according to the MDS-UPDRS scheme (grade 1—slight, grade 3—moderate). (E–H) Detailed metrics characterizing the stepping response. (I,J) Detailed metrics characterizing the truncal response and balance recovery. (K) Low-dimensional embedding of the above quantitative features (E–J) labeled by pull test performance rating using UMAP. Abbreviations: HS: healthy subjects; PD: patients with Parkinson’s disease; MDS-UPDRS: The Movement Disorder Society Unified Parkinson’s Disease Rating Scale; UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction.
1 pages, 131 KiB  
Retraction
RETRACTED: Ji, W.; Chen, X. TRUST: A Novel Framework for Vehicle Trajectory Recovery from Urban-Scale Videos. Sensors 2022, 22, 9948
by Wentao Ji and Xing Chen
Sensors 2024, 24(11), 3672; https://doi.org/10.3390/s24113672 - 6 Jun 2024
Viewed by 624
Abstract
The Sensors Editorial Office retracts the article, “TRUST: A Novel Framework for Vehicle Trajectory Recovery from Urban-Scale Videos” [...] Full article
(This article belongs to the Section Sensing and Imaging)
32 pages, 4861 KiB  
Article
Creating Expressive Social Robots That Convey Symbolic and Spontaneous Communication
by Enrique Fernández-Rodicio, Álvaro Castro-González, Juan José Gamboa-Montero, Sara Carrasco-Martínez and Miguel A. Salichs
Sensors 2024, 24(11), 3671; https://doi.org/10.3390/s24113671 - 5 Jun 2024
Viewed by 1026
Abstract
Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled in two dimensions: symbolic (information aimed to achieve a particular goal) and spontaneous (displaying the speaker’s [...] Read more.
Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled in two dimensions: symbolic (information aimed to achieve a particular goal) and spontaneous (displaying the speaker’s emotional and motivational state) communication. Thus, to enhance human–robot interactions, the expressions that are used have to convey both dimensions. This paper presents a method for modelling a robot’s expressiveness as a combination of these two dimensions, where each of them can be generated independently. This is the first contribution of our work. The second contribution is the development of an expressiveness architecture that uses predefined multimodal expressions to convey the symbolic dimension and integrates a series of modulation strategies for conveying the robot’s mood and emotions. In order to validate the performance of the proposed architecture, the last contribution is a series of experiments that aim to study the effect that the addition of the spontaneous dimension of communication and its fusion with the symbolic dimension has on how people perceive a social robot. Our results show that the modulation strategies improve the users’ perception and can convey a recognizable affective state. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Show Figures

Figure 1: Mini, a social robot developed for interacting with older adults who suffer from mild cases of cognitive impairment.
Figure 2: Software architecture integrated in Mini.
Figure 3: Overview of the Expression Manager. The blocks in grey (i.e., affect generator, HRI Manager, and output interfaces) are outside of this work’s scope.
Figure 4: Process followed for expressing Mini’s internal state. Actions in blue blocks are performed in the affect-generation module; actions in orange blocks are performed in the Expression Executor; actions in green blocks are performed in the Interface Players.
Figure 5: Response time for the Expression Manager, understood as the time that passes between the moment a gesture request is sent until the first action is sent to the output interface. Bars represent the average value; whiskers represent the standard deviation; the green line represents the threshold for responding to a stimulus; the blue line is the threshold for identifying if a stimulus requires a response; the red line is the threshold we have defined for conscious interactions. Bars with the name of an interface correspond to expressions with a single action, while Full gesture corresponds to a multimodal gesture that performs multiple actions.
Figure 6: Mini during the quiz game conducted as part of the evaluation. The robot’s tablet shows the question the robot has asked and the four possible answers (all of them appear in Spanish).
Figure 7: Evolution of the affective state expression during the experiment.
Figure 8: Confusion matrices representing the results of the affective state-recognition evaluation. Rows in the matrices represent the affective state the robot was expressing, while columns represent the options selected by the participants (as a percentage). Cases where participants selected the correct affective state have been highlighted with a thick black border. The intensity of the colour is directly tied to the percentage of participants selecting each option.
Figure 9: Average value for the ratings computed for each of the dimensions on the RoSAS questionnaire (warmth, competence, discomfort) for both conditions. Bars represent the average rating, while whiskers represent the 95% confidence intervals.
Figure 10: Evolution of the amplitude and speed parameters during the game of guessing the location of famous landmarks.
Figure 11: Average value for the ratings computed for each of the dimensions on the RoSAS questionnaire (warmth, competence, discomfort) for both conditions. Bars represent the average rating, while whiskers represent the 95% confidence intervals. The bars connected with an asterisk are those for which significant differences were observed.
Figure 12: Average value for the competence rating when considering only users that reported either a mid-high or high interest in owning a robot and users that reported either a mid-high or high familiarity with technology. Bars represent the average rating, while whiskers represent the 95% confidence intervals. The bars connected with an asterisk are those for which significant differences were observed.
31 pages, 10122 KiB  
Article
Construction and Application of Energy Footprint Model for Digital Twin Workshop Oriented to Low-Carbon Operation
by Lei Zhang, Cunbo Zhuang, Ying Tian and Mengqi Yao
Sensors 2024, 24(11), 3670; https://doi.org/10.3390/s24113670 - 5 Jun 2024
Cited by 1 | Viewed by 713
Abstract
To address the difficulty of accurately characterizing the fluctuations in equipment energy consumption and the dynamic evolution of whole energy consumption in low-carbon workshops, a low-carbon-operation-oriented construction method of the energy footprint model (EFM) for a digital twin workshop (DTW) is proposed. With [...] Read more.
To address the difficulty of accurately characterizing the fluctuations in equipment energy consumption and the dynamic evolution of whole energy consumption in low-carbon workshops, a low-carbon-operation-oriented construction method of the energy footprint model (EFM) for a digital twin workshop (DTW) is proposed. With a focus on the fluctuations in equipment energy consumption and the correlation between multiple pieces of equipment at the workshop production process level (CBMEatWPPL), the EFM of a DTW is obtained to characterize the dynamic evolution of whole energy consumption in the workshop. Taking a production unit as a case study, an EFM of the production unit is first constructed, achieving the characterization and visualization of the fluctuations in equipment energy consumption and the dynamic evolution of whole energy consumption in the production unit; second, based on the EFM, an objective function of workshop energy consumption is established and combined with tool life, robot motion stability, and production time to formulate a multi-objective optimization function. The bee colony algorithm is adopted to solve the multi-objective optimization function, achieving collaborative optimization of cross-equipment process parameters and effectively reducing energy consumption in the production unit. The effectiveness of the proposed method and the constructed EFM is demonstrated from these two aspects. Full article
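As a rough illustration of the optimization step described above, the Python sketch below scalarizes energy consumption, tool wear, and production time into a weighted objective over two process parameters and minimizes it with a minimal artificial bee colony loop. The objective and abc_minimize functions, the surrogate cost terms, the bounds, and the weights are invented placeholders for this sketch; they are not the paper's fitted energy models or its exact algorithm settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decision variables for one production unit:
# spindle speed (rpm) and feed rate (mm/min). Bounds and weights are illustrative only.
LB = np.array([1000.0, 50.0])
UB = np.array([6000.0, 400.0])

def objective(x):
    """Weighted-sum scalarization of energy, tool wear, and production time.
    The three surrogate cost terms are toy placeholders, not the paper's fitted models."""
    n, f = x
    energy = 1e-3 * n + 2e5 / f   # cutting power grows with speed, idle energy drops with feed
    wear = 1e-6 * n * f           # tool wear grows with speed times feed
    time = 1e4 / f                # production time drops with feed rate
    return 0.5 * energy + 300.0 * wear + 0.2 * time

def abc_minimize(obj, n_food=20, limit=30, iters=200):
    """Minimal artificial bee colony loop (employed / onlooker / scout phases)."""
    dim = LB.size
    foods = rng.uniform(LB, UB, size=(n_food, dim))
    fit = np.array([obj(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_move(i):
        # Shift one coordinate of food source i toward a randomly chosen other source.
        k = rng.choice([m for m in range(n_food) if m != i])
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, LB, UB)
        f_new = obj(cand)
        if f_new < fit[i]:
            foods[i], fit[i], trials[i] = cand, f_new, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                 # employed-bee phase
            try_move(i)
        quality = fit.max() - fit + 1e-12       # onlooker phase: favour better sources
        for i in rng.choice(n_food, size=n_food, p=quality / quality.sum()):
            try_move(i)
        for i in np.where(trials > limit)[0]:   # scout phase: restart stale sources
            foods[i] = rng.uniform(LB, UB)
            fit[i] = obj(foods[i])
            trials[i] = 0

    best = int(np.argmin(fit))
    return foods[best], fit[best]

best_x, best_f = abc_minimize(objective)
print("best process parameters (speed, feed):", best_x, "scalarized cost:", round(float(best_f), 2))
```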
Show Figures

Figure 1. Energy footprint composition of workshops.
Figure 2. Architecture of EFM for DTW.
Figure 3. Motion path of robot during loading.
Figure 4. Process for estimating energy consumption of machine tools during workshop operation.
Figure 5. Process for estimating energy consumption of robots during workshop operation.
Figure 6. Hierarchical structure of data interaction model.
Figure 7. Data transmission interface of the data layer.
Figure 8. Processed workpiece.
Figure 9. Equipment operating sequences.
Figure 10. Operation logic model.
Figure 11. Energy consumption simulation model of the production unit.
Figure 12. Statistical analysis of equipment energy consumption.
Figure 13. Visualization interface for production unit EFM: (a) energy consumption of machine tool; (b) energy consumption of robot; (c) energy consumption of production unit.
Figure 14. (a) White steel milling cutter and (b) microscope.
Figure 15. Microscopic images of flank wear: (a) Tool 1 cutting 3780 mm; (b) Tool 2 cutting 3360 mm; (c) Tool 3 cutting 2880 mm; (d) Tool 1 cutting 8820 mm; (e) Tool 2 cutting 5880 mm; (f) Tool 3 cutting 5400 mm.
Figure 16. Solving process of bee colony algorithm for multi-objective optimization function.
Figure 17. Energy consumption optimization interface for the production unit.
14 pages, 3183 KiB  
Article
Theoretical Study of Microwires with an Inhomogeneous Magnetic Structure Using Magnetoimpedance Tomography
by Nikita A. Buznikov and Galina V. Kurlyandskaya
Sensors 2024, 24(11), 3669; https://doi.org/10.3390/s24113669 - 5 Jun 2024
Cited by 2 | Viewed by 956
Abstract
The recently proposed magnetoimpedance tomography method is based on the analysis of the frequency dependences of the impedance measured at different external magnetic fields. The method allows one to analyze the distribution of magnetic properties over the cross-section of the ferromagnetic conductor. Here, [...] Read more.
The recently proposed magnetoimpedance tomography method is based on the analysis of the frequency dependences of the impedance measured at different external magnetic fields. The method allows one to analyze the distribution of magnetic properties over the cross-section of the ferromagnetic conductor. Here, we describe an example of a theoretical study of the magnetoimpedance effect in an amorphous microwire with an inhomogeneous magnetic structure. In the framework of the proposed model, it is assumed that the microwire cross-section consists of several regions with different features of the effective anisotropy. The distribution of the electromagnetic fields and the microwire impedance are found by analytically solving the Maxwell equations in the individual regions. The field and frequency dependences of the microwire impedance are analyzed taking into account the frequency dependence of the permeability values in the considered regions. Although the calculations are given for the case of amorphous microwires, the obtained results can be useful for adapting the magnetoimpedance tomography method to different types of ferromagnetic conductors. Full article
(This article belongs to the Special Issue Challenges and Future Trends of Magnetic Sensors)
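For readers unfamiliar with the quantities involved, the Python sketch below computes the classic single-region (homogeneous) wire impedance with skin effect and a field-dependent permeability, and from it the MI ratio ΔZ/Z. The paper's multi-region model, which splits the cross-section into regions with different effective anisotropy, goes beyond this sketch; the material constants, wire radius, and the toy permeability law are assumptions for illustration only.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind (accept complex arguments)

# Assumed material and geometry constants (typical orders of magnitude, not the paper's values).
MU0 = 4e-7 * np.pi     # vacuum permeability, H/m
SIGMA = 7e5            # conductivity of an amorphous CoFe-based alloy, S/m
A = 10e-6              # wire radius, m

def mu_r(H, mu_max=3e3, H_a=5.0):
    """Toy circumferential permeability that decays with the external field H (in Oe)."""
    return 1.0 + mu_max / (1.0 + (H / H_a) ** 2)

def z_over_rdc(f, H):
    """Normalized impedance of a homogeneous cylindrical conductor with skin effect:
    Z / R_dc = (k a / 2) * J0(k a) / J1(k a), with k = (1 + 1j) / delta."""
    omega = 2.0 * np.pi * f
    delta = np.sqrt(2.0 / (omega * MU0 * mu_r(H) * SIGMA))   # skin depth, m
    ka = (1.0 + 1.0j) * A / delta
    return 0.5 * ka * jv(0, ka) / jv(1, ka)

def mi_ratio(f, H, H_max=100.0):
    """Magnetoimpedance ratio in percent: 100 * (|Z(H)| - |Z(H_max)|) / |Z(H_max)|."""
    z_ref = abs(z_over_rdc(f, H_max))
    return 100.0 * (abs(z_over_rdc(f, H)) - z_ref) / z_ref

print(f"MI ratio at 10 MHz,  H = 0: {mi_ratio(10e6, 0.0):7.1f} %")
print(f"MI ratio at 100 MHz, H = 0: {mi_ratio(100e6, 0.0):7.1f} %")
```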
Show Figures

Figure 1. SEM images showing the general views of different types of magnetic wires. Amorphous wires cold-drawn in rotating water, with the following compositions: Fe75Si10B15 (a) and (Co94Fe6)72.5Si12.5B15 (b). Glass-coated amorphous microwires obtained by the Taylor–Ulitovsky technique: (Co94Fe6)72.5Si12.5B15 (c) and (Co50Fe50)72.5Si12.5B15 (d). Composite CuBe/CoFeNi electroplated wires: cross-section showing both the central conductive base wire and the magnetic coating (e), and general features of the surface of the CoFeNi layer showing the different types of surface defects that appear during the electroplating process (f).
Figure 2. The values of the static permeability in the five-region model, n = 5, versus the external field H_e.
Figure 3. The field dependences of the MI ratio ΔZ/Z calculated for different numbers of regions n at several frequencies: f = 10 MHz (a); f = 25 MHz (b); f = 50 MHz (c); f = 100 MHz (d).
Figure 4. The frequency dependence of the impedance Z calculated for different numbers of regions n at several external magnetic fields: H_e = 0 (a); H_e = 2.5 Oe (b); H_e = 5 Oe (c); H_e = 10 Oe (d).
Figure 5. Frequency dependence of the maximum MI ratio (ΔZ/Z)max for the number of regions n = 5: curve 1, the proposed model; curve 2, model with the permeability independent of the frequency; curve 3, model with the imaginary part of the permeability equal to zero.