Sensors, Volume 23, Issue 5 (March-1 2023) – 509 articles

Cover Story: Onboard monitoring information, such as that from Axle Box Accelerometers (ABAs), can support the real-time condition assessment of railways. Such an assessment, albeit spatially dense, suffers from noise influences and from uncertainties related to the underlying dynamics, which challenge its reliability. We propose a new approach to improve the monitoring of railway welds by fusing expert feedback, obtained on critical weld samples, with ABA features. A Bayesian Logistic Regression (BLR) model, which comes with the benefit of uncertainty quantification, is compared against alternate approaches employing Random Forests and Binary Classifiers. We further demonstrate the importance of continuous asset monitoring for robustly tracking the evolution of conditions as a guide for preventive maintenance actions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Although papers are published in both HTML and PDF forms, PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 6438 KiB  
Article
Enabling Modular Robotics with Secure Transducer Identification Based on Extended IEEE 21450 Transducer Electronic Datasheets
by Tobias Mitterer, Christian Lederer and Hubert Zangl
Sensors 2023, 23(5), 2873; https://doi.org/10.3390/s23052873 - 6 Mar 2023
Cited by 1 | Viewed by 2005
Abstract
In robotics, many different sensors and actuators are mounted onto a robot and may, in the case of modular robotics, be interchanged during operation. During the development of new sensors or actuators, prototypes may also be mounted onto a robot to test functionality, and such prototypes often have to be integrated manually into the robot environment. Proper, fast, and secure identification of new sensor or actuator modules thus becomes important. In this work, a workflow was developed that adds new sensors or actuators to an existing robot environment while establishing trust in an automated manner using electronic datasheets. The new sensors or actuators identify themselves to the system via near-field communication (NFC) and exchange security information over the same channel. Because the electronic datasheet is stored on the sensor or actuator itself, the device can be easily identified, and trust can be established using the additional security information contained in the datasheet. In addition, the NFC hardware can simultaneously be used for wireless charging (WLC), allowing for wireless sensor and actuator modules. The developed workflow was tested with prototype tactile sensors mounted on a robotic gripper.
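The identification-and-trust step lends itself to a compact illustration. Below is a minimal Python sketch, under the assumption that the datasheet carries a keyed signature verifiable against a pre-shared key; the field names, the key handling, and the `on_nfc_tap` handler are hypothetical and stand in for the extended IEEE 21450 mechanism rather than reproducing it.

```python
import hmac
import hashlib
import json

# Hypothetical pre-shared key provisioned at the factory (trust anchor).
PRE_SHARED_KEY = b"factory-provisioned-secret"

def verify_datasheet(raw, signature):
    """Check the HMAC carried with the electronic datasheet before trusting it."""
    expected = hmac.new(PRE_SHARED_KEY, raw, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return None  # reject: tampered datasheet or wrong key
    return json.loads(raw)

def on_nfc_tap(raw, signature):
    """Hypothetical handler invoked when a module is presented to the NFC reader."""
    teds = verify_datasheet(raw, signature)
    if teds is None:
        print("module rejected")
        return
    # The verified datasheet identifies the transducer and its capabilities.
    print(f"registering {teds['manufacturer']} {teds['model']} ({teds['type']})")

# Example datasheet as it might be stored on the module's NFC tag.
raw = json.dumps({"manufacturer": "ExampleCorp",
                  "model": "TactileSense-1",
                  "type": "tactile sensor"}).encode()
sig = hmac.new(PRE_SHARED_KEY, raw, hashlib.sha256).digest()
on_nfc_tap(raw, sig)
```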
(This article belongs to the Special Issue Intelligent Sensing and Decision-Making in Advanced Manufacturing)
Figures:
Figure 1: Illustration of a modular robot system with multiple different sensors and actuators that need to be connected; in the concept of modular robotics, they should be easily interchangeable.
Figure 2: Overview of the system used to connect the sensor to the robot.
Figure 3: Initial identification workflow for the new transducer.
Figure 4: Security workflow for the new transducer.
Figure 5: Measurement workflow for the new transducer.
Figure 6: The robot system and base station used, with the sensors mounted in the robot gripper.
Figure 7: The modular capacitive sensor board used, consisting of a microcontroller, a sensor, and a security part.
15 pages, 4375 KiB  
Article
Advanced Pressure Compensation in High Accuracy NDIR Sensors for Environmental Studies
by Bakhram Gaynullin, Christine Hummelgård, Claes Mattsson, Göran Thungström and Henrik Rödjegård
Sensors 2023, 23(5), 2872; https://doi.org/10.3390/s23052872 - 6 Mar 2023
Cited by 5 | Viewed by 2834
Abstract
Measuring atmospheric gas concentrations with NDIR gas sensors requires compensation for ambient pressure variations to achieve reliable results. The widely used general correction method is based on collecting data at varying pressures for a single reference concentration. This one-dimensional compensation approach is valid for measurements at gas concentrations close to the reference concentration but introduces significant errors for concentrations further from the calibration point. For applications requiring high accuracy, collecting and storing calibration data at several reference concentrations can reduce the error. However, that method places higher demands on memory capacity and computational power, which is problematic for cost-sensitive applications. We present here an advanced, yet practical, algorithm for compensating environmental pressure variations in relatively low-cost/high-resolution NDIR systems. The algorithm consists of a two-dimensional compensation procedure that widens the valid pressure and concentration range while requiring minimal stored calibration data compared to the general one-dimensional method based on a single reference concentration. The implementation of the presented two-dimensional algorithm was verified at two independent concentrations. The results show a reduction in the compensation error from 5.1% and 7.3% for the one-dimensional method to −0.02% and 0.83% for the two-dimensional algorithm. In addition, the presented two-dimensional algorithm requires calibration in only four reference gases and the storage of four sets of polynomial coefficients used for the calculations.
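The gain of the two-dimensional scheme can be sketched in a few lines. The snippet below assumes, hypothetically, that each reference concentration stores a second-order polynomial K(Δp) = A·Δp² + B·Δp + 1 (the form suggested by the coefficients in Figure 3) and interpolates the compensation factor across concentration; the stored coefficients are illustrative, not the paper's calibration data, and whether K multiplies or divides the raw reading depends on its definition.

```python
import numpy as np

# Hypothetical calibration: one (A, B) pair per reference concentration,
# with K(dp) = A*dp**2 + B*dp + 1 and dp the pressure deviation in Bar.
REFERENCES_PPM = np.array([200.0, 500.0, 1600.0, 5000.0])
COEFFS = np.array([  # illustrative values only
    [0.29, 1.30],
    [0.27, 1.25],
    [0.24, 1.18],
    [0.21, 1.10],
])

def compensation_factor(raw_ppm, dp_bar):
    """Two-dimensional compensation: evaluate K at every reference
    concentration for this pressure, then interpolate over concentration."""
    k_at_refs = COEFFS[:, 0] * dp_bar**2 + COEFFS[:, 1] * dp_bar + 1.0
    return float(np.interp(raw_ppm, REFERENCES_PPM, k_at_refs))

def compensate(raw_ppm, dp_bar):
    # Here K is applied multiplicatively to the raw reading (an assumption).
    return raw_ppm * compensation_factor(raw_ppm, dp_bar)

print(compensate(420.0, -0.28))  # reading at 0.72 Bar relative pressure
```

The four (A, B) pairs are all the calibration data that need storing, which is the memory saving the abstract highlights.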
(This article belongs to the Section Optical Sensors)
Figures:
Figure 1: Graphic representation of the Lambert–Beer law; k_ϑ is the absorption coefficient, and P, T, and q are, respectively, the pressure, temperature, and concentration in the sensing volume.
Figure 2: Simulation of the absorption cross section σ_ϑ for two pressure levels, 0.5 and 1.0 Bar, at a concentration of 400 ppm CO2 and a temperature of 296 K. A variation in ambient pressure leads to a variation in absorption capacity.
Figure 3: Pressure calibration procedure for deriving the compensation factor function K, with respective polynomial coefficients A = 0.2923 Bar^-2 and B = 1.3019 Bar^-1.
Figure 4: (a) Compensation factor K as a function of relative pressure for different reference concentrations (200, 400, 500, 800, 1600, 3000, and 5000 ppm); (b) magnification of the relative pressure range −0.4 to −0.15 Bar, visually enhancing the concentration dependence of the compensation factor; red dots show the difference in compensation factor values for different concentrations at the same pressure level P_ex.
Figure 5: Space of compensation factor values in a 3D volume: concentration range vs. pressure range vs. compensation factor range. Red dots indicate K values for different reference concentrations at the same pressure level.
Figure 6: Schematic of the pressure calibration system. Protection against possible contamination from the ambient concentration is based on a pressure drop, developed by maintaining the pressure in the test volume higher than in the protection volume.
Figure 7: Pressure calibration data for reference concentrations of 200, 500, 1600, and 5000 ppm CO2.
Figure 8: Compensation factor calibration dependences as polynomial functions.
Figure 9: Overall relation between concentration and the compensation parameters. Red dots represent the compensation factor derived for the reference concentrations at a pressure of 0.72 Bar (corresponding to Table 4).
Figure 10: The dependence K_n vs. q_n_norm_p contains the set of all possible compensation factors within the calibrated range at the specific pressure (here P_ver = 0.72 Bar) at which q_meas was obtained. The K_n dependence equation gives the compensation value (here K_ver) matching the respective reported q_meas (here Q_ver).
Figure 11: As above for P_ver = 0.55 Bar: the dependence K_n vs. q_n_norm_p gives the compensation value K_ver matching the reported q_meas (Q_ver).
17 pages, 9894 KiB  
Article
An Online 3D Modeling Method for Pose Measurement under Uncertain Dynamic Occlusion Based on Binocular Camera
by Xuanchang Gao, Junzhi Yu and Min Tan
Sensors 2023, 23(5), 2871; https://doi.org/10.3390/s23052871 - 6 Mar 2023
Cited by 2 | Viewed by 2139
Abstract
3D modeling plays a significant role in many industrial applications that require geometric information for pose measurements, such as grasping and spraying. Due to random pose changes of the workpieces on the production line, demand for online 3D modeling has increased, and many researchers have focused on it. However, online 3D modeling remains unsolved when uncertain dynamic objects occlude and disturb the modeling process. In this study, we propose an online 3D modeling method under uncertain dynamic occlusion based on a binocular camera. First, focusing on uncertain dynamic objects, we propose a novel dynamic-object segmentation method based on motion-consistency constraints; it achieves segmentation by random sampling and pose-hypothesis clustering without any prior knowledge of the objects. Then, to better register the incomplete point cloud of each frame, we introduce an optimization method based on local constraints in overlapping view regions and a global loop closure. It establishes constraints in the covisible regions between adjacent frames to optimize the registration of each frame, and it establishes them between global closed-loop frames to jointly optimize the entire 3D model. Finally, a confirmatory experimental workspace was designed and built to verify and evaluate the method. Our method achieves online 3D modeling under uncertain dynamic occlusion and acquires an entire 3D model. The pose measurement results further confirm its effectiveness.
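The segmentation idea rests on a simple observation: matched 3D points on the same rigid object vote for the same motion. The sketch below shows one way to realize it: sample minimal sets of correspondences, estimate a rigid transform with the Kabsch algorithm, and group the resulting translation hypotheses. The sample count and clustering tolerance are illustrative choices, not the paper's parameters.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def cluster_motions(p_prev, p_curr, n_samples=200, tol=0.05, rng=None):
    """Sample 3-point minimal sets, collect translation hypotheses, and
    greedily group hypotheses that agree within `tol` (meters)."""
    rng = rng or np.random.default_rng(0)
    hypotheses = []
    for _ in range(n_samples):
        idx = rng.choice(len(p_prev), size=3, replace=False)
        _, t = rigid_transform(p_prev[idx], p_curr[idx])
        hypotheses.append(t)
    clusters = []
    for h in hypotheses:
        for c in clusters:
            if np.linalg.norm(h - np.mean(c, axis=0)) < tol:
                c.append(h)
                break
        else:
            clusters.append([h])
    # Largest clusters correspond to the dominant rigid motions in the scene.
    return [np.mean(c, axis=0) for c in sorted(clusters, key=len, reverse=True)]
```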
(This article belongs to the Special Issue Recent Advances in Robotics and Intelligent Mechatronics Systems)
Figures:
Figure 1: An illustration of our pipeline. (1) The vision system captures left and right images; stereo rectification aligns the epipolar lines of the two images by rotating them, and point depth is calculated from disparity. (2) When two consecutive frames arrive, data association is established by Grid-based Motion Statistics (GMS) and image-block correlation; object segmentation is then carried out by random sampling and pose-hypothesis clustering. (3) The segmentation results, which include initial poses and 2D-3D point pairs, are sent to the 3D modeling part, which includes the target-object tracking module and the dynamic-object tracking module; the target object poses (namely, camera poses) and the dynamic object motions are estimated and optimized. (4) Finally, local BA and global BA optimize and update these variables. The outputs of the pipeline are the poses of the target and dynamic objects, a 3D model, and the 3D point cloud of the dynamic objects.
Figure 2: Notation and coordinate system. The big black and green circles represent the 3D points of O_1 and O_l in the O_1 coordinate system; small black and green circles denote the corresponding observations in images of adjacent frames. The camera coordinate system is established at the camera optical center and the object coordinate system at the centroid of the object.
Figure 3: The structure of the workspace. The vision system mainly comprises two symmetrically arranged high-resolution cameras and an auxiliary texture device in the middle. The workpiece is placed in the white slot on the turntable; the turntable and rotating shaft can change the pose of the object. Tools can be clamped by the end effector and moved by the robotic arm.
Figure 4: Results of segmentation and clustering. In the grayscale images, the last and current frames are two adjacent frames, and the circles indicate matched points; circles of different colors indicate feature points located on different objects. In the three-dimensional pose parameter space, the translation vectors with real scale are shown; the shade of the color indicates density. The two cluster centers correspond to two objects with different motions.
Figure 5: The output of our pipeline. (1) The camera trajectory is shown as a blue line; the magenta pyramids denote the pose of each frame. For clarity, the images show the pictures captured by the camera at frames 5, 15, 21, and 32; in each image, the green and blue marks are the tracked feature points on the target object and the moving tool, respectively. (2) The 3D model: the blue point cloud in the center is the 3D model of W_a, surrounded by the moving tool, with different colors showing the tool observed in each frame. With the rotation of the camera and the movement of the tool, occluded parts are observed and modeled in other frames, as shown in the middle of the 3D model with different colors.
Figure 6: Pose errors between measured poses and their ground truths, consisting of rotation errors (about the x, y, z axes, in degrees) and translation errors (along the x, y, z axes, in millimeters). The pose measurement results of T_1 and T_3 were chosen to show the error variations: (a,b) are the rotation and translation errors for T_1, respectively; (c,d) are the rotation and translation errors for T_3.
Figure 7: 3D modeling results obtained by three methods. (a) Result of the incremental open-loop registration method. (b) The object structure cannot be recovered because of the drawbacks of ICP. (c) Our method performs well.
13 pages, 5150 KiB  
Article
Highly Sensitive and Selective Dopamine Determination in Real Samples Using Au Nanoparticles Decorated Marimo-like Graphene Microbead-Based Electrochemical Sensors
by Qichen Tian, Yuanbin She, Yangguang Zhu, Dan Dai, Mingjiao Shi, Wubo Chu, Tao Cai, Hsu-Sheng Tsai, He Li, Nan Jiang, Li Fu, Hongyan Xia, Cheng-Te Lin and Chen Ye
Sensors 2023, 23(5), 2870; https://doi.org/10.3390/s23052870 - 6 Mar 2023
Cited by 8 | Viewed by 2686
Abstract
A sensitive and selective electrochemical dopamine (DA) sensor has been developed using gold nanoparticles decorated marimo-like graphene (Au NP/MG) as a modifier of the glassy carbon electrode (GCE). Marimo-like graphene (MG) was prepared by partial exfoliation on the mesocarbon microbeads (MCMB) through molten [...] Read more.
A sensitive and selective electrochemical dopamine (DA) sensor has been developed using gold nanoparticles decorated marimo-like graphene (Au NP/MG) as a modifier of the glassy carbon electrode (GCE). Marimo-like graphene (MG) was prepared by partial exfoliation on the mesocarbon microbeads (MCMB) through molten KOH intercalation. Characterization via transmission electron microscopy confirmed that the surface of MG is composed of multi-layer graphene nanowalls. The graphene nanowalls structure of MG provided abundant surface area and electroactive sites. Electrochemical properties of Au NP/MG/GCE electrode were investigated by cyclic voltammetry and differential pulse voltammetry techniques. The electrode exhibited high electrochemical activity towards DA oxidation. The oxidation peak current increased linearly in proportion to the DA concentration in a range from 0.02 to 10 μM with a detection limit of 0.016 μM. The detection selectivity was carried out with the presence of 20 μM uric acid in goat serum real samples. This study demonstrated a promising method to fabricate DA sensor-based on MCMB derivatives as electrochemical modifiers. Full article
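The reported detection limit follows from the usual 3σ criterion applied to the linear calibration. A short sketch with made-up current readings standing in for the DPV peak currents:

```python
import numpy as np

# Hypothetical DPV peak currents (uA) at known DA concentrations (uM).
conc = np.array([0.02, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
peak = np.array([0.011, 0.052, 0.26, 0.53, 1.04, 2.61, 5.18])

slope, intercept = np.polyfit(conc, peak, 1)          # linear calibration
blank_sd = np.std([0.0009, 0.0012, 0.0010, 0.0011])   # blank replicates (uA)
lod = 3.0 * blank_sd / slope                          # 3-sigma detection limit

print(f"sensitivity = {slope:.3f} uA/uM, LOD = {lod:.4f} uM")
```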
(This article belongs to the Special Issue State-of-the-Art Electrochemical Biosensors)
Figures:
Figure 1: (a) Photo of a real marimo. (b) Schematic illustration of the fabrication of marimo-like graphene (MG) and Au NP-decorated marimo-like graphene (Au NP/MG). SEM images of (c) MCMB and (d) MG. (e) Raman spectra and (f) C 1s XPS spectra of MCMB and MG.
Figure 2: (a) Scheme of stripping and collecting the graphene nanowall shell from MG. (b) Raman spectrum and (c) C 1s XPS spectrum of graphene nanowalls. (d) TEM image and (e) HRTEM image of MG (inset: SAED pattern; scale bar: 5 nm^-1). (f) TEM image of Au NP/MG (inset: SAED pattern; scale bar: 5 nm^-1).
Figure 3: (a) DPV of Au NP/MG/GCE electrodes with and without 10 μM DA in PBS. (b) DPV curves of various modified electrodes with 10 μM DA in PBS. (c) DPV of Au NP/MG/GCE electrodes with various concentrations of MG relative to the same amount of Au NP, with 10 μM DA in PBS. (d) Au NP/MG/GCE electrodes in 10 mM [Fe(CN)6]3−/4− and 0.1 M KCl electrolyte solution at scan rates from 20 to 250 mV s^-1. (e) Linear plots of I_ox/I_red versus scan rate. (f) DPV of 10 μM DA on Au NP/MG/GCE electrodes as a function of pH.
Figure 4: (a,b) DPV curves of MG/GCE and Au NP/MG/GCE electrodes with various concentrations of DA, respectively. (c) Corresponding peak current versus DA concentration. (d–f) Repeatability and anti-interference performance of Au NP/MG/GCE electrodes.
16 pages, 6480 KiB  
Article
Cognitive Video Surveillance Management in Hierarchical Edge Computing System with Long Short-Term Memory Model
by Dilshod Bazarov Ravshan Ugli, Jingyeom Kim, Alaelddin F. Y. Mohammed and Joohyung Lee
Sensors 2023, 23(5), 2869; https://doi.org/10.3390/s23052869 - 6 Mar 2023
Cited by 6 | Viewed by 2715
Abstract
Nowadays, deep learning (DL)-based video surveillance services are widely used in smart cities because of their ability to accurately identify and track objects, such as vehicles and pedestrians, in real time, allowing more efficient traffic management and improved public safety. However, DL-based video surveillance services that require object movement and motion tracking (e.g., for detecting abnormal object behaviors) can consume a substantial amount of computing and memory capacity: (i) GPU computing resources for model inference and (ii) GPU memory for model loading. This paper presents a novel cognitive video surveillance management framework with a long short-term memory (LSTM) model, denoted CogVSM. We consider DL-based video surveillance services in a hierarchical edge computing system. CogVSM forecasts object appearance patterns and smooths the forecast results for an adaptive model release. The aim is to reduce standby GPU memory through model release while avoiding unnecessary model reloads after a sudden object appearance. To achieve these objectives, CogVSM hinges on an LSTM-based deep learning architecture explicitly designed to predict future object appearance patterns from previous time-series patterns. Guided by the LSTM-based prediction, the framework dynamically controls the threshold time value using an exponential weighted moving average (EWMA) technique. Comparative evaluations on both simulated and real-world measurement data on commercial edge devices show that the LSTM-based model in CogVSM achieves high predictive accuracy, i.e., a root-mean-square error of 0.795. In addition, the framework uses up to 32.1% less GPU memory than the baseline and 8.9% less than previous work.
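The core control loop can be paraphrased in a few lines: smooth the LSTM's forecast of object appearances with an EWMA and release the detection model from GPU memory only while the smoothed forecast stays low. The parameter names (`alpha`, `release_threshold`) are assumptions for the sketch, not the framework's actual interface.

```python
class ModelReleaseController:
    """Release a GPU-resident model when the EWMA-smoothed forecast of
    object appearances says none are expected soon; reload on demand."""

    def __init__(self, alpha=0.3, release_threshold=0.5):
        self.alpha = alpha
        self.release_threshold = release_threshold
        self.smoothed = 0.0
        self.loaded = True

    def step(self, forecast_appearances):
        # EWMA update of the LSTM forecast for the next interval.
        self.smoothed = (self.alpha * forecast_appearances
                         + (1.0 - self.alpha) * self.smoothed)
        if self.loaded and self.smoothed < self.release_threshold:
            self.loaded = False          # free standby GPU memory
            return "release"
        if not self.loaded and self.smoothed >= self.release_threshold:
            self.loaded = True           # reload before objects reappear
            return "reload"
        return "keep"

ctrl = ModelReleaseController()
for f in [3.0, 1.0, 0.2, 0.0, 0.0, 2.5]:  # per-interval LSTM forecasts
    print(ctrl.step(f), round(ctrl.smoothed, 2))
```

The smoothing is what prevents a single noisy forecast from triggering a costly release-then-reload cycle.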
(This article belongs to the Special Issue Applications of Video Processing and Computer Vision Sensor II)
Figures:
Figure 1: RNN architecture.
Figure 2: Long short-term memory architecture.
Figure 3: Overall architecture of the proposed framework.
Figure 4: Prediction error of mainstream deep learning models on the training and test sets.
Figure 5: Implementation results on the 1st and 2nd edge nodes.
Figure 6: Object occurrence in the sample video and its pretrained LSTM-based prediction.
Figure 7: Comparison of the proposed CogVSM framework and the AdaMM framework in terms of GPU memory utilization on the 2nd edge node.
Figure 8: GPU memory utilization on an urban-area video of the proposed CogVSM framework compared to the AdaMM framework (θ_m = 10 s for AdaMM).
Figure 9: GPU memory utilization on an urban-area video of the proposed CogVSM framework compared to the AdaMM framework (θ_m = 30 s for AdaMM).
Figure 10: GPU memory utilization on a rural-area video of the proposed CogVSM framework compared to the AdaMM framework (θ_m = 30 s for AdaMM).
Figure 11: GPU memory utilization on a rural-area video of the proposed CogVSM framework compared to the AdaMM framework (θ_m = 30 s for AdaMM).
21 pages, 8083 KiB  
Article
PointPainting: 3D Object Detection Aided by Semantic Image Information
by Zhentong Gao, Qiantong Wang, Zongxu Pan, Zhenyu Zhai and Hui Long
Sensors 2023, 23(5), 2868; https://doi.org/10.3390/s23052868 - 6 Mar 2023
Cited by 1 | Viewed by 2837
Abstract
Multi-modal 3D object detection based on data from cameras and LiDAR has become a subject of research interest. PointPainting proposes a method for improving point-cloud-based 3D object detectors using semantic information from RGB images. However, this method still suffers from two complications: first, faulty parts in the image semantic segmentation results lead to false detections; second, the commonly used anchor assigner considers only the intersection over union (IoU) between anchors and ground-truth boxes, so some anchors containing few target LiDAR points are assigned as positive anchors. In this paper, three improvements are proposed to address these complications. Specifically, a novel weighting strategy is proposed for each anchor in the classification loss, which enables the detector to pay more attention to anchors containing inaccurate semantic information. Then, SegIoU, which incorporates semantic information, is proposed to replace IoU for anchor assignment. SegIoU measures the similarity of the semantic information between each anchor and the ground-truth box, avoiding the defective anchor assignments mentioned above. In addition, a dual-attention module is introduced to enhance the voxelized point cloud. Experiments demonstrate that the proposed modules yield significant improvements across various methods on the KITTI dataset: the single-stage PointPillars, the two-stage SECOND-IoU, the anchor-based SECOND, and the anchor-free CenterPoint.
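To make the SegIoU idea concrete, the sketch below blends geometric IoU with a semantic-agreement term, here the fraction of painted points inside the anchor whose class matches the ground truth. The exact formulation in the paper may differ; treat this as an illustration of the principle rather than the paper's definition.

```python
import numpy as np

def iou_2d(a, b):
    """Axis-aligned IoU for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def seg_iou(anchor, gt, points_xy, point_classes, gt_class):
    """Blend geometric IoU with semantic agreement of painted points
    falling inside the anchor (illustrative formulation)."""
    inside = ((points_xy[:, 0] >= anchor[0]) & (points_xy[:, 0] <= anchor[2]) &
              (points_xy[:, 1] >= anchor[1]) & (points_xy[:, 1] <= anchor[3]))
    if inside.sum() == 0:
        semantic = 0.0  # anchors with no target points make poor positives
    else:
        semantic = float(np.mean(point_classes[inside] == gt_class))
    return iou_2d(anchor, gt) * semantic
```

An assigner using this score would demote the geometrically plausible but nearly point-free anchors that the plain max-IoU assigner wrongly marks positive.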
(This article belongs to the Section Sensing and Imaging)
Figures:
Figure 1: Semantic segmentation and 3D object-detection results of painted PointPillars [8] on the KITTI [7] dataset. The purple and yellow parts in the semantic segmentation results represent the motorcycle and pedestrian categories, respectively. Inaccurate parts of the semantic segmentation results lead to false detections.
Figure 2: (a) An annotation sample from the KITTI [7] dataset; (b) a typical case of anchor assignment. The red box in (b) is the ground-truth box; the green and blue boxes are anchors. The max-IoU assigner would assign a positive anchor tag to the blue box, yet the blue box contains few target LiDAR points and is not a high-quality positive anchor.
Figure 3: Chronological overview of multi-modal 3D object-detection methods.
Figure 4: Architecture of PointPainting [8], consisting of three main stages: (1) image-based semantic segmentation, (2) point cloud painting, and (3) a point-cloud-based detector.
Figure 5: Architecture of PointPainting++, consisting of six steps: (1) image-based semantic segmentation, (2) point cloud painting, (3) generation of anchor weights, (4) feature extraction, (5) SegIoU-based anchor assignment, and (6) calculation of the classification loss.
Figure 6: Anchor weight-assignment strategy. The weight of each anchor is calculated from the proportion of inaccurate points it contains.
Figure 7: The structure of SEBlock [9]. It first uses the squeeze operation to generate global features, then uses the excitation operation to capture channel dependencies and generate channel-wise weights.
Figure 8: The architecture of the dual-attention module. The module has a symmetrical structure, and each part can be regarded as an SEBlock [9]; fully connected layers compress dimensions to extract global features.
Figure 9: The 2D convolution process for counting the points in each anchor. The size of the convolution kernels is determined by the size of the anchors. A tensor recording the number of points in each voxel is generated first, and different convolution kernels then perform 2D convolution on this tensor to obtain the number of points in each anchor.
Figure 10: Qualitative results of PointPainting++ applied to PointPillars [34], CenterPoint [35], SECOND [39], and SECOND-IoU [39] on the KITTI [7] valid set. The upper part of each picture is the 3D detection projected onto the image; the lower part is the 3D detection in the LiDAR point cloud. The blue, green, and red boxes in the 2D detection results represent the car, cyclist, and pedestrian categories, respectively; the red boxes in the 3D results are the ground-truth boxes, and the remaining boxes are detections. The results indicate that PointPainting++ improves the performance of detectors of various structures in multiple scenarios. (a) Painted PointPillars [34], (b) Painted CenterPoint [35], (c) Painted SECOND [39], (d) Painted SECOND-IoU [39], (e) Painted PointPillars++, (f) Painted CenterPoint++, (g) Painted SECOND++, (h) Painted SECOND-IoU++.
Figure 11: Qualitative results of the three improvements in Painted PointPillars++. The results of Painted PointPillars [8] with the different improvements are shown from left to right. The blue, green, and red boxes in the 2D detection results represent the car, cyclist, and pedestrian categories, respectively; the red boxes in the 3D results are the ground-truth boxes, and the remaining boxes are detections. False detections were significantly reduced as the improvements were introduced.
Figure 12: 3D and BEV detection results of PointPainting++ as a function of the relative weight coefficient β. Detection performance first peaks as β increases and then declines: weights that are too large make the detector focus too much on difficult samples, while weights that are too small fail to emphasize them. (a) 3D detection results vs. β; (b) BEV detection results vs. β.
Figure 13: 3D and BEV detection results of PointPainting++ as a function of the semantic loss coefficient γ. Detector performance increases and then decreases as the semantic loss term grows: too large a term leaves too few positive anchors, while too small a term fails to filter inferior positive anchors using semantic information. (a) 3D detection results vs. γ; (b) BEV detection results vs. γ.
Figure 14: 3D and BEV detection results of PointPainting++ as a function of the number of positive anchors. Detection performance peaks at an appropriate number of positive anchors: retaining too many max-IoU-based assignments may fail to remove inferior positive anchors, while too few positive anchors make it difficult for the detector to learn target features. (a) 3D detection results vs. the number of positive anchors; (b) BEV detection results vs. the number of positive anchors.
21 pages, 3245 KiB  
Article
Real-Time Evaluation of Perception Uncertainty and Validity Verification of Autonomous Driving
by Mingliang Yang, Kun Jiang, Junze Wen, Liang Peng, Yanding Yang, Hong Wang, Mengmeng Yang, Xinyu Jiao and Diange Yang
Sensors 2023, 23(5), 2867; https://doi.org/10.3390/s23052867 - 6 Mar 2023
Cited by 3 | Viewed by 3175
Abstract
Deep neural network algorithms have achieved impressive performance in object detection. Real-time evaluation of perception uncertainty from deep neural network algorithms is indispensable for safe driving in autonomous vehicles, yet more research is needed on how to assess the effectiveness and uncertainty of perception results in real time. This paper proposes a novel real-time evaluation method combining multi-source perception fusion and a deep ensemble. The effectiveness of single-frame perception results is evaluated in real time. Then, the spatial uncertainty of the detected objects and its influencing factors are analyzed. Finally, the accuracy of the spatial uncertainty is validated against the ground truth in the KITTI dataset. The results show that the evaluation of perception effectiveness reaches 92% accuracy and that both the uncertainty and the error correlate positively with the ground truth. The spatial uncertainty is related to the distance and occlusion degree of the detected objects.
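The spatial-uncertainty estimate of a deep ensemble reduces to simple statistics over the members' matched detections: the ensemble mean gives the fused estimate and the spread gives the uncertainty. A minimal sketch, assuming each ensemble member returns a matched box center:

```python
import numpy as np

def ensemble_spatial_uncertainty(centers):
    """centers: (n_members, 3) matched box centers from the ensemble.
    Returns the fused position and the per-axis standard deviation."""
    mean = centers.mean(axis=0)
    std = centers.std(axis=0, ddof=1)  # spatial uncertainty per axis
    return mean, std

# Five ensemble members detecting the same car (x, y, z in meters).
detections = np.array([
    [12.31, 4.02, -0.71],
    [12.28, 4.10, -0.69],
    [12.40, 3.97, -0.72],
    [12.25, 4.05, -0.70],
    [12.36, 4.08, -0.68],
])
mu, sigma = ensemble_spatial_uncertainty(detections)
print("fused center:", mu.round(3), "uncertainty:", sigma.round(3))
```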
(This article belongs to the Section Vehicular Sensing)
Figures:
Figure 1: Comparison between the ground truth and the perception results of DNN algorithms on the KITTI dataset. There is uncertainty in perception results, such as missed detections, false detections, position errors, and orientation errors. Green boxes represent the ground truth of the data labels, and blue boxes denote the perception results of the lidar- and camera-based DNN algorithms. The red numbers indicate the order of the dataset frames.
Figure 2: The logic flow between the different algorithms.
Figure 3: Schematic sequence of the perception effectiveness judgment and uncertainty evaluation.
Figure 4: Scheme flow of the object-matching algorithm.
Figure 5: Object-matching algorithm: diagram of triangle matching.
Figure 6: Scheme flow of the perception effectiveness judgment.
Figure 7: Scheme flow of PointPillars [27].
Figure 8: Scheme flow of SMOKE [28].
Figure 9: Judgment results of perception effectiveness: the judgment is correct. The judgment and verification are valid after matching and verifying against the ground truth.
Figure 10: Judgment results of perception effectiveness: the judgment is correct. The judgment and verification are invalid after matching and verifying against the ground truth.
Figure 11: Judgment results of perception effectiveness: the judgment is correct. The judgment and verification are invalid after matching and verifying against the ground truth.
Figure 12: Spatial uncertainty based on the deep ensemble.
Figure 13: Correlation between uncertainty and error for PointPillars (3769 frames), in order: the horizontal direction, longitudinal direction, vertical direction, and orientation of the car.
Figure 14: Correlation between uncertainty and error for SMOKE (3769 frames), in order: the horizontal direction, longitudinal direction, vertical direction, and orientation of the car.
Figure 15: Relationship between object distance and perception uncertainty.
Figure 16: Relationship between object occlusion and perception uncertainty.
16 pages, 3012 KiB  
Article
A Novel Catheter Distal Contact Force Sensing for Cardiac Ablation Based on Fiber Bragg Grating with Temperature Compensation
by Yuyang Lou, Tianyu Yang, Dong Luo, Jianwei Wu and Yuming Dong
Sensors 2023, 23(5), 2866; https://doi.org/10.3390/s23052866 - 6 Mar 2023
Cited by 3 | Viewed by 3243
Abstract
Objective: To accurately measure distal contact force, a novel temperature-compensated sensor is developed and integrated into an atrial fibrillation (AF) ablation catheter. Methods: A dual-elastomer, dual-FBG structure is used to differentiate the strain on the two FBGs for temperature compensation, and the design is optimized and validated by finite element simulation. Results: The designed sensor has a sensitivity of 90.5 pm/N, a resolution of 0.01 N, and root-mean-square errors (RMSE) of 0.02 N and 0.04 N for dynamic force loading and temperature compensation, respectively, and can stably measure distal contact forces under temperature disturbances. Conclusion: Owing to its simple structure, easy assembly, low cost, and good robustness, the proposed sensor is suitable for industrial mass production.
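The temperature compensation amounts to a 2×2 linear system: each FBG's wavelength shift is a linear mix of force and temperature, and two gratings with different sensitivities let both be recovered. A sketch using the paper's 90.5 pm/N force sensitivity for one grating and otherwise illustrative coefficients:

```python
import numpy as np

# Wavelength-shift model: dlambda_i = k_F[i] * F + k_T[i] * dT
K = np.array([
    [90.5, 10.0],   # FBG1: pm/N (from the paper) and pm/K (illustrative)
    [ 5.0, 10.2],   # FBG2: mostly temperature-sensitive (illustrative)
])

def solve_force_temperature(dlam1_pm, dlam2_pm):
    """Invert the 2x2 sensitivity matrix to recover force and temperature."""
    force, dtemp = np.linalg.solve(K, np.array([dlam1_pm, dlam2_pm]))
    return force, dtemp

# Example: 0.3 N contact force plus a 1.5 K temperature disturbance.
dl1 = 90.5 * 0.3 + 10.0 * 1.5
dl2 = 5.0 * 0.3 + 10.2 * 1.5
print(solve_force_temperature(dl1, dl2))  # ~ (0.3, 1.5)
```

The decoupling works as long as the two rows of the sensitivity matrix are not proportional, which is exactly what the dual-elastomer design ensures.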
(This article belongs to the Special Issue Fiber Optic Sensing and Applications)
Figures:
Figure 1: Schematic diagram of the dual-elastomer fiber Bragg grating force sensor for cardiac catheterization. (a) Exploded view of the overall structure; (b) prototype; (c) multi-layer continuous beam slots; (d) detailed dimensions of the designed flexure and the FBG arrangement.
Figure 2: Strain distribution on (a) the overall structure and (b) the fiber under axial force F.
Figure 3: (a) Different structures; (b) strain of FBG2 under an axial force of 0.01 N for the different structures.
Figure 4: Relation between the applied axial force and the strain distribution along the suspended fiber.
Figure 5: Relation between the applied temperature and the strain distribution along the suspended fiber.
Figure 6: Relation between the applied force and temperature and the strain of the two FBGs configured inside the sensor.
Figure 7: (a) First-order, (b) second-order, and (c) third-order modal analysis of the designed sensor. (d) Harmonic response curves of the designed sensor in the X, Y, and Z directions under Fz conditions.
Figure 8: (a) Block diagram and usage flow of each piece of equipment in the calibration experiment; (b) experimental configuration for force calibration.
Figure 9: Response curves of the center wavelength shift under different axial forces for the two FBGs.
Figure 10: Response curves of the center wavelength shift at different temperatures for the two FBGs.
Figure 11: (a) Comparison between the force calculated by the designed sensor and the corresponding values from the ATI force sensor during dynamic loading. (b) Correlation diagram of the designed sensor and ATI force sensor data.
Figure 12: Temperature compensation experiment data for the FBG sensor.
12 pages, 2336 KiB  
Article
Bioluminescent-Triple-Enzyme-Based Biosensor with Lactate Dehydrogenase for Non-Invasive Training Load Monitoring
by Galina V. Zhukova, Oleg S. Sutormin, Irina E. Sukovataya, Natalya V. Maznyak and Valentina A. Kratasyuk
Sensors 2023, 23(5), 2865; https://doi.org/10.3390/s23052865 - 6 Mar 2023
Cited by 3 | Viewed by 2287
Abstract
Saliva is one of the most convenient biological fluids for developing a simple, rapid, and non-invasive biosensor for training-load diagnostics, and enzymatic bioassays are considered more biologically relevant. The present paper investigates the effect of saliva samples with altered lactate content on the activity of a multi-enzyme system, namely lactate dehydrogenase + NAD(P)H:FMN-oxidoreductase + luciferase (LDH + Red + Luc). The optimal enzyme and substrate composition of the proposed multi-enzyme system was chosen. In tests of the lactate dependence, the enzymatic bioassay showed good linearity to lactate in the range from 0.05 mM to 0.25 mM. The activity of the LDH + Red + Luc enzyme system was tested on 20 saliva samples taken from students, whose lactate levels were determined in parallel by the Barker and Summerson colorimetric method; the results showed good correlation. The proposed LDH + Red + Luc enzyme system could be a useful, competitive, and non-invasive tool for accurate and rapid monitoring of lactate in saliva. This enzyme-based bioassay is easy to use and rapid, and it has the potential to deliver point-of-care diagnostics cost-effectively.
(This article belongs to the Special Issue Biosensors for Surveillance and Diagnosis)
Figures:
Figure 1: Light intensity of the LDH + Red + Luc enzyme system upon variation of the LDH concentration.
Figure 2: Light intensity of the LDH + Red + Luc enzyme system upon variation of the lactate concentration.
Figure 3: Light intensity of the LDH + Red + Luc enzyme system upon variation of the NAD+ concentration.
Figure 4: Light intensity of the LDH + Red + Luc enzyme system upon variation of the FMN concentration.
Figure 5: Light intensity of the LDH + Red + Luc enzyme system upon variation of the C14 concentration.
Figure 6: Activity of the LDH + Red + Luc enzyme system at different pH values.
Figure 7: Light intensity of the multi-enzyme system in the presence of the salivary lactate concentrations reported in published articles.
Figure 8: Effect of the saliva samples on the activity of the multi-enzyme system. Numbers above indicate the mean calculated salivary lactate concentrations.
19 pages, 7524 KiB  
Article
A Study on the Effectiveness of Deep Learning-Based Anomaly Detection Methods for Breast Ultrasonography
by Changhee Yun, Bomi Eom, Sungjun Park, Chanho Kim, Dohwan Kim, Farah Jabeen, Won Hwa Kim, Hye Jung Kim and Jaeil Kim
Sensors 2023, 23(5), 2864; https://doi.org/10.3390/s23052864 - 6 Mar 2023
Cited by 3 | Viewed by 2452
Abstract
In the medical field, it is difficult to anticipate good performance from deep learning due to the lack of large-scale training data and class imbalance. In particular, ultrasound, a key breast cancer diagnosis method, is difficult to interpret accurately, as the quality and interpretation of images can vary with the operator's experience and proficiency. Computer-aided diagnosis technology can therefore facilitate diagnosis by visualizing abnormal information, such as tumors and masses, in ultrasound images. In this study, we implemented deep learning-based anomaly detection methods for breast ultrasound images and validated their effectiveness in detecting abnormal regions. Specifically, we compared the sliced-Wasserstein autoencoder with two representative unsupervised learning models, the autoencoder and the variational autoencoder. Anomalous-region detection performance was estimated using the normal-region labels. Our experimental results showed that the sliced-Wasserstein autoencoder outperformed the others in anomaly detection. However, anomaly detection using the reconstruction-based approach may not be effective because of the numerous false positives it produces; reducing these false positives remains an important challenge for future studies.
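The detection step shared by all three models is the same: reconstruct the image through the trained network and treat large pixel differences as anomalous. A minimal sketch, with a placeholder `reconstruct` standing in for any of the trained AE/VAE/SWAE models:

```python
import numpy as np

def reconstruct(image):
    """Placeholder for a trained autoencoder's encode-decode pass."""
    noise = np.random.default_rng(0).normal(0, 0.02, image.shape)
    return np.clip(image + noise, 0, 1)

def anomaly_map(image, threshold):
    """Pixelwise reconstruction error, thresholded so only positive
    exceedances survive (a ReLU-style cut, as in the paper's Figure 12)."""
    error = np.abs(image - reconstruct(image))
    return np.maximum(error - threshold, 0.0)

img = np.random.default_rng(1).random((128, 128))  # stand-in ultrasound frame
mask = anomaly_map(img, threshold=0.05) > 0
print(f"{mask.mean():.1%} of pixels flagged anomalous")
```

The choice of threshold directly trades sensitivity against the false positives the abstract warns about.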
Figures:
Figure 1: Autoencoder (AE) architecture.
Figure 2: VAE architecture.
Figure 3: SWAE architecture.
Figure 4: Deep learning-based anomalous-region detection process.
Figure 5: Anomaly detection by pixel difference between an original ultrasound image and its reconstruction.
Figure 6: AE model architecture.
Figure 7: VAE model architecture.
Figure 8: SWAE model architecture.
Figure 9: Reconstructed images by model.
Figure 10: Reconstructed result images by model.
Figure 11: Changes in the indicators according to the threshold for each model.
Figure 12: Anomalous-region detection results with respect to the threshold, with the ReLU function applied.
Figure 13: Changes in the indicators according to tumor size for each model.
17 pages, 14109 KiB  
Article
A Multi-Channel Ensemble Method for Error-Related Potential Classification Using 2D EEG Images
by Tangfei Tao, Yuxiang Gao, Yaguang Jia, Ruiquan Chen, Ping Li and Guanghua Xu
Sensors 2023, 23(5), 2863; https://doi.org/10.3390/s23052863 - 6 Mar 2023
Cited by 3 | Viewed by 2068
Abstract
An error-related potential (ErrP) occurs when people's expectations are inconsistent with the actual outcome. Accurately detecting ErrP when a human interacts with a brain–computer interface (BCI) is key to improving these BCI systems. In this paper, we propose a multi-channel method for error-related potential detection using a 2D convolutional neural network, in which multiple channel classifiers are integrated to make the final decision. Specifically, every 1D EEG signal from the anterior cingulate cortex (ACC) is transformed into a 2D waveform image, which is then classified by a proposed attention-based convolutional neural network (AT-CNN). In addition, we propose a multi-channel ensemble approach that effectively integrates the decisions of the channel classifiers. The proposed ensemble approach can learn the nonlinear relationship between each channel and the label, and it obtains 5.27% higher accuracy than the majority-voting ensemble approach. We conducted a new experiment and validated our method on the Monitoring Error-Related Potential dataset and on our own dataset. With the proposed method, the accuracy, sensitivity, and specificity were 86.46%, 72.46%, and 90.17%, respectively. The results show that the AT-CNNs-2D proposed in this paper can effectively improve the accuracy of ErrP classification and provide new ideas for the study of ErrP brain–computer interface classification.
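The 1D-to-2D transformation can be as simple as rasterizing the EEG trace onto a pixel grid, which is then a natural input for a 2D CNN. A minimal sketch of such a rasterizer (the image size is an illustrative choice, not the paper's):

```python
import numpy as np

def waveform_to_image(signal, height=64, width=128):
    """Rasterize a 1D signal into a binary 2D image: one column per
    resampled time step, one lit pixel per amplitude bin."""
    # Resample to `width` samples and normalize amplitude to [0, 1].
    xs = np.linspace(0, len(signal) - 1, width)
    resampled = np.interp(xs, np.arange(len(signal)), signal)
    lo, hi = resampled.min(), resampled.max()
    norm = (resampled - lo) / (hi - lo + 1e-12)
    rows = ((1.0 - norm) * (height - 1)).astype(int)  # row 0 at the top
    image = np.zeros((height, width), dtype=np.float32)
    image[rows, np.arange(width)] = 1.0
    return image

epoch = np.sin(np.linspace(0, 4 * np.pi, 512))  # stand-in single-channel EEG
print(waveform_to_image(epoch).shape)           # (64, 128), ready for a 2D CNN
```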
(This article belongs to the Section Biomedical Sensors)
Figures:
Figure 1: Grand-average ErrP at channel FCz for correct, erroneous, and difference (error minus correct) conditions; t = 0 corresponds to stimulus presentation onset.
Figure 2: Overview of the proposed multi-channel ensemble method for ErrP detection using 2D EEG images.
Figure 3: ErrP elicited at channel FCz for a single trial. (Axes are shown for illustration; in the actual images they are hidden.)
Figure 4: Schematic diagram of AT-CNN identifying a 2D EEG image. Inside the dotted line is the proposed attention-based convolutional neural network. The convolutional block attention module (CBAM), which contains a channel attention module (CAM) and a spatial attention module (SAM), is added before the fully connected layer.
Figure 5: Training process of the per-channel AT-CNNs and the ensemble model.
Figure 6: Experimental paradigm. Two squares are displayed on the horizontal line in the center of the screen: one is the cursor (green square), the other is the target (red square), and the dotted square is the previous position of the cursor. The interval between cursor movements is 2 s, and the cursor moves away from the target with a probability of 20%.
Figure 7: Experimental paradigm of dataset 2. A green square and a white box are displayed on the horizontal line in the center of the screen; with equal probability, the green square appears either on the left or the right side of the white box. After a 4 s interval the green square moves, with a 70% probability of moving to the white box and a 30% probability of moving away from it.
Figure 8: Visualization of the five channel groups chosen in the anterior cingulate cortex region. The five channel groups (A–E) illustrate the positions of the 64 EEG electrodes on the scalp (small circles, each reporting the standard designation); the channels forming each group are highlighted in red.
21 pages, 5853 KiB  
Article
Abnormal Brain Circuits Characterize Borderline Personality and Mediate the Relationship between Childhood Traumas and Symptoms: A mCCA+jICA and Random Forest Approach
by Alessandro Grecucci, Harold Dadomo, Gerardo Salvato, Gaia Lapomarda, Sara Sorella and Irene Messina
Sensors 2023, 23(5), 2862; https://doi.org/10.3390/s23052862 - 6 Mar 2023
Cited by 8 | Viewed by 3698
Abstract
Borderline personality disorder (BPD) is a severe personality disorder whose neural bases remain unclear; previous studies have reported inconsistent findings concerning alterations in cortical and subcortical areas. In the present study, we applied, for the first time, an unsupervised machine learning approach known as multimodal canonical correlation analysis plus joint independent component analysis (mCCA+jICA) in combination with a supervised machine learning approach known as random forest to find covarying gray matter and white matter (GM-WM) circuits that separate BPD from controls and are predictive of this diagnosis. The first analysis decomposed the brain into independent circuits of covarying grey and white matter concentrations. The second method built a predictive model able to correctly classify new unobserved BPD cases based on one or more circuits derived from the first analysis. To this aim, we analyzed the structural images of patients with BPD and matched healthy controls (HCs). The results showed that two GM-WM covarying circuits, including the basal ganglia, amygdala, and portions of the temporal lobes and orbitofrontal cortex, correctly classified BPD against HC. Notably, these circuits are affected by specific childhood traumatic experiences (emotional and physical neglect, and physical abuse) and predict symptom severity in the interpersonal and impulsivity domains. These results support the view that BPD is characterized by anomalies in both GM and WM circuits related to early traumatic experiences and specific symptoms.
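The second stage is a standard supervised step: the per-subject loading coefficients of the GM-WM components become the feature vector for a random forest. A minimal sketch with synthetic loadings (scikit-learn assumed available; the group sizes and the discriminative component are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_components = 80, 10

# Synthetic loading coefficients; component 2 carries a group difference.
X = rng.normal(size=(n_subjects, n_components))
y = np.array([0] * 40 + [1] * 40)  # 0 = HC, 1 = BPD
X[y == 1, 2] += 1.0

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

clf.fit(X, y)
print("most predictive component:", np.argmax(clf.feature_importances_))
```

The feature importances are what single out the circuits (here IC2 and IC6 in the paper) that drive the classification.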
(This article belongs to the Special Issue Brain Activity Monitoring and Measurement)
Show Figures

Figure 1

Figure 1
<p><b>Schematic workflow of the analyses</b>. After fusing the two modalities (GM and WM), the brain was decomposed into independent networks of covarying GM-WM (mCCA+jICA). Then Bonferroni corrected <span class="html-italic">t</span>-test was used to assess the networks that differed between groups.</p>
Full article ">Figure 2
<p><b>Predictive model generation</b>. The loading coefficients of the GM-WM networks derived from mCCA+jICA were entered in a random forest classification model to predict BPD diagnosis. Several trees were generated to classify the labels BPD and HC. Each tree voted for that class. The forest then chose the classification having most of the votes.</p>
Full article ">Figure 3
<p><b>A covarying GM-WM network that differs between BPD and HC</b>. Top: violin plots of the loading coefficients for the GM and WM of IC2. Central: network plot showing in green the strength of correlations between components. Bottom: brain plot of the positive (increased GM-WM concentration) and negative (decreased GM-WM concentration) regions of IC2.</p>
Full article ">Figure 4
<p><b>Prediction of new cases</b>. Random forest classification performance metrics.</p>
Full article ">Figure 5
<p><b>Brain plots from random forest analysis</b>. Top: violin plots of the loading coefficients for the GM and WM of IC6. Bottom: brain plot of the positive (increased GM-WM concentration) and negative (decreased GM-WM concentration) regions of IC6.</p>
Full article ">Figure 6
<p><b>Mediation analysis results</b>. Emotional neglect and physical abuse predicted the IC2 (WM) network, which in turn predicted symptoms in the impulsivity domain. Physical neglect and abuse predicted IC6 (GM), which in turn predicted interpersonal symptoms. The colors indicate the same indirect effect linking a given childhood trauma (IV) to a specific symptom (DV) mediated by a specific IC (MV).</p>
Full article ">
19 pages, 7913 KiB  
Article
Low-Cost Dual-Frequency GNSS Receivers and Antennas for Surveying in Urban Areas
by Veton Hamza, Bojan Stopar, Oskar Sterle and Polona Pavlovčič-Prešeren
Sensors 2023, 23(5), 2861; https://doi.org/10.3390/s23052861 - 6 Mar 2023
Cited by 13 | Viewed by 4437
Abstract
Low-cost dual-frequency global navigation satellite system (GNSS) receivers have recently been tested in various positioning applications. Considering that these sensors can now provide high positioning accuracy at a lower cost, they can be considered an alternative to high-quality geodetic GNSS devices. The main [...] Read more.
Low-cost dual-frequency global navigation satellite system (GNSS) receivers have recently been tested in various positioning applications. Considering that these sensors can now provide high positioning accuracy at a lower cost, they can be considered an alternative to high-quality geodetic GNSS devices. The main objectives of this work were to analyze the differences between geodetic and low-cost calibrated antennas on the quality of observations from low-cost GNSS receivers and to evaluate the performance of low-cost GNSS devices in urban areas. In this study, a simpleRTK2B V1 board with the u-blox ZED-F9P module (Thalwil, Switzerland) was tested in combination with a low-cost calibrated antenna and a geodetic antenna in open-sky and adverse conditions in urban areas, while a high-quality geodetic GNSS device was used as a reference for comparison. The results of the observation quality check show that low-cost GNSS instruments have a lower carrier-to-noise ratio (C/N0) than geodetic instruments, especially in urban areas, where the difference is larger and favors the geodetic instruments. The root-mean-square error (RMSE) of the multipath error in the open sky is twice as high for low-cost instruments as for geodetic ones, while this difference is up to four times greater in urban areas. The use of a geodetic GNSS antenna does not show a significant improvement in the C/N0 and multipath of low-cost GNSS receivers. However, the ambiguity fix ratio is larger when geodetic antennas are used, with a difference of 1.5% and 18.4% for the open-sky and urban conditions, respectively. It should be noted that float solutions may become more frequent when low-cost equipment is used, especially for short sessions and in urban areas with more multipath. In relative positioning mode, low-cost GNSS devices provided horizontal accuracy better than 10 mm in urban areas in 85% of sessions, while the vertical and spatial accuracy was better than 15 mm in 82.5% and 77.5% of the sessions, respectively. In the open sky, low-cost GNSS receivers achieved a horizontal, vertical, and spatial accuracy of 5 mm for all sessions considered. In RTK mode, positioning accuracy varies between 10 and 30 mm in both open-sky and urban areas, with better performance in the former. Full article
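The per-session horizontal, vertical, and spatial accuracies quoted above are RMS-type statistics; a small Python illustration with hypothetical session errors (not the study's data) shows how such numbers are computed.

```python
# Illustrative computation of horizontal/vertical/spatial RMSE from
# per-session (E, N, U) coordinate errors in metres; values are made up.
import numpy as np

errors = np.array([[0.004, -0.003, 0.009],
                   [0.006,  0.002, -0.011],
                   [-0.005, 0.007, 0.013]])

horizontal = np.sqrt(np.mean(np.sum(errors[:, :2] ** 2, axis=1)))
vertical = np.sqrt(np.mean(errors[:, 2] ** 2))
spatial = np.sqrt(np.mean(np.sum(errors ** 2, axis=1)))
print(f"RMSE: horizontal {horizontal * 1e3:.1f} mm, "
      f"vertical {vertical * 1e3:.1f} mm, spatial {spatial * 1e3:.1f} mm")
```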
(This article belongs to the Special Issue Advances in GNSS Positioning and GNSS Remote Sensing)
Show Figures

Figure 1

Figure 1
<p>Low-cost GNSS receiver simpleRTK2B V1 (front), calibrated low-cost Survey antenna (left), and Javad RingAnt-G3T geodetic antenna (right).</p>
Full article ">Figure 2
<p>Measuring stations.</p>
Full article ">Figure 3
<p>Location of measuring stations: (<b>a</b>) LC−1, LC−2, and GD−1 in ST 1; (<b>b</b>) LC−1, LC−2, and GD−1 in ST 1A; (<b>c</b>) LC−1 in ST 2; (<b>d</b>) LC−1 in ST 3; and (<b>e</b>) LC−1 in ST 4.</p>
Full article ">Figure 4
<p>C/N<sub>0</sub> for observations in F1 and F2 frequency in the open sky: (<b>a</b>) F1 C/N<sub>0</sub> for LC−1, (<b>b</b>) F2 C/N<sub>0</sub> for LC−1, (<b>c</b>) F1 C/N<sub>0</sub> for LC−2, (<b>d</b>) F2 C/N<sub>0</sub> for LC−2, (<b>e</b>) F1 C/N<sub>0</sub> for GD−1, and (<b>f</b>) F2 C/N<sub>0</sub> for GD−1.</p>
Full article ">Figure 5
<p>Sky plot of C/N<sub>0</sub> for LC−1.</p>
Full article ">Figure 6
<p>C/N<sub>0</sub> for observations in F1 and F2 frequency in the urban areas: (<b>a</b>) F1 C/N<sub>0</sub> for LC−1, (<b>b</b>) F2 C/N<sub>0</sub> for LC−1, (<b>c</b>) F1 C/N<sub>0</sub> for LC−2, (<b>d</b>) F2 C/N<sub>0</sub> for LC−2, (<b>e</b>) F1 C/N<sub>0</sub> for GD−1, and (<b>f</b>) F2 C/N<sub>0</sub> for GD−1.</p>
Full article ">Figure 7
<p>Multipath for code observations in F1 and F2 frequency in the open sky: (<b>a</b>) F1 multipath for LC−1, (<b>b</b>) F2 multipath for LC−1, (<b>c</b>) F1 multipath for LC−2, (<b>d</b>) F2 multipath for LC−2, (<b>e</b>) F1 multipath for GD−1, and (<b>f</b>) F2 multipath for GD−1.</p>
Full article ">Figure 8
<p>Multipath for code observations in F1 and F2 frequency in urban areas: (<b>a</b>) F1 multipath for LC−1, (<b>b</b>) F2 multipath for LC−1, (<b>c</b>) F1 multipath for LC−2, (<b>d</b>) F2 multipath for LC−2, (<b>e</b>) F1 multipath for GD−1, and (<b>f</b>) F2 multipath for GD−1.</p>
Full article ">Figure 9
<p>Positioning solutions in open sky and urban conditions: (<b>a</b>) LC−1 on ST 1 (open sky); (<b>b</b>) LC−2 on ST 1 (open sky); (<b>c</b>) LC−1 on ST 1A (urban areas); and (<b>d</b>) LC−2 on ST 1A (urban areas).</p>
Full article ">Figure 10
<p>Horizontal, vertical, and spatial positioning accuracy in the open-sky (ST 1) and urban areas (ST 2, ST 3, ST 4): (<b>a</b>) LC−1 at ST 1; (<b>b</b>) LC−1 at ST 2; (<b>c</b>) LC−1 at ST 3; and (<b>d</b>) LC−1 at ST 4.</p>
Full article ">Figure 11
<p>Horizontal, vertical, and spatial positioning accuracy in the open-sky (ST 1) and urban areas (ST 2, ST 3, ST 4) for 24 sessions: (<b>a</b>) LC−1 at ST 1; (<b>b</b>) LC−1 at ST 2; (<b>c</b>) LC−1 at ST 3; and (<b>d</b>) LC−1 at ST 4.</p>
Full article ">
26 pages, 8373 KiB  
Article
Swarm Intelligence Internet of Vehicles Approaches for Opportunistic Data Collection and Traffic Engineering in Smart City Waste Management
by Gerald K. Ijemaru, Li-Minn Ang and Kah Phooi Seng
Sensors 2023, 23(5), 2860; https://doi.org/10.3390/s23052860 - 6 Mar 2023
Cited by 15 | Viewed by 2996
Abstract
Recent studies have shown the efficacy of mobile elements in optimizing the energy consumption of sensor nodes. Current data collection approaches for waste management applications focus on exploiting IoT-enabled technologies. However, these techniques are no longer sustainable in the context of smart city [...] Read more.
Recent studies have shown the efficacy of mobile elements in optimizing the energy consumption of sensor nodes. Current data collection approaches for waste management applications focus on exploiting IoT-enabled technologies. However, these techniques are no longer sustainable in the context of smart city (SC) waste management applications due to the emergence of large-scale wireless sensor networks (LS-WSNs) in smart cities with sensor-based big data architectures. This paper proposes an energy-efficient swarm intelligence (SI) Internet of Vehicles (IoV)-based technique for opportunistic data collection and traffic engineering for SC waste management strategies. This is a novel IoV-based architecture exploiting the potential of vehicular networks for SC waste management strategies. The proposed technique involves deploying multiple data collector vehicles (DCVs) traversing the entire network for data gathering via single-hop transmission. However, employing multiple DCVs comes with additional challenges, including costs and network complexity. Thus, this paper proposes analytical methods to investigate critical tradeoffs in optimizing energy consumption for big data collection and transmission in an LS-WSN, namely (1) finding the optimal number of DCVs required in the network and (2) determining the optimal number of data collection points (DCPs) for the DCVs. These critical issues affect efficient SC waste management and have been overlooked by previous studies exploring waste management strategies. Simulation-based experiments using SI-based routing protocols validate the efficacy of the proposed method in terms of the evaluation metrics. Full article
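One tradeoff the abstract names, choosing how many data collection points to place, can be illustrated with a simple clustering sketch in Python; the node layout, counts, and the use of k-means are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: cluster sensor nodes into k data collection points (DCPs)
# and watch the mean node-to-DCP distance shrink as k grows, one ingredient
# of the cost/coverage tradeoff. Node positions are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1000, size=(300, 2))    # hypothetical LS-WSN layout (m)

for k in (2, 4, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(nodes)
    dists = np.linalg.norm(nodes - km.cluster_centers_[km.labels_], axis=1)
    print(f"{k} DCPs -> mean node-to-DCP distance {dists.mean():.0f} m")
```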
Show Figures

Figure 1

Figure 1
<p>An IoV-based network model for smart city waste management. (<b>a</b>) An overview of Internet of Vehicles smart city waste management. (<b>b</b>) A model showing data collection for SC waste management.</p>
Full article ">Figure 2
<p>IoV-based network model showing V2X technologies.</p>
Full article ">Figure 3
<p>A working model of the proposed approach in an LS-WSN.</p>
Full article ">Figure 4
<p>Network model and components for LS-WSN.</p>
Full article ">Figure 5
<p>Optimal point selection and path computation with a single DCV.</p>
Full article ">Figure 6
<p>Optimal point selection and path computation with multiple DCVs.</p>
Full article ">Figure 7
<p>Flow chart of the proposed approach.</p>
Full article ">Figure 8
<p>Comparing adaptive and non-adaptive partition schemes in terms of the number of sensor nodes.</p>
Full article ">Figure 9
<p>Comparing adaptive and non-adaptive partition schemes in terms of the number of data collector vehicles (DCVs).</p>
Full article ">Figure 10
<p>A working example of the proposed model using a single DCV.</p>
Full article ">Figure 11
<p>A working example of the proposed model using multiple DCVs.</p>
Full article ">Figure 12
<p>Maximum time usage with different numbers of DCVs from one to five.</p>
Full article ">Figure 13
<p>Number of DCVs at specific deadlines.</p>
Full article ">Figure 14
<p>Comparison based on the packet delivery ratio.</p>
Full article ">Figure 15
<p>Comparison in terms of throughput.</p>
Full article ">Figure 16
<p>Comparison in terms of network lifetime.</p>
Full article ">Figure 17
<p>Comparison based on average energy consumption.</p>
Full article ">Figure 18
<p>Comparison in terms of energy efficiency.</p>
Full article ">Figure 19
<p>Comparison based on latency.</p>
Full article ">
29 pages, 6191 KiB  
Review
Natural Intelligence as the Brain of Intelligent Systems
by Mahdi Naghshvarianjahromi, Shiva Kumar and Mohammed Jamal Deen
Sensors 2023, 23(5), 2859; https://doi.org/10.3390/s23052859 - 6 Mar 2023
Cited by 3 | Viewed by 2287
Abstract
This article discusses the concept and applications of cognitive dynamic systems (CDS), which are a type of intelligent system inspired by the brain. There are two branches of CDS, one for linear and Gaussian environments (LGEs), such as cognitive radio and cognitive radar, [...] Read more.
This article discusses the concept and applications of cognitive dynamic systems (CDS), which are a type of intelligent system inspired by the brain. There are two branches of CDS, one for linear and Gaussian environments (LGEs), such as cognitive radio and cognitive radar, and another one for non-Gaussian and nonlinear environments (NGNLEs), such as cyber processing in smart systems. Both branches use the same principle, called the perception action cycle (PAC), to make decisions. The focus of this review is on the applications of CDS, including cognitive radios, cognitive radar, cognitive control, cyber security, self-driving cars, and smart grids for LGEs. For NGNLEs, the article reviews the use of CDS in smart e-healthcare applications and software-defined optical communication systems (SDOCS), such as smart fiber optic links. The results of implementing CDS in these systems are very promising, with improved accuracy and performance and lower computational costs. For example, CDS implementation in cognitive radars achieved a range estimation error as good as 0.47 m and a velocity estimation error of 3.30 m/s, outperforming traditional active radars. Similarly, CDS implementation in smart fiber optic links improved the quality factor by 7 dB and the maximum achievable data rate by 43% compared with those of other mitigation techniques. Full article
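The perception–action cycle the review centers on can be sketched as a toy feedback loop in Python: a perceptor refines a belief from noisy measurements, and an actuator adapts the next measurement to the remaining uncertainty. This is a conceptual illustration only, not any published CDS implementation.

```python
# Toy perception-action cycle (PAC): Bayesian belief update (perception)
# plus an uncertainty-driven choice of the next probe (action).
import numpy as np

rng = np.random.default_rng(2)
state, belief_mean, belief_var = 5.0, 0.0, 10.0   # hidden state and prior

for cycle in range(5):
    noise_var = min(1.0, belief_var)              # action: probe harder when unsure
    z = state + rng.normal(0.0, np.sqrt(noise_var))   # perception: noisy measurement
    gain = belief_var / (belief_var + noise_var)      # scalar Kalman-style update
    belief_mean += gain * (z - belief_mean)
    belief_var *= 1.0 - gain                          # feedback closes the cycle
    print(f"cycle {cycle}: estimate {belief_mean:.2f}, variance {belief_var:.3f}")
```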
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1

Figure 1
<p>Intelligent system architecture using a cognitive dynamic system (CDS) as the cyber-physical system (CPS) and an autonomous decision-making system (ADMS). HCI: Human–computer interaction, HRI: Human–robot interaction, ADC: Analog-to-digital converter, DAC: Digital-to-analog converter.</p>
Full article ">Figure 2
<p>A diagram showing the associations between the subsections presented in this paper. SDOCS: software-defined optical communication system.</p>
Full article ">Figure 3
<p>Block diagram of a basic cognitive dynamic system.</p>
Full article ">Figure 4
<p>The functional brain-like block in the cognitive dynamic system that controls executive and perceptual memory. (CDS: cognitive dynamic system.)</p>
Full article ">Figure 5
<p>Attention/focusing in the CDS [<a href="#B11-sensors-23-02859" class="html-bibr">11</a>].</p>
Full article ">Figure 6
<p>Schematics for two well-known machine learning techniques: (<b>a</b>) supervised learning (SL) and (<b>b</b>) reinforcement learning (RL) [<a href="#B13-sensors-23-02859" class="html-bibr">13</a>].</p>
Full article ">Figure 7
<p>Conceptual block diagram of a typical CDS.</p>
Full article ">Figure 8
<p>The cognitive-information-processing cycle in cognitive radio [<a href="#B29-sensors-23-02859" class="html-bibr">29</a>].</p>
Full article ">Figure 9
<p>Block diagram of CR with memory.</p>
Full article ">Figure 10
<p>Block diagram of the overall perception–action cycle in a cognitive dynamic system.</p>
Full article ">Figure 11
<p>Structure of the CDS for the SG.</p>
Full article ">Figure 12
<p>The fundamental layout of a coordinated vehicular radar and communication system.</p>
Full article ">Figure 13
<p>Conceptualization of the proposed CDS.</p>
Full article ">Figure 14
<p>The software-defined optical communication system (SDOCS) relies on a CDS as its brain.</p>
Full article ">Figure 15
<p>Receiver with cognitive dynamic system and OTDM digital signal processing (DSP). (CDS v6 [<a href="#B78-sensors-23-02859" class="html-bibr">78</a>]). FIR: finite impulse response and <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>X</mi> <mo>¯</mo> </mover> <mi>n</mi> </msub> </mrow> </semantics></math>: Estimation of transmitted symbols.</p>
Full article ">Figure 16
<p>Block diagram of the perception multiple actions cycle-based cognitive dynamic system (CDS).</p>
Full article ">
26 pages, 27778 KiB  
Article
Home Chimney Pinwheels (HCP) as STEH and Remote Monitoring for Smart Building IoT and WSN Applications
by Ajibike Eunice Akin-Ponnle, Paulo Capitão, Ricardo Torres and Nuno Borges Carvalho
Sensors 2023, 23(5), 2858; https://doi.org/10.3390/s23052858 - 6 Mar 2023
Cited by 5 | Viewed by 2814
Abstract
Smart, ultra-low-energy Internet of Things (IoT) devices, wireless sensor networks (WSN), and autonomous devices are being deployed in smart buildings and cities; these require a continuous power supply, whereas battery usage brings environmental problems coupled with additional maintenance costs. We present [...] Read more.
Smart, ultra-low-energy Internet of Things (IoT) devices, wireless sensor networks (WSN), and autonomous devices are being deployed in smart buildings and cities; these require a continuous power supply, whereas battery usage brings environmental problems coupled with additional maintenance costs. We present Home Chimney Pinwheels (HCP) as the Smart Turbine Energy Harvester (STEH) for wind, together with Cloud-based remote monitoring of its output data. The HCP commonly serves as an external cap to home chimney exhaust outlets; it has very low inertia to wind and is available on the rooftops of some buildings. Here, an electromagnetic converter adapted from a brushless DC motor was mechanically fastened to the circular base of an 18-blade HCP. In simulated wind and rooftop experiments, an output voltage of 0.3 V to 16 V was realised for wind speeds between 0.6 and 16 km/h. This is sufficient to operate low-power IoT devices deployed around a smart city. The harvester was connected to a power management unit, and its output data were remotely monitored via the IoT analytics Cloud platform “ThingSpeak” by means of LoRa transceivers, serving as sensors while also drawing supply from the harvester. The HCP can be a battery-less, “stand-alone”, low-cost STEH with no grid connection and can be installed as an attachment to IoT or wireless sensor nodes in smart buildings and cities. Full article
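The voltage–wind-speed relationship (cf. Figure 15) can be approximated by a simple fit; in the Python sketch below, only the two endpoints (0.3 V at 0.6 km/h and 16 V at 16 km/h) come from the abstract, while the intermediate readings are hypothetical.

```python
# Illustrative linear fit of no-load output voltage vs. wind speed.
import numpy as np

wind_kmh = np.array([0.6, 4.0, 8.0, 12.0, 16.0])
volts = np.array([0.3, 3.5, 7.8, 12.0, 16.0])     # assumed readings

a, b = np.polyfit(wind_kmh, volts, deg=1)         # V ~= a * v + b
print(f"V ~= {a:.2f} * v + {b:.2f}  (v in km/h)")
print(f"predicted output at 10 km/h: {a * 10.0 + b:.1f} V")
```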
(This article belongs to the Collection Wireless Sensor Networks towards the Internet of Things)
Show Figures

Figure 1

Figure 1
<p>Motivations and ecological impacts of STEH: (<b>a</b>) Problems of Battery Replacement. (<b>b</b>) Motivation for STEH.</p>
Full article ">Figure 2
<p>(<b>a</b>) Vertical Axis Wind Turbine (VAWT). (<b>b</b>) Multiple-blade VAWT [<a href="#B38-sensors-23-02858" class="html-bibr">38</a>,<a href="#B40-sensors-23-02858" class="html-bibr">40</a>].</p>
Full article ">Figure 3
<p>Home chimney pinwheels (HCP).</p>
Full article ">Figure 4
<p>(<b>a</b>) Aerodynamic lift and drag acting on a blade. (<b>b</b>) Mid-horizontal cross-sectional view of the turbine and wind flow.</p>
Full article ">Figure 5
<p>Block Diagram of energy harvesting mechanism and remote monitoring of Wind STEH.</p>
Full article ">Figure 6
<p>STEH fabrication from Home Chimney Pinwheels (HCP): (<b>a</b>) Physical outlook, (<b>b</b>) the magnetic cap of the converter, (<b>c</b>) the harness showing the converter winding coils with the magnetic cap uncovered.</p>
Full article ">Figure 7
<p>STEH-HCP Top-view rotation.</p>
Full article ">Figure 8
<p>PMU Circuit diagram.</p>
Full article ">Figure 9
<p>STEH Smart Sensing and Communication Process: (<b>a</b>) Sensing and communication (<b>b</b>) Flowchart.</p>
Full article ">Figure 10
<p>The ESP32 LoRa 1-CH Gateway Receiving Module (<b>a</b>) with a duck antenna, (<b>b</b>) mounted within a building.</p>
Full article ">Figure 11
<p>Laboratory Set-Up for the HCP Smart Turbine Energy Harvester. (<b>a</b>) Experimental set-up, (<b>b</b>) Set-up for wind measurement with output voltage.</p>
Full article ">Figure 12
<p>STEH Rooftop Set-up: (<b>a</b>) HCP-STEH on flat rooftop (<b>b</b>) output reading.</p>
Full article ">Figure 13
<p>Output voltage waveforms for different wind speeds.</p>
Full article ">Figure 14
<p>No-load peak output voltage with wind speeds.</p>
Full article ">Figure 15
<p>Fitted output voltage curve with wind speed.</p>
Full article ">Figure 16
<p>Recorded output data for Day 1.</p>
Full article ">Figure 17
<p>Recorded output data for Day 2.</p>
Full article ">Figure 18
<p>Recorded output data for Day 3.</p>
Full article ">Figure 19
<p>Recorded output data for Day 4.</p>
Full article ">Figure 20
<p>Recorded output data for Day 5.</p>
Full article ">Figure 21
<p>Measured output voltage of the harvester for the five days with an 8-point moving average trend line.</p>
Full article ">Figure 22
<p>Wind speed in Aveiro for the month of September 2022. [Source: <a href="http://www.meteoblue.com" target="_blank">www.meteoblue.com</a>, accessed on 12 December 2022]. The five days of monitoring of the harvester are indicated by the red rectangle.</p>
Full article ">Figure 23
<p>A snapshot of the HCP-STEH Cloud-based output data for Day 3 on the “ThingSpeak” Cloud platform.</p>
Full article ">Figure 24
<p>A snapshot of the HCP-STEH Cloud-based output data for Day 4 on the “ThingSpeak” Cloud platform.</p>
Full article ">
11 pages, 2951 KiB  
Communication
A Dew-Condensation Sensor Exploiting Local Variations in the Relative Refractive Index on the Dew-Friendly Surface of a Waveguide
by Subin Hwa, Eun-Seon Sim, Jun-Hee Na, Ik-Hoon Jang, Jin-Hyuk Kwon and Min-Hoi Kim
Sensors 2023, 23(5), 2857; https://doi.org/10.3390/s23052857 - 6 Mar 2023
Viewed by 2209
Abstract
We propose a sensor technology for detecting dew condensation, which exploits a variation in the relative refractive index on the dew-friendly surface of an optical waveguide. The dew-condensation sensor is composed of a laser, waveguide, medium (i.e., filling material for the waveguide), and [...] Read more.
We propose a sensor technology for detecting dew condensation, which exploits a variation in the relative refractive index on the dew-friendly surface of an optical waveguide. The dew-condensation sensor is composed of a laser, waveguide, medium (i.e., filling material for the waveguide), and photodiode. The formation of dewdrops on the waveguide surface causes local increases in the relative refractive index, allowing the incident light rays to be transmitted out of the waveguide and hence reducing the light intensity inside it. In particular, the dew-friendly surface of the waveguide is obtained by filling the interior of the waveguide with liquid H2O, i.e., water. A geometric design of the sensor was first carried out considering the curvature of the waveguide and the incident angles of the light rays. Moreover, the optical suitability of waveguide media with various absolute refractive indices, i.e., water, air, oil, and glass, was evaluated through simulation tests. In actual experiments, the sensor with the water-filled waveguide displayed a wider gap between the measured photocurrent levels under conditions with and without dew than the sensors with the air- and glass-filled waveguides, as a result of the relatively high specific heat of water. The sensor with the water-filled waveguide exhibited excellent accuracy and repeatability as well. Full article
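A back-of-the-envelope Python check of the sensing principle: when a dewdrop (n ≈ 1.33) replaces air (n ≈ 1.00) at the surface, the critical angle for total internal reflection rises, so formerly guided rays escape and the guided intensity drops. The index values are textbook figures, not the paper's measured ones.

```python
# Critical angle at the waveguide surface for two outer media.
import math

n_wall = 1.46                                  # assumed silica-like waveguide wall
for n_out, label in ((1.00, "air (no dew)"), (1.33, "water dewdrop")):
    theta_c = math.degrees(math.asin(n_out / n_wall))
    print(f"{label}: critical angle {theta_c:.1f} deg")
```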
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Schematic diagram of the sensor, (<b>b</b>) key denotations for the constituent media and surfaces of the sensor as well as those for the absolute refractive indices of the media, and (<b>c</b>) the path difference between the incident light rays, with and without dew.</p>
Full article ">Figure 2
<p>The geometry of the waveguide with curvature and the design strategy in which the condition of the total reflection at the external interface is considered.</p>
Full article ">Figure 3
<p>Optical simulation results (the light wavelength: 650 nm). Optical paths obtained with various internal media for the waveguide, i.e., (<b>a</b>) glass, (<b>b</b>) oil, (<b>c</b>) water, and (<b>d</b>) air.</p>
Full article ">Figure 4
<p>In the optical simulation results: (<b>a</b>) variations in the intensity according to the simulated dew condensation for the various internal media, and (<b>b</b>) the regions on the waveguide surface where the incident light rays are transmitted under conditions with dew for the various internal media.</p>
Full article ">Figure 5
<p>Responses of the actual sensors with the water-, glass-, and air-filled waveguides to dew condensation (the laser light wavelength: 650 nm).</p>
Full article ">Figure 6
<p>The photos of the (<b>a</b>) air-filled waveguide, (<b>b</b>) glass-filled waveguide, and (<b>c</b>) water-filled waveguide before the dew formation (‘initial’), after the dew formation (‘with dew’), and after the dew evaporation (‘without dew’) in sequence.</p>
Full article ">Figure 7
<p>Responses of the five actual sensors with the water-filled waveguides to dew condensation.</p>
Full article ">
14 pages, 4036 KiB  
Article
Identification and Classification of Small Sample Desert Grassland Vegetation Communities Based on Dynamic Graph Convolution and UAV Hyperspectral Imagery
by Tao Zhang, Yuge Bi, Xiangbing Zhu and Xinchao Gao
Sensors 2023, 23(5), 2856; https://doi.org/10.3390/s23052856 - 6 Mar 2023
Cited by 6 | Viewed by 1868
Abstract
Desert steppes are the last barrier protecting the steppe ecosystem. However, grassland monitoring still mainly relies on traditional methods, which have certain limitations in the monitoring process. Additionally, the existing deep learning classification models of desert and grassland still use [...] Read more.
Desert steppes are the last barrier protecting the steppe ecosystem. However, grassland monitoring still mainly relies on traditional methods, which have certain limitations in the monitoring process. Additionally, the existing deep learning classification models of desert and grassland still use traditional convolutional neural networks for classification, which cannot adapt to the classification of irregular ground objects, limiting the classification performance of the model. To address these problems, this paper uses a UAV hyperspectral remote sensing platform for data acquisition and proposes a spatial neighborhood dynamic graph convolution network (SN_DGCN) for degraded grassland vegetation community classification. The results show that the proposed model had the highest classification accuracy compared with seven classification models (MLP, 1DCNN, 2DCNN, 3DCNN, Resnet18, Densenet121, and SN_GCN); its OA, AA, and kappa were 97.13%, 96.50%, and 96.05%, respectively, with only 10 samples per class. The classification performance was stable under different numbers of training samples, generalized better in small-sample classification tasks, and was more effective for irregular ground objects. Meanwhile, comparisons with the latest desert grassland classification models fully demonstrated the superior classification performance of the proposed model. The proposed model provides a new method for the classification of vegetation communities in desert grasslands, which is helpful for the management and restoration of desert steppes. Full article
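The "spatial neighborhood graph" at the heart of SN_DGCN (cf. Figure 3) can be sketched in a few lines of Python: each pixel of a hyperspectral patch is linked to its k nearest spatial neighbors, and a convolution aggregates spectra along those edges. The shapes, k, and the mean aggregation below are illustrative stand-ins for the learned edge convolution.

```python
# Hedged sketch: k-nearest-neighbor spatial graph over a hyperspectral patch
# with one mean-aggregation step in place of a learned edge convolution.
import numpy as np

rng = np.random.default_rng(3)
h, w, bands, k = 5, 5, 10, 4
cube = rng.normal(size=(h, w, bands))             # hypothetical HSI patch

coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
feats = cube.reshape(-1, bands)
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(d, np.inf)                       # no self-loops
neighbors = np.argsort(d, axis=1)[:, :k]          # k nearest spatial neighbors

aggregated = feats[neighbors].mean(axis=1)        # aggregate along graph edges
print(aggregated.shape)                           # (25, 10): one vector per node
```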
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1

Figure 1
<p>UAV hyperspectral remote sensing system.</p>
Full article ">Figure 2
<p>Spectral curve after reflectivity correction.</p>
Full article ">Figure 3
<p>Construction of spatial neighborhood graph.</p>
Full article ">Figure 4
<p>Edge convolution block.</p>
Full article ">Figure 5
<p>SN_DGCN classification network structure.</p>
Full article ">Figure 6
<p>Box plots of OA with different numbers of neighborhood nodes <span class="html-italic">k</span>.</p>
Full article ">Figure 7
<p>Box plots of OA with different sliding windows <span class="html-italic">w</span>.</p>
Full article ">Figure 8
<p>Classification accuracy of different training samples.</p>
Full article ">Figure 9
<p>Results of feature visualization for different models. (<b>a</b>) MLP, (<b>b</b>) 1DCNN, (<b>c</b>) 2DCNN, (<b>d</b>) 3DCNN, (<b>e</b>) Resnet18, (<b>f</b>) Densenet121, (<b>g</b>) SN_GCN, and (<b>h</b>) SN_DGCN.</p>
Full article ">Figure 10
<p>Classification visualization results.</p>
Full article ">
19 pages, 3939 KiB  
Article
Multiple Dipole Source Position and Orientation Estimation Using Non-Invasive EEG-like Signals
by Saina Namazifard and Kamesh Subbarao
Sensors 2023, 23(5), 2855; https://doi.org/10.3390/s23052855 - 6 Mar 2023
Cited by 8 | Viewed by 2082
Abstract
The problem of precisely estimating the position and orientation of multiple dipoles using synthetic EEG signals is considered in this paper. After determining a proper forward model, a nonlinear constrained optimization problem with regularization is solved, and the results are compared with a [...] Read more.
The problem of precisely estimating the position and orientation of multiple dipoles using synthetic EEG signals is considered in this paper. After determining a proper forward model, a nonlinear constrained optimization problem with regularization is solved, and the results are compared with a widely used research code, namely EEGLAB. A thorough sensitivity analysis of the estimation algorithm to the parameters (such as the number of samples and sensors) in the assumed signal measurement model is conducted. To confirm the efficacy of the proposed source identification algorithm on any category of data sets, three different kinds of data are used: synthetic model data, visually evoked clinical EEG data, and seizure clinical EEG data. Furthermore, the algorithm is tested on both the spherical head model and the realistic head model based on the MNI coordinates. The numerical results and comparisons with EEGLAB show very good agreement, with little pre-processing required for the acquired data. Full article
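The essence of the inverse problem, fitting a dipole's position and moment so that a forward model reproduces the sensor readings, can be shown with a textbook infinite-homogeneous-medium forward model and nonlinear least squares in Python. This is a hedged sketch, not the paper's regularized, realistic-head formulation; conductivity is absorbed into the moment.

```python
# Hedged sketch: single-dipole fit by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
sensors = rng.normal(size=(32, 3))
sensors /= np.linalg.norm(sensors, axis=1, keepdims=True)  # unit-sphere "scalp"

def forward(params):
    r0, p = params[:3], params[3:]                # dipole position and moment
    d = sensors - r0
    return (d @ p) / (4.0 * np.pi * np.linalg.norm(d, axis=1) ** 3)

true = np.array([0.2, 0.1, 0.3, 1.0, -0.5, 0.8])  # hypothetical dipole
data = forward(true) + rng.normal(0.0, 1e-3, size=32)

x0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])     # neutral starting guess
fit = least_squares(lambda q: forward(q) - data, x0)
print("estimated [r0, p]:", np.round(fit.x, 2))
```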
(This article belongs to the Section Biomedical Sensors)
Show Figures

Figure 1

Figure 1
<p>Half-sphere head model: (<b>a</b>) spherical coordinates and (<b>b</b>) an example of two sensors in the presence of the <math display="inline"><semantics> <mrow> <mi>i</mi> <mi>th</mi> </mrow> </semantics></math> dipole.</p>
Full article ">Figure 2
<p>MNI and spherical head models in the same figure.</p>
Full article ">Figure 3
<p>Uniform random distribution of sensors located in the hemisphere head model.</p>
Full article ">Figure 4
<p>Synthetic EEG signals recorded by two different sensors. Signal 1 has a higher absolute mean strength compared to signal 2.</p>
Full article ">Figure 5
<p>BESA comparison: (<b>a</b>) model parameters for the BESA data; (<b>b</b>) comparison of BESA data with proposed signal model.</p>
Full article ">Figure 6
<p>Error percentage of estimated variables in terms of number of sensors and samples, Dipole number 1.</p>
Full article ">Figure 7
<p>Error percentage of estimated variables in terms of number of sensors and samples, Dipole number 2.</p>
Full article ">Figure 8
<p>Error percentage of estimated variables in terms of number of sensors and samples, Dipole number 3.</p>
Full article ">Figure 9
<p>Error percentage of a dipole’s location for different sensors and 500 samples.</p>
Full article ">Figure 10
<p>Error percentage of a dipole’s orientation for different sensors and 500 samples.</p>
Full article ">Figure 11
<p>Error percentage of a dipole’s magnitude for different sensors and 500 samples.</p>
Full article ">Figure 12
<p>Average error percentage for different sensors and 500 samples.</p>
Full article ">Figure 13
<p>Average error percentage for different sensors (10–100) and 500 samples.</p>
Full article ">Figure 14
<p>Error percentage for different numbers of samples and 19 sensors.</p>
Full article ">Figure 15
<p>Generated EEG signals recorded by two different sensors.</p>
Full article ">Figure 16
<p>(<b>a</b>) Multiple dipoles’ source localization result from EEGLAB where the Talairach coordinates are: dipole 1 (32, −40, 72); dipole 2 (1, 26, 65); and dipole 3 (−27, −35, 73). (<b>b</b>) Multiple dipoles’ source localization result from the introduced algorithm.</p>
Full article ">Figure 17
<p>Selected time range from the original dataset.</p>
Full article ">Figure 18
<p>Visual Attention Task (113 s–116 s): The source localization result from the introduced algorithm vs. EEGLAB. (<b>a</b>) MATLAB simulation result; (<b>b</b>) EEGLAB result, all estimations; and (<b>c</b>) EEGLAB result, RV &gt; 15%.</p>
Full article ">Figure 19
<p>Visual Attention Task (113 s–116 s): Multiple dipoles’ source localization result from the introduced algorithm.</p>
Full article ">Figure 20
<p>Visual Attention Task (146 s–149 s): The source localization result from the introduced algorithm vs. EEGLAB: (<b>a</b>) Proposed algorithm’s result; (<b>b</b>) EEGLAB result, all estimations; (<b>c</b>) EEGLAB result, RV &gt; 15%.</p>
Full article ">Figure 21
<p>Visual Attention Task (146 s–149 s): Multiple dipoles’ source localization result from the introduced algorithm.</p>
Full article ">Figure 22
<p>The ERP data are shown for each channel, which helps to validate the source localization result. (<b>a</b>) Channel data; (<b>b</b>) channel location.</p>
Full article ">Figure 23
<p>Active Seizure Case: The source localization result from the introduced algorithm vs. EEGLAB. (<b>a</b>) Proposed algorithm’s result; (<b>b</b>) EEGLAB result, all estimations; (<b>c</b>) EEGLAB result, RV &gt; 15%.</p>
Full article ">Figure 24
<p>Active Seizure Case: Multiple dipoles’ source localization results from the introduced algorithm.</p>
Full article ">
16 pages, 1559 KiB  
Article
Morphological Autoencoders for Beat-by-Beat Atrial Fibrillation Detection Using Single-Lead ECG
by Rafael Silva, Ana Fred and Hugo Plácido da Silva
Sensors 2023, 23(5), 2854; https://doi.org/10.3390/s23052854 - 6 Mar 2023
Cited by 3 | Viewed by 2505
Abstract
Engineered feature extraction can compromise the ability of Atrial Fibrillation (AFib) detection algorithms to deliver near real-time results. Autoencoders (AEs) can be used as an automatic feature extraction tool, tailoring the resulting features to a specific classification task. By coupling an encoder to [...] Read more.
Engineered feature extraction can compromise the ability of Atrial Fibrillation (AFib) detection algorithms to deliver near real-time results. Autoencoders (AEs) can be used as an automatic feature extraction tool, tailoring the resulting features to a specific classification task. By coupling an encoder to a classifier, it is possible to reduce the dimension of the Electrocardiogram (ECG) heartbeat waveforms and classify them. In this work, we show that morphological features extracted using a Sparse AE are sufficient to distinguish AFib from Normal Sinus Rhythm (NSR) beats. In addition to the morphological features, rhythm information was included in the model using a proposed short-term feature called Local Change of Successive Differences (LCSD). Using single-lead ECG recordings from two referenced public databases, and with features from the AE, the model was able to achieve an F1-score of 88.8%. These results show that morphological features appear to be a distinct and sufficient factor for detecting AFib in ECG recordings, especially when designed for patient-specific applications. This is an advantage over state-of-the-art algorithms that need longer acquisition times to extract engineered rhythm features, which also require careful preprocessing steps. To the best of our knowledge, this is the first work that presents a near real-time morphological approach for AFib detection under naturalistic ECG acquisition with a mobile device. Full article
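For the rhythm side, a hedged Python sketch of a short-term feature in the spirit of LCSD is shown below; the exact definition is the paper's, so the windowed spread of successive RR-interval differences used here is an assumption for illustration.

```python
# Assumed LCSD-like feature: spread of successive RR-interval differences
# over a short sliding window. Regular rhythms score low; AFib scores high.
import numpy as np

def lcsd_like(rr_intervals, window=5):
    diffs = np.abs(np.diff(np.asarray(rr_intervals, dtype=float)))
    return np.array([diffs[max(0, i - window + 1):i + 1].std()
                     for i in range(len(diffs))])

nsr = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80]        # regular rhythm (s)
afib = [0.62, 0.95, 0.71, 1.02, 0.58, 0.88]       # irregular rhythm (s)
print("NSR :", lcsd_like(nsr).round(3))
print("AFib:", lcsd_like(afib).round(3))
```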
(This article belongs to the Special Issue Advanced Machine Intelligence for Biomedical Signal Processing)
Show Figures

Figure 1

Figure 1
<p>Morphological and rhythm changes between ECG recordings in Normal Sinus Rhythm (<b>top</b>) and in Atrial Fibrillation (<b>bottom</b>), where the absence of P-waves and the irregular heart rate are noticeable.</p>
Full article ">Figure 2
<p>Number of AFib detection algorithm publications from 2013 to 2020, showing how the number of ANN-based algorithms has grown over time. The year 2018 was boosted by the CinC2017 challenge. Data obtained from the Appendix A, Supplementary Data section of [<a href="#B19-sensors-23-02854" class="html-bibr">19</a>].</p>
Full article ">Figure 3
<p>(<b>a</b>) A traditional autoencoder which is trained to reproduce the input samples, and (<b>b</b>) a supervised autoencoder that receives additional feedback from labeled samples.</p>
Full article ">Figure 4
<p>The proposed model for classifying ECG heartbeat waveforms into Normal Sinus Rhythm (NSR) and Atrial Fibrillation (AFib). It consists of an encoder responsible for extracting morphological features from the ECG segments, which are then used by a classifier. The LCSD metric is also used to train the classifier to provide local rhythm information and improve classification performance.</p>
Full article ">Figure 5
<p>Example of LCSD values for ECG beats in NSR (<b>left</b>) and in AFib (<b>right</b>). Since RR-intervals are more regular in NSR, their consecutive differences are smaller than in AFib.</p>
Full article ">Figure 6
<p>Preprocessing steps of the ECG recordings.</p>
Full article ">Figure 7
<p>(<b>a</b>) Example of second R-peak removal from an AFib ECG segment using zero-padding. (<b>b</b>) Use of DMEAN to detect outliers in a set of ECG segments from a recording—valid segments are depicted as solid lines, while outliers are displayed in dashed lines. Data from the CinC2017 database [<a href="#B23-sensors-23-02854" class="html-bibr">23</a>].</p>
Full article ">Figure 8
<p>Distribution of LCSD values for NSR and AFib in two databases. The Mann–Whitney U test revealed significant statistical differences between the distributions, with p-values smaller than 0.0001 (marked by ****).</p>
Full article ">Figure 9
<p>Comparison of Encoder-MLP (<b>left</b>) and Supervised Autoencoder (<b>center</b>) on CinC2017 data shows that the SupAE yields only negligible differences in convergence and in performance in terms of classification losses. The ROC curve of the SupAE with and without the LCSD metric (<b>right</b>) highlights the best threshold value.</p>
Full article ">Figure 10
<p>Comparison of Encoder-MLP (<b>left</b>) and Supervised Autoencoder (<b>center</b>) on AFDB data shows the SupAE’s quicker convergence and better performance in terms of lower classification losses. The ROC curve of SupAE with and without the LCSD metric (<b>right</b>) highlights the best threshold value.</p>
Full article ">Figure 11
<p>Comparing the performance of Encoder-MLP and Supervised Autoencoder on AFDB data using a patient-based split. The learning curves on the (<b>left</b>) and in the (<b>center</b>) show that both models have difficulty converging in validation loss, possibly due to the presence of unseen pathological waveforms. The ROC curve of the SupAE model (<b>right</b>) with and without the LCSD metric highlights the best threshold value.</p>
Full article ">Figure 12
<p>Portion of a pathological ECG waveform from the AFDB database (recording 04043).</p>
Full article ">
23 pages, 20417 KiB  
Article
Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model
by Jennifer Eunice, Andrew J, Yuichi Sei and D. Jude Hemanth
Sensors 2023, 23(5), 2853; https://doi.org/10.3390/s23052853 - 6 Mar 2023
Cited by 12 | Viewed by 3591
Abstract
Word-level sign language recognition (WSLR) is the backbone for continuous sign language recognition (CSLR) that infers glosses from sign videos. Finding the relevant gloss from the sign sequence and detecting explicit boundaries of the glosses from sign videos is a persistent challenge. In [...] Read more.
Word-level sign language recognition (WSLR) is the backbone for continuous sign language recognition (CSLR) that infers glosses from sign videos. Finding the relevant gloss from the sign sequence and detecting explicit boundaries of the glosses from sign videos is a persistent challenge. In this paper, we propose a systematic approach for gloss prediction in WSLR using the Sign2Pose Gloss prediction transformer model. The primary goal of this work is to enhance WSLR’s gloss prediction accuracy with reduced time and computational overhead. The proposed approach uses hand-crafted features rather than automated feature extraction, which is computationally expensive and less accurate. A modified key frame extraction technique is proposed that uses histogram difference and Euclidean distance metrics to select and drop redundant frames. To enhance the model’s generalization ability, pose vector augmentation using perspective transformation along with joint angle rotation is performed. Further, for normalization, we employed YOLOv3 (You Only Look Once) to detect the signing space and track the hand gestures of the signers in the frames. In experiments on the WLASL datasets, the proposed model achieved a top-1 recognition accuracy of 80.9% on WLASL100 and 64.21% on WLASL300. The performance of the proposed model surpasses state-of-the-art approaches. The integration of key frame extraction, augmentation, and pose estimation improved the performance of the proposed gloss prediction model by increasing the model’s precision in locating minor variations in body posture. We observed that introducing YOLOv3 improved gloss prediction accuracy and helped prevent model overfitting. Overall, the proposed model showed 17% improved performance on the WLASL100 dataset. Full article
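The key-frame selection step can be illustrated with a short Python sketch: consecutive frames are compared by histogram difference and a Euclidean distance, and a frame is dropped when both say it is redundant. The thresholds and frame data are placeholders, not the paper's tuned values.

```python
# Hedged sketch of redundant-frame dropping via histogram difference and
# RMS (Euclidean) pixel distance between consecutive grayscale frames.
import numpy as np

def is_redundant(prev, curr, hist_thr=0.05, rms_thr=10.0):
    h1, _ = np.histogram(prev, bins=32, range=(0, 255), density=True)
    h2, _ = np.histogram(curr, bins=32, range=(0, 255), density=True)
    hist_diff = np.abs(h1 - h2).sum()
    rms = np.linalg.norm(curr.astype(float) - prev.astype(float)) / np.sqrt(prev.size)
    return hist_diff < hist_thr and rms < rms_thr

rng = np.random.default_rng(5)
frames = [rng.integers(0, 256, size=(64, 64)).astype(np.uint8) for _ in range(4)]
frames.insert(1, frames[0].copy())               # plant a duplicate frame

kept = [frames[0]]
for f in frames[1:]:
    if not is_redundant(kept[-1], f):
        kept.append(f)
print(f"kept {len(kept)} of {len(frames)} frames")
```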
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition II)
Show Figures

Figure 1

Figure 1
<p>Overview of gloss prediction from sign poses—WLASL using a standard transformer.</p>
Full article ">Figure 2
<p>Illustrating the augmentation techniques applied to a single frame while pre-processing.</p>
Full article ">Figure 3
<p>Sample visualization of normalized pose using YOLOv3.</p>
Full article ">Figure 4
<p>Proposed architecture of the Sign2Pose Gloss prediction transformer.</p>
Full article ">Figure 5
<p>Sample images of key-frame extraction for the gloss “Drink” from the WLASL 100 dataset: (<b>a</b>) sample of extracted frames for the mentioned gloss; (<b>b</b>) discarded redundant frames; (<b>c</b>) preserved key-frame sample from the extracted frames.</p>
Full article ">Figure 6
<p>Performance analysis of the proposed work with existing appearance- and pose-based models. (<b>a</b>) Graphical representation comparing our approach with pose-based as well as appearance-based models. (<b>b</b>) Comparison of top-1 recognition accuracy for both pose-based and appearance-based models; (<b>c</b>) comparison of top-K macro recognition accuracy for pose-based models.</p>
Full article ">Figure 7
<p>Validation accuracy and validation loss of our model.</p>
Full article ">Figure 8
<p>Comparison of the pose-based approaches’ top-1 accuracies (%) and scalability on four subsets of the WLASL dataset.</p>
Full article ">
23 pages, 14661 KiB  
Article
A Non-Equal Time Interval Incremental Motion Prediction Method for Maritime Autonomous Surface Ships
by Zhijie Zhou, Haixiang Xu, Hui Feng and Wenjuan Li
Sensors 2023, 23(5), 2852; https://doi.org/10.3390/s23052852 - 6 Mar 2023
Cited by 1 | Viewed by 1755
Abstract
Recent technological advancements facilitate the autonomous navigation of maritime surface ships. The accurate data given by a range of various sensors serve as the primary assurance of a voyage’s safety. Nevertheless, as sensors have various sample rates, they cannot obtain information at the [...] Read more.
Recent technological advancements facilitate the autonomous navigation of maritime surface ships. The accurate data given by a range of various sensors serve as the primary assurance of a voyage’s safety. Nevertheless, as sensors have various sample rates, they cannot obtain information at the same time. Fusion decreases the accuracy and reliability of perceptual data if different sensor sample rates are not taken into account. Hence, it is helpful to precisely predict the motion state of the ship at the sampling time of each sensor in order to increase the quality of the fused information. This paper proposes a non-equal time interval incremental prediction method. In this method, the high dimensionality of the estimated state and the nonlinearity of the kinematic equation are taken into consideration. First, the cubature Kalman filter is employed to estimate a ship’s motion at equal intervals based on the ship’s kinematic equation. Next, a ship motion state predictor based on a long short-term memory network structure is created, using the increments and time intervals of the historical estimation sequence as the network input and the increment of the motion state at the predicted time as the network output. The suggested technique can lessen the effect of the speed difference between the test set and the training set on the prediction accuracy, compared with the traditional long short-term memory prediction method. Finally, comparison experiments are carried out to validate the precision and effectiveness of the proposed approach. The experimental results show that the root-mean-square error coefficient of the prediction error is decreased on average by roughly 78% for various modes and speeds when compared with the conventional non-incremental long short-term memory prediction approach. Additionally, the proposed prediction technique and the traditional approach have virtually the same running times, which can fulfill real engineering requirements. Full article
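The data preparation behind the incremental predictor can be sketched in Python: from a filtered state sequence with non-equal sampling times, build (increment, time interval) windows as inputs and the next increment as the target. This windowing is an illustrative reading of the method, not the authors' code.

```python
# Hedged sketch: build (increment, dt) training windows for a sequence model.
import numpy as np

def make_increment_samples(states, times, window=4):
    inc = np.diff(states)                  # state increments
    dt = np.diff(times)                    # non-equal time intervals
    X, y = [], []
    for i in range(len(inc) - window):
        X.append(np.column_stack([inc[i:i + window], dt[i:i + window]]))
        y.append(inc[i + window])          # increment at the predicted time
    return np.array(X), np.array(y)

rng = np.random.default_rng(6)
t = np.cumsum(rng.uniform(0.1, 0.5, size=40))    # irregular sensor times (s)
x = np.sin(0.5 * t)                              # toy motion state
X, y = make_increment_samples(x, t)
print(X.shape, y.shape)                          # (35, 4, 2) windows -> targets
```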
(This article belongs to the Special Issue Algorithms, Systems and Applications of Smart Sensor Networks)
Show Figures

Figure 1

Figure 1
<p>Multi-sensor system sampling diagram for MASS.</p>
Full article ">Figure 2
<p>The algorithm flowchart of NETIIP.</p>
Full article ">Figure 3
<p>Coordinate system diagram.</p>
Full article ">Figure 4
<p>Algorithm flow charts: (<b>a</b>) CKF and (<b>b</b>) UKF.</p>
Full article ">Figure 5
<p>LSTM network structure.</p>
Full article ">Figure 6
<p>Experimental ship models and sensors: (<b>a</b>) experimental model, (<b>b</b>) attitude instrument, and (<b>c</b>) experimental site.</p>
Full article ">Figure 7
<p>Comparison of ship position estimations: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 8
<p>UKF estimation results for the ship’s attitude: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 9
<p>CKF estimation results for the ship’s attitude: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 10
<p>Comparison of ship forward speed estimations: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 11
<p>Comparison of ship resultant velocity estimations: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 12
<p>Comparison of ship position predictions: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 13
<p>Comparison of ship attitude predictions: (<b>a</b>) slow speed of MRM, (<b>b</b>) medium speed of MRM, (<b>c</b>) high speed of MRM, (<b>d</b>) slow speed of ANM, (<b>e</b>) medium speed of ANM, and (<b>f</b>) high speed of ANM.</p>
Full article ">Figure 14
<p>Prediction error–velocity distribution of ship position: (<b>a</b>) MRM and (<b>b</b>) ANM.</p>
Full article ">Figure 15
<p>Prediction error–velocity distribution of ship attitude: (<b>a</b>) MRM and (<b>b</b>) ANM.</p>
Full article ">
17 pages, 3557 KiB  
Article
Detecting Grapevine Virus Infections in Red and White Winegrape Canopies Using Proximal Hyperspectral Sensing
by Yeniu Mickey Wang, Bertram Ostendorf and Vinay Pagay
Sensors 2023, 23(5), 2851; https://doi.org/10.3390/s23052851 - 6 Mar 2023
Cited by 7 | Viewed by 2900
Abstract
Grapevine virus-associated diseases such as grapevine leafroll disease (GLD) affect grapevine health worldwide. Current diagnostic methods are either highly costly (laboratory-based diagnostics) or can be unreliable (visual assessments). Hyperspectral sensing technology is capable of measuring leaf reflectance spectra that can be used for [...] Read more.
Grapevine virus-associated diseases such as grapevine leafroll disease (GLD) affect grapevine health worldwide. Current diagnostic methods are either highly costly (laboratory-based diagnostics) or can be unreliable (visual assessments). Hyperspectral sensing technology is capable of measuring leaf reflectance spectra that can be used for the non-destructive and rapid detection of plant diseases. The present study used proximal hyperspectral sensing to detect virus infection in Pinot Noir (red-berried winegrape cultivar) and Chardonnay (white-berried winegrape cultivar) grapevines. Spectral data were collected throughout the grape growing season at six timepoints per cultivar. Partial least squares-discriminant analysis (PLS-DA) was used to build a predictive model of the presence or absence of GLD. The temporal change of canopy spectral reflectance showed that the harvest timepoint yielded the best prediction result. Prediction accuracies of 96% and 76% were achieved for Pinot Noir and Chardonnay, respectively. Our results provide valuable information on the optimal time for GLD detection. This hyperspectral method can also be deployed on mobile platforms, including ground-based vehicles and unmanned aerial vehicles (UAV), for large-scale disease surveillance in vineyards. Full article
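The classification step can be sketched in Python with scikit-learn: PLS regression against a 0/1 GLD label with a threshold on the predicted score plays the role of PLS-DA (cf. Figure 3). The spectra below are synthetic placeholders, and the fixed threshold stands in for the ROC-derived one.

```python
# Hedged PLS-DA sketch: PLS regression on reflectance spectra + thresholding.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n, bands = 60, 750
y = rng.integers(0, 2, size=n)                      # 1 = GLD-infected, 0 = healthy
X = rng.normal(size=(n, bands)) + 0.2 * y[:, None]  # infected spectra shifted

pls = PLSRegression(n_components=5).fit(X, y.astype(float))
scores = pls.predict(X).ravel()
threshold = 0.5                                     # stand-in for the ROC threshold
print(f"training accuracy: {np.mean((scores > threshold) == y):.2f}")
```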
(This article belongs to the Special Issue Methodologies Used in Hyperspectral Remote Sensing in Agriculture)
Show Figures

Figure 1

Figure 1
<p>Spectral data measurement. The yellow shaded area shows the circular field of view, approx. 20 cm in diameter, based on the approx. 0.5 m horizontal measurement distance.</p>
Full article ">Figure 2
<p>Spectral data pre-processing steps for Pinot Noir at the February 2021 timepoint. The raw spectral data is smoothed using the SavGol algorithm with seven bandwidths, then normalised using the standard normal variate algorithm and, lastly, scaled using the Mean Centring algorithm for 750 bands.</p>
Full article ">Figure 3
<p>PLS-DA threshold determination. (<b>a</b>) ROC curve for disease class; (<b>b</b>) plot of PLS predicted value for disease class for Chardonnay in March 2021. The threshold was 0.698 in this model.</p>
Full article ">Figure 4
<p>The GLD-infected vines at different development stages. Chardonnay in (<b>a</b>) November—flowering stage (EL-17); (<b>b</b>) December—pea-size berries; (<b>c</b>) February—veraison; (<b>d</b>) April—post-harvest. Pinot Noir in (<b>e</b>) November—flowering; (<b>f</b>) December—pea-size berries; (<b>g</b>) February—veraison; (<b>h</b>) April—post-harvest.</p>
Full article ">Figure 5
<p>The difference of normalised averaged spectral reflectance of diseased to healthy vines (value at 0) for each timepoint.</p>
Full article ">Figure 6
<p>The combined violin plot and box plot of the predicted value of disease from the PLS-DA model for (<b>a</b>) Chardonnay, and (<b>b</b>) Pinot Noir at each timepoint. The larger the value, the higher the probability of a sample belonging to a diseased vine, and the smaller the value, the lower the probability of a diseased vine. In this binary model, a low value means the sample more likely belongs to a healthy vine. The greater the separation between actual disease and healthy samples, the better the performance of the classification model.</p>
Full article ">Figure 7
<p>The matrix of MCC from each model prediction of other timepoints for (<b>a</b>) Chardonnay, and (<b>b</b>) Pinot Noir. The bold cells are the self-predicted results, which are equal to the calibration results. The colour score ranges from red (low value) to yellow (medium) to green (high value). NA means that √((TP + FP)∙(TP + FN)∙(TN + FP)∙(TN + FN)) equals zero, so the MCC is undefined (division by zero).</p>
Full article ">
6 pages, 1805 KiB  
Communication
Epoxy-Coated Side-Polished Fiber-Optic Temperature Sensor for Cryogenic Conditions
by Umesh Sampath and Minho Song
Sensors 2023, 23(5), 2850; https://doi.org/10.3390/s23052850 - 6 Mar 2023
Viewed by 1754
Abstract
We propose coating side-polished optical fiber (SPF) with epoxy polymer to form a fiber-optic sensor for cryogenic temperature measuring applications. The thermo-optic effect of the epoxy polymer coating layer enhances the interaction between the SPF evanescent field and surrounding medium, considerably improving the [...] Read more.
We propose coating side-polished optical fiber (SPF) with epoxy polymer to form a fiber-optic sensor for cryogenic temperature measuring applications. The thermo-optic effect of the epoxy polymer coating layer enhances the interaction between the SPF evanescent field and surrounding medium, considerably improving the temperature sensitivity and robustness of the sensor head in a very low-temperature environment. In tests, due to the evanescent field–polymer coating interaction, a transmitted optical intensity variation of 5 dB and an average sensitivity of 0.024 dB/K were obtained over the 90–298 K range. Full article
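The reported average sensitivity follows directly from the abstract's numbers, as this one-line Python check shows.

```python
# 5 dB intensity swing spread over the 90-298 K range ~= 0.024 dB/K.
delta_db, t_low, t_high = 5.0, 90.0, 298.0
print(f"average sensitivity: {delta_db / (t_high - t_low):.3f} dB/K")
```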
(This article belongs to the Special Issue Applications of Optical Fiber Sensors and Measurement Systems)
Show Figures

Figure 1

Figure 1
<p>Side-polished optical fiber (SPF): (<b>a</b>) SEM cross-section image of the SPF, (<b>b</b>) schematic diagram of the SPF with epoxy coating, (<b>c</b>) schematic cross-sectional view of the coated SPF, (<b>d</b>) refractive indices: core, cladding, and epoxy.</p>
Full article ">Figure 2
<p>Experimental setup of the proposed fiber-optic sensor system. SM—single mode; SPF—side-polished fiber; PD—photo detector; DAQ—data acquisition; ASE BBS—amplified spontaneous emission broadband source.</p>
Full article ">Figure 3
<p>Cryogenic temperature measurements with epoxy-coated SPF sensor. The output is optical intensity variation according to temperature change in 90–298 K.</p>
Full article ">Figure 4
<p>Performance comparison of the proposed sensor: (<b>a</b>) Bare SPF sensor vs. epoxy-coated SPF sensor; (<b>b</b>) reference sensors (PCFBG and T-type thermocouple) vs. epoxy-coated SPF sensor.</p>
Full article ">Figure 5
<p>Repeatability test in multiple measurement cycles. Upper trace: thermocouple output; lower trace: output from the epoxy-coated SPF sensor.</p>
Full article ">
16 pages, 770 KiB  
Article
Self-Excited Microcantilever with Higher Mode Using Band-Pass Filter
by Yuji Hyodo and Hiroshi Yabuno
Sensors 2023, 23(5), 2849; https://doi.org/10.3390/s23052849 - 6 Mar 2023
Cited by 1 | Viewed by 1915
Abstract
Microresonators have a variety of scientific and industrial applications. The measurement methods based on the natural frequency shift of a resonator have been studied for a wide range of applications, including the detection of the microscopic mass and measurements of viscosity and stiffness. [...] Read more.
Microresonators have a variety of scientific and industrial applications. The measurement methods based on the natural frequency shift of a resonator have been studied for a wide range of applications, including the detection of the microscopic mass and measurements of viscosity and stiffness. A higher natural frequency of the resonator increases the sensitivity and the frequency response of such sensors. In the present study, by utilizing the resonance of a higher mode, we propose a method to produce self-excited oscillation with a higher natural frequency without downsizing the resonator. We establish the feedback control signal for the self-excited oscillation using a band-pass filter so that the signal consists of only the frequency corresponding to the desired excitation mode. As a result, the careful sensor positioning needed for constructing a feedback signal in methods based on the mode shape is unnecessary. Theoretical analysis of the equations governing the dynamics of the resonator coupled with the band-pass filter clarifies that the self-excited oscillation is produced with the second mode. Furthermore, the validity of the proposed method is experimentally confirmed with an apparatus using a microcantilever. Full article
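The core idea, keeping only the second-mode frequency in the feedback path, can be sketched in Python with an off-the-shelf band-pass filter; the sampling rate and mode frequencies below are hypothetical, and the Butterworth design merely stands in for the paper's filter.

```python
# Hedged sketch: band-pass the measured deflection so feedback excites only
# the second mode. scipy's butter/sosfilt do the filtering.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 100_000.0                       # sampling rate (Hz), assumed
f1, f2 = 3_000.0, 18_800.0           # illustrative first/second-mode frequencies

t = np.arange(0.0, 0.01, 1.0 / fs)
deflection = np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)

sos = butter(4, [0.8 * f2, 1.2 * f2], btype="bandpass", fs=fs, output="sos")
feedback = sosfilt(sos, deflection)  # only the second-mode component survives
rms = np.sqrt(np.mean(feedback ** 2))
print(f"feedback RMS {rms:.3f} (pure second mode would give ~{0.3 / 2 ** 0.5:.3f})")
```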
(This article belongs to the Special Issue Feature Papers in Physical Sensors 2022)
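To make the band-pass feedback idea concrete, here is a minimal Python (SciPy) sketch of filtering a measured deflection so that only the component near the second-mode frequency is fed back; the sampling rate, mode frequency, filter order, and gain are illustrative assumptions, not the paper's parameters.

# Minimal sketch of band-pass-filtered feedback: isolate one modal
# component of the measured deflection so that the feedback signal
# excites only the desired mode. All numeric values are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 100_000.0          # sampling rate, Hz (assumed)
F2 = 7_000.0            # assumed second-mode natural frequency, Hz
BAND = (0.8 * F2, 1.2 * F2)

# Second-order-section band-pass centered near the second mode.
sos = butter(2, BAND, btype="bandpass", fs=FS, output="sos")

def feedback_signal(deflection: np.ndarray, gain: float) -> np.ndarray:
    """Band-pass the measured deflection and apply a linear feedback
    gain; only the second-mode component survives the filter, so the
    self-excitation targets that mode."""
    return gain * sosfilt(sos, deflection)

# Example: in a two-mode signal, the 1 kHz first-mode component is
# strongly attenuated relative to the component near F2.
t = np.arange(0, 0.01, 1 / FS)
x = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * F2 * t)
u = feedback_signal(x, gain=-2.0)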
Show Figures
Figure 1: Analytical model of the cantilever subject to displacement excitation; $\eta$ is the displacement excitation applied by the actuator to produce the self-excited oscillation.
Figure 2: Root locus with respect to the feedback gain $\alpha^*$ without the band-pass filter.
Figure 3: Root loci with respect to the feedback gain $\alpha^*$ when the center frequency of the band-pass filter is adjusted to the second mode: (a) eigenvalues of the first and second modes; (b) eigenvalues of the band-pass filter.
Figure 4: Dependence of the root locus with respect to the feedback gain $\alpha^*$ on the center frequency of the band-pass filter; the numbers denote the value of the feedback gain: (a) first mode; (b) second mode.
Figure 5: Experimental set-up: (a) signal flow in the experiment; (b) appearance of the microcantilever.
Figure 6: Signal flow in the feedback circuit. The region surrounded by the solid line corresponds to the feedback circuit in Figure 5. (A,C) denote analog integration circuits; the dashed block (B) represents the field-programmable gate array (FPGA) board.
Figure 7: Self-excited oscillation with the first mode under feedback control without band-pass filtering, $g_1 = -2.5$, $g_3 = 50$: (a,b) time histories of the cantilever and of the control signal to the piezoelectric actuator; (c,d) expansions of (a,b) in the steady state; (e,f) FFT analyses of the beam oscillation and the excitation displacement.
Figure 8: Self-excited oscillation with the first mode under feedback control without band-pass filtering, $g_1 = -2$, $g_3 = 250$: (a,b) time histories of the cantilever and of the control signal to the piezoelectric actuator; (c,d) expansions of (a,b) in the steady state; (e,f) FFT analyses of the beam oscillation and the excitation displacement.
Figure A1: Schematic diagram of an active band-pass filter: (A) low-pass filter stage; (B) high-pass filter stage; (C) noninverting amplifier.
12 pages, 1436 KiB  
Article
Pre-Trained Joint Model for Intent Classification and Slot Filling with Semantic Feature Fusion
by Yan Chen and Zhenghang Luo
Sensors 2023, 23(5), 2848; https://doi.org/10.3390/s23052848 - 6 Mar 2023
Cited by 5 | Viewed by 3443
Abstract
The comprehension of spoken language is a crucial aspect of dialogue systems, encompassing two fundamental tasks: intent classification and slot filling. Currently, the joint modeling approach for these two tasks has emerged as the dominant method in spoken language understanding modeling. However, the existing joint models have limitations in terms of their relevancy and utilization of contextual semantic features between the multiple tasks. To address these limitations, a joint model based on BERT and semantic fusion (JMBSF) is proposed. The model employs pre-trained BERT to extract semantic features and utilizes semantic fusion to associate and integrate this information. The results of experiments on two benchmark datasets, ATIS and Snips, in spoken language comprehension demonstrate that the proposed JMBSF model attains 98.80% and 99.71% intent classification accuracy, 98.25% and 97.24% slot-filling F1-score, and 93.40% and 93.57% sentence accuracy, respectively. These results reveal a significant improvement compared to other joint models. Furthermore, comprehensive ablation studies affirm the effectiveness of each component in the design of JMBSF. Full article
(This article belongs to the Section Intelligent Sensors)
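For readers unfamiliar with joint modeling, a minimal PyTorch sketch of a BERT-based joint intent-classification and slot-filling head follows; it illustrates the general family of joint SLU models, not the JMBSF architecture itself, and omits the paper's semantic fusion and Bi-LSTM components.

# Minimal sketch of a joint intent + slot model on a pre-trained BERT
# encoder. Illustrative baseline only; not the paper's JMBSF model.
import torch
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents: int, num_slots: int,
                 name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)  # sentence-level
        self.slot_head = nn.Linear(hidden, num_slots)      # token-level

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)
        slot_logits = self.slot_head(out.last_hidden_state)
        return intent_logits, slot_logits

# Training typically sums a cross-entropy loss over intents with a
# token-level cross-entropy over slot labels, which is what couples
# the two tasks during optimization.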
Show Figures
Figure 1: Model architecture.
Figure 2: BERT-semantic fusion layer.
Figure 3: Bi-LSTM layer.
Figure 4: Semantic feature fusion for intent tasks.
Figure 5: Training curves for accuracy, F1, and sentence accuracy on the ATIS and Snips datasets.
4 pages, 195 KiB  
Editorial
Editorial–Special Issue on “Sensor Technology for Enhancing Training and Performance in Sport”
by Pui Wah Kong
Sensors 2023, 23(5), 2847; https://doi.org/10.3390/s23052847 - 6 Mar 2023
Cited by 4 | Viewed by 1761
Abstract
Sensor technology opens up exciting opportunities for sports [...] Full article
(This article belongs to the Special Issue Sensor Technology for Enhancing Training and Performance in Sport)
13 pages, 4017 KiB  
Article
An Adaptive Pedaling Assistive Device for Asymmetric Torque Assistant in Cycling
by Jesse Lozinski, Seyed Hamidreza Heidary, Scott C. E. Brandon and Amin Komeili
Sensors 2023, 23(5), 2846; https://doi.org/10.3390/s23052846 - 6 Mar 2023
Cited by 4 | Viewed by 2698
Abstract
Dynamic loads have short- and long-term effects in the rehabilitation of lower limb joints. However, an effective exercise program for lower limb rehabilitation has long been debated. Cycling ergometers have been instrumented and used as a tool to mechanically load the lower limbs and track the joints' mechano-physiological response in rehabilitation programs. Current cycling ergometers apply symmetric loading to the limbs, which may not reflect the actual load-bearing capacity of each limb, as in Parkinson's disease and multiple sclerosis. Therefore, the present study aimed to develop a new cycling ergometer capable of applying asymmetric loads to the limbs and to validate its function with human tests. An instrumented force sensor and a crank position sensing system recorded the kinetics and kinematics of pedaling. This information was used to apply an asymmetric assistive torque, generated by an electric motor, only to the target leg. The performance of the proposed cycling ergometer was studied during a cycling task at three different intensities. The device reduced the pedaling force of the target leg by 19% to 40%, depending on the exercise intensity. This reduction in pedal force caused a significant reduction in the muscle activity of the target leg (p < 0.001) without affecting the muscle activity of the non-target leg. These results demonstrate that the proposed cycling ergometer is capable of applying asymmetric loading to the lower limbs and thus has the potential to improve the outcomes of exercise interventions in patients with asymmetric function of the lower limbs. Full article
(This article belongs to the Special Issue Sensors and Actuators for Wearable and Implantable Devices)
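A minimal Python sketch of the kind of asymmetric assistance rule such a device could implement is shown below; the downstroke window, assist gain, and crank arm length are illustrative assumptions, not the APAD's actual controller parameters.

# Minimal sketch: apply motor torque only while the target leg is in
# its downstroke, scaled by the measured pedal force. All numeric
# values are assumptions for illustration.
ASSIST_GAIN = 0.3          # fraction of the measured pedal load to offset (assumed)
DOWNSTROKE = (0.0, 180.0)  # crank angles (deg) of the target leg's downstroke (assumed)

def assist_torque(crank_angle_deg: float, pedal_force_n: float,
                  crank_arm_m: float = 0.17) -> float:
    """Return motor torque (N*m) assisting only the target leg."""
    in_downstroke = DOWNSTROKE[0] <= crank_angle_deg % 360.0 < DOWNSTROKE[1]
    if not in_downstroke or pedal_force_n <= 0.0:
        return 0.0  # no assistance outside the target leg's power phase
    return ASSIST_GAIN * pedal_force_n * crank_arm_m

# e.g. at 90 deg with 200 N pedal force: 0.3 * 200 * 0.17 = 10.2 N*m
print(assist_torque(90.0, 200.0))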
Show Figures
Figure 1: The APAD consists of a crank position sensing system (1), a force sensor (2), a BLDC rear hub motor (3), and a controller (4). The APAD was mounted on a trainer (5) for testing in the motion capture lab. The crank tracking unit consists of 36 Hall sensors on a 3D-printed fixture; the strain-gauge-based force sensor measures the force perpendicular to the crank. The ID number that each Hall sensor reports to the controller is pictured along the circumference of the unit.
Figure 2: APAD motor controller flowchart.
Figure 3: The custom motor control method implemented in the APAD provides assistive torque to the target leg.
Figure 4: Experimental protocol at three different cadences, with motor power assistance occurring during Trials 1 to 3. Each trial consisted of two pedaling sessions, with the APAD system active in session (A) and inactive in session (I).
Figure 5: Lateral-medial movement of the target knee joint marker in the transverse plane, captured by the motion capture system with the APAD inactive (I) and active (A). The square (■) and circle (○) markers represent the center of the knee joint position during a full cycle for sessions (I) and (A) of Trial 1, respectively. Results are shown for a representative participant.
Figure 6: Distribution of the real-time crank angular velocity (marker "+") and its moving average (solid line, window = 3 s) during sessions (A) and (I) of Trials 1 to 3. The target angular velocity is shown with the dashed line.
Figure 7: Perpendicular pedal force of the target leg measured by the strain gauge system. Force curves over a crank revolution during sessions (A) and (I) were averaged; the highlighted regions show the standard deviation. Negative force corresponds to crank positions at which the perpendicular force creates a torque opposing the direction of motion, performing negative work; this occurred mainly for crank angles from 270° to 360° + 90°, where the leg weight applies a negative torque.
Figure 8: Polar plot of gastrocnemius muscle activity of the left and right legs for Trial 2 (60 RPM). The GM was active mostly for crank angles from 180° to 360° (±30° SD), and its activity was reduced when the APAD was active for the target (right) leg. The radius $|\overline{\mathrm{EMG}}|$ and the angle $\theta$ represent the magnitude of the normalized EMG and the crank angle, respectively. The arrow shows the direction of rotation.
Figure 9: Polar plot of normalized VL muscle activity of the target (right) and non-target (left) legs for Trials 1 to 3. The VL muscle was excited to more than 50% of its maximum during the downstroke, i.e., crank angles of 90–240° (±10° SD). During session (I), when the APAD was inactive, the left and right VL activity profiles overlapped, indicating that the APAD did not assist the non-target leg. The radius $|\overline{\mathrm{EMG}}|$ and the angle $\theta$ represent the magnitude of the normalized EMG and the crank angle, respectively.
19 pages, 4136 KiB  
Article
LiDAR-as-Camera for End-to-End Driving
by Ardi Tampuu, Romet Aidla, Jan Aare van Gent and Tambet Matiisen
Sensors 2023, 23(5), 2845; https://doi.org/10.3390/s23052845 - 6 Mar 2023
Cited by 9 | Viewed by 3824
Abstract
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task: models using these images as input perform at least as well as camera-based models in the tested conditions, while LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we reveal that the temporal smoothness of off-policy prediction sequences correlates with actual on-policy driving ability as well as the commonly used mean absolute error does. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
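As a sketch of the kind of network such a study uses, below is a PilotNet-style PyTorch model mapping a 3-channel LiDAR image to a steering angle; the (264, 68, 3) input size matches the figure caption further down, while the filter sizes and layer widths follow the classic PilotNet and are assumptions here, since the paper's exact configuration (its Table 1) is not reproduced on this page.

# Minimal PilotNet-style sketch for LiDAR-image steering prediction.
# Layer counts match the caption (5 conv + 4 fully connected layers);
# filter sizes follow classic PilotNet and are assumptions.
import torch
import torch.nn as nn

class PilotNetLiDAR(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),  # LazyLinear infers the flat size
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),               # steering angle
        )

    def forward(self, x):                   # x: (N, 3, 68, 264), channels first
        return self.head(self.features(x))

model = PilotNetLiDAR()
steer = model(torch.randn(1, 3, 68, 264))   # one LiDAR crop -> one angle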
Show Figures
Figure 1: The location of the sensors used in this work. Other sensors on the vehicle are not illustrated.
Figure 2: Input modalities. The red box marks the area used as model input. Top: surround-view LiDAR image, with red: intensity, blue: depth, and green: ambient. Bottom: 120-degree FOV camera.
Figure 3: The modified PilotNet architecture. Each box represents the output of a layer, with the first box corresponding to the input of size (264, 68, 3). The model consists of 5 convolutional layers and 4 fully connected layers; the flattening operation is not shown. Filter sizes, use of batch normalization, and activation functions are listed in Table 1.
Figure 4: Safety-driver interventions in the experiments where the test track was not included in the training set. Interventions from 3 test runs with different versions of the same model, in both driving directions, are overlaid on one map; interventions due to traffic are not filtered out from these maps, unlike in Table 2. Left: camera models v1–v3 (first 3 rows of Table 2). Middle: LiDAR models v1–v3 (rows 4–6 of Table 2). Right: an example of a situation where the safety driver has to take over due to traffic; such situations are not counted as interventions in Table 2.
Figure A1: LiDAR and camera images in summer, autumn, and winter (top to bottom for the LiDAR, left to right for the camera). The area used for model input is marked with a red rectangle. In the LiDAR images, the red channel corresponds to intensity, green to depth, and blue to ambient radiation.
Figure A2: LiDAR channels at the same location across the three seasons (top to bottom: summer, autumn, winter). (a) The intensity channel shows a significant difference in how the road itself looks, while vegetation is surprisingly similar despite deciduous plants having no leaves in autumn and winter. (b) The depth image is stable across seasons but rather uninformative, as road and low-vegetation areas are hard to discern. (c) The ambient radiation images vary strongly in brightness across the seasons and display strong noise, akin to white noise or salt-and-pepper noise, whose cause the authors do not know.
Figure A3: Interventions of a LiDAR v1 model in winter. Interventions are far more frequent in open fields, whereas the model handles driving in the forest much better; the middle section of the route, which contains bushes by the roadside, is also driven well.
24 pages, 783 KiB  
Review
Unsupervised Anomaly Detection for IoT-Based Multivariate Time Series: Existing Solutions, Performance Analysis and Future Directions
by Mohammed Ayalew Belay, Sindre Stenen Blakseth, Adil Rasheed and Pierluigi Salvo Rossi
Sensors 2023, 23(5), 2844; https://doi.org/10.3390/s23052844 - 6 Mar 2023
Cited by 31 | Viewed by 16801
Abstract
The recent wave of digitalization is characterized by the widespread deployment of sensors in many different environments, e.g., multi-sensor systems represent a critical enabling technology towards full autonomy in industrial scenarios. Sensors usually produce vast amounts of unlabeled data in the form of multivariate time series that may capture normal conditions or anomalies. Multivariate Time Series Anomaly Detection (MTSAD), i.e., the ability to identify normal or irregular operative conditions of a system through the analysis of data from multiple sensors, is crucial in many fields. However, MTSAD is challenging due to the need for simultaneous analysis of temporal (intra-sensor) patterns and spatial (inter-sensor) dependencies. Unfortunately, labeling massive amounts of data is practically impossible in many real-world situations of interest (e.g., the reference ground truth may not be available or the amount of data may exceed labeling capabilities); therefore, robust unsupervised MTSAD is desirable. Recently, advanced techniques in machine learning and signal processing, including deep learning methods, have been developed for unsupervised MTSAD. In this article, we provide an extensive review of the current state of the art with a theoretical background about multivariate time-series anomaly detection. A detailed numerical evaluation of 13 promising algorithms on two publicly available multivariate time-series datasets is presented, with advantages and shortcomings highlighted. Full article
(This article belongs to the Special Issue Signal Processing and AI in Sensor Networks and IoT)
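To illustrate one family of methods covered by the review, here is a minimal Python sketch of reconstruction-based unsupervised anomaly detection: fit a low-rank PCA model on training data assumed to be mostly normal, then flag test points whose reconstruction error exceeds a quantile threshold. This is an illustrative baseline, not one of the 13 benchmarked algorithms.

# Minimal sketch of reconstruction-based unsupervised anomaly
# detection for multivariate time series. Data here are synthetic.
import numpy as np

def fit_pca(X: np.ndarray, k: int):
    """Fit mean and top-k principal axes via SVD."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(X, mu, comps):
    Z = (X - mu) @ comps.T    # project each sample to k dimensions
    X_hat = Z @ comps + mu    # reconstruct in the original space
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 8))   # 8 "sensor" channels, mostly normal
mu, comps = fit_pca(train, k=3)
thresh = np.quantile(reconstruction_error(train, mu, comps), 0.99)

test = rng.normal(size=(100, 8))
test[10] += 6.0                      # injected anomaly at index 10
anomalies = reconstruction_error(test, mu, comps) > thresh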
Show Figures
Figure 1: Types of anomalies.
Figure 2: Approaches to unsupervised MTSAD.
Figure 3: Unsupervised MTSAD methods. Red, green, and blue indicate the approach type as in Figure 2; teal indicates that both compression and reconstruction approaches can be used.
Figure 4: ROC and PRC for the SWaT dataset.
Figure 5: ROC and PRC for the SMD dataset.