Sensors, Volume 24, Issue 2 (January-2 2024) – 406 articles

Cover Story: Indoor localization has become increasingly important in modern and strategic applications such as navigation, industry, medicine, and entertainment. Bluetooth Low Energy (BLE) offers low energy consumption and is widely available in modern mobile devices. Although the scientific literature proposes various solutions for BLE-based indoor localization, it is not yet clear which combination of solutions is the most effective for obtaining accurate and reliable performance. In this work, a comparative analysis was performed to provide a better understanding of the most effective and reliable solutions for achieving more accurate BLE-based indoor localization. The proposed methodology could help designers of indoor localization systems identify which techniques should be used to meet the performance requirements of specific applications.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
14 pages, 616 KiB  
Article
Mood Disorder Severity and Subtype Classification Using Multimodal Deep Neural Network Models
by Joo Hun Yoo, Harim Jeong, Ji Hyun An and Tai-Myoung Chung
Sensors 2024, 24(2), 715; https://doi.org/10.3390/s24020715 - 22 Jan 2024
Cited by 3 | Viewed by 2338
Abstract
Subtype diagnosis and severity classification of mood disorders have traditionally relied on the judgment of psychiatrists supported by validated assessment tools. Recently, however, many studies have used biomarker data collected from subjects to assist in diagnosis; most of these rely on heart rate variability (HRV) data, which reflect the balance of the autonomic nervous system, and perform the classification through statistical analysis. In this research, three mood disorder severity or subtype classification algorithms are presented based on multimodal analysis of the collected heart-related variables together with hidden features extracted from the time- and frequency-domain variables of HRV. Comparing the statistical analyses widely used in existing major depressive disorder (MDD), anxiety disorder (AD), and bipolar disorder (BD) classification studies with the multimodal deep neural network analysis newly proposed in this study, the severity or subtype classification accuracy of the three disorders improved by 0.118, 0.231, and 0.125 on average, respectively. The study confirms that deep learning analysis of biomarker data such as HRV can serve as a primary screening and diagnostic aid for mental diseases, and that it can help psychiatrists diagnose more objectively by indicating not only the diagnosed disease but also the current mood status.
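As a rough, illustrative sketch (not the authors' pipeline), the snippet below computes two standard time-domain HRV measures, SDNN and RMSSD, from a series of RR intervals; the interval values and function name are hypothetical.

```python
import numpy as np

def hrv_time_domain_features(rr_ms):
    """Basic time-domain HRV features from RR intervals (in ms).

    SDNN  = standard deviation of all RR intervals.
    RMSSD = root mean square of successive RR differences.
    These are standard HRV measures; the paper's actual feature set is richer.
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return {"SDNN": sdnn, "RMSSD": rmssd}

# Hypothetical RR-interval series (ms), for illustration only.
rr_intervals = [812, 845, 790, 860, 875, 830, 805, 850]
print(hrv_time_domain_features(rr_intervals))
```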
(This article belongs to the Special Issue Advanced Machine Intelligence for Biomedical Signal Processing)
Figures:
Figure 1: Data preprocessing steps of overall dataset.
Figure 2: Overall architecture to diagnose severity or subtype of the mood disorders using multimodal deep neural network model.
Figure 3: Equations for calculating classification accuracy, precision, recall, and F1 score.
Figure 4: Confusion matrix of major depressive disorder severity classification.
Figure 5: Confusion matrix of anxiety disorder severity classification.
Figure 6: Confusion matrix of bipolar disorder severity classification.
14 pages, 2632 KiB  
Article
A Compact Broadside Coupled Stripline 2-D Beamforming Network and Its Application to a 2-D Beam Scanning Array Antenna Using Panasonic Megtron 6 Substrate
by Jean Temga, Takashi Shiba and Noriharu Suematsu
Sensors 2024, 24(2), 714; https://doi.org/10.3390/s24020714 - 22 Jan 2024
Cited by 1 | Viewed by 1318
Abstract
This article presents a 4-way 2-D Butler matrix (BM)-based beamforming network (BFN) using a multilayer substrate broadside coupled stripline (BCS). To achieve a compact, wide-bandwidth, high-gain phased array, a BCS coupler is implemented using the Megtron 6 substrate. The compact 2-D BFN is formed by combining two horizontal and two vertical BCS couplers in a planar layout. The BFN requires neither a crossover nor a phase shifter and generates phase responses of ±90° in the x- and y-directions. The proposed BFN exhibits a wide operating band of 66.7% (3–7 GHz) and a compact physical area of just 0.25 λ0 × 0.25 λ0 × 0.04 λ0. The planar 2-D BFN is easily integrated with the patch antenna radiating elements to construct a 2-D multibeam array antenna that generates four fixed beams, one in each quadrant, at an elevation angle of 30° from the broadside to the array axis when the element separation is 0.6 λ0. The physical area of the 2-D multibeam array antenna is just 0.8 λ0 × 0.8 λ0 × 0.04 λ0. Prototypes of the BCS coupler, the 2-D BFN, and the 2-D multibeam array antenna were fabricated and measured. The measured and simulated results were in good agreement. A gain of 9.1 to 9.9 dBi was measured.
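As general array-theory background rather than a result from the paper, the fixed beam directions produced by a Butler-matrix BFN follow from the progressive inter-element phase shift it applies; for a linear array with element spacing d and phase step Δφ, the beam leaves broadside at

```latex
\theta_0 = \arcsin\!\left(\frac{\Delta\varphi \,\lambda_0}{2\pi d}\right)
```

so phase responses of opposite sign in the x- and y-directions translate into one fixed beam per quadrant, which is the mechanism the proposed 2-D BFN exploits.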
Figures:
Figure 1: (a) Configuration of the vertical- and horizontal-plane couplers. (b) Topology of the 2-D BFN.
Figure 2: Cross-view of the proposed broadside coupled stripline.
Figure 3: Proposed broadside coupled stripline 2-D BFN.
Figure 4: (a) Cross-view of the Panasonic Megtron 6 substrate; (b) coupler layout; and (c) fabricated prototype.
Figure 5: Broadside coupler simulation results: (a) S-parameters; (b) phase difference.
Figure 6: Broadside coupler measured results: (a) S-parameters; (b) phase difference.
Figure 7: Proposed 2-D BFN: (a) simulation layout; (b) fabricated prototype.
Figure 8: Proposed 2-D BFN simulation results: (a) reflection coefficient; (b) amplitude distribution; (c) phase shift in the x-direction; and (d) phase shift in the y-direction.
Figure 9: Proposed 2-D BFN measured results: (a) reflection coefficient; (b) amplitude distribution; (c) phase shift in the x-direction; and (d) phase shift in the y-direction.
Figure 10: 2 × 2 array design layout.
Figure 11: Antenna element: (a) simulated/measured reflection coefficients; (b) simulated E- (blue), H- (red), and E-cross (orange) planes and measured E- (green) and H- (black) planes at 5.2 GHz.
Figure 12: Simulated 3-D radiation patterns at 5.2 GHz.
Figure 13: Photograph of the fabricated 2 × 2 array antenna prototype.
Figure 14: Measured reflection coefficients for ports #1, #2, #3, and #4.
Figure 15: Anechoic chamber setup for radiation pattern measurement.
Figure 16: Measured 3-D radiation pattern at 5.2 GHz.
Figure 17: Simulated co-pol (blue), measured co-pol (red), and simulated x-pol (black) for ports #1, #2, #3, and #4 at 5.2 GHz.
Figure 18: Simulated gain and measured gain when ports #1 to #4 are excited.
30 pages, 4027 KiB  
Article
Anomaly Detection IDS for Detecting DoS Attacks in IoT Networks Based on Machine Learning Algorithms
by Esra Altulaihan, Mohammed Amin Almaiah and Ahmed Aljughaiman
Sensors 2024, 24(2), 713; https://doi.org/10.3390/s24020713 - 22 Jan 2024
Cited by 37 | Viewed by 6081
Abstract
Widespread and ever-increasing cybersecurity attacks against Internet of Things (IoT) systems are causing a wide range of problems for individuals and organizations. The IoT is self-configuring and open, making it vulnerable to insider and outsider attacks. IoT devices are designed to self-configure, enabling them to connect to networks autonomously without extensive manual configuration; by using various protocols, technologies, and automated processes, they can seamlessly connect to networks, discover services, and adapt their configurations without manual intervention or setup. Users' security and privacy may be compromised by attackers seeking to obtain access to their personal information, create monetary losses, and spy on them. A Denial of Service (DoS) attack is one of the most devastating attacks against IoT systems because it prevents legitimate users from accessing services. A cyberattack of this type can significantly damage IoT services and smart environment applications in an IoT network. As a result, securing IoT systems has become an increasingly significant concern. Therefore, in this study, we propose an IDS defense mechanism to improve the security of IoT networks against DoS attacks using anomaly detection and machine learning (ML). Anomaly detection is used in the proposed IDS to continuously monitor network traffic for deviations from normal profiles. For that purpose, we used four types of supervised classifier algorithms, namely Decision Tree (DT), Random Forest (RF), K Nearest Neighbor (kNN), and Support Vector Machine (SVM). In addition, we utilized two types of feature selection algorithms, the Correlation-based Feature Selection (CFS) algorithm and the Genetic Algorithm (GA), and compared their performances. We also used the IoTID20 dataset, one of the most recent datasets for detecting anomalous activity in IoT networks, to train our model. The best performances were obtained with the DT and RF classifiers when they were trained with features selected by the GA. However, other metrics, such as training and testing times, showed that DT was superior.
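As a hedged illustration of the comparison described above (not the authors' exact pipeline, and omitting the CFS/GA feature-selection step), the sketch below trains the four named classifiers on a feature table and reports accuracy and training time; the file name and column names are hypothetical.

```python
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical CSV export of the IoTID20 features; column names are assumed.
df = pd.read_csv("iotid20_features.csv")
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

classifiers = {
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    t0 = time.time()
    clf.fit(X_train, y_train)          # training time is one of the compared metrics
    train_time = time.time() - t0
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy={acc:.3f}, training time={train_time:.2f}s")
```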
Figures:
Figure 1: Types of Intrusion Detection Systems.
Figure 2: System workflow.
Figure 3: Genetic algorithm process.
Figure 4: DT structure.
Figure 5: Structure of RF.
Figure 6: Training time.
Figure 7: Testing time.
Figure 8: Evaluation of the performances.
Figure 9: Accuracy results.
Figure 10: Precision results.
Figure 11: Recall results.
Figure 12: F1 score results.
20 pages, 12695 KiB  
Article
IIoT Low-Cost ZigBee-Based WSN Implementation for Enhanced Production Efficiency in a Solar Protection Curtains Manufacturing Workshop
by Hicham Klaina, Imanol Picallo, Peio Lopez-Iturri, Aitor Biurrun, Ana V. Alejos, Leyre Azpilicueta, Abián B. Socorro-Leránoz and Francisco Falcone
Sensors 2024, 24(2), 712; https://doi.org/10.3390/s24020712 - 22 Jan 2024
Viewed by 1626
Abstract
Nowadays, the Industry 4.0 concept and the Industrial Internet of Things (IIoT) are considered essential for the implementation of automated manufacturing processes across various industrial settings. In this regard, wireless sensor networks (WSN) are crucial due to their inherent mobility, easy deployment and maintenance, scalability, and low power consumption, among other benefits. In this context, this paper proposes an optimized, low-cost WSN based on ZigBee communication technology for monitoring a real manufacturing facility. The company designs and manufactures solar protection curtains and aims to integrate the deployed WSN into its Enterprise Resource Planning (ERP) system in order to optimize its production processes and enhance production efficiency and cost estimation capabilities. To achieve this, radio propagation measurements and 3D ray launching simulations were conducted to characterize the wireless channel behavior and to support the development of an optimized WSN system capable of operating in this complex industrial environment; the design was validated through on-site wireless channel measurements and interference analysis. A low-cost WSN was then implemented and deployed to acquire real-time data from different machinery and workstations, which will be integrated into the ERP system. Multiple data streams have been collected and processed from the shop floor of the factory by means of the prototype wireless nodes implemented. This integration will enable the company to optimize its production processes, fabricate products more efficiently, and enhance its cost estimation capabilities. Moreover, the proposed system provides a scalable platform, enabling the integration of new sensors as well as information processing capabilities.
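For illustration only, and far simpler than the 3D ray launching tool used in the paper, the following sketch fits received-power measurements such as those from the on-site campaigns to a log-distance path-loss model; all distance and power values are hypothetical.

```python
import numpy as np

# Hypothetical received-power measurements (dBm) at various TX-RX distances (m).
distances_m = np.array([1, 2, 4, 8, 12, 16, 20, 30])
rx_power_dbm = np.array([-40, -46, -53, -60, -64, -67, -70, -74])

# Log-distance path-loss model: PL(d) = PL(d0) + 10*n*log10(d/d0).
d0 = 1.0
pl = -rx_power_dbm                      # path loss assuming 0 dBm TX power (illustrative)
x = 10 * np.log10(distances_m / d0)

# Least-squares fit of the path-loss exponent n and reference loss PL(d0).
A = np.vstack([x, np.ones_like(x)]).T
n, pl_d0 = np.linalg.lstsq(A, pl, rcond=None)[0]
print(f"path-loss exponent n = {n:.2f}, PL(d0) = {pl_d0:.1f} dB")
```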
(This article belongs to the Collection Wireless Sensor Networks towards the Internet of Things)
Figures:
Figure 1: Different workstations and areas of the Galeo factory workshop.
Figure 2: RF interference measurements within the factory workshop at the 2.4 GHz band.
Figure 3: The created workshop scenario for its simulation by the 3D Ray Launching tool.
Figure 4: The two measurement campaigns carried out for the validation of the 3D ray launching algorithm: (a) measurement points (1 to 16) covering the entire area of the workshop and (b) measurements for a LoS linear path.
Figure 5: Employed transmitter and receiver configuration for the radio channel measurements: (a) the VCO as a transmitter; (b) the spectrum analyzer as a receiver.
Figure 6: Measurements vs. 3D-RL simulation results for (a) scenario 1 and (b) scenario 2.
Figure 7: Estimated power delay profiles at different locations (measurement points) for both TX1 and TX2: (a) location 3; (b) location 10; (c) location 13; (d) location 15.
Figure 8: 2.4 GHz RF power level distribution maps at different heights for scenario 2.
Figure 9: Bidimensional RF power level distribution planes for (a) scenario 1 and (b) scenario 2.
Figure 10: Two-dimensional planes for ZigBee nodes' sensitivity compliance for (a) scenario 1 and (b) scenario 2.
Figure 11: Schematic view of the WSN deployment within the workshop ('C' represents the ZigBee network coordinator/gateway).
Figure 12: (a) The implemented sensor nodes and coordinator/gateway; (b) picture of a sensor node; (c) the encapsulated node.
Figure 13: The implemented nodes at different workstations: (a) knife cutting machine; (b) laser cutting machine; (c) thermal welding machine 1; (d) thermal welding machine 2; (e) automatic slat machine.
Figure 14: Participation of employees and workstations in different product fabrications.
Figure 15: Data analytics examples: (a) time that each product manufacturing process consumed during two complete working days; (b) time spent by a product at each workstation; (c) employees that took part in the manufacturing of a product.
Figure 16: Time consumed manufacturing products in the heat-welding workstation by two different workers: (a) Employee 6; (b) Employee 17.
17 pages, 8867 KiB  
Article
Intuitive Cell Manipulation Microscope System with Haptic Device for Intracytoplasmic Sperm Injection Simplification
by Kazuya Sakamoto, Tadayoshi Aoyama, Masaru Takeuchi and Yasuhisa Hasegawa
Sensors 2024, 24(2), 711; https://doi.org/10.3390/s24020711 - 22 Jan 2024
Cited by 2 | Viewed by 1349
Abstract
In recent years, the demand for effective intracytoplasmic sperm injection (ICSI) for the treatment of male infertility has increased. The ICSI operation is complicated, as it involves delicate organs and requires a high level of skill. Several cell manipulation systems that do not require such skills have been proposed; notably, several automated methods are available for cell rotation. However, these methods are unfeasible for the delicate ICSI medical procedure because of safety issues. Thus, this study proposes a microscope system that enables intuitive micropipette manipulation using a haptic device and safely and efficiently performs the entire ICSI procedure. The proposed system switches between field-of-view expansion and three-dimensional image presentation to present images according to the operational stage. In addition, the system enables intuitive pipette manipulation using a haptic device. Experiments were conducted on microbeads instead of oocytes. The results confirmed that the time required for the experimental task improved by 52.6% and the injection error improved by 75.3% compared to those observed in the conventional system.
(This article belongs to the Section Biomedical Sensors)
Figures:
Figure 1: Procedure of ICSI.
Figure 2: Configuration of the proposed system.
Figure 3: Overview of the proposed system.
Figure 4: Grip angle of the manipulation interface.
Figure 5: Manipulation of a target with suction pressure.
Figure 6: Manipulation of a target with discharge pressure.
Figure 7: Contact situation.
Figure 8: Noncontact situation.
Figure 9: Injection with a guide function.
Figure 10: Overview of the experimental procedure.
Figure 11: Injection error 'd'.
Figure 12: Experimental scene.
Figure 13: Box plot of task time.
Figure 14: Box plot of total error.
Figure 15: Box plot of x-axis error.
Figure 16: Box plot of y-axis error.
Figure 17: Box plot of z-axis error.
Figure 18: Overhead view of the operation during the demonstration.
Figure 19: Presented image during demonstration.
Figure 20: Force graph presented during the demonstration.
16 pages, 3360 KiB  
Article
Assessment of Foot Strike Angle and Forward Propulsion with Wearable Sensors in People with Stroke
by Carmen J. Ensink, Cheriel Hofstad, Theo Theunissen and Noël L. W. Keijsers
Sensors 2024, 24(2), 710; https://doi.org/10.3390/s24020710 - 22 Jan 2024
Cited by 2 | Viewed by 1553
Abstract
Effective retraining of foot elevation and forward propulsion is a critical aspect of gait rehabilitation therapy after stroke, but valuable feedback to enhance these functions is often absent during home-based training. To enable feedback at home, this study assesses the validity of an inertial measurement unit (IMU) to measure the foot strike angle (FSA), and explores eight different kinematic parameters as potential indicators of forward propulsion. Twelve people with stroke performed walking trials while equipped with five IMUs and markers for an optical motion capture system (OMCS, the gold standard). The validity of the IMU-based FSA was assessed via Bland–Altman analysis, ICC, and the repeatability coefficient. Eight different kinematic parameters were compared to the forward propulsion via Pearson correlation. Analyses were performed on a stride-by-stride level and a within-subject level. On a stride-by-stride level, the mean difference between the IMU-based FSA and OMCS-based FSA was 1.4 (95% confidence: −3.0; 5.9) degrees, with ICC = 0.97 and a repeatability coefficient of 5.3 degrees. The mean difference for the within-subject analysis was 1.5 (95% confidence: −1.0; 3.9) degrees, with a mean repeatability coefficient of 3.1 (SD: 2.0) degrees. Pearson's r values of all the studied parameters with forward propulsion were below 0.75 for the within-subject analysis, while on a stride-by-stride level the foot angle at terminal contact and the maximum foot angular velocity could be indicative of the peak forward propulsion. In conclusion, the FSA can be accurately assessed with an IMU on the foot in people with stroke during regular walking. However, no suitable kinematic indicator for forward propulsion was identified based on foot and shank movement that could be used for feedback in people with stroke.
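A minimal sketch of the agreement statistics quoted above (mean difference, 95% limits of agreement, and a repeatability coefficient) computed from paired FSA measurements; the angle values are hypothetical, and the paper's exact repeatability definition may differ.

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland–Altman agreement statistics between two paired measurement methods."""
    a, b = np.asarray(measure_a, float), np.asarray(measure_b, float)
    diff = a - b
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)  # 95% limits of agreement
    repeatability = 1.96 * sd_diff  # one common definition; conventions vary
    return mean_diff, loa, repeatability

# Hypothetical IMU-based vs. OMCS-based foot strike angles (degrees).
fsa_imu = [12.1, 15.3, 9.8, 14.0, 11.2, 13.5]
fsa_omcs = [11.0, 13.8, 8.5, 12.9, 10.1, 12.0]
md, loa, rc = bland_altman(fsa_imu, fsa_omcs)
print(f"mean difference = {md:.1f} deg, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f}) deg, "
      f"repeatability = {rc:.1f} deg")
```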
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
Figures:
Figure 1: Schematic representation of the measurement setup. Note the grey optical markers at the toe and heel of the feet, defining the foot segment, as well as the markers at the knee and ankle, defining the shank segment. {SF} represents the local sensor frame of the IMU, {GF_OMCS} represents the global frame of the OMCS system, and {GF_IMU} represents the global frame of the IMU system.
Figure 2: The measured IMU-based foot angle (foot angle + α) corrected with the mean foot angle (α) during the foot flat phase of the first 10 strides, so that the foot angle during the foot flat phase is taken as zero degrees.
Figure 3: Forward propulsion measured by the area under the curve from the braking-to-propulsion transition until TC, indicated in green. Peak forward propulsion, indicated by x, was defined as the maximum value from the braking-to-propulsion transition until TC.
Figure 4: Bland–Altman analysis of the FSA (degrees) of all strides of all participants. The difference between measures is calculated as IMU-based FSA minus OMCS-based FSA.
Figure 5: Bland–Altman analysis of the mean FSA (degrees) per participant. The difference between measures is calculated as mean IMU-based FSA minus mean OMCS-based FSA.
Figure A1: Bland–Altman analysis of the FSA (degrees) on a stride-by-stride level for each participant. The difference between measures is calculated as IMU-based FSA minus OMCS-based FSA.
Figure A2: Correlation graphs of the potential indicators and the forward propulsion (area under the curve). Each color represents a different participant.
Figure A3: Correlation graphs of the potential indicators and the forward propulsion (peak). Each color represents a different participant.
20 pages, 12344 KiB  
Article
Vehicle–Bridge Interaction Modelling Using Precise 3D Road Surface Analysis
by Maja Kreslin, Peter Češarek, Aleš Žnidarič, Darko Kokot, Jan Kalin and Rok Vezočnik
Sensors 2024, 24(2), 709; https://doi.org/10.3390/s24020709 - 22 Jan 2024
Cited by 1 | Viewed by 1554
Abstract
Uneven road surfaces are the primary source of excitation in the dynamic interaction between a bridge and a vehicle and can lead to errors in bridge weigh-in-motion (B-WIM) systems. In order to correctly reproduce this interaction in a numerical model of a bridge, it is essential to know the magnitude and location of the various roadway irregularities. This paper presents a methodology for measuring the 3D road surface using static terrestrial laser scanning and a numerical model for simulating vehicle passage over a bridge with a measured road surface. This model allows the evaluation of strain responses in the time domain at any bridge location, considering different parameters such as vehicle type, lateral position and speed, road surface unevenness, and bridge type. Since the time-domain strains are crucial for B-WIM algorithms, the proposed approach facilitates the analysis of the different factors affecting the B-WIM results. A first validation of the proposed methodology was carried out on a real bridge, where extensive measurements were performed using different sensors, including measurements of the road surface, the response of the bridge when crossed by a test vehicle, and the dynamic properties of the bridge and vehicle. The comparison between the simulated and measured bridge response marks a promising step towards investigating the influence of unevenness on the results of B-WIM.
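As a simplified, hypothetical illustration of how a measured road profile excites a vehicle in the time domain (a quarter-car model rather than the paper's full 3D coupled vehicle–bridge model), the sketch below integrates the vehicle response and extracts the dynamic tire force that would load the bridge deck; all parameter values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical quarter-car parameters.
ms, mu = 400.0, 40.0          # sprung / unsprung mass (kg)
ks, cs = 2.0e4, 1.5e3         # suspension stiffness (N/m) and damping (N*s/m)
kt = 1.8e5                    # tire stiffness (N/m)
v = 40 / 3.6                  # vehicle speed (m/s), e.g. 40 km/h

# Hypothetical measured road profile: elevation (m) vs. distance (m).
x_road = np.linspace(0, 50, 501)
z_road = 0.005 * np.sin(2 * np.pi * x_road / 5.0)   # 5 m wavelength unevenness

def road_input(t):
    # Road elevation under the tire at time t (distance = speed * time).
    return np.interp(v * t, x_road, z_road)

def quarter_car(t, y):
    zs, vs, zu, vu = y
    zr = road_input(t)
    f_susp = ks * (zs - zu) + cs * (vs - vu)
    a_s = -f_susp / ms
    a_u = (f_susp - kt * (zu - zr)) / mu
    return [vs, a_s, vu, a_u]

sol = solve_ivp(quarter_car, (0, x_road[-1] / v), [0, 0, 0, 0], max_step=1e-3)
# Dynamic tire force: the quantity that loads the bridge deck in a coupled model.
tire_force = kt * (sol.y[2] - road_input(sol.t))
print(f"peak dynamic tire force ~ {np.abs(tire_force).max():.0f} N")
```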
(This article belongs to the Special Issue Sensors in Civil Structural Health Monitoring)
Figures:
Figure 1: The common operating principle of static TLS.
Figure 2: A surface model composed of irregular triangles.
Figure 3: Coupled model for vehicle–bridge interaction considering the measured unevenness of the road surface.
Figure 4: Numerical model of the vehicle.
Figure 5: Test bridge. (a) Side view; (b) under the bridge.
Figure 6: Dimensions of the test bridge, measured in the field (plan view and cross sections).
Figure 7: RIEGL VZ-400 scanner and survey station locations (red and yellow dots).
Figure 8: Measurements of the bridge's dynamic properties: (a,b) instrumented accelerometer and data acquisition; (c) locations of the accelerometers at the bottom part of the slab.
Figure 9: Bridge acceleration response to a random vehicle.
Figure 10: Measurements of the dynamic properties of the vehicle: (a) 3-axle truck; (b) instrumented transducers; (c) schematic locations of the transducers on the vehicle.
Figure 11: Vehicle response to a manual excitation of the vibration around the longitudinal axis.
Figure 12: Measurements of the bridge response to a crossing vehicle: (a) instrumented bridge; (b) vehicle crossing; (c) strain gauge locations on the bottom part of the slab.
Figure 13: Response of the bridge in microstrains in the time domain resulting from the vehicle crossing in lane 2 at a 40 km/h speed.
Figure 14: Spatial distribution of road surface unevenness.
Figure 15: Comparison of road profiles from TLS and inertial profiler in WP1.
Figure 16: Dynamic responses of the test bridge in the frequency domain.
Figure 17: Measured eigenmodes of the bridge: (a,c) first mode; (b,d) third mode.
Figure 18: Measured eigenmodes of the vehicle: (a) first mode, rotation around the longitudinal axis; (b) second mode, front axle inclination; (c) third mode, rear axle inclination.
Figure 19: Calculated eigenmodes of the vehicle using Abaqus: (a) first mode, rotation around the longitudinal axis; (b) second mode, front axle inclination; (c) third mode, rear axle inclination.
Figure 20: Modelled (a) and measured (b) strains for 14 locations.
Figure 21: Comparison of experimental and simulated results for selected locations (SG05, SG08, SG10 and SG11).
Figure 22: Transverse distribution of maximum strains obtained from measurements and simulation.
Figure 23: 0.1 mm wide crack under the strain gauge SG8.
28 pages, 1300 KiB  
Review
A Review of IoT Firmware Vulnerabilities and Auditing Techniques
by Taimur Bakhshi, Bogdan Ghita and Ievgeniia Kuzminykh
Sensors 2024, 24(2), 708; https://doi.org/10.3390/s24020708 - 22 Jan 2024
Cited by 12 | Viewed by 6542
Abstract
In recent years, the Internet of Things (IoT) paradigm has been widely applied across a variety of industrial and consumer areas to facilitate greater automation and increase productivity. Greater dependence on connected devices has led to a growing range of cybersecurity threats targeting IoT-enabled platforms, specifically device firmware vulnerabilities, which are often overlooked during development and deployment. A comprehensive security strategy aiming to mitigate IoT firmware vulnerabilities would entail auditing the IoT device firmware environment, from software components, storage, and configuration to delivery, maintenance, and updating, as well as understanding the efficacy of the tools and techniques available for this purpose. To this effect, this paper reviews the state-of-the-art technology in IoT firmware vulnerability assessment from a holistic perspective. To help with the process, the IoT ecosystem is divided into eight categories: system properties, access controls, hardware and software re-use, network interfacing, image management, user awareness, regulatory compliance, and adversarial vectors. Following the review of the individual areas, the paper further investigates the efficiency and scalability of auditing techniques for detecting firmware vulnerabilities. Beyond the technical aspects, state-of-the-art IoT firmware architectures and respective evaluation platforms are also reviewed according to their technical, regulatory, and standardization challenges. The discussion is also accompanied by a review of the existing auditing tools, the vulnerabilities addressed, the analysis methods used, and their ability to scale and detect unknown attacks. The review also proposes a taxonomy of vulnerabilities and maps them to their exploitation vectors and to the auditing tools that could help in identifying them. Given the current interest in analysis automation, the paper explores the feasibility and impact of evolving machine learning and blockchain applications in securing IoT firmware. The paper concludes with a summary of ongoing and future research challenges in IoT firmware to facilitate and support secure IoT development.
(This article belongs to the Special Issue IoT Cybersecurity)
Figures:
Figure 1: IoT firmware vulnerabilities: influencing factors.
Figure 2: Future research avenues, open challenges, and intersecting themes.
24 pages, 12639 KiB  
Article
Cooperative Safe Trajectory Planning for Quadrotor Swarms
by Yahui Zhang, Peng Yi and Yiguang Hong
Sensors 2024, 24(2), 707; https://doi.org/10.3390/s24020707 - 22 Jan 2024
Cited by 1 | Viewed by 1371
Abstract
In this paper, we propose a novel distributed algorithm based on model predictive control and the alternating direction method of multipliers (DMPC-ADMM) for cooperative trajectory planning of quadrotor swarms. First, a receding-horizon trajectory planning optimization problem is constructed, in which the differential flatness property is used to deal with the nonlinear dynamics of the quadrotors, while we design a relaxed form of the discrete-time control barrier function (DCBF) constraint to balance feasibility and safety. Then, we decompose the original trajectory planning problem by ADMM and solve it in a fully distributed manner with peer-to-peer communication, which induces the quadrotors within communication range to reach a consensus on their future trajectories to enhance safety. In addition, an event-triggered mechanism is designed to reduce the communication overhead. The simulation results verify that the trajectories generated by our method are real-time, safe, and smooth. A comprehensive comparison with a centralized strategy and several other distributed strategies in terms of real-time performance, safety, and feasibility verifies that our method is more suitable for the trajectory planning of large-scale quadrotor swarms.
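For context, a generic discrete-time control barrier function constraint (not necessarily the paper's exact relaxed form) keeps a pairwise safety function non-negative by limiting how fast it may decay along the planned trajectory:

```latex
h\!\left(x_{k+1}\right) \;\ge\; (1-\gamma)\, h\!\left(x_k\right), \qquad 0 < \gamma \le 1,
\qquad
h_{ij}(x) \;=\; \lVert p_i - p_j \rVert^{2} - d_{\mathrm{safe}}^{2},
```

where p_i and p_j are quadrotor positions, d_safe is the minimum separation, and a smaller γ enforces a more conservative approach to the safety boundary; relaxing this constraint, as the paper does, trades some of that conservatism for feasibility.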
Figures:
Figure 1: Illustration of the trajectory planning based on DMPC-ADMM for a quadrotor swarm in a crowded environment.
Figure 2: Centralized design (left) vs. our distributed design (right) for cooperative trajectory planning of quadrotor swarms.
Figure 3: DCBF-based collision avoidance between quadrotor i and quadrotor j. The black solid arrows indicate the positions at the identical global time. The boundaries of the DCBF are defined by the corresponding constraints to prevent the quadrotors from approaching each other too fast.
Figure 4: Structure of the event detector.
Figure 5: Generated trajectories of ten quadrotors (solid dots in different colors) crossing a finite space with obstacles using our method (DMPC-ADMM).
Figure 6: Generated trajectories of ten quadrotors (solid dots in different colors) crossing a finite space with obstacles using CMPC.
Figure 7: Simulation results of ten quadrotors crossing a finite space with obstacles using our method (left) and CMPC (right). (a,b) Velocity variations. (c,d) Distance between quadrotors and obstacles (only distances within 6 m are shown). (e,f) Distance among quadrotors (only distances within 6 m are shown). The red dashed lines represent the maximum velocity v_max = 3 m/s, the minimum velocity v_min = −3 m/s, the safe distance between quadrotors and obstacles d_io,safe = 0.2 m, and the safe distance among quadrotors d_ij,safe = 0.4 m, respectively.
Figure 8: Simulation results of eight quadrotors (solid dots in different colors) exchanging positions using our method (left) and CMPC (right). (a,b) Overall view of the trajectories. (c,d) Side view of the trajectories. (e,f) Velocity variations. (g,h) Distance among quadrotors (only distances within 6 m are shown). The red dashed lines represent the maximum velocity v_max = 3 m/s, the minimum velocity v_min = −3 m/s, and the safe distance among quadrotors d_ij,safe = 0.4 m, respectively.
Figure 9: Simulation results of eight quadrotors (solid dots in different colors) exchanging positions using CVMPC (left) and DMPC (right). (a,b) Overall view of the trajectories. (c,d) Side view of the trajectories. (e,f) Velocity variations. (g,h) Distance among quadrotors (only distances within 6 m are shown). The red dashed lines represent the maximum velocity v_max = 3 m/s, the minimum velocity v_min = −3 m/s, and the safe distance among quadrotors d_ij,safe = 0.4 m, respectively.
Figure 10: Generated trajectories of multiple quadrotors (solid dots in different colors) exchanging positions using our method (DMPC-ADMM). (a) Two quadrotors. (b) Four quadrotors. (c) Sixteen quadrotors.
Figure 11: Comparison of communication ratio and computation time before and after adding the event-triggered mechanism.
23 pages, 10644 KiB  
Article
Relationship of Magnetic Domain and Permeability for Clustered Soft Magnetic Narrow Strips with In-Plane Inclined Magnetization Easy Axis on Distributed Magnetic Field
by Tomoo Nakai
Sensors 2024, 24(2), 706; https://doi.org/10.3390/s24020706 - 22 Jan 2024
Viewed by 1388
Abstract
A unique functionality has been reported for a thin-film soft magnetic strip with a certain angle of inclined magnetic anisotropy: its magnetic domain can be switched by applying a surface-normal field with a certain distribution on the element. The domain switches between a single domain and a multi-domain. Our previous study showed that this phenomenon appears even in the case of an adjacent configuration of multiple narrow strips. It was also reported that the magnetic permeability for an alternating current (AC) magnetic field changes drastically in the frequency range from 10 kHz to 10 MHz as a function of the strength of the distributed magnetic field. In this paper, the correspondence between the AC permeability and the magnetic domain as a function of the intensity of the distributed field is investigated. It was confirmed that the area of the Landau–Lifshitz-like multi-domain on the clustered narrow strips extends as a function of the intensity of the distributed magnetic field, and that this domain extension matches the permeability variation. The result points to applications of this phenomenon in a tunable inductor, electromagnetic shielding, or a sensor for detecting and memorizing the existence of a distributed magnetic field generated by a magnetic nanoparticle in the vicinity of the sensor.
(This article belongs to the Special Issue Challenges and Future Trends of Magnetic Sensors)
Figures:
Figure 1: Magnetization diagram with a hidden multi-domain state. (a) Without surface normal magnetic field. (b) With a distributed normal magnetic field. The numbers in the figure indicate: 1, longitudinal single domain (parallel direction); 2, longitudinal single domain (anti-parallel direction); 3, inclined Landau–Lifshitz domain [45,46] (hidden stable state); 3', inclined Landau–Lifshitz domain (state of transition available); 4, canted normal field with distribution, where Bz = const. and ΔBx/Δx = const.; 5, sensor element. The "Memorized state" and the "Reset state" indicate a function of three-state memory.
Figure 2: Schematic of the distributed surface normal magnetic field.
Figure 3: Schematic of the clustered many-body element.
Figure 4: Photograph of the measured elements on a glass substrate (the distance between the vertical lines on the bottom corresponds to 1 mm).
Figure 5: Schematic of the apparatus for observing the magnetic domain with application of the distributed surface normal field.
Figure 6: Photograph of the apparatus for observing the magnetic domain with application of the distributed surface normal field.
Figure 7: Measured variation in the in-plane magnetic field, Bx, as a function of the position along the length direction of the element, as a parameter of the coil current.
Figure 8: Measured variation in the surface normal field, Bz, as a function of the current of the excitation coil.
Figure 9: Dependence of the intensity of the distribution, dBx/dx, on the current of the excitation coil.
Figure 10: Observation divisions and corresponding index symbols of the following magnetic domain photographs.
Figure 11: Magnetic domain variation in the case of θ = 71° as a function of the applied intensity of the distribution of the surface normal magnetic field. (a) Intensity of the distributed field of 0.89 G/mm, (b) the result for 2.39 G/mm, (c) the result for 5.39 G/mm, and (d) for 9.89 G/mm.
Figure 12: Schematic expression of the magnetic domain as a tendency of the whole clustered element, corresponding to Figure 11. (a) Schematic expression for Figure 11a, (b) for Figure 11c, and (c) for Figure 11d. The arrows indicate the magnetic moment direction, and the colors show the moment direction, matching Figure 11.
Figure 13: Schematic expression of the effect of the intensity of the distributed magnetic field in the case of θ = 71°. The red arrow indicates the variation direction of the transition point.
Figure 14: Variation in the area ratio of the LLD stripe domain as a function of the intensity of the distributed magnetic field.
Figure 15: Previously reported measured variation in complex alternating current (AC) permeability as a parameter of the intensity of the distributed field [43]. (a) Real part permeability and (b) imaginary part permeability.
Figure 16: Magnetic domain variation in the case of θ = 90° as a function of the applied intensity of the distribution of the surface normal magnetic field. (a) Intensity of the distributed field of 0.89 G/mm, (b) the result for 2.39 G/mm, (c) the result for 5.39 G/mm, and (d) for 9.89 G/mm.
Figure 17: Schematics of the typical multi-domain structure in the case of θ = 90°, as indicated in Figure 16.
Figure 18: Schematic expression of the magnetic domain as a tendency of the whole clustered element, corresponding to Figure 16. (a) Schematic expression for Figure 16a, (b) for Figure 16c, and (c) for Figure 16d. The white arrows indicate the magnetic moment direction.
Figure 19: Schematic expression of the effect of the intensity of the distributed magnetic field in the case of θ = 90°. The red arrow indicates the variation direction of the transition point.
Figure 20: Measured variation in complex AC permeability for θ = 90° as a parameter of the intensity of the distributed field. (a) Real part permeability and (b) imaginary part permeability.
Figure 21: Matrix expression of the magnetic domain state of the clustered element depending on element width and easy-axis direction θ (M: multi-domain (LLD), S: single domain). The shaded part indicates the low AC permeability conditions.
12 pages, 2562 KiB  
Article
Triangle-Shaped Cerium Tungstate Nanoparticles Used to Modify Carbon Paste Electrode for Sensitive Hydroquinone Detection in Water Samples
by Vesna Stanković, Slađana Đurđić, Miloš Ognjanović, Gloria Zlatić and Dalibor Stanković
Sensors 2024, 24(2), 705; https://doi.org/10.3390/s24020705 - 22 Jan 2024
Cited by 5 | Viewed by 1632
Abstract
In this study, we propose an eco-friendly method for synthesizing cerium tungstate nanoparticles using hydrothermal techniques. We used scanning and transmission electron microscopy and X-ray diffraction to analyze the morphology of the synthesized nanoparticles. The results showed that the synthesized nanoparticles were uniform and highly crystalline, with a particle size of about 50 nm. The electrocatalytic properties of the nanoparticles were then investigated using cyclic voltammetry and electrochemical impedance spectroscopy. We further used the synthesized nanoparticles to develop an electrochemical sensor, based on a carbon paste electrode, that can detect hydroquinone. By optimizing the differential pulse voltammetric method, a wide linearity range of 0.4 to 45 µM and a low detection limit of 0.06 µM were obtained. The developed sensor also exhibited excellent repeatability (RSD up to 3.8%) and reproducibility (RSD below 5%). Interferences had an insignificant impact on the determination of the analyte, making it possible to use this method for monitoring hydroquinone concentrations in tap water. This study introduces a new approach to the chemistry of materials and the environment and demonstrates that a careful selection of components can open new horizons in analytical chemistry.
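As a brief illustration of how such a detection limit is typically estimated from a calibration line (LOD ≈ 3.3·σ/slope, with σ the standard deviation of blank measurements), the sketch below uses hypothetical current and concentration values, not the paper's data.

```python
import numpy as np

# Hypothetical DPV peak currents (uA) vs. hydroquinone concentration (uM).
conc_um = np.array([0.4, 1, 2, 5, 10, 20, 45])
peak_current_ua = np.array([0.05, 0.13, 0.25, 0.62, 1.21, 2.44, 5.48])

# Linear least-squares calibration fit: I = slope * c + intercept.
slope, intercept = np.polyfit(conc_um, peak_current_ua, 1)

# Standard deviation of replicate blank measurements (hypothetical, uA).
blank_sd = np.std([0.004, 0.006, 0.005, 0.007, 0.005], ddof=1)

lod = 3.3 * blank_sd / slope   # limit of detection
loq = 10 * blank_sd / slope    # limit of quantification
print(f"sensitivity = {slope:.3f} uA/uM, LOD = {lod:.3f} uM, LOQ = {loq:.3f} uM")
```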
(This article belongs to the Special Issue Chemical Sensors for Toxic Chemical Detection)
Figures:
Figure 1: (a) XRPD diffractogram of Ce2(WO4)3 nanoparticles, (b,c) HR-TEM micrographs of the prepared nanoparticles, and (d) FE-SEM micrograph of a wider sample area and corresponding EDS elemental mapping of Ce, W, and O, respectively.
Figure 2: (A) CV and (B) EIS diagrams of 5 mM [Fe(CN)6]3−/[Fe(CN)6]4− in 0.1 M KCl, obtained using bare CPE and CPE modified with 5, 10, and 15% of Ce2(WO4)3. (C) CV in 0.1 M KCl containing 5 mM [Fe(CN)6]3−/[Fe(CN)6]4−, scan rate range 10 to 300 mV s−1, with 15% Ce2(WO4)3/CPE as the working electrode, with (D) the corresponding linear relationship of peak currents vs. square root of scan rate.
Figure 3: (A) CVs of 10 µM HQ in BRBS in the pH range from 2 to 9; (B) dependence of peak potentials (Epa and Epc) and (C) peak currents (Ipa and Ipc) on the pH value of the supporting solution; (D) CVs of 10 µM HQ in BRBS, pH 5, in the scan rate range from 10 to 280 mV s−1; (E) linear relationship of signal peak currents (Ipa) vs. square root of scan rate; (F) proposed redox reaction of HQ on 15% Ce2(WO4)3/CPE.
Figure 4: (A) Calibration curves for successive additions of HQ in BRBS (pH 5) using 15% Ce2(WO4)3/CPE, (B) linear relationship between peak currents and HQ concentration, and (C) influence of interfering compounds on the HQ signal.
14 pages, 8472 KiB  
Technical Note
Experimental Investigation of Pore Pressure on Sandy Seabed around Submarine Pipeline under Irregular Wave Loading
by Changjing Fu, Jinguo Wang and Tianlong Zhao
Sensors 2024, 24(2), 704; https://doi.org/10.3390/s24020704 - 22 Jan 2024
Cited by 1 | Viewed by 1110
Abstract
The propagation of shallow-water waves may cause liquefaction of the seabed, thereby reducing its support capacity for pipelines and potentially leading to pipeline settlement or deformation. To ensure the stability of buried pipelines, it is crucial to thoroughly consider the excess pore pressure induced by irregular waves. This paper presents the findings of an experimental study on excess pore pressure caused by irregular waves on a sandy seabed. A series of two-dimensional wave flume experiments investigated the excess pore pressure generated by irregular waves. Based on the experimental results, this study examined the influences of irregular wave characteristics and pipeline proximity on excess pore pressure. Using the test data, a signal analysis method was employed to categorize the different modes of excess pore-water pressure growth into two types and to explore the mechanism underlying pore pressure development under the influence of irregular waves.
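As a loose sketch of the kind of signal analysis mentioned above, assuming the PyWavelets library, the snippet below separates a pore-pressure record into a slow cumulative trend and an oscillatory component via a multilevel wavelet decomposition; the synthetic signal, wavelet choice, and units are illustrative, not the paper's procedure.

```python
import numpy as np
import pywt

# Synthetic pore-pressure record: slow accumulation plus wave-frequency oscillation.
fs = 50.0                                   # sampling rate (Hz), hypothetical
t = np.arange(0, 120, 1 / fs)
signal = 0.02 * t + 0.5 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(t.size)

# Multilevel discrete wavelet decomposition.
coeffs = pywt.wavedec(signal, "db4", level=6)

# Keep only the approximation coefficients to reconstruct the slow (cumulative) trend.
trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(trend_coeffs, "db4")[: signal.size]

oscillatory = signal - trend
print(f"trend rise over record ~ {trend[-1] - trend[0]:.2f} (illustrative pressure units)")
```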
(This article belongs to the Section Environmental Sensing)
Figures:
Figure 1: A schematic diagram of the test section.
Figure 2: Data acquisition and processing system.
Figure 3: Pore-water pressure sensor: (a) sensors; (b) distribution.
Figure 4: Test pipeline.
Figure 5: Pattern of irregular waves in the test. (a) H1/3 = 0.1 m. (b) H1/3 = 0.15 m. (c) H1/3 = 0.18 m.
Figure 6: The wave conditions of this study [15,17,19,20]. In the figure, region I is suitable for Stokes third-order wave theory and is suitable for deep water; region II is suitable for the theory of elliptical cosine waves and can be applied to shallow water.
Figure 7: The spatial patterns of irregular wave-induced excess pore pressure on the seabed surrounding the pipeline.
Figure 8: The spatial patterns of excess pore pressure caused by irregular waves at different times vary.
Figure 9: The distributions of the irregular wave-induced excess pore pressure (|P|/P0) around the pipeline.
Figure 10: Decomposition chart of the measured excess pore pressure wavelet on the sandy seabed.
Figure 11: First type of response of excess pore pressure. (a) Excess pore pressure curve with loading time. (b) Cumulative excess pore-water pressure.
Figure 12: Second type of response of excess pore pressure. (a) Excess pore pressure curve with loading time. (b) Cumulative excess pore-water pressure.
Figure 13: Excess pore pressure for various conditions. (a) Relative water depth d/L (θ = 45°). (b) Relative wave height H/d (θ = 135°).
Figure 14: Pore pressure changes at different depths.
Figure 15: Pore pressure for different horizontal distances. (a) Δx = 0.06 m. (b) Δx = 0.04 m.
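The two growth modes mentioned in the abstract (oscillatory pore pressure with and without a residual build-up) can be separated with a simple moving-average decomposition of the measured signal. The sketch below is only a minimal illustration of that idea, not the authors' wavelet-based procedure; the sampling rate, averaging window, and threshold are assumed values.

```python
import numpy as np

def classify_pore_pressure(p, fs=100.0, wave_period=2.0, rise_threshold=0.2):
    """Split an excess pore-pressure trace into oscillatory and residual parts,
    then label the growth mode.

    p: pressure samples (kPa); fs: sampling rate (Hz);
    wave_period: representative wave period (s) used as the averaging window;
    rise_threshold: residual rise (kPa) above which the record is called cumulative.
    """
    window = max(1, int(round(wave_period * fs)))
    kernel = np.ones(window) / window
    trend = np.convolve(p, kernel, mode="same")   # slowly varying residual pressure
    oscillatory = p - trend                       # wave-by-wave oscillation
    residual_rise = trend[-window:].mean() - trend[:window].mean()
    mode = "cumulative build-up" if residual_rise > rise_threshold else "purely oscillatory"
    return oscillatory, trend, mode

# Synthetic example: a 2 s wave oscillation plus a slow build-up.
t = np.arange(0, 60, 0.01)
p = 0.5 * np.sin(2 * np.pi * t / 2.0) + 0.01 * t
_, _, mode = classify_pore_pressure(p)
print(mode)   # -> "cumulative build-up"
```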
13 pages, 7250 KiB  
Article
OS-BREEZE: Oil Spills Boundary Red Emission Zone Estimation Using Unmanned Surface Vehicles
by Oren Elmakis, Semion Polinov, Tom Shaked, Gabi Gordon and Amir Degani
Sensors 2024, 24(2), 703; https://doi.org/10.3390/s24020703 - 22 Jan 2024
Viewed by 1613
Abstract
Maritime transport, responsible for delivering over eighty percent of the world’s goods, is the backbone of the global delivery industry. However, it also presents considerable environmental risks, particularly regarding aquatic contamination. Nearly ninety percent of marine oil spills near shores are attributed to [...] Read more.
Maritime transport, responsible for delivering over eighty percent of the world’s goods, is the backbone of the global delivery industry. However, it also presents considerable environmental risks, particularly regarding aquatic contamination. Nearly ninety percent of marine oil spills near shores are attributed to human activities, highlighting the urgent need for continuous and effective surveillance. To address this pressing issue, this paper introduces a novel technique named OS-BREEZE. This method employs an Unmanned Surface Vehicle (USV) for assessing the extent of oil pollution on the sea surface. The OS-BREEZE algorithm directs the USV along the spill edge, facilitating rapid and accurate assessment of the contaminated area. The key contribution of this paper is the development of this novel approach for monitoring and managing marine pollution, which significantly reduces the path length required for mapping and estimating the size of the contaminated area. Furthermore, this paper presents a scale model experiment executed at the Coastal and Marine Engineering Research Institute (CAMERI). This experiment demonstrated the method’s enhanced speed and efficiency compared to traditional monitoring techniques. The experiment was methodically conducted across four distinct scenarios: the initial and advanced stages of an oil spill at the outer anchoring, as well as scenarios at the inner docking on both the stern and port sides. Full article
(This article belongs to the Special Issue Remote Sensing Application for Environmental Monitoring)
Figures:
Figure 1: Aerial imagery for oil spill simulation. The top image depicts a satellite view of the Haifa Port, with a highlighted red rectangle indicating the focus area. Below is a stitched image of a scaled-down model (1:120 ratio) of the same section of the port, recreated in a controlled pool environment.
Figure 2: This composite image illustrates a sequence of steps depicting the process of a USV monitoring an oil spill based on the OS-BREEZE method.
Figure 3: Schematic overview of the OS-BREEZE algorithm framework. This diagram is divided into three principal components: Experimental Environment, Boundary Tracing, and Mapping.
Figure 4: Demonstration of oil spill variations in a controlled experiment. The top images illustrate the spill's development over time, showcasing the initial and advanced stages. The bottom images depict the challenges posed by spills along the vessel and the pier. Each image captures the unique dynamics and spread patterns of oil spills in various port spill scenarios, serving as test cases for the OS-BREEZE algorithm.
Figure 5: Presents the ground truth for red zone areas in the experimented scenarios.
Figure 6: Comparative analysis of OS-BREEZE and Sweep methods for oil spill monitoring. The figure presents a series of images and graphs for the Outer Anchoring Spill in its initial and advanced stages, followed by the Inner Docking Stern Side and Port Side spills. The graphs plot coverage against path length (km), indicating the efficiency of the methods and illustrating the path patterns of both methods.
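The central claim of the abstract is that tracing only the spill edge is enough to estimate the contaminated area. As a hedged illustration (not the OS-BREEZE implementation), the positions logged by the USV along the boundary form a polygon whose area follows from the shoelace formula; the sample coordinates below are invented.

```python
import numpy as np

def enclosed_area(xy):
    """Shoelace formula: area enclosed by an ordered loop of (x, y) positions in metres."""
    x, y = np.asarray(xy, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Hypothetical boundary log of a USV circling a spill (metres).
boundary = [(0, 0), (40, 5), (55, 30), (35, 50), (5, 40)]
print(f"estimated spill area: {enclosed_area(boundary):.0f} m^2")
```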
15 pages, 9761 KiB  
Article
Proximity-Based Optical Camera Communication with Multiple Transmitters Using Deep Learning
by Muhammad Rangga Aziz Nasution, Herfandi Herfandi, Ones Sanjerico Sitanggang, Huy Nguyen and Yeong Min Jang
Sensors 2024, 24(2), 702; https://doi.org/10.3390/s24020702 - 22 Jan 2024
Cited by 1 | Viewed by 1741
Abstract
In recent years, optical camera communication (OCC) has garnered attention as a research focus. OCC uses optical light to transmit data by scattering the light in various directions. Although this can be advantageous with multiple transmitter scenarios, there are situations in which only [...] Read more.
In recent years, optical camera communication (OCC) has garnered attention as a research focus. OCC uses optical light to transmit data by scattering the light in various directions. Although this can be advantageous with multiple transmitter scenarios, there are situations in which only a single transmitter is permitted to communicate. Therefore, this method is proposed to fulfill the latter requirement, using the 2D object size obtained from an AI object-detection model to calculate the proximity of the objects. This approach enables prioritization among transmitters based on the transmitter proximity to the receiver for communication, facilitating alternating communication with multiple transmitters. The image processing employed when receiving the signals from transmitters enables communication to be performed without the need to modify the camera parameters. During the implementation, the distance between the transmitter and receiver varied between 1.0 and 5.0 m, and the system demonstrated a maximum data rate of 3.945 kbps with a minimum BER of 4.2 × 10⁻³. Additionally, the system achieved high accuracy from the refined YOLOv8 detection algorithm, reaching 0.98 mAP at a 0.50 IoU. Full article
(This article belongs to the Topic Machine Learning in Internet of Things)
Figures:
Figure 1: In 2D representation, several same-sized objects may have different 2D sizes if the objects are placed at different distances.
Figure 2: Multiple transmitter priority decision based on 2D object size.
Figure 3: Mapping of LED array transmitter.
Figure 4: Hybrid OpenCV tracker and YOLOv8 approach for LED array detection.
Figure 5: LED array frame processing transformation steps for the unmodified camera parameters scenario.
Figure 6: YOLOv8 object detection architecture.
Figure 7: Hardware equipment used for the simulation.
Figure 8: Two scenarios for the simulation. (a) The right-hand side transmitter placed ahead of the left-hand side transmitter, while (b) is the opposite of the first scenario.
Figure 9: Sample images in the dataset used for YOLOv8 object detection model fine-tuning.
Figure 10: (a) A green square tracker on the left-hand side LED transmitter shows that transmission is in progress, while (b) a blue square tracker on the right-hand side LED transmitter shows that transmission is in progress.
Figure 11: (a) shows better thresholding on a brighter background condition than (b). The latter figure has more noise due to the lower brightness of the background.
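The proximity rule described in the abstract, where the transmitter whose detected 2D size is largest is treated as the closest and is given the channel, reduces to a few lines of code. The sketch below assumes detections are available as (label, x1, y1, x2, y2) boxes from any object detector; it illustrates the prioritization logic only and is not the authors' code.

```python
def select_active_transmitter(detections):
    """Pick the transmitter with the largest 2D bounding-box area (assumed closest).

    detections: iterable of (label, x1, y1, x2, y2) in pixels.
    Returns the label of the prioritized transmitter, or None if nothing was detected.
    """
    best_label, best_area = None, 0.0
    for label, x1, y1, x2, y2 in detections:
        area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        if area > best_area:
            best_label, best_area = label, area
    return best_label

# Two LED-array transmitters detected in the same frame (hypothetical boxes).
frame = [("tx_left", 100, 120, 180, 200), ("tx_right", 400, 110, 520, 260)]
print(select_active_transmitter(frame))   # -> "tx_right" (larger box, assumed nearer)
```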
12 pages, 565 KiB  
Article
Efficient Cumulant-Based Automatic Modulation Classification Using Machine Learning
by Ben Dgani and Israel Cohen
Sensors 2024, 24(2), 701; https://doi.org/10.3390/s24020701 - 22 Jan 2024
Cited by 2 | Viewed by 1628
Abstract
This paper introduces a new technique for automatic modulation classification (AMC) in Cognitive Radio (CR) networks. The method employs a straightforward classifier that utilizes high-order cumulant for training. It focuses on the statistical behavior of both analog modulation and digital schemes, which have [...] Read more.
This paper introduces a new technique for automatic modulation classification (AMC) in Cognitive Radio (CR) networks. The method employs a straightforward classifier that utilizes high-order cumulants for training. It focuses on the statistical behavior of both analog and digital modulation schemes, which have received limited attention in previous works. The simulation results show that the proposed method performs well with different signal-to-noise ratios (SNRs) and channel conditions. The classifier’s performance is superior to that of complex deep learning methods, making it suitable for deployment in CR networks’ end units, especially in military and emergency service applications. The proposed method offers a cost-effective and high-quality solution for AMC that meets the strict demands of these critical applications. Full article
(This article belongs to the Special Issue Cognitive Radio Networks: Technologies, Challenges and Applications)
Figures:
Figure 1: P_d based on train set SNR threshold. (a) Full view; (b) zoomed-in view.
Figure 2: P_d based on tree depth. (a) Full view; (b) zoomed-in view.
Figure 3: P_d comparison for the suggested method.
Figure 4: Confusion matrices for different tree depths: (a) depth = 7; (b) depth = 8.
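For readers unfamiliar with cumulant features, the sketch below computes standard fourth-order cumulants (C40, C41, C42) from complex baseband samples; statistics of this kind are what a simple tree classifier can be trained on. The normalization and the exact feature set used by the authors are not reproduced here.

```python
import numpy as np

def fourth_order_cumulants(x):
    """Sample estimates of C20, C21, C40, C41, C42 for a zero-mean complex signal x."""
    x = np.asarray(x, dtype=complex)
    M20 = np.mean(x * x)
    M21 = np.mean(x * np.conj(x))
    M40 = np.mean(x ** 4)
    M41 = np.mean(x ** 3 * np.conj(x))
    M42 = np.mean(np.abs(x) ** 4)
    C40 = M40 - 3 * M20 ** 2
    C41 = M41 - 3 * M20 * M21
    C42 = M42 - np.abs(M20) ** 2 - 2 * M21 ** 2
    return {"C20": M20, "C21": M21, "C40": C40, "C41": C41, "C42": C42}

# QPSK symbols at roughly 10 dB SNR (synthetic example).
rng = np.random.default_rng(0)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
noise = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) * np.sqrt(0.05)
# For the noise-free unit-power constellation, |C42| is about 1 for QPSK versus
# about 2 for BPSK, which is the kind of separation a tree classifier exploits.
print(fourth_order_cumulants(symbols + noise))
```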
17 pages, 5864 KiB  
Article
Deep Reinforcement Learning for Autonomous Driving with an Auxiliary Actor Discriminator
by Qiming Gao, Fangle Chang, Jiahong Yang, Yu Tao, Longhua Ma and Hongye Su
Sensors 2024, 24(2), 700; https://doi.org/10.3390/s24020700 - 22 Jan 2024
Cited by 1 | Viewed by 1892
Abstract
In the research of robot systems, path planning and obstacle avoidance are important research directions, especially in unknown dynamic environments where flexibility and rapid decision making are required. In this paper, a state attention network (SAN) was developed to extract features to represent [...] Read more.
In the research of robot systems, path planning and obstacle avoidance are important research directions, especially in unknown dynamic environments where flexibility and rapid decision making are required. In this paper, a state attention network (SAN) was developed to extract features to represent the interaction between an intelligent robot and its obstacles. An auxiliary actor discriminator (AAD) was developed to calculate the probability of a collision. Goal-directed and gap-based navigation strategies were proposed to guide robotic exploration. The proposed policy was trained through simulated scenarios and updated by the Soft Actor-Critic (SAC) algorithm. The robot executed the action depending on the AAD output. Heuristic knowledge (HK) was developed to prevent blind exploration of the robot. Compared to other methods, adopting our approach in robot systems can help robots converge towards an optimal action strategy. Furthermore, it enables them to explore paths in unknown environments with fewer moving steps (showing a decrease of 33.9%) and achieve higher average rewards (showing an increase of 29.15%). Full article
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
Figures:
Figure 1: Two indoor environments (8 × 8 m²) were created, in which solid black lines were walls and hollow geometric figures were obstacles. (a) Environment I. (b) Environment II.
Figure 2: The modified SCA model. The actor was a neural network that can learn a navigation strategy from the current state and make an action in real time, and the critic was a q-value function fitted using a neural network to evaluate state–action pairs. "&" means "and".
Figure 3: The proposed SAN's structure: (a) robot's information and (b) local environment information were analyzed to advance the model's perceptual ability, in which (c) the seg-attention module structure extracted key information using a self-attention mechanism. The hidden layers, neuron number per layer, and output dimension of the MLP encoder are shown next to the network. The image size of the signed distance field is (64, 64, 3) and its output is (32, 32, 1). The final feature scale is (1) after the seg-attention module.
Figure 4: VO and RVO in a workspace configuration, where A and B are two moving robots in the 2D workspace (p_x, p_y); r_A and r_B are the radii describing the current states of robots A and B; p_A and p_B are the positions of their centers of mass; v_A and v_B are the current velocities of A and B; VO_A|B is the VO area of robot A generated by robot B; and RVO_A|B is the set of velocities that robot A can choose in order to remain safe. (a) Workspace, (b) VO, (c) RVO, and (d) static line obstacle.
Figure 5: Graphical representation of radar data processing: (a) represent the radar data as images, (b) extract the obstacle boundary, (c) calculate the distance obstacle map s_t^map using the signed distance field algorithm, and (d) show the result in the Gazebo environment. (a) Current LiDAR data, (b) extracted obstacles, (c) signed distance field, and (d) Gazebo environment.
Figure 6: Reward function in Environment II. The start point is [3.0, −3.0]^T, the end point is [−3.0, 3.0]^T, λ = 100, and the reward value falls in [−2, 1]. (a) Map, (b) reward.
Figure 7: Nine trajectories that the robot obtained by using (a) SAC, (b) SAN+SAC, and (c) SAN+SAC+AAD models for three different end points in Environment I, in which the blue dot denotes the start point, the red circle in the upper right corner is the end point, and the black areas are four walls and obstacles.
Figure 8: The average rewards of the SAC, SAN+SAC, and SAN+SAC+AAD models in Environment I. Light lines mean the average rewards, and dark lines mean the values after smoothing.
Figure 9: Nine trajectories that the robot obtained by using (a) SAC, (b) SAN+SAC, and (c) SAN+SAC+AAD models for three different end points in Environment II, in which the blue dot denotes the start point, the red circle in the upper right corner is the end point, and the black areas are four walls and obstacles.
Figure 10: The visual feature map bypasses the SAN module; the SDF picture was used as the input of the SAN module, the feature maps were extracted using a CNN, the feature maps were fused using spatial attention, and the seg-attention network was used to correlate the characteristics of different regions in the fused feature map, in which the blue areas mean the safe path and the yellow areas mean the obstacle wall.
Figure 11: (a) Real scenario, (b) a map created using the Gmapping algorithm, (c) robot trajectory, in which the blue point is the starting point and the green point is the target point.
Figure 12: Visualization of LiDAR data using the SAN module in a real scenario. (a) Real scene, (b) LiDAR data, (c) obstacle information, and (d) signed distance field.
Figure 13: The visual feature map bypasses the SAN module in a real scenario, in which the blue areas mean the safe path and the yellow areas mean the obstacle wall.
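The role of the auxiliary actor discriminator, as described in the abstract, is to veto actions that are likely to end in a collision. A minimal, hedged sketch of that decision logic is shown below; `collision_probability` stands in for the trained AAD network and `gap_based_action` for the gap-based navigation strategy, both of which are assumed interfaces rather than the paper's code.

```python
import numpy as np

def choose_action(state, policy_action, collision_probability, gap_based_action,
                  p_max=0.5):
    """Execute the policy's action only if the discriminator deems it safe.

    policy_action: action proposed by the SAC actor for `state`.
    collision_probability: callable (state, action) -> estimated collision probability.
    gap_based_action: callable (state) -> safe fallback action towards the widest gap.
    """
    if collision_probability(state, policy_action) > p_max:
        return gap_based_action(state)       # fall back to the safer heuristic action
    return policy_action

# Toy stand-ins for the learned components (assumed, not the paper's models).
state = {"goal_dist": 3.0, "min_obstacle_dist": 0.25}
proposed = np.array([0.8, 0.1])              # (linear velocity, angular velocity)
prob = lambda s, a: 0.9 if s["min_obstacle_dist"] < 0.3 and a[0] > 0.5 else 0.1
fallback = lambda s: np.array([0.2, 0.6])    # slow down and steer towards an open gap
print(choose_action(state, proposed, prob, fallback))   # -> fallback action
```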
17 pages, 1671 KiB  
Article
Simple Scalable Multimodal Semantic Segmentation Model
by Yuchang Zhu and Nanfeng Xiao
Sensors 2024, 24(2), 699; https://doi.org/10.3390/s24020699 - 22 Jan 2024
Viewed by 1830
Abstract
Visual perception is a crucial component of autonomous driving systems. Traditional approaches for autonomous driving visual perception often rely on single-modal methods, and semantic segmentation tasks are accomplished by inputting RGB images. However, for semantic segmentation tasks in autonomous driving visual perception, a [...] Read more.
Visual perception is a crucial component of autonomous driving systems. Traditional approaches for autonomous driving visual perception often rely on single-modal methods, and semantic segmentation tasks are accomplished by inputting RGB images. However, for semantic segmentation tasks in autonomous driving visual perception, a more effective strategy involves leveraging multiple modalities, because the different sensors of an autonomous driving system provide diverse information, and the complementary features among different modalities enhance the robustness of the semantic segmentation model. Contrary to the intuitive belief that more modalities lead to better accuracy, our research reveals that adding modalities to traditional semantic segmentation models can sometimes decrease precision. Inspired by the residual thinking concept, we propose a multimodal visual perception model which is capable of maintaining or even improving accuracy with the addition of any modality. Our approach is straightforward, using RGB as the main branch and employing the same feature extraction backbone for other modal branches. The modal score module (MSM) evaluates channel and spatial scores of all modality features, measuring their importance for overall semantic segmentation. Subsequently, the modal branches provide additional features to the RGB main branch through the features complementary module (FCM). Leveraging the residual thinking concept further enhances the feature extraction capabilities of all the branches. Through extensive experiments, we derived several conclusions. The integration of certain modalities into traditional semantic segmentation models tends to result in a decline in segmentation accuracy. In contrast, our proposed simple and scalable multimodal model demonstrates the ability to maintain segmentation precision when accommodating any additional modality. Moreover, our approach surpasses some state-of-the-art multimodal semantic segmentation models. Additionally, we conducted ablation experiments on the proposed model, confirming that the application of the proposed MSM, FCM, and the incorporation of residual thinking contribute significantly to the enhancement of the model. Full article
(This article belongs to the Section Vehicular Sensing)
Figures:
Figure 1: Three Approaches to Implementing Multimodal Semantic Segmentation.
Figure 2: The Structure of Scalable Multimodal Semantic Segmentation Framework.
Figure 3: The Details of the Stage in Scalable Multimodal Semantic Segmentation Model.
Figure 4: The Structure of Multimodal Semantic Segmentation Head.
Figure 5: The Structure of Multimodal Score Module (MSM).
Figure 6: The Structure of Feature Complementary Module (FCM).
Figure 7: The Visualization of Experiment Results.
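The fusion rule sketched in the abstract, in which RGB features remain the main branch and each extra modality contributes only a scored residual correction, can be written compactly. The PyTorch snippet below is a schematic reading of the MSM/FCM idea under assumed tensor shapes and score functions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ScoredResidualFusion(nn.Module):
    """Fuse auxiliary modality features into the RGB branch as scored residuals."""
    def __init__(self, channels):
        super().__init__()
        # Channel score (squeeze-and-excite style) and spatial score per auxiliary modality.
        self.channel_score = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                           nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial_score = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, rgb_feat, modal_feats):
        fused = rgb_feat
        for feat in modal_feats:                 # any number of extra modalities
            score = self.channel_score(feat) * self.spatial_score(feat)
            fused = fused + score * feat         # residual, so an added modality cannot erase RGB information
        return fused

# Example: RGB plus depth and LiDAR-range feature maps at the same resolution.
fusion = ScoredResidualFusion(channels=64)
rgb = torch.randn(1, 64, 32, 64)
out = fusion(rgb, [torch.randn(1, 64, 32, 64), torch.randn(1, 64, 32, 64)])
print(out.shape)   # torch.Size([1, 64, 32, 64])
```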
16 pages, 6375 KiB  
Article
Using Diffraction Deep Neural Networks for Indirect Phase Recovery Based on Zernike Polynomials
by Fang Yuan, Yang Sun, Yuting Han, Hairong Chu, Tianxiang Ma and Honghai Shen
Sensors 2024, 24(2), 698; https://doi.org/10.3390/s24020698 - 22 Jan 2024
Cited by 2 | Viewed by 1706
Abstract
The phase recovery module is dedicated to acquiring phase distribution information within imaging systems, enabling the monitoring and adjustment of a system’s performance. Traditional phase inversion techniques exhibit limitations, such as the speed of the sensor and complexity of the system. Therefore, we [...] Read more.
The phase recovery module is dedicated to acquiring phase distribution information within imaging systems, enabling the monitoring and adjustment of a system’s performance. Traditional phase inversion techniques exhibit limitations, such as the speed of the sensor and complexity of the system. Therefore, we propose an indirect phase retrieval approach based on a diffraction neural network. By utilizing non-source diffraction through multiple layers of diffraction units, this approach reconstructs coefficients based on Zernike polynomials from incident beams with distorted phases, thereby indirectly synthesizing interference phases. Through network training and simulation testing, we validate the effectiveness of this approach, showcasing the trained network’s capacity for single-order phase recognition and multi-order composite phase inversion. We conduct an analysis of the network’s generalization and evaluate the impact of the network depth on the restoration accuracy. The test results reveal an average root mean square error of 0.086λ for phase inversion. This research provides new insights and methodologies for the development of the phase recovery component in adaptive optics systems. Full article
(This article belongs to the Section Optical Sensors)
Figures:
Figure 1: Schematic diagram of the DDNN-based indirect phase recovery scheme. (a) Illustration of the beam modulation process in the optical system. When parallel light with an unknown distorted wavefront passes through the pre-trained diffraction layers, a concentrated intensity distribution is obtained in a specific region of the output plane. (b) Flowchart for post-processing in the computer. After simple operations such as summing the output plane intensity collected by the imaging module, applying the sigmoid inverse transformation, and combining them, the predicted target phase is obtained.
Figure 2: Conceptual diagram of the proposed indirect phase recovery scheme.
Figure 3: Schematic diagram of the output plane assignment method. S_i represents the ten areas that are assigned values.
Figure 4: Schematic diagram of the network structure.
Figure 5: Statistic graph of wavefront errors of the two datasets. (a) By calculating the distribution mean of 10 categories of single Zernike polynomials in Dataset 1, the RMS and PV distributions of 1000 training data and 200 test data are obtained. (b) RMS and PV distributions of 10,000 training data and 2000 test data in Dataset 2.
Figure 6: The loss function and MSE decline curves. (a) Network training process based on combined Zernike wavefront distortion data. (b) Network training process based on single Zernike wavefront distortion data.
Figure 7: The predicted phase distributions for each diffraction layer obtained after training. (a) Combined Zernike wavefront distortion training results. (b) Single Zernike wavefront distortion training results.
Figure 8: Output plane intensity when scanning the magnitude of a single-order aberration input phase within the range of [−3λ, 3λ].
Figure 9: Analysis of non-degenerate response test results. (a) Correspondence between output intensity and multiple of base phase. (b) Correspondence between the transformed predicted coefficient and multiple of base phase.
Figure 10: Results of the single-order Zernike aberration recognition test for the first ten orders.
Figure 11: The combined phase of the input, the intensity of the output plane, and the corresponding true value of the output plane.
Figure 12: Analysis of network output results. (a) Zernike polynomial coefficients obtained after transformation of the output intensity and comparison with true coefficients. (b) Comparative analysis of predicted results and ground truth.
Figure 13: Comparison of the original phase and corrected residual phases.
Figure 14: Generalization test results of the five-layer network. The red + represents outliers, and the gray dotted line represents the line connecting the medians of each group of data.
Figure 15: Generalization test results of 4-layer and 6-layer networks, with comparison of test results for networks with different numbers of layers. The red + represents outliers, and the gray dotted line represents the line connecting the medians of each group of data.
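The reported 0.086λ figure is an RMS error between the true and the reconstructed wavefront. The snippet below shows, under assumed low-order Zernike definitions and an invented pair of coefficient sets, how such a residual RMS is computed once a network has returned its coefficient estimates; it is not the paper's evaluation code.

```python
import numpy as np

# A few low-order Zernike modes on the unit pupil (rho, theta), in a common convention.
ZERNIKE = {
    "tip":      lambda r, t: r * np.cos(t),
    "tilt":     lambda r, t: r * np.sin(t),
    "defocus":  lambda r, t: 2 * r**2 - 1,
    "astig_0":  lambda r, t: r**2 * np.cos(2 * t),
    "astig_45": lambda r, t: r**2 * np.sin(2 * t),
}

def wavefront(coeffs, n=256):
    """Sum coefficient-weighted Zernike modes over the unit pupil (units of lambda)."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    pupil = r <= 1.0
    phase = sum(c * ZERNIKE[name](r, t) for name, c in coeffs.items())
    return np.where(pupil, phase, np.nan)

true_c = {"defocus": 0.30, "astig_0": -0.15, "tip": 0.05}   # invented ground truth
pred_c = {"defocus": 0.27, "astig_0": -0.11, "tip": 0.09}   # invented network output
residual = wavefront(true_c) - wavefront(pred_c)
print(f"residual RMS = {np.sqrt(np.nanmean(residual**2)):.3f} lambda")
```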
28 pages, 1447 KiB  
Article
Analytical Model of the Connection Handoff in 5G Mobile Networks with Call Admission Control Mechanisms
by Mariusz Głąbowski, Maciej Sobieraj and Maciej Stasiak
Sensors 2024, 24(2), 697; https://doi.org/10.3390/s24020697 - 22 Jan 2024
Viewed by 1328
Abstract
Handoff mechanisms are very important in fifth-generation (5G) mobile networks because of the cellular architecture employed to maximize spectrum utilization. Together with call admission control (CAC) mechanisms, they enable better optimization of bandwidth use. The primary objective of the research presented in this [...] Read more.
Handoff mechanisms are very important in fifth-generation (5G) mobile networks because of the cellular architecture employed to maximize spectrum utilization. Together with call admission control (CAC) mechanisms, they enable better optimization of bandwidth use. The primary objective of the research presented in this article is to analyze traffic levels, aiming to optimize traffic management and handling. This article considers the two most popular CAC mechanisms: the resource reservation mechanism and the threshold mechanism. It presents an analytical approach to occupancy distribution and blocking probability calculation in 5G mobile networks, incorporating connection handoff and CAC mechanisms for managing multiple traffic streams generated by multi-service sources. Because the developed analytical model is approximate, its accuracy was also examined. For this purpose, the results of analytical calculations of the blocking probability in a group of 5G cells are compared with the simulation data. This paper is an extended version of our paper published at the 17th ConTEL 2023. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Telecommunications and Sensing)
Figures:
Figure 1: Generalized model of the limited-availability group.
Figure 2: Model of the limited-availability group with reservation mechanisms. Class 1 belongs to the set ℝ.
Figure 3: Model of the limited-availability group with the threshold mechanism. Class 1 belongs to the set 𝕋.
Figure 4: Connection handoff in a group of cells.
Figure 5: Group 1—method 1; blocking probability in a group of cells with connection handoff.
Figure 6: Group 1—method 2; blocking probability in a group of cells with connection handoff.
Figure 7: Group 2—method 1; blocking probability in a group of cells with connection handoff.
Figure 8: Group 2—method 2; blocking probability in a group of cells with connection handoff.
Figure 9: Group 3—method 1; blocking probability in a group of cells with connection handoff.
Figure 10: Group 3—method 2; blocking probability in a group of cells with connection handoff.
Figure 11: Group 1—method 1; blocking probability in a group of cells with the connection handoff and reservation mechanism.
Figure 12: Group 1—method 2; blocking probability in a group of cells with the connection handoff and reservation mechanism.
Figure 13: Group 2—method 1; blocking probability in a group of cells with the connection handoff and reservation mechanism.
Figure 14: Group 2—method 2; blocking probability in a group of cells with the connection handoff and reservation mechanism.
Figure 15: Group 3—method 1; blocking probability in a group of cells with the connection handoff and reservation mechanism.
Figure 16: Group 3—method 2; blocking probability in a group of cells with the connection handoff and reservation mechanism.
Figure 17: Group 1—method 1; blocking probability in a group of cells with the connection handoff and threshold mechanism.
Figure 18: Group 1—method 2; blocking probability in a group of cells with the connection handoff and threshold mechanism.
Figure 19: Group 2—method 1; blocking probability in a group of cells with the connection handoff and threshold mechanism.
Figure 20: Group 2—method 2; blocking probability in a group of cells with the connection handoff and threshold mechanism.
Figure 21: Group 3—method 1; blocking probability in a group of cells with the connection handoff and threshold mechanism.
Figure 22: Group 3—method 2; blocking probability in a group of cells with the connection handoff and threshold mechanism.
Figure 23: Occupancy distribution in assembly K = 1 in group 1 without CAC mechanisms.
Figure 24: Occupancy distribution in assembly K = 6 in group 1 without CAC mechanisms.
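The occupancy-distribution calculations behind models of this kind are usually built on recursions of the Kaufman–Roberts type. The sketch below shows only that basic building block for a full-availability group carrying multi-rate Erlang traffic, with invented traffic values; the limited-availability, handoff, and CAC extensions analyzed in the paper are not reproduced here.

```python
def kaufman_roberts(capacity, classes):
    """Occupancy distribution and per-class blocking for a full-availability group.

    capacity: number of basic bandwidth units (BBUs) in the group.
    classes:  list of (offered_traffic_erl, demanded_bbus) per traffic class.
    """
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    for n in range(1, capacity + 1):
        q[n] = sum(a * t * q[n - t] for a, t in classes if n - t >= 0) / n
    norm = sum(q)
    p = [x / norm for x in q]                       # normalized occupancy distribution
    blocking = {i: sum(p[capacity - t + 1:]) for i, (a, t) in enumerate(classes)}
    return p, blocking

# Example: 30 BBUs, voice (10 Erl, 1 BBU) and video (2 Erl, 6 BBUs) -- invented values.
dist, block = kaufman_roberts(30, [(10.0, 1), (2.0, 6)])
print({k: round(v, 4) for k, v in block.items()})
```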
20 pages, 4612 KiB  
Article
Buck Converter with Cubic Static Conversion Ratio
by Delia-Anca Botila, Ioana-Monica Pop-Calimanu and Dan Lascu
Sensors 2024, 24(2), 696; https://doi.org/10.3390/s24020696 - 22 Jan 2024
Cited by 1 | Viewed by 1299
Abstract
The paper introduces a step-down converter that exhibits a static conversion ratio of cubic nature, providing an output voltage which is much closer to the input voltage, and at the same duty cycle, compared to a wide class of one-transistor buck-type topologies. Although [...] Read more.
The paper introduces a step-down converter that exhibits a static conversion ratio of cubic nature, providing an output voltage which is much closer to the input voltage, and at the same duty cycle, compared to a wide class of one-transistor buck-type topologies. Although the proposed topology contains many components, its control is still simple, as it employs only one transistor. A DC analysis is performed, and the semiconductor stresses are derived in terms of the input and output voltages and the output power, revealing that the semiconductor voltage stresses remain acceptable and are lower than in another cubic buck topology. All detailed design equations are provided. The state-space approach is used to analyze the converter in the presence of conduction losses, and a procedure for calculating the individual power dissipation is provided. The feasibility of the proposed cubic buck topology is first validated by computer simulation and finally confirmed by an experimental 12 V–10 W prototype. Full article
Figures:
Figure 1: The cubic boost converter from [38]—switching cell extraction.
Figure 2: The proposed cubic buck topology.
Figure 3: Topological states of the proposed converter, revealing the conducting devices: (a) first topological state; (b) second topological state.
Figure 4: Static conversion ratio vs. duty cycle comparison between different buck-type converters and the proposed cubic buck topology: Classical [1] (light green); Cubic [36] (dark blue); Stacked, n = 3 [24] (light blue); QBC3 [25] (magenta); Quadratic [20] (red); Single-switch [26] (yellow); Semi-quadratic [19] (black); Proposed (dark green).
Figure 5: Theoretical waveforms for the reactive elements of the proposed cubic buck converter.
Figure 6: Theoretical waveforms for the semiconductor devices of the proposed cubic buck converter.
Figure 7: Proposed cubic buck converter including the lossy elements.
Figure 8: Simulation results (voltages with blue and currents with red): (a) for inductor L1; (b) for inductor L2; (c) for inductor L3; (d) for capacitor C1; (e) for capacitor C2; (f) for capacitor C3; (g) for transistor S; (h) for diode D1; (i) for diode D2; (j) for diode D3; (k) for diode D4; (l) for diode D5.
Figure 9: Experimental waveforms: reference signal, vD5 (dark blue); inductor L1 voltage, vL1 (red); current through L1, iL1 (green); output voltage Vo (purple).
Figure 10: Experimental waveforms: reference signal, vD5 (dark blue); inductor L2 voltage, vL2 (red); current through L2, iL2 (green).
Figure 11: Experimental waveforms: reference signal, vD5 (dark blue); inductor L3 voltage, vL3 (red); current through L3, iL3 (green).
Figure 12: Comparison of the ideal and experimental static conversion ratio against the duty cycle, with constant load resistance.
Figure 13: Experimental efficiency against output power.
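The practical benefit of a cubic characteristic is easy to see numerically: because D³ < D² < D for any duty cycle below unity, a cubic buck reaches a deep step-down without extreme duty cycles. The snippet below compares the ideal lossless ratios D, D², and D³; treating the proposed converter's ideal ratio as D³ follows from the title and Figure 4, and the 48 V-to-12 V example is purely illustrative.

```python
import numpy as np

duty = np.linspace(0.1, 0.9, 9)
ratios = {"classical (D)": duty, "quadratic (D^2)": duty**2, "cubic (D^3)": duty**3}
for name, m in ratios.items():
    print(name, np.round(m, 3))

# Duty cycle needed for a 12 V output from an assumed 48 V input (M = 0.25), ideal case:
for name, power in [("classical", 1), ("quadratic", 2), ("cubic", 3)]:
    print(f"{name}: D = {0.25 ** (1 / power):.3f}")
```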
20 pages, 24482 KiB  
Article
Knee Angle Estimation with Dynamic Calibration Using Inertial Measurement Units for Running
by Matthew B. Rhudy, Joseph M. Mahoney and Allison R. Altman-Singles
Sensors 2024, 24(2), 695; https://doi.org/10.3390/s24020695 - 22 Jan 2024
Viewed by 2165
Abstract
The knee flexion angle is an important measurement for studies of the human gait. Running is a common activity with a high risk of knee injury. Studying the running gait in realistic situations is challenging because accurate joint angle measurements typically come from [...] Read more.
The knee flexion angle is an important measurement for studies of the human gait. Running is a common activity with a high risk of knee injury. Studying the running gait in realistic situations is challenging because accurate joint angle measurements typically come from optical motion-capture systems constrained to laboratory settings. This study considers the use of shank and thigh inertial sensors within three different filtering algorithms to estimate the knee flexion angle for running without requiring sensor-to-segment mounting assumptions, body measurements, specific calibration poses, or magnetometers. The objective of this study is to determine the knee flexion angle within running applications using accelerometer and gyroscope information only. Data were collected for a single test participant (21-year-old female) at four different treadmill speeds and used to validate the estimation results for three filter variations with respect to a Vicon optical motion-capture system. The knee flexion angle filtering algorithms resulted in root-mean-square errors of approximately three degrees. The results of this study indicate estimation results that are within acceptable limits of five degrees for clinical gait analysis. Specifically, a complementary filter approach is effective for knee flexion angle estimation in running applications. Full article
(This article belongs to the Special Issue Wearable Sensors for Gait and Motion Analysis)
Figures:
Figure 1: Image of the IMU mounting on the participant.
Figure 2: Diagram of IMU mounting locations, coordinate systems, and knee flexion angle, α.
Figure 3: Flowchart summarizing the data processing prior to the knee flexion angle estimation filtering algorithms.
Figure 4: Illustration of peak detection used to identify foot strikes in the shank acceleration signal in the pilot data. The unfiltered magnitude of the shank acceleration is used to identify peaks, while the filtered acceleration signals are used within the estimation algorithms.
Figure 5: Excerpts from accelerometer measurements from the pilot data showing the unfiltered (solid line) and filtered (dotted line) accelerometer measurements from the pilot dataset for the IMU sensors mounted on the (a) shank and (b) thigh.
Figure 6: Excerpts from gyroscope measurements from the pilot data showing the unfiltered (solid line) and filtered (dotted line) gyroscope measurements from the pilot dataset for the IMU sensors mounted on the (a) shank and (b) thigh.
Figure 7: Excerpts from gyroscope measurements from the pilot data showing the original data (as dotted lines) and the regions that were selected for optimization (as solid lines) from the pilot dataset. (a) A portion of the shank gyroscope measurements. (b) A portion of the thigh gyroscope measurements.
Figure 8: Excerpts from knee flexion angle estimation from gyroscope only and accelerometer only with respect to the Vicon motion-capture system reference measurements. (a) A portion of the knee flexion angle estimates at the beginning of the pilot dataset. (b) A portion of the knee flexion angle estimates toward the end of the pilot dataset.
Figure 9: Excerpts from knee flexion angle estimations from CF, KF, and EKF with respect to the Vicon motion-capture system reference measurements. (a) A portion of the knee flexion angle estimates at the beginning of the pilot dataset. (b) A portion of the knee flexion angle estimates toward the end of the pilot dataset.
Figure 10: RMSE tuning results for the filter tuning parameters for (a) dataset #1, (b) dataset #2, (c) dataset #3, and (d) dataset #4.
Figure 11: Illustration of knee flexion angles from CF, KF, and EKF with respect to the Vicon motion-capture system reference measurements from the validation data in (a) dataset #1, (b) dataset #2, (c) dataset #3, and (d) dataset #4.
Figure 12: Visualization of knee flexion angle distributions from CF, KF, and EKF with respect to the Vicon motion-capture system reference measurements from the validation data in (a) dataset #1, (b) dataset #2, (c) dataset #3, and (d) dataset #4. The gray shaded region represents a 95% confidence interval for the motion-capture knee angle estimates. This represents the variation in the knee angle throughout the dataset. Mean (solid lines) and mean ± 2 standard deviations (dotted lines) are shown for each estimation filter.
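As a concrete reference for the complementary-filter variant highlighted in the conclusions, the sketch below fuses a gyroscope-integrated knee angle with an accelerometer-derived one. Segment inclinations are taken from the accelerometers under a quasi-static assumption, and the weight `alpha`, axis conventions, and sample data are assumptions rather than the authors' calibration.

```python
import numpy as np

def knee_angle_cf(acc_thigh, acc_shank, gyro_thigh, gyro_shank, fs=100.0, alpha=0.98):
    """Complementary-filter knee flexion angle (degrees) from thigh and shank IMUs.

    acc_*:  (N, 2) sagittal-plane accelerations (forward, vertical axes).
    gyro_*: (N,) sagittal-plane angular rates (deg/s), flexion positive.
    """
    dt = 1.0 / fs
    # Accelerometer-only segment inclinations (valid when motion acceleration is small).
    incl_thigh = np.degrees(np.arctan2(acc_thigh[:, 0], acc_thigh[:, 1]))
    incl_shank = np.degrees(np.arctan2(acc_shank[:, 0], acc_shank[:, 1]))
    angle_acc = incl_thigh - incl_shank
    rel_rate = gyro_thigh - gyro_shank
    angle = np.empty_like(angle_acc)
    angle[0] = angle_acc[0]
    for k in range(1, len(angle)):
        gyro_estimate = angle[k - 1] + rel_rate[k] * dt                 # low noise, drift-prone
        angle[k] = alpha * gyro_estimate + (1 - alpha) * angle_acc[k]   # drift corrected
    return angle

# Two seconds of synthetic standing data: the knee angle should stay near zero.
n = 200
acc = np.tile([0.0, 9.81], (n, 1))
print(knee_angle_cf(acc, acc, np.zeros(n), np.zeros(n))[-1])
```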
12 pages, 3603 KiB  
Article
Self-Biased Magneto-Electric Antenna for Very-Low-Frequency Communications: Exploiting Magnetization Grading and Asymmetric Structure-Induced Resonance
by Chung Ming Leung, Haoran Zheng, Jing Yang, Tao Wang and Feifei Wang
Sensors 2024, 24(2), 694; https://doi.org/10.3390/s24020694 - 22 Jan 2024
Cited by 1 | Viewed by 1880
Abstract
VLF magneto-electric (ME) antennas have gained attention for their compact size and high radiation efficiency in lossy conductive environments. However, the need for a large DC magnetic field bias presents challenges for miniaturization, limiting portability. This study introduces a self-biased ME antenna with [...] Read more.
VLF magneto-electric (ME) antennas have gained attention for their compact size and high radiation efficiency in lossy conductive environments. However, the need for a large DC magnetic field bias presents challenges for miniaturization, limiting portability. This study introduces a self-biased ME antenna with an asymmetric design using two magneto materials, inducing a magnetization grading effect that reduces the resonant frequency during bending. Operating principles are explored, and performance parameters, including the radiation mechanism, intensity and driving power, are experimentally assessed. Leveraging its excellent direct and converse magneto-electric effect, the antenna proves adept at serving as both a transmitter and a receiver. The results indicate that, at 2.09 mW and a frequency of 24.47 kHz, the antenna has the potential to achieve a 2.44 pT magnetic flux density at a 3 m distance. A custom modulation–demodulation circuit is employed, applying 2ASK and 2PSK to validate communication capability at baseband signals of 10 Hz and 100 Hz. This approach offers a practical strategy for the lightweight and compact design of VLF communication systems. Full article
Figures:
Figure 1: (a) Structural illustration and experimental prototype of the finished self-biased ME antenna. (b) Distribution diagram of the magnetic moment within the antenna during pre-magnetization (H_DC). (c) Distribution diagram of the magnetic moment within the antenna after debiasing (H_DC = 0).
Figure 2: Impedance (Z) and phase (θ) spectra of the proposed self-biased ME antenna.
Figure 3: (a) Spectra of the reflection coefficient (S11) for the self-biased ME antenna. (b) Forward transmission coefficient (S21) for the self-biased ME antenna.
Figure 4: Testing platform for assessing the transmission capabilities of the self-biased ME antenna.
Figure 5: Relationship between electromagnetic radiation intensity and testing distance, with experimental data fitted to a curve for prediction.
Figure 6: Relationship between electromagnetic radiation intensity and power consumption (driving voltage).
Figure 7: Test platform for a VLF communication system featuring a self-biased magneto-electric antenna pair.
Figure 8: Test platform for a VLF communication system featuring a self-biased ME antenna pair.
Figure 9: Waveforms of signals measured at each transmission stage at a frequency of 10 Hz under 2ASK and 2PSK modulation schemes.
Figure 10: Waveforms of signals measured at each transmission stage at a frequency of 100 Hz under 2ASK and 2PSK modulation schemes.
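The modulation formats used for the communication test are elementary, and reference waveforms for them can be generated in a few lines. The sketch below builds 2ASK and 2PSK signals on the 24.47 kHz resonance with a 100 Hz bit rate; the sampling rate and bit pattern are chosen arbitrarily and do not come from the paper.

```python
import numpy as np

fs = 1_000_000            # sampling rate (Hz), arbitrary but well above the carrier
fc = 24_470               # carrier at the antenna's resonant frequency (Hz)
bit_rate = 100            # baseband signal of 100 Hz
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

samples_per_bit = fs // bit_rate
baseband = np.repeat(bits, samples_per_bit)            # rectangular NRZ bit stream
t = np.arange(baseband.size) / fs
carrier = np.sin(2 * np.pi * fc * t)

ask = baseband * carrier                               # 2ASK: on-off keying of the amplitude
psk = np.sin(2 * np.pi * fc * t + np.pi * baseband)    # 2PSK: 180-degree phase shift per '1'

print(ask.shape, psk.shape)   # both (80000,) for the 8-bit example above
```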
20 pages, 25820 KiB  
Article
Enhanced Out-of-Stock Detection in Retail Shelf Images Based on Deep Learning
by Franko Šikić, Zoran Kalafatić, Marko Subašić and Sven Lončarić
Sensors 2024, 24(2), 693; https://doi.org/10.3390/s24020693 - 22 Jan 2024
Cited by 1 | Viewed by 2661
Abstract
The term out-of-stock (OOS) describes a problem that occurs when shoppers come to a store and the product they are seeking is not present on its designated shelf. Missing products generate huge sales losses and may lead to a declining reputation or the [...] Read more.
The term out-of-stock (OOS) describes a problem that occurs when shoppers come to a store and the product they are seeking is not present on its designated shelf. Missing products generate huge sales losses and may lead to a declining reputation or the loss of loyal customers. In this paper, we propose a novel deep-learning (DL)-based OOS-detection method that utilizes a two-stage training process and a post-processing technique designed for the removal of inaccurate detections. To develop the method, we utilized an OOS detection dataset that contains a commonly used fully empty OOS class and a novel class that represents the frontal OOS. We present a new image augmentation procedure in which some existing OOS instances are enlarged by duplicating and mirroring themselves over nearby products. An object-detection model is first pre-trained using only augmented shelf images and, then, fine-tuned on the original data. During the inference, the detected OOS instances are post-processed based on their aspect ratio. In particular, the detected instances are discarded if their aspect ratio is higher than the maximum or lower than the minimum instance aspect ratio found in the dataset. The experimental results showed that the proposed method outperforms the existing DL-based OOS-detection methods and detects fully empty and frontal OOS instances with 86.3% and 83.7% of the average precision, respectively. Full article
(This article belongs to the Special Issue Deep Learning-Based Neural Networks for Sensing and Imaging)
Figures:
Figure 1: Example of an image from our dataset with several out-of-stock (OOS) locations on shelves.
Figure 2: Examples of OOS instances. The red and blue rectangles mark the exact location of the normal and front class instances, respectively. The top row shows (a) normal and (b) front class instances, whereas the bottom row shows challenging normal class instances where (c) an unrecognizable background or (d) a recognizable product can be seen through the shelf.
Figure 3: Histogram of the OOS instance distribution per image. Each bar displays the cumulative count and the share of each store section.
Figure 4: Scheme of the proposed OOS-detection method. The training part and the inference part of the method are marked with orange and green dashed rounded rectangles, respectively.
Figure 5: Example of the OOS mirroring procedure. In the first iteration, the proposed OOS mirroring technique is applied to (a) the original image to produce (b) the first augmented image. In the second iteration, the augmentation technique is applied to (b) to produce (c) the second augmented image. For each iteration, the newly extended OOS instance is marked with a color-coded rectangle, where red and blue colors represent normal and front classes, respectively.
Figure 6: Ablation study of the proposed method for (a) YOLOv5, (b) YOLOv7, and (c) EfficientDet models. Each result represents the average mAP percentage of the five test folds. OP and PP represent the optimally pre-trained model and the use of post-processing, respectively.
Figure 7: Examples of OOS detection results. The top row (a,b) displays the successfully analyzed images, whereas the bottom row (c,d) shows images with partially inaccurate results. The detected OOS instances of the normal and front classes are marked with red and blue bounding boxes, respectively. In (b), the dashed bounding boxes represent the OOS instances that were detected by the model but discarded after the post-processing was applied.
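The post-processing rule described in the abstract is simply an aspect-ratio gate on the detector output. The sketch below applies it to generic detections; the bounds would come from the training annotations, and the numbers used here are placeholders, not the authors' values.

```python
def filter_by_aspect_ratio(detections, ar_min, ar_max):
    """Discard OOS detections whose width/height ratio falls outside the dataset bounds.

    detections: list of dicts with keys x1, y1, x2, y2, score, cls.
    ar_min, ar_max: minimum and maximum instance aspect ratio observed in the dataset.
    """
    kept = []
    for det in detections:
        w, h = det["x2"] - det["x1"], det["y2"] - det["y1"]
        if h > 0 and ar_min <= w / h <= ar_max:
            kept.append(det)
    return kept

# Placeholder bounds and two candidate detections; the thin sliver is rejected.
dets = [{"x1": 10, "y1": 20, "x2": 110, "y2": 140, "score": 0.9, "cls": "normal"},
        {"x1": 300, "y1": 20, "x2": 308, "y2": 400, "score": 0.6, "cls": "normal"}]
print(len(filter_by_aspect_ratio(dets, ar_min=0.1, ar_max=5.0)))   # -> 1
```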
19 pages, 7287 KiB  
Article
Preliminary Characterization of an Active CMOS Pad Detector for Tracking and Dosimetry in HDR Brachytherapy
by Thi Ngoc Hang Bui, Matthew Large, Joel Poder, Joseph Bucci, Edoardo Bianco, Raffaele Aaron Giampaolo, Angelo Rivetti, Manuel Da Rocha Rolo, Zeljko Pastuovic, Thomas Corradino, Lucio Pancheri and Marco Petasecca
Sensors 2024, 24(2), 692; https://doi.org/10.3390/s24020692 - 22 Jan 2024
Viewed by 1520
Abstract
We assessed the accuracy of a prototype radiation detector with a built-in CMOS amplifier for use in dosimetry for high dose rate brachytherapy. The detectors were fabricated on two substrates of epitaxial high resistivity silicon. The radiation detection performance of the prototypes has [...] Read more.
We assessed the accuracy of a prototype radiation detector with a built-in CMOS amplifier for use in dosimetry for high dose rate brachytherapy. The detectors were fabricated on two substrates of epitaxial high resistivity silicon. The radiation detection performance of the prototypes has been tested by ion beam induced charge (IBIC) microscopy using a 5.5 MeV alpha particle microbeam. We also carried out HDR Ir-192 radiation source tracking at different depths and angular dose dependence measurements in a water equivalent phantom. The detectors show sensitivities spanning from (5.8 ± 0.021) × 10⁻⁸ to (3.6 ± 0.14) × 10⁻⁸ nC Gy⁻¹ mCi⁻¹ mm⁻². The depth variation of the dose agrees within 5% with that calculated by TG-43. Higher discrepancies are recorded for 2 mm and 7 mm depths due to the scattering of secondary particles and the perturbation of the radiation field induced in the ceramic/golden package. Dwell positions and dwell time are reconstructed within ±1 mm and 20 ms, respectively. The prototype detectors provide an unprecedented sensitivity thanks to their monolithic amplification stage. Future investigation of this technology will include the optimisation of the packaging technique. Full article
(This article belongs to the Special Issue Integrated Circuits and CMOS Sensors)
Show Figures
Figure 1: (a) Detector top view: the detector is the light-red square in the top-left corner; the CMOS preamplifiers are the pink areas on the right-hand and bottom sides of the chip. (b) Schematic representation of the diode structure and the external connections used to measure the current–voltage characteristics.
Figure 2: TIA and probe circuit.
Figure 3: Schematic of the sensor assembled on the probe and placed in the phantom, along with the relative position of the Ir-192 catheter (diagram not to scale).
Figure 4: Probe assembled and placed above the PMMA phantom.
Figure 5: (a) View of the sensor placed inside the vacuum chamber of the accelerator at ANSTO; the arrow indicates the direction of the beam incident on the sample. (b) Beam view of the detector under the microscope.
Figure 6: Current–voltage characteristics of the samples fabricated on Wafer10 for a pixel bias (VPIX) offset of (a) 1.202 V and (b) 2.02 V. The solid lines represent the IV curves of the un-diced samples.
Figure 7: (a) Wafer10 energy spectra; (b) Wafer20 energy spectra; (c) charge collection efficiency (CCE) as a function of bias for the W10 and W20 wafers with 100 µm and 48 µm substrate thicknesses, respectively. Error bars indicate one standard deviation from the mean value of the alpha peak energy.
Figure 8: Direct comparison of (a) the charge collection map at −31 V and (b) a microphotograph of the sensor layout; the faint light-red mark in (b) shows the actual sensitive area of the detector.
Figure 9: Wafer10 median energy maps at (a) −28 V, (b) −29 V, (c) −31 V, and (d) −32 V. The x and y coordinates are obtained using a calibration factor that converts the potentials applied to the accelerator's steering magnets into a physical distance at the plane of the device under test; see [39] for details of the calibration procedure adopted at ANSTO. The coordinate frames are consistent between figures but may use different offsets, resulting in a shift of the maps relative to the axes.
Figure 10: Median energy maps obtained at a bias of −29 V with the energy window set between (a) 0–5 MeV, (b) 5–5.1 MeV, (c) 5.1–5.235 MeV, and (d) 5.235–5.80 MeV.
Figure 11: Median energy map with the sensor biased at −31 V. In the 0–5 MeV window, no events are registered in the sensor area or in the area dedicated to the electronics; (a) map of events registered in the 5–5.235 MeV window and (b) events between 5.235 and 5.60 MeV.
Figure 12: Variation of the response as a function of accumulated dose at a bias voltage of −31 V. Error bars correspond to one standard deviation; the red line is the linear fit used to calculate the calibration factor.
Figure 13: (a) Data from the source travelling at 2 mm depth with the baseline subtracted; (b) transient of the source travelling 3 mm away from the detector.
Figure 14: (a) Dose difference with respect to TG-43. Reconstructed distance of the source travelling along the catheter placed at (b) 2 mm and (c) 7 mm, compared with the nominal TPS plan.
Figure 15: (a) Dwell time and (b) transient time at different dwell positions at 2 mm depth.
15 pages, 559 KiB  
Article
A Dynamic Framework for Internet-Based Network Time Protocol
by Kelum A. A. Gamage, Asher Sajid, Omar S. Sonbul, Muhammad Rashid and Amar Y. Jaffar
Sensors 2024, 24(2), 691; https://doi.org/10.3390/s24020691 - 22 Jan 2024
Viewed by 1485
Abstract
Time synchronization is vital for accurate data collection and processing in sensor networks. Sensors in these networks often operate under fluctuating conditions. However, an accurate timekeeping mechanism is critical even in varying network conditions. Consequently, a synchronization method is required in sensor networks [...] Read more.
Time synchronization is vital for accurate data collection and processing in sensor networks. Sensors in these networks often operate under fluctuating conditions, yet an accurate timekeeping mechanism remains critical; a synchronization method is therefore required to ensure reliable timekeeping and to correlate data accurately across the network. In this research, we present a novel dynamic NTP (Network Time Protocol) algorithm that significantly enhances the precision and reliability of the generalized NTP protocol. It incorporates a dynamic mechanism to determine the Round-Trip Time (RTT), which allows accurate timekeeping even under varying network conditions. The proposed approach has been implemented on an FPGA, and a comprehensive performance analysis has been carried out comparing three distinct NTP methods: dynamic NTP (DNTP), static NTP (SNTP), and GPS-based NTP (GNTP). Key performance metrics such as variance, standard deviation, mean, and median accuracy have been evaluated. Our findings demonstrate that DNTP is markedly superior in dynamic network scenarios, which are common in sensor networks. This adaptability is important for sensors deployed in time-critical networks, such as real-time industrial IoT, where precise and reliable time synchronization is necessary. Full article
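The RTT mechanism described above builds on the standard NTP four-timestamp exchange (Figure 1). As a minimal sketch, the code below computes the usual NTP clock offset and round-trip delay from the four timestamps and adds a hypothetical moving-average RTT tracker to suggest how a varying delay could be smoothed; the paper's DNTP algorithm and its FPGA implementation are not reproduced here.

```python
# Minimal sketch of the standard NTP four-timestamp exchange (t1..t4) plus an
# illustrative RTT smoothing helper (an assumption, not the paper's method).
from collections import deque

def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: client send, t2: server receive, t3: server send, t4: client receive."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay (RTT)
    return offset, delay

class RttTracker:
    """Hypothetical helper: keeps a short RTT history and returns a smoothed estimate."""
    def __init__(self, window: int = 8):
        self.history = deque(maxlen=window)

    def update(self, rtt: float) -> float:
        self.history.append(rtt)
        return sum(self.history) / len(self.history)

if __name__ == "__main__":
    off, rtt = ntp_offset_and_delay(t1=10.000, t2=10.012, t3=10.013, t4=10.030)
    print(f"offset = {off * 1e3:.1f} ms, delay = {rtt * 1e3:.1f} ms")
```

The same two formulas underlie static, GPS-disciplined, and dynamic variants alike; what differs is how the RTT estimate is obtained and updated under changing network conditions.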
Show Figures
Figure 1: NTP packet exchange between client and server.
Figure 2: Working mechanism of a typical INTP architecture.
Figure 3: Working mechanism of a typical GNTP architecture.
Figure 4: DNTP framework using RTT-based time synchronization.
Figure 5: Experimental setup.
Figure 6: Comparison of the GPS-based, static, and dynamic NTP algorithms.
1 pages, 132 KiB  
Correction
Correction: Masi et al. Stress and Workload Assessment in Aviation—A Narrative Review. Sensors 2023, 23, 3556
by Giulia Masi, Gianluca Amprimo, Claudia Ferraris and Lorenzo Priano
Sensors 2024, 24(2), 690; https://doi.org/10.3390/s24020690 - 22 Jan 2024
Viewed by 917
Abstract
The published publication [...] Full article
19 pages, 13040 KiB  
Article
A Framework for Determining the Optimal Vibratory Frequency of Graded Gravel Fillers Using Hammering Modal Approach and ANN
by Xianpu Xiao, Taifeng Li, Feng Lin, Xinzhi Li, Zherui Hao and Jiashen Li
Sensors 2024, 24(2), 689; https://doi.org/10.3390/s24020689 - 22 Jan 2024
Cited by 2 | Viewed by 1188
Abstract
To address the uncertainty of optimal vibratory frequency fov of high-speed railway graded gravel (HRGG) and achieve high-precision prediction of the fov, the following research was conducted. Firstly, commencing with vibratory compaction experiments and the hammering modal analysis [...] Read more.
To address the uncertainty in the optimal vibratory frequency f_ov of high-speed railway graded gravel (HRGG) and to achieve high-precision prediction of f_ov, the following research was conducted. Firstly, the resonance frequency f_0 of HRGG fillers of varying compactness K was determined through vibratory compaction experiments and the hammering modal analysis method. The correlation between f_0 and f_ov was then revealed through vibratory compaction experiments conducted at different vibratory frequencies, based on the compaction physical–mechanical properties of HRGG fillers, namely the maximum dry density ρ_dmax, stiffness K_rd, and bearing capacity coefficient K_20. Secondly, the gray relational analysis (GRA) algorithm was used to determine the key features influencing f_ov based on the quantified relationship between the filler features and f_ov. Finally, the key features influencing f_ov were used as input parameters to establish an artificial neural network prediction model (ANN-PM) for f_ov. The predictive performance of ANN-PM was evaluated in terms of an ablation study, prediction accuracy, and prediction error. The results showed that ρ_dmax, K_rd, and K_20 all reached their optimal states when f_ov was set to f_0 for HRGG fillers of different gradations. Furthermore, the key features influencing f_ov were found to be the maximum particle diameter d_max, the gradation parameters b and m, the content of flat and elongated particles in the coarse aggregate Q_e, and the Los Angeles abrasion of the coarse aggregate LAA. Among them, d_max had the most significant influence on the ANN-PM predictive performance. On both the training and testing sets, the goodness-of-fit R² of ANN-PM exceeded 0.95 and the prediction errors were small, indicating that the prediction accuracy was high; in addition, ANN-PM exhibited excellent robustness. The research results provide a novel method for determining the f_ov of subgrade fillers and offer theoretical guidance for the intelligent construction of high-speed railway subgrades. Full article
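The GRA step ranks candidate filler features by their gray relational grade with respect to the target sequence (here f_ov). The sketch below illustrates the standard GRA computation with made-up data; it does not use the authors' dataset, feature set, or exact settings.

```python
# Illustrative gray relational analysis (GRA): rank candidate features by their
# gray relational grade against a reference sequence. Data are placeholders.
import numpy as np

def gray_relational_grades(reference: np.ndarray, features: np.ndarray, rho: float = 0.5):
    """reference: (n,) target sequence; features: (m, n) candidate sequences."""
    def norm(x):
        # Normalise each sequence to [0, 1] so scales are comparable.
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    ref = norm(reference)
    feats = np.array([norm(f) for f in features])
    delta = np.abs(feats - ref)                          # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)   # gray relational coefficients
    return xi.mean(axis=1)                               # relational grade per feature

if __name__ == "__main__":
    f_ov = np.array([28.0, 30.0, 32.0, 31.0, 29.0])          # assumed target values
    candidates = np.array([
        [40.0, 45.0, 53.0, 50.0, 43.0],   # e.g. a d_max-like feature
        [0.30, 0.40, 0.90, 0.70, 0.35],   # e.g. a gradation-parameter-like feature
    ])
    print(gray_relational_grades(f_ov, candidates))
```

Features with higher grades are the ones retained as inputs to a downstream predictor such as the ANN-PM described in the abstract.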
Show Figures
Figure 1: Experimental material: (a) crushed limestone aggregate, (b) three typical gradation curves.
Figure 2: Experimental equipment: (a) intelligent compaction equipment, (b) experimental data collection, and (c) indoor flatbed loading equipment.
Figure 3: Diagram of the vibratory compaction experiments.
Figure 4: Diagram of the hammer impact experiments.
Figure 5: Hammer impact modal analysis: (a) acceleration time-domain amplitude, (b) acceleration amplitude–frequency spectrum.
Figure 6: Relationship between K, gradation, and f_0: (a) relationship between K and f_0, (b) relationship between gradation and f_0 when K = 0.96.
Figure 7: Evolution of K_rd, K_20, and ρ_d of graded gravel (G1) under different vibratory frequencies: (a) K_rd time-history curve, (b) K_20 time-history curve, (c) ρ_d time-history curve.
Figure 8: Relationship between vibratory frequency and (a) the maximum K_rd, (b) K_20, and (c) ρ_dmax.
Figure 9: Performance feature experiments of the fillers.
Figure 10: Analysis of the key characteristics of f_ov based on the GRA algorithm: (a) flowchart of the GRA algorithm, (b) characterization analysis results.
Figure 11: Relationship between the gradation characteristic parameters and curve shape: (a) b = −0.28, (b) b = 0.36, (c) b = 1.0, (d) m = 0.45, (e) m = 0.725, and (f) m = 1.0.
Figure 12: Relationship between the key features and f_ov: (a) d_max, (b) b, (c) m, (d) Q_e, and (e) LAA.
Figure 13: Architecture of the artificial neural network.
Figure 14: Schematic of the ANN-based f_ov prediction model.
Figure 15: Schematic of the Monte Carlo method.
Figure 16: MAE values versus the number of iterations for the hybrid models.
Figure 17: Predictive performance of ANN-PM on the training dataset: (a) R², (b) R², MSE, and MAE.
Figure 18: Results of the ablation study: (a) R², (b) MSE and MAE.
Figure 19: Predictive performance of ANN-PM on the test dataset: (a) R², (b) R², MSE, and MAE.
Figure 20: Results of the Monte Carlo method: (a) R², (b) MSE.
14 pages, 1962 KiB  
Article
Distributed Sequential Detection for Cooperative Spectrum Sensing in Cognitive Internet of Things
by Jun Wu, Zhaoyang Qiu, Mingyuan Dai, Jianrong Bao, Xiaorong Xu and Weiwei Cao
Sensors 2024, 24(2), 688; https://doi.org/10.3390/s24020688 - 22 Jan 2024
Cited by 1 | Viewed by 1134
Abstract
The rapid development of wireless communication technology has led to an increasing number of internet of thing (IoT) devices, and the demand for spectrum for these devices and their related applications is also increasing. However, spectrum scarcity has become an increasingly serious problem. [...] Read more.
The rapid development of wireless communication technology has led to an increasing number of Internet of Things (IoT) devices, and the spectrum demand of these devices and their related applications is also increasing. However, spectrum scarcity has become an increasingly serious problem. Therefore, we introduce a cooperative spectrum sensing (CSS) framework in this paper to identify available spectrum resources so that IoT devices can access them while avoiding harmful interference to the normal communication of the primary user (PU). When IoT devices sense the PU's signal, however, the issues of sensing time and decision cost (the cost associated with a correct or incorrect decision on the PU's signal state) arise. To this end, we propose a distributed cognitive IoT model in which two IoT devices independently use sequential decision rules to detect the PU. On this basis, we define the sensing time and cost functions for the IoT devices and formulate an average-cost optimization problem in CSS. To solve this problem, we treat the optimal sensing time problem as a finite-horizon problem and solve for the thresholds of the optimal decision rule by person-by-person optimization (PBPO) and dynamic programming. Finally, numerical simulation results demonstrate the correctness of our proposal in terms of the global false alarm and miss detection probabilities, and the proposed rule consistently achieves the minimum average cost under various per-observation costs and thresholds. Full article
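Sequential decision rules of the kind referred to above accumulate evidence sample by sample and stop as soon as a threshold is crossed. As a generic sketch only, the code below shows a Wald-style sequential test on Gaussian energy samples with fixed thresholds; the paper instead derives its optimal thresholds via PBPO and dynamic programming, which is not reproduced here, and the noise/signal variances below are assumptions.

```python
# Illustrative Wald-style sequential test for PU presence: accumulate the
# log-likelihood ratio of zero-mean Gaussian samples and stop at fixed thresholds.
import math
import random

def sequential_detect(samples, var0=1.0, var1=2.0, alpha=0.05, beta=0.05):
    """Return ('H1' | 'H0' | 'undecided', number of samples consumed)."""
    upper = math.log((1 - beta) / alpha)      # accept H1 (PU present)
    lower = math.log(beta / (1 - alpha))      # accept H0 (spectrum free)
    llr, n = 0.0, 0
    for y in samples:
        n += 1
        # log-likelihood ratio of N(0, var1) vs N(0, var0) evaluated at y
        llr += 0.5 * math.log(var0 / var1) + 0.5 * y * y * (1.0 / var0 - 1.0 / var1)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

if __name__ == "__main__":
    random.seed(0)
    active_pu = (random.gauss(0.0, math.sqrt(2.0)) for _ in range(500))  # PU transmitting
    print(sequential_detect(active_pu))
```

The number of samples consumed before stopping plays the role of the sensing time, which is why the choice of thresholds directly trades detection reliability against sensing delay and cost.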
(This article belongs to the Special Issue Cognitive Radio for Wireless Sensor Networks)
Show Figures
Figure 1: The periodic spectrum sensing frame structure of a cognitive IoT.
Figure 2: The global false alarm probability vs. the tolerable false alarm probability.
Figure 3: The global miss detection probability vs. the tolerable false alarm probability.
Figure 4: The global false alarm probability vs. the tolerable miss detection probability.
Figure 5: The global miss detection probability vs. the tolerable miss detection probability.
Figure 6: The average cost vs. the tolerable false alarm probability under various costs of each observation taken.
Figure 7: The average cost vs. the tolerable miss detection probability under various costs of each observation taken.
Figure 8: The global false alarm and miss detection probabilities of the three rules vs. the tolerable false alarm probability.
17 pages, 8343 KiB  
Article
An Efficient Attentional Image Dehazing Deep Network Using Two Color Space (ADMC2-net)
by Samia Haouassi and Di Wu
Sensors 2024, 24(2), 687; https://doi.org/10.3390/s24020687 - 22 Jan 2024
Cited by 2 | Viewed by 1724
Abstract
Image dehazing has become a crucial prerequisite for most outdoor computer applications. The majority of existing dehazing models can achieve the haze removal problem. However, they fail to preserve colors and fine details. Addressing this problem, we introduce a novel high-performing attention-based dehazing [...] Read more.
Image dehazing has become a crucial prerequisite for most outdoor computer vision applications. The majority of existing dehazing models can remove haze, but they fail to preserve colors and fine details. To address this problem, we introduce a novel high-performing attention-based dehazing model (ADMC2-net) that successfully incorporates both the RGB and HSV color spaces to maintain color properties. The model consists of two parallel densely connected sub-models (RGB and HSV) followed by a new efficient attention module. This attention module combines pixel-attention and channel-attention mechanisms to extract more haze-relevant features. Experimental analyses validate that the proposed model (ADMC2-net) achieves superior results on synthetic and real-world datasets and outperforms most state-of-the-art methods. Full article
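For readers unfamiliar with the attention mechanisms named in the abstract, the sketch below shows a generic channel-attention block followed by a pixel-attention block of the kind commonly used in dehazing networks; the channel counts, reduction ratio, and layer layout are assumptions and do not reproduce the ADMC2-net architecture.

```python
# Minimal sketch of channel attention + pixel attention for a feature map;
# sizes are illustrative assumptions, not the ADMC2-net design.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))             # per-channel re-weighting

class PixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                        # per-pixel attention map

if __name__ == "__main__":
    block = nn.Sequential(ChannelAttention(64), PixelAttention(64))
    features = torch.randn(1, 64, 32, 32)            # dummy feature map
    print(block(features).shape)                     # torch.Size([1, 64, 32, 32])
```

Channel attention re-weights whole feature maps, while pixel attention emphasises spatial locations where haze density varies, which is why the two are typically combined.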
(This article belongs to the Section Sensing and Imaging)
Show Figures
Figure 1: Hazy image degradation model.
Figure 2: Overall design of the proposed end-to-end model.
Figure 3: Architecture of the proposed dehazing model (ADMC2-net).
Figure 4: Structure of the D-unit.
Figure 5: Scheme of the proposed attention module.
Figure 6: Visual results of the proposed dehazing model: the hazy image, our result, and the ground truth, respectively.
Figure 7: Subjective comparison on the synthetic RESIDE dataset. (a) Hazy image, (b) TA-3DP [36], (c) MB-TF [37], (d) GRIDdehaze-Net [26], (e) CMTnet [15], (f) GEN-ADV [38], (g) DP-IPN [25], (h) ADE-CGAN [39], (i) ours, (j) ground truth.
Figure 8: Subjective comparison on the real-world datasets Dense-Haze, NH-HAZE, and O-HAZE. (a) Hazy image, (b) TA-3DP [36], (c) MB-TF [37], (d) GRIDdehaze-Net [26], (e) CMTnet [15], (f) GEN-ADV [38], (g) DP-IPN [25], (h) ADE-CGAN [39], (i) ours, (j) ground truth.
Figure 9: Visual comparisons of the compared methods on real-world images (without ground truth). (a) Hazy image, (b) TA-3DP [36], (c) MB-TF [37], (d) GRIDdehaze-Net [26], (e) CMTnet [15], (f) GEN-ADV [38], (g) DP-IPN [25], (h) ADE-CGAN [39], (i) ours.
15 pages, 2566 KiB  
Article
A Low-Cost Inertial Measurement Unit Motion Capture System for Operation Posture Collection and Recognition
by Mingyue Yin, Jianguang Li and Tiancong Wang
Sensors 2024, 24(2), 686; https://doi.org/10.3390/s24020686 - 21 Jan 2024
Cited by 4 | Viewed by 2229
Abstract
In factories, human posture recognition facilitates human–machine collaboration, human risk management, and workflow improvement. Compared to optical sensors, inertial sensors have the advantages of portability and resistance to obstruction, making them suitable for factories. However, existing product-level inertial sensing solutions are generally expensive. [...] Read more.
In factories, human posture recognition facilitates human–machine collaboration, human risk management, and workflow improvement. Compared to optical sensors, inertial sensors have the advantages of portability and resistance to occlusion, making them suitable for factories. However, existing product-level inertial sensing solutions are generally expensive. This paper proposes a low-cost human motion capture system based on the BMI160, a six-axis inertial measurement unit (IMU). Using Wi-Fi communication, the collected data are processed to obtain each joint's rotation angles about the X, Y, and Z axes and its displacement along those directions; these are then combined with the hierarchical relationship of the human skeleton to calculate the human posture in real time. Furthermore, a digital human model was established in Unity3D to visualize and present human movements synchronously. We simulated assembly operations in a virtual reality environment for human posture data collection and posture recognition experiments. Six inertial sensors were placed on the chest, the waist, and the knee and ankle joints of both legs. A total of 16,067 labeled samples were obtained for training the posture recognition model, and the accumulated displacement and rotation angle of the six joints in the three directions were used as input features. A bi-directional long short-term memory (BiLSTM) model was used to identify seven common operation postures: standing, slightly bending, deep bending, half-squatting, squatting, sitting, and supine, with an average accuracy of 98.24%. According to the experimental results, the proposed method can be used to develop a low-cost and effective solution for human posture recognition in factory operations. Full article
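As a minimal sketch of the classifier described above (not the authors' implementation), the code below defines a BiLSTM that maps windows of 36 features per time step (six joints × three directions × accumulated displacement and rotation angle, an interpretation of the abstract) to the seven posture classes; the hidden size, layer count, and window length are assumptions.

```python
# Illustrative BiLSTM posture classifier: 36 features per time step, 7 classes.
# Architecture hyperparameters are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class PostureBiLSTM(nn.Module):
    def __init__(self, n_features: int = 36, hidden: int = 64, n_classes: int = 7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # 2x hidden for both directions

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the final time step

if __name__ == "__main__":
    model = PostureBiLSTM()
    window = torch.randn(8, 50, 36)        # batch of 8 sequences, 50 time steps each
    logits = model(window)
    print(logits.shape)                    # torch.Size([8, 7])
```

The bidirectional recurrence lets the classifier use both the lead-up to and the follow-through of a movement within each window, which helps distinguish transitional postures such as slight versus deep bending.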
(This article belongs to the Special Issue Advanced Sensors for Real-Time Monitoring Applications ‖)
Show Figures
Figure 1: The structure of the low-cost motion capture system based on IMUs.
Figure 2: The circuit diagram and physical diagram of the tracker: (a) the completed welding, (b) the tracker with casing and straps attached, (c) the tracker placement.
Figure 3: The digital human model: (a) skeletal model, (b) digital human model with skinning.
Figure 4: Real-time human body and digital human model.
Figure 5: Operation posture collection experiment scene.
Figure 6: Chest joint angle over time; each color represents a category of working posture, and the gray areas are the excluded periods.
Figure 7: The operation posture recognition network structure.
Figure 8: Training and validation loss.
Figure 9: The confusion matrix from the five tests.
Figure 10: The sudden posture distortion when the network signal is unstable: (a) the normal posture, (b) the abnormal posture.