Next Issue: Volume 23, January-1
Previous Issue: Volume 22, December-1
Sensors, Volume 22, Issue 24 (December-2 2022) – 454 articles

Cover Story: In this paper, we propose a technique that detects whether there is a diversion on a pipe. The proposed model transmits ultrasound signals through a pipe using a custom-designed array of piezoelectric transmitters and receivers. We propose to use the Zadoff–Chu sequence to modulate the input signals, then utilize its correlation properties to estimate the pipe channel response. The processed signal is then fed to a DNN that extracts the features and decides whether there is a diversion. The proposed technique demonstrates an average classification accuracy of 90.3% when one sensor is used and 99.6% when two sensors are used on 3/4 inch pipes, and can be readily generalized to pipes of different diameters and materials.
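As a rough illustration of the correlation step described above, the following minimal Python sketch generates a Zadoff–Chu sequence and recovers a toy channel response by circular cross-correlation; the sequence length, root, channel taps, and noise level are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def zadoff_chu(root: int, length: int) -> np.ndarray:
    """Zadoff-Chu sequence of odd length with gcd(root, length) == 1."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

N, root = 353, 7                        # odd prime length -> flat spectrum
zc = zadoff_chu(root, N)

# Toy pipe channel (taps are assumptions) applied as a circular convolution.
h = np.array([1.0, 0.0, 0.45, 0.0, -0.2 + 0.1j])
rx = np.fft.ifft(np.fft.fft(zc) * np.fft.fft(h, N))
rng = np.random.default_rng(0)
rx += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Circular cross-correlation with the known sequence: since |FFT(zc)|^2 = N,
# this collapses to an estimate of the channel taps themselves.
h_est = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(zc))) / N
print(np.round(h_est[:5], 2))           # approximately the taps of h
```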
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 1052 KiB  
Article
Flexible IoT Agriculture Systems for Irrigation Control Based on Software Services
by Eva Palomar-Cosín and Marisol García-Valls
Sensors 2022, 22(24), 9999; https://doi.org/10.3390/s22249999 - 19 Dec 2022
Cited by 3 | Viewed by 3127
Abstract
IoT technology applied to agriculture has produced a number of contributions in recent years. Such solutions are, most of the time, fully tailored to a particular functional target and focus extensively on sensor-hardware development and customization. As a result, software-centered solutions for IoT system development are infrequent. This is unfortunate, as software is the bottleneck in modern computer systems, being the main source of performance loss, errors, and even cyber attacks. This paper takes a software-centric perspective to model and design IoT systems in a flexible manner. We contribute a software framework that supports the design of IoT systems' software based on software services in a client–server model with REST interactions, exemplified in the domain of efficient irrigation in agriculture. We decompose the services' design into the set of constituent functions and operations on both the client and server sides. As a result, we provide a simple and novel view on the design of IoT systems in agriculture from a software perspective: we contribute a simple design structure based on the identification of the front-end software services, their internal software functions and operations, and their interconnections as software services. We have implemented the software framework on an IoT irrigation use case that monitors the conditions of the field and processes the sampled data, detecting alarms when needed. We demonstrate that the temporal overhead of our solution is bounded and suitable for the target domain, reaching a response time of roughly 11 s for bursts of 3000 requests.
(This article belongs to the Topic Advanced Systems Engineering: Theory and Applications)
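To make the service decomposition above concrete, here is a minimal sketch of a client–server REST pair for the irrigation use case, assuming Flask 2.x; the endpoint names, payload fields, and moisture threshold are hypothetical, not the authors' framework.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
readings, alarms = [], []
MOISTURE_ALARM_THRESHOLD = 15.0        # assumed % volumetric water content

@app.post("/readings")
def post_reading():
    """Client-facing service: a field node POSTs one sampled reading."""
    sample = request.get_json()        # e.g. {"node": "n1", "moisture": 12.3}
    readings.append(sample)
    if sample["moisture"] < MOISTURE_ALARM_THRESHOLD:
        alarms.append({"node": sample["node"], "type": "dry-soil"})
    return jsonify(status="stored", alarms=len(alarms)), 201

@app.get("/alarms")
def get_alarms():
    """Server-side service: the irrigation controller polls pending alarms."""
    return jsonify(alarms)

if __name__ == "__main__":
    app.run(port=8080)
```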
Figures:
Figure 1. System overview.
Figure 2. The structure of IoT platforms.
Figure 3. IoT system federations.
Figure 4. Service-based communication scheme.
Figure 5. Service scheme at server and IoT devices.
Figure 6. Expanded services scheme.
Figure 7. Proposed solution and enhancement.
Figure 8. Anomaly/alarm detection and associated activities.
Figure 9. Summary of temporal behavior of test 1.
Figure 10. Summary of response times.
23 pages, 20762 KiB  
Article
Damage Assessment in Rural Environments Following Natural Disasters Using Multi-Sensor Remote Sensing Data
by Shiran Havivi, Stanley R. Rotman, Dan G. Blumberg and Shimrit Maman
Sensors 2022, 22(24), 9998; https://doi.org/10.3390/s22249998 - 19 Dec 2022
Cited by 2 | Viewed by 3248
Abstract
The damage caused by natural disasters in rural areas differs in nature, extent, landscape, and structure from the damage caused in urban environments. Previous and current studies have focused mainly on mapping damaged structures in urban areas after catastrophic events such as earthquakes or tsunamis. However, research focusing on the level of damage or its distribution in rural areas is lacking. This study presents a methodology for mapping, characterizing, and assessing the damage in rural environments following natural disasters, both in built-up and vegetation areas, by combining synthetic-aperture radar (SAR) and optical remote sensing data. As a case study, we applied the methodology to characterize the rural areas affected by the Sulawesi earthquake and the subsequent tsunami event in Indonesia that occurred on 28 September 2018. High-resolution COSMO-SkyMed images obtained pre- and post-event, alongside Sentinel-2 images, were used as inputs. This study's results emphasize that remote sensing data from rural areas must be treated differently from that of urban areas following a disaster. Additionally, the analysis must include the surrounding features, not only the damaged structures. Furthermore, the results highlight the applicability of the methodology to a variety of disaster events, as well as multiple hazards, and it can be adapted using a combination of different optical and SAR sensors.
(This article belongs to the Special Issue Remote Sensing, Sensor Networks and GIS for Hazards and Disasters)
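For readers unfamiliar with the coherence measure this pipeline relies on (see the Figure 1 caption below), a small NumPy sketch of the windowed InSAR coherence estimate follows; the window size and test data are illustrative, and this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(f: np.ndarray, g: np.ndarray, n: int = 5) -> np.ndarray:
    """Windowed coherence of two co-registered complex SAR images f and g."""
    def boxmean(x):  # N x N window mean; complex handled via real/imag parts
        return uniform_filter(x.real, n) + 1j * uniform_filter(x.imag, n)
    num = np.abs(boxmean(f * np.conj(g)))
    den = np.sqrt(boxmean(np.abs(f) ** 2).real * boxmean(np.abs(g) ** 2).real)
    return num / np.maximum(den, 1e-12)  # ~1 = stable, ~0 = changed/damaged

# Toy demo: an image paired with itself gives coherence close to 1.
rng = np.random.default_rng(1)
f = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print(coherence(f, f).mean())
```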
Figures:
Figure 1. Algorithm outline. Grey blocks represent the original algorithm, proposed by ref. [1]. Green blocks represent the steps inserted into the modified process. The coherence (γ) is calculated from the complex SAR images f_k and g_k, where ⟨f_k g*_k⟩ is the expectation value operator. N represents the estimation window size in the azimuth and range directions. Light shades in the maps represent high values, and dark shades represent low values. Light shades in NDVI and MNDWI represent vegetation areas and water bodies, respectively. Light shades in the coherence map represent stable areas, and dark shades represent changed areas.
Figure 2. Vegetation damage assessment process outline. Light shades in the difference map represent stable vegetation areas with no or slight changes, and dark shades represent vegetation areas with changes.
Figure 3. Research area: Palu, Central Sulawesi Province, Indonesia. Pre-event Sentinel-2 image (visible bands 2—blue, 3—green, and 4—red) from 27 September 2018. The black line represents the Palu-Koro Fault (PKF). The black arrows show the dominant left-lateral movement of the PKF, trending NNW–SSE and N–S.
Figure 4. InSAR coherence map. Light tones (high coherence values) represent no change. Dark tones (low coherence values) represent changes. The red line represents the Palu-Koro Fault. The red arrows show the dominant left-lateral movement of the PKF, trending NNW–SSE and N–S.
Figure 5. Damage density map per unit area of 50 m. Light colors represent a lower probability of damage; dark colors represent a higher probability of damage. Green represents vegetation. Blue represents water bodies. The black line represents the Palu-Koro Fault. Flow slide areas are outlined in yellow: (A) Balaroa, (B) Petobo, and (C) Jono Oge.
Figure 6. Illustration of the surface rupture caused by the 2018 Sulawesi, Indonesia earthquake in (A) natural vegetation, (B) agricultural fields, and (C) a built-up area.
Figure 7. Insets showing selected urban (coastline and inland) and rural areas from the entire scene (Figure 5). Pre-event optical image (17 August 2018), post-event optical image (2 October 2018), and the damage assessment map (see legend in Figure 5).
Figure 8. Insets showing the three villages (left to right): Balaroa, Petobo and Jono Oge. Pre-event optical image (17 August 2018), post-event optical image (2 October 2018), and the damage assessment map (see legend in Figure 5).
Figure 9. Zoomed-in damaged built-up areas in Jono Oge village (subfigures A–E) resulting from the secondary hazards, liquefaction and flow slide. Pre-event optical image (17 August 2018), post-event optical image (2 October 2018), and the damage assessment map (see legend in Figure 5).
Figure 10. Damage assessment map vs. UNITAR database.
Figure 11. Damage assessment map vs. UNITAR database. Insets of urban and rural areas.
Figure 12. Damage level value distribution of the rural (Balaroa, Petobo, Jono Oge) and the urban (Palu) areas.
Figure 13. Final damage assessment map (50 m per pixel) generated using both damage assessment methods for the built-up and vegetation areas. Light colors represent a lower probability of damage; dark colors represent a higher probability of damage. Blue represents water bodies. The black line represents the Palu-Koro Fault. Flow slide areas are outlined in yellow. Dark green represents undamaged vegetation. Light green represents a slight change in vegetation. Red represents severe damage to vegetation.
Figure 14. Insets showing the three villages (left to right): Balaroa, Petobo and Jono Oge. Pre-event optical image (17 August 2018), post-event optical image (2 October 2018), and the overall damage assessment map: structures and vegetation (see legend in Figure 13).
Figure A1. Damage assessment map generated from medium spatial resolution (MSR) images using the Sentinel-1 and Landsat-8 satellites.
15 pages, 5236 KiB  
Article
Methodology for Designing an Optimal Test Stand for Camera Thermal Drift Measurements and Its Stability Verification
by Kohhei Nimura and Marcin Adamczyk
Sensors 2022, 22(24), 9997; https://doi.org/10.3390/s22249997 - 19 Dec 2022
Cited by 1 | Viewed by 2245
Abstract
The effects of temperature changes on cameras are observed as drifts of characteristic points in the image plane. Compensation for these effects is crucial to maintain the precision of cameras applied in machine vision systems and those expected to work in environments with varying factors, including temperature changes. Generally, mathematical compensation models are built by measuring the changes in the intrinsic and extrinsic parameters under the temperature effect; however, due to the assumptions of certain factors based on the conditions of the test stand used for the measurements, errors can become apparent. In this paper, test stands for thermal image drift measurements used in other works are assessed, and a methodology is proposed to design a test stand that can measure thermal image drifts while eliminating other external influences on the camera. A test stand was built accordingly, and thermal image drift measurements were performed along with a measurement to verify that the test stand did eliminate external influences on the camera. The experiment was performed for various temperatures from 5 °C to 45 °C, and as a result, the thermal image drift measured with the designed test stand showed a maximum error of 16% during its most rapid temperature change, from 25 °C to 5 °C.
(This article belongs to the Special Issue Sensing Technologies and Applications in Infrared and Visible Imaging)
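The drift metric the abstract refers to reduces to tracking marker centres across frames; a minimal sketch follows, assuming a (frames, markers, 2) array of detected marker positions rather than the authors' data format.

```python
import numpy as np

def marker_drifts(centers: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """centers: (frames, markers, 2) array of (i, j) marker positions.
    Returns per-marker mean and maximum drift distance in pixels."""
    disp = centers - centers[0]            # displacement w.r.t. first frame
    dist = np.linalg.norm(disp, axis=-1)   # (frames, markers)
    return dist.mean(axis=0), dist.max(axis=0)

# Toy example: 100 frames, 56 markers, slow temperature-driven random walk.
rng = np.random.default_rng(2)
c = 500.0 + np.cumsum(0.01 * rng.standard_normal((100, 56, 2)), axis=0)
mean_d, max_d = marker_drifts(c)
print(f"mean drift {mean_d.mean():.3f} px, max drift {max_d.max():.3f} px")
```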
Figures:
Figure 1. (a) Example of a registered thermal image drift with respect to the initial positions of 56 markers forming arrays. All marker trajectories are scaled ×100 for better visualization. The color of each marker represents the temperature of the camera during its frame capture. (b) Plot representing the temperature values of the camera during each frame capture. The color scale is consistent with the drifts shown in (a).
Figure 2. Designed test stand. (a) Schematic view of the test stand with the thermal chamber and the invar frame. The invar frame is clamped onto an optical bench, which is stationed on a granite block. A linear–rotary table is mounted on the invar frame, which can control the position of the calibration artifact along the optical axis. The thermal chamber has an opening for the invar frame to pass through and another opening for inspection purposes, which has a controllable automatic flap mounted. (b) Photo of the actual test stand stored in the laboratory. Here, the LED panels and the temperature sensor to record the room temperature are visible.
Figure 3. (a) The geometry of the test stand used for simulation. The blue line represents the optical axis of the tested camera. The red camera ray is refracted while leaving the thermal chamber due to the changes in the refractive index of air caused by the temperature difference. (b) The simulated deformations in the calibration artifact plane caused by a temperature increase of +25 °C.
Figure 4. Photo of the test stand with the additional camera attached for the stability measurement. The light-emitting diode (LED) panels were temporarily removed for a better view of the camera.
Figure 5. Scene of the tested camera and the markers on the surface of the invar camera stage, observed from the observing camera positioned outside the thermal chamber.
Figure 6. Temperature data recorded during the stability measurement. The blue line refers to the temperature of the observed camera inside the thermal chamber, recorded by a sensor inside the camera; the orange line refers to the room temperature of the lab, recorded by the sensor positioned near the image artifact; the yellow line refers to the temperature inside the thermal chamber, recorded by the sensor positioned near the observed camera; the purple line refers to the temperature inside the thermal chamber, recorded by the sensor positioned near the inspection flap.
Figure 7. (a) The registered center drift of marker 6 (the rightmost of the four markers on the camera stage). The presented maximum and average drift distances were obtained after excluding the outliers and are in units of pixels. (b) Graphical representation of the drifts in the horizontal (I coordinate) and vertical (J coordinate) directions of the image plane, measured in pixels. The color of the points represents the ambient temperature inside the thermal chamber during the measurement.
Figure 8. The registered thermal image drift recorded by the tested camera inside the thermal chamber during the measurement parallel to the stability observation. The mean and maximum drift values for each marker are shown in pixels. The drifts are scaled ×100 for better visualization. The color of the markers represents the temperature of the camera during the frame capture and is consistent with the temperature scale shown in Figure 7a.
13 pages, 2276 KiB  
Article
Performance of the SABAT Neutron-Based Explosives Detector Integrated with an Unmanned Ground Vehicle: A Simulation Study
by Michał Silarski and Marek Nowakowski
Sensors 2022, 22(24), 9996; https://doi.org/10.3390/s22249996 - 19 Dec 2022
Cited by 7 | Viewed by 2827
Abstract
The effective and safe detection of illicit materials, explosives in particular, is of growing importance given the current geopolitical situation and the increasing risk of terrorist attacks. The commonly used detection methods are based predominantly on metal detectors and georadars, which show only the shapes of possible dangerous objects and do not allow for exact identification and risk assessment. A supplementary or even alternative method may be based on neutron activation analysis, which provides the possibility of a stoichiometric analysis of the suspected object and its non-invasive identification. One such sensor is being developed by the SABAT collaboration, with its primary application being underwater threat detection. In this article, we present performance studies of this sensor, integrated with a mobile robot, in terms of the minimal detectable quantity of commonly used explosives in different environmental conditions. The paper describes the functionality of the platform, considering its electronics, sensors, onboard computing power, and communication system for manual operation and remote control. Robotics solutions based on modularized structures allow the extension of sensors and effectors, which can significantly improve the safety of personnel as well as work efficiency, productivity, and flexibility.
(This article belongs to the Special Issue Monitoring System for Aircraft, Vehicle and Transport Systems)
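The stoichiometric identification mentioned above rests on integrating characteristic prompt-gamma lines and forming elemental ratios such as C/O (compare Figures 6–8 below); the sketch that follows illustrates the idea with standard line energies for C, O, N, and H, while the window width, escape-peak handling details, and spectrum itself are illustrative assumptions, not the SABAT analysis code.

```python
import numpy as np

# Characteristic prompt-gamma lines in MeV (standard values).
LINES_MEV = {"C": 4.44, "O": 6.13, "N": 5.11, "H": 2.22}

def peak_integral(energy, counts, center, half_width=0.1, escapes=False):
    """Sum counts within +/- half_width MeV of a line; optionally add the
    single (E - 0.511) and double (E - 1.022) escape peaks of that line."""
    centers = [center] + ([center - 0.511, center - 1.022] if escapes else [])
    mask = np.zeros_like(energy, dtype=bool)
    for c in centers:
        mask |= np.abs(energy - c) < half_width
    return counts[mask].sum()

def c_to_o_ratio(energy, counts):
    """C/O elemental ratio with escape peaks folded into the parent lines."""
    c = peak_integral(energy, counts, LINES_MEV["C"], escapes=True)
    o = peak_integral(energy, counts, LINES_MEV["O"], escapes=True)
    return c / max(o, 1)
```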
Figures:
Figure 1. Scheme of the neutron-based sensor developed within the SABAT project. Neutrons are generated through deuterium–tritium (DT) collisions, which also result in α particle creation. Signals from both the γ-rays and α particles are transferred to the data acquisition system, which measures their charges and times of arrival. Events with coincident registration of both particles are then transferred to the data-processing module. Moreover, this mode of operation significantly reduces the environmental background.
Figure 2. Schematic view of the SABAT sensor integrated with a ground vehicle, as simulated in this work. The sensor (light blue) is mounted on a manipulator arm with a camera (dark blue).
Figure 3. System architecture of the UGV which will be used to carry the neutron-based sensor.
Figure 4. Operational modes of mesh topology: cascade connection (a), star topology (b), grid (c).
Figure 5. Gamma-ray energy deposition distribution simulated for background (red curve) and a TNT mine (black). The sensor was placed 2 cm above the TNT.
Figure 6. Elemental ratios of carbon and oxygen (a), carbon and hydrogen (b), carbon and nitrogen (c), and nitrogen and hydrogen (d), simulated for a sample of TNT (black) and background (red). In the calculations, we have taken into account the escape peaks for the oxygen and carbon lines by adding their integrals to those of the original lines. These results correspond to an interrogation time of 1 s.
Figure 7. C/O elemental ratio as a function of the depth at which the TNT sample was buried, for the detector positioned 2 cm above the ground.
Figure 8. Elemental ratios of carbon and oxygen (a), carbon and hydrogen (b), carbon and nitrogen (c), and nitrogen and hydrogen (d), obtained from simulations performed for different masses of TNT with the γ-ray detector 2 cm above the sample. The C/O and C/H ratios were fitted with a parabola with parameter values of (a) A = 0.969 ± 0.021, B = 0.041 ± 0.005, C = 0.0011 ± 0.0002; (b) A = 2.262 ± 0.087, B = 0.116 ± 0.018, C = −0.0037 ± 0.0008.
15 pages, 6714 KiB  
Article
The Application of PVDF-Based Piezoelectric Patches in Energy Harvesting from Tire Deformation
by Kevin Nguyen, Matthew Bryant, In-Hyouk Song, Byoung Hee You and Seyedmeysam Khaleghian
Sensors 2022, 22(24), 9995; https://doi.org/10.3390/s22249995 - 19 Dec 2022
Cited by 8 | Viewed by 2870
Abstract
The application of Polyvinylidene Fluoride, or Polyvinylidene Difluoride (PVDF), in harvesting energy from tire deformation was investigated in this study. An instrumented tire with different sizes of PVDF-based piezoelectric patches and a tri-axial accelerometer attached to its inner liner was used for this purpose and was tested under different conditions on asphalt and concrete surfaces. The results demonstrated that on both pavement types, the generated voltage was directly proportional to the size of the harvester patches, the longitudinal velocity, and the normal load. Additionally, the generated voltage was inversely proportional to the tire inflation pressure. Moreover, the range of generated voltages was slightly higher on asphalt than under the same testing conditions on the concrete surface. Based on the results, it was concluded that, in addition to the potential role of PVDF-based piezoelectric films in harvesting energy from tire deformation, they demonstrate great potential for use as self-powered sensors to estimate tire–road contact parameters.
(This article belongs to the Special Issue On-Board and Remote Sensors in Intelligent Vehicles)
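The sensitivity analyses in the figures below repeatedly use the average of the maximum voltage in each tire revolution; a brief sketch of that metric follows, where the segmentation of the signal into revolutions (from speed and an assumed tire circumference) is an illustrative assumption.

```python
import numpy as np

def avg_peak_per_revolution(v: np.ndarray, fs: float, speed_mps: float,
                            circumference_m: float = 2.0) -> float:
    """v: sampled voltage; fs: sampling rate [Hz]; speed_mps: vehicle speed.
    Splits the trace into whole revolutions and averages the per-rev maxima."""
    samples_per_rev = int(fs * circumference_m / speed_mps)
    n_rev = len(v) // samples_per_rev
    revs = v[: n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return float(revs.max(axis=1).mean())

# Toy example: ~20 mph (8.9 m/s), 10 kHz sampling, one pulse per revolution.
fs, speed = 10_000.0, 8.9
t = np.arange(0, 5.0, 1 / fs)
v = np.abs(np.sin(np.pi * t * speed / 2.0)) ** 8   # contact-patch-like pulses
print(f"average peak voltage: {avg_peak_per_revolution(v, fs, speed):.3f} V")
```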
Figures:
Figure 1. The arrangement of different sensors in the instrumented tire.
Figure 2. The sensors attached to the tire's inner liner: (a) PVDF-based piezoelectric sensor, and (b) tri-axial accelerometer.
Figure 3. (a) High-pressure connectors (male and female) used in this study. (b) The waterproof slip ring used in this study.
Figure 4. The instrumented truck used in this study.
Figure 5. Schematic of the data collection system used in the instrumented truck.
Figure 6. The overall shape of the output signal from the 6 cm piezo patch, with a tire pressure of 35 psi and a longitudinal velocity of 25 mph.
Figure 7. The effect of sensor length on the output signal at longitudinal velocities of (a) 15 mph, (b) 20 mph, (c) 25 mph, and (d) 30 mph, all from a tire with an inflation pressure of 35 psi and no additional normal load.
Figure 8. Sensitivity analysis of the average of the maximum voltage in each tire revolution to the length of the piezoelectric patches for a tire pressure of (a) 35 psi and (b) 25 psi, both without additional normal load.
Figure 9. The effect of longitudinal velocity on the output signal of the (a) 5 cm patch and (b) 6 cm patch, at 35 psi with no additional load.
Figure 10. Sensitivity analysis of the average of the maximum voltage in each tire revolution to the longitudinal velocity at tire pressures of (a) 35 psi and (b) 25 psi, both without additional normal load.
Figure 11. Width of the output voltage at different longitudinal velocities from the 6 cm patch at 35 psi with no additional load.
Figure 12. The effect of external load on the output voltage for (a) 4 cm piezo patches and (b) 5 cm piezo patches, both at a longitudinal velocity of 20 mph and 35 psi.
Figure 13. Sensitivity analysis of the average maximum voltage in each tire revolution to the longitudinal velocity at a tire pressure of 35 psi, with and without the additional load.
Figure 14. The effect of tire pressure on the output voltage for (a) 4 cm piezo patches and (b) 6 cm piezo patches, both at a longitudinal velocity of 20 mph, with no additional load, on asphalt.
Figure 15. Sensitivity analysis of the average maximum voltage in each tire revolution to the longitudinal velocity and tire pressure, (a) without and (b) with the additional load.
Figure 16. The effect of length on the generated voltage at (a) 20 mph and (b) 25 mph, both for the tire at 35 psi with no additional load while on concrete.
Figure 17. The effect of longitudinal velocity on the generated voltage in the 5 cm piezo patch under 35 psi of tire pressure and with no additional load while on concrete.
Figure 18. The effect of tire normal load on the generated voltage for the 5 cm patch at a velocity of 20 mph with a tire pressure of 35 psi while on concrete.
Figure 19. The effect of tire pressure on the generated voltage for the 5 cm patch at a velocity of 20 mph and with no additional load while on concrete.
Figure 20. The generated voltage from the 5 cm piezoelectric patch for a tire pressure of (a) 35 psi and (b) 25 psi, both at a longitudinal velocity of 20 mph and with no additional load.
Figure 21. The effect of surface type on the generated signal from the 5 cm piezoelectric patch at a longitudinal velocity of 20 mph, a tire pressure of 35 psi, and no additional load.
15 pages, 1189 KiB  
Article
Plasmonic Sensors beyond the Phase Matching Condition: A Simplified Approach
by Alessandro Tuniz, Alex Y. Song, Giuseppe Della Valle and C. Martijn de Sterke
Sensors 2022, 22(24), 9994; https://doi.org/10.3390/s22249994 - 19 Dec 2022
Cited by 3 | Viewed by 2611
Abstract
The conventional approach to optimising plasmonic sensors is typically based entirely on ensuring phase matching between the excitation wave and the surface plasmon supported by the metallic structure. However, this leads to suboptimal performance, even in the simplest sensor configuration based on the Otto geometry. We present a simplified coupled mode theory approach for evaluating and optimizing the sensing properties of plasmonic waveguide refractive index sensors. It only requires the calculation of propagation constants, without the need for calculating mode overlap integrals. We apply our method by evaluating the wavelength-, device-length-, and refractive-index-dependent transmission spectra for an example silicon-on-insulator-based sensor of finite length. This reveals all the salient spectral features, which are consistent with full-field finite element calculations. This work provides a rapid and convenient framework for designing dielectric-plasmonic sensor prototypes; its applicability to fibre plasmonic sensors is also discussed.
(This article belongs to the Special Issue Plasmonic Optical Fiber Sensors: Technology and Applications)
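The core of the CMT calculation can be condensed to propagating two coupled mode amplitudes over the device length; the sketch below does this with a matrix exponential, using made-up values of β1, β2, and κ rather than the paper's silicon-on-insulator data.

```python
import numpy as np
from scipy.linalg import expm

def transmission(beta1: float, beta2: complex, kappa: float, L: float) -> float:
    """Propagate (psi1, psi2) = (1, 0) over length L via d(psi)/dz = i M psi,
    then return the dielectric-port transmission T = |psi1(L)|^2."""
    M = np.array([[beta1, kappa],
                  [kappa, beta2]], dtype=complex)
    psi = expm(1j * M * L) @ np.array([1.0, 0.0])
    return float(np.abs(psi[0]) ** 2)

# Toy values in rad/um: sweep the detuning of the lossy plasmonic mode.
beta1, kappa = 9.0, 0.15
for d in np.linspace(-0.5, 0.5, 5):
    T = transmission(beta1, beta1 + d + 0.05j, kappa, L=15.0)
    print(f"detuning {d:+.2f}: T = {T:.3f}")   # dip near zero detuning
```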
Figures:
Figure 1. Concept schematic of the challenge of calculating resonances in plasmonic sensors. (a) The simple Otto configuration relies on monitoring the reflectivity R of plane waves propagating in semi-infinite media as a function of angle θ. At the angle θ_SPP an SPP is excited. (b) θ-dependent reflectance spectrum for n_a = 1, λ = 800 nm, and w as labelled. Also shown is the full colourmap of the reflectance as a function of θ and w for (c) n_a = 1 and (d) n_a = 1.33. Note that the spectral maps are subtly dependent on both n_a and w.
Figure 2. Schematic of the HPWG sensor and the coupled mode theory picture. The modes in the dielectric and plasmonic regions, ψ1 and ψ2 respectively, couple linearly as described by Equation (6). The power in the dielectric at the output is given by T = |ψ1|². The periodic exchange of power between waveguides can lead to a resonant spectrum that in general depends on both the device length L and the analyte index n_a [25].
Figure 3. Effective index n_eff = β/k0 as a function of wavelength for the geometry shown in Figure 2 when (a) n_a = 1.3, (b) n_a = 1.4, (c) n_a = 1.5 in the lossless case. The dashed lines show the isolated plasmonic and dielectric modes, respectively; the solid lines show the hybrid eigenmodes. (d–f) show the associated calculated coupling coefficients, following the simple expression in Equation (8) (black line). The top row shows a schematic of the magnetic field for the plotted isolated or hybrid/super-modes.
Figure 4. Real part of the effective index Re(n_eff) = Re(β/k0) as a function of wavelength for the geometry shown in Figure 2 when (a) n_a = 1.3, (b) n_a = 1.4, (c) n_a = 1.5, using the lossy Drude model for the gold permittivity. The dashed lines show the isolated plasmonic and dielectric modes, respectively; the solid lines show the hybrid eigenmodes according to the "exact" solution (dark) and obtained from CMT via the eigenvalues of Equation (9) (light). (d–f) show the associated Im(n_eff).
Figure 5. Transmitted power of the plasmonic sensor as a function of λ and n_a for L = 10 μm using (a) CMT and (b) FEM. (c,d): same as (a,b) for L = 15 μm. (e,f): same for L = 20 μm. (g,h): same for L = 50 μm. EP: exceptional point.
Figure 6. Calculated colour maps of (a) |β1 − β2^R|/k0 + |κ − β2^I/2|/k0 using CMT and (b) |β̃1 − β̃2|/k0 using the exact supermodes. The global minima in the phase space show the location of the exceptional point using our CMT model and the exact solution, as per Equations (10) and (11).
Figure 7. (a) Green (right axis): phase-matching wavelength λ_PM where β1 = β2^R, and associated half beat length L_b according to the supermodes obtained with CMT (orange) and "exact" calculations (blue). (b) Associated absorption length L_a. Solid lines indicate the average L_a = (L_a¹ + L_a²)/2; shaded regions encompass the L_a¹ and L_a² boundaries.
Figure 8. (a) Transmission spectrum using the "conventional" approach of Equation (14), as a function of wavelength, for the three analyte indices as labelled, using L = 10 μm. Also shown are the resonant wavelength λ_R, corresponding to the spectral minimum, and δλ, corresponding to the FWHM. (b) Associated λ_R vs. n_a (green circles, left axis), second-order polynomial fit (green line), and resulting sensitivity S (orange line, right axis). Also shown in (c) are δλ vs. n_a (orange curve, left axis) and the total FOM = S/δλ. (d–f): same as (a–c), obtained from the CMT approach, using a subset of the data shown in Figure 5a. (g–i): same as (d–f), obtained from FEM calculations, using a subset of the data shown in Figure 5b.
24 pages, 11528 KiB  
Article
Driver Take-Over Behaviour Study Based on Gaze Focalization and Vehicle Data in CARLA Simulator
by Javier Araluce, Luis M. Bergasa, Manuel Ocaña, Elena López-Guillén, Rodrigo Gutiérrez-Moreno and J. Felipe Arango
Sensors 2022, 22(24), 9993; https://doi.org/10.3390/s22249993 - 19 Dec 2022
Cited by 7 | Viewed by 4185
Abstract
Autonomous vehicles are the near future of the automobile industry. However, until they reach Level 5, humans and cars will share this intermediate future. Therefore, studying the transition between autonomous and manual modes is a fascinating topic. Automated vehicles may still need to occasionally hand control to drivers due to technology limitations and legal requirements. This paper presents a study of driver behaviour in the transition between autonomous and manual modes using the CARLA simulator. To our knowledge, this is the first take-over study with transitions conducted on this simulator. For this purpose, we obtain driver gaze focalization and fuse it with the road's semantic segmentation to track where and when the user is paying attention, in addition to the actuator reaction-time measurements provided in the literature. To track gaze focalization in a non-intrusive and inexpensive way, we use a camera-based method developed in previous works, built with the OpenFace 2.0 toolkit and a NARMAX calibration method, which transforms the face parameters extracted by the toolkit into the point where the user is looking on the simulator scene. The study was carried out by different users on our simulator, which is composed of three screens, a steering wheel and pedals. We distributed this proposal across two different computer systems due to the computational cost of the CARLA-based simulator; the Robot Operating System (ROS) framework is in charge of the communication between both systems to provide portability and flexibility. Results of the transition analysis are provided using state-of-the-art metrics and a novel driver situation-awareness metric for 20 users in two different scenarios.
(This article belongs to the Special Issue Feature Papers in Vehicular Sensing)
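The fusion of gaze focalization with semantic segmentation amounts to testing whether the gaze-uncertainty disk (Figure 5 below) overlaps road-class pixels; a minimal sketch follows, where the road class id (assumed here to follow CARLA's segmentation palette) and the synthetic mask are illustrative.

```python
import numpy as np

ROAD_CLASS = 7        # assumed road tag, following CARLA's palette
GAZE_RADIUS_PX = 30   # circular uncertainty radius used in the study

def gaze_on_road(seg: np.ndarray, gaze_ij: tuple[int, int]) -> bool:
    """seg: (H, W) class-id mask; gaze_ij: (row, col) estimated gaze point.
    True if any road pixel falls inside the gaze-uncertainty disk."""
    h, w = seg.shape
    ii, jj = np.ogrid[:h, :w]
    disk = (ii - gaze_ij[0]) ** 2 + (jj - gaze_ij[1]) ** 2 <= GAZE_RADIUS_PX ** 2
    return bool(np.any(seg[disk] == ROAD_CLASS))

# Toy frame: the road fills the lower half of a 600 x 800 image.
seg = np.zeros((600, 800), dtype=np.uint8)
seg[300:, :] = ROAD_CLASS
print(gaze_on_road(seg, (320, 400)))   # True: gaze on the road
print(gaze_on_road(seg, (100, 400)))   # False: gaze above the horizon
```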
Figures:
Figure 1. Automation levels, with the level of attention required by the driver.
Figure 2. Experiment scenario take-over process timeline.
Figure 3. Framework proposed to analyse the take-over request (TOR). Green boxes denote the first subsystem/computer and blue boxes the second one.
Figure 4. Point of view of the driver during the experiment.
Figure 5. Circular uncertainty gaze area with a radius of 30 pixels on the image, following a conical projection represented by an orange cone.
Figure 6. Adaptive cruise control scenario. The ego-vehicle is represented in blue and the other agent in white.
Figure 7. Adaptive cruise control while another vehicle overtakes the ego-vehicle in the side lane. The ego-vehicle is the blue car, and the other agent is the white one.
Figure 8. CTRV model prediction and IOU between both vehicles to determine TTC.
Figure 9. Gaze focalization analysis with semantic segmentation. (a) Experiment: ACC; driver attention: focused. (b) Experiment: ACC + passing; driver attention: focused. (c) Experiment: ACC; driver attention: using the mobile phone. (d) Experiment: ACC + passing; driver attention: using the mobile phone. (e) Experiment: ACC; driver attention: reading. (f) Experiment: ACC + passing; driver attention: reading. (g) Experiment: ACC; driver attention: talking to passenger. (h) Experiment: ACC + passing; driver attention: talking to passenger.
Figure 10. Velocity analysis after the transition (mean and standard deviation). Transitions are from autonomous to manual. Panels (a–h) as in Figure 9.
Figure 11. Throttle and brake analysis after the transition (mean and standard deviation). Transitions are from autonomous to manual (A/M). Panels (a–h) as in Figure 9.
Figure 12. Steering analysis after the transition (mean and standard deviation). Transitions are from autonomous to manual (A/M). Panels (a–h) as in Figure 9.
Figure 13. Lane error analysis after the transition (mean and standard deviation). Transitions are from autonomous to manual (A/M). Panels (a–h) as in Figure 9.
12 pages, 5793 KiB  
Article
Development of a Large-Scale Roadside Facility Detection Model Based on the Mapillary Dataset
by Zhehui Yang, Chenbo Zhao, Hiroya Maeda and Yoshihide Sekimoto
Sensors 2022, 22(24), 9992; https://doi.org/10.3390/s22249992 - 19 Dec 2022
Cited by 6 | Viewed by 3557
Abstract
The detection of road facilities or roadside structures is essential for high-definition (HD) maps and intelligent transportation systems (ITSs). With the rapid development of deep-learning algorithms in recent years, deep-learning-based object detection techniques have provided more accurate and efficient performance, and have become an essential tool for HD map reconstruction and advanced driver-assistance systems (ADASs). Therefore, the performance evaluation and comparison of the latest deep-learning algorithms in this field is indispensable. However, most existing works in this area limit their focus to the detection of individual targets, such as vehicles, pedestrians, or traffic signs, from driving-view images. In this study, we present a systematic comparison of three recent algorithms for large-scale multi-class road facility detection, namely Mask R-CNN, YOLOx, and YOLOv7, on the Mapillary dataset. The experimental results are evaluated according to recall, precision, mean F1-score, and computational consumption. YOLOv7 outperforms the other two networks in road facility detection, with a precision and recall of 87.57% and 72.60%, respectively. Furthermore, we test the model performance on our custom dataset obtained from the Japanese road environment. The results demonstrate that models trained on the Mapillary dataset exhibit sufficient generalization ability. The comparison presented in this study aids in understanding the strengths and limitations of the latest networks in multi-class object detection on large-scale street-level datasets.
(This article belongs to the Special Issue AI Applications in Smart Networks and Sensor Devices)
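The reported precision, recall, and mean F1-score derive from per-class confusion-matrix counts; a short sketch of that evaluation follows, with a made-up 3 × 3 matrix standing in for the paper's results.

```python
import numpy as np

def per_class_scores(cm: np.ndarray):
    """cm: confusion matrix with rows = ground truth, columns = predictions.
    Returns per-class precision and recall, and the mean F1-score."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # column sums
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # row sums
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, float(f1.mean())

cm = np.array([[90,  5,  5],    # e.g. street light
               [10, 80, 10],    # e.g. utility pole
               [ 5, 15, 80]])   # e.g. traffic sign
p, r, f = per_class_scores(cm)
print(f"precision {p.round(2)}, recall {r.round(2)}, mean F1 {f:.3f}")
```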
Figures:
Figure 1. Sample annotation of the Japanese road-view dataset.
Figure 2. Architecture of Mask R-CNN with Swin Transformer.
Figure 3. Illustration of removed classes.
Figure 4. Number of labeled instances per class and class merging.
Figure 5. Normalized confusion matrices of detection results.
Figure 6. Qualitative comparison of output from different models on the custom dataset and the Mapillary dataset.
15 pages, 1894 KiB  
Article
A Mobility Model for a 3D Non-Stationary Geometry Cluster-Based Channel Model for High Speed Trains in MIMO Wireless Channels
by Eva Assiimwe and Yihenew Wondie Marye
Sensors 2022, 22(24), 10019; https://doi.org/10.3390/s222410019 - 19 Dec 2022
Cited by 3 | Viewed by 2123
Abstract
During channel modeling for high-mobility channels, such as high-speed train (HST) channels, the velocity of the mobile radio station is usually assumed to be constant. However, this might not be realistic due to the dynamic movement of the train along the track. Therefore, in this paper, an enhanced Gauss–Markov mobility model combined with a 3D non-stationary geometry-based stochastic model (GBSM) for HSTs in MIMO wireless channels is proposed. The non-isotropic scatterers within a cluster are assumed to be distributed around the sphere on which the mobile relay station (MRS) is located. The multipath components (MPCs) are modeled with varying velocities, and the mobility model is a function of time. The MPCs are represented in a death–birth cluster using the Markov process. Furthermore, the channel statistics, i.e., the space-time correlation function, the root-mean-square Doppler shift, and the quasi-stationary interval, are derived from the non-stationary model. The model shows how the quasi-stationary time increases from 0.21 to 0.451 s as the acceleration of the HST decreases from 0.6 to 0.2 m/s². In addition, the impact of the distribution of the angles on the channel statistics is presented. Finally, the simulated results are compared with measured results; the close agreement between the proposed model and the measurements indicates that the model can be used to characterize the channel's properties.
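The enhanced Gauss–Markov idea, replacing the constant-velocity assumption with a temporally correlated speed process, can be sketched in a few lines; the memory parameter, mean speed, and noise level below are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def gauss_markov_speed(steps: int, mu: float = 100.0, a: float = 0.9,
                       sigma: float = 3.0, v0: float = 80.0,
                       seed: int = 0) -> np.ndarray:
    """Temporally correlated speed trace [m/s]:
    v[t] = a*v[t-1] + (1 - a)*mu + sigma*sqrt(1 - a^2)*w[t]."""
    rng = np.random.default_rng(seed)
    v = np.empty(steps)
    v[0] = v0
    for t in range(1, steps):
        v[t] = (a * v[t - 1] + (1 - a) * mu
                + sigma * np.sqrt(1 - a**2) * rng.standard_normal())
    return v

v = gauss_markov_speed(steps=500)
print(f"speed: mean {v.mean():.1f} m/s, std {v.std():.2f} m/s")
```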
Figures:
Figure 1. A 3D mobility non-stationary cluster GBSM for HST channels.
Figure 2. The death–birth process of the total number of clusters versus time.
Figure 3. The absolute TCFs at any time instant t (s).
Figure 4. The relationship between the k distributions of intra-cluster paths and time correlations for azimuth angles.
Figure 5. The relationship between the k distributions of intra-cluster paths and time correlations for elevation angles.
Figure 6. The percentage relative errors of the RMS-DS for various accelerations.
Figure 7. The stationary intervals using the proposed model for time-varying angle, cluster power, and the IMT-A channel.
Figure 8. The stationary interval of the proposed model, the measured channel, and the IMT-A channel model.
12 pages, 3308 KiB  
Article
Joint Estimation of Mass and Center of Gravity Position for Distributed Drive Electric Vehicles Using Dual Robust Embedded Cubature Kalman Filter
by Zhiguo Zhang, Guodong Yin and Zhixin Wu
Sensors 2022, 22(24), 10018; https://doi.org/10.3390/s222410018 - 19 Dec 2022
Cited by 8 | Viewed by 3018
Abstract
The accurate estimation of the mass and center of gravity (CG) position is key to vehicle dynamics modeling. The perturbation of key parameters in vehicle dynamics models can degrade vehicle control accuracy and may even cause serious traffic accidents. A dual robust embedded cubature Kalman filter (RECKF) algorithm, which takes into account unknown measurement noise, is proposed for the joint estimation of mass and CG position. First, the mass is identified from the longitudinal tire forces obtained directly in the distributed drive electric vehicle, using the whole-vehicle longitudinal dynamics model and the RECKF. Then, the CG is estimated with the RECKF using the mass estimation results and the vertical vehicle model. Finally, different virtual tests show that, compared with the cubature Kalman filter, the RECKF reduces the root mean square error of the mass and CG estimates by at least 7.4% and 2.9%, respectively.
(This article belongs to the Topic Vehicle Dynamics and Control)
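For readers unfamiliar with cubature Kalman filtering, the sketch below shows the standard third-degree spherical-radial cubature-point construction on which any CKF variant, including the paper's robust embedded version, is built; it is a generic textbook step, not the authors' RECKF, and the example state values are made up.

```python
import numpy as np

def cubature_points(x, P):
    """Standard CKF step: generate 2n equally weighted cubature points
    at x +/- sqrt(n) * (columns of the Cholesky factor of P)."""
    n = x.size
    S = np.linalg.cholesky(P)                              # P = S @ S.T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # unit directions
    return x[:, None] + S @ xi                             # shape (n, 2n)

# Example: 2-state estimate (mass in kg, CG position in m), toy covariance
pts = cubature_points(np.array([1500.0, 1.2]), np.diag([100.0, 0.01]))
```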
Show Figures
Figure 1: The vehicle model.
Figure 2: The framework of the estimation method.
Figure 3: The flowchart of joint estimation.
Figures 4–8: The vehicle speed, acceleration, tire forces, estimated mass, and estimated CG in the acceleration case.
Figures 9–13: The vehicle speed, acceleration, tire forces, estimated mass, and estimated CG in the deceleration case.
11 pages, 9939 KiB  
Brief Report
Spatial Distribution of Muscular Effects of Acute Whole-Body Electromyostimulation at the Mid-Thigh and Lower Leg—A Pilot Study Applying Magnetic Resonance Imaging
by Marina Götz, Rafael Heiss, Simon von Stengel, Frank Roemer, Joshua Berger, Armin Nagel, Michael Uder and Wolfgang Kemmler
Sensors 2022, 22(24), 10017; https://doi.org/10.3390/s222410017 - 19 Dec 2022
Cited by 4 | Viewed by 2487
Abstract
Whole-body electromyostimulation (WB-EMS) is an innovative training method that stimulates large areas simultaneously. In order to determine the spatial distribution of WB-EMS with respect to volume involvement and stimulation depth, we determined the extent of intramuscular edema using magnetic resonance imaging (MRI) as [...] Read more.
Whole-body electromyostimulation (WB-EMS) is an innovative training method that stimulates large muscle areas simultaneously. In order to determine the spatial distribution of WB-EMS with respect to volume involvement and stimulation depth, we determined the extent of intramuscular edema using magnetic resonance imaging (MRI) as a marker of structural effects. An intense first WB-EMS application (20 min, bipolar, 85 Hz, 350 µs) was conducted with eight physically less-trained students without previous WB-EMS experience. Transversal T2-weighted MRI was performed at baseline and 72 h post WB-EMS to identify edema at the mid-thigh and lower leg. The depth of the edema ranged from superficial to maximum depth, with superficial and deeper muscle groups of the mid-thigh or lower leg affected in an approximately similar fashion. However, the grade of edema differed between the muscle groups, which suggests that the intensity of EMS-induced muscular contraction was not identical for all muscles. WB-EMS via surface cuff electrodes thus has an effect on deeper parts of the stimulated anatomy. Reviewing the spatial and volume distribution, we observed a heterogeneous pattern of edema. We attribute this finding predominantly to different stimulus thresholds of the muscles and differences in their stress resistance. Full article
(This article belongs to the Section Biomedical Sensors)
Show Figures
Figure 1: (a) Supervised video-guided WB-EMS application with two trainees and two instructors; (b) electrode placement at the mid-thigh; (c) electrode placement at the mid-calf.
Figure 2: MRI muscle status with (left) and without (right) WB-EMS application at the mid-thigh region of interest (ROI) of three participants (a–c); the treated legs show intramuscular edema in various thigh muscles (e.g., vastus lateralis, biceps femoris, semitendinosus, semimembranosus), whereas all muscles of the control legs appear normal.
Figure 3: MRI muscle status with (left) and without (right) WB-EMS application at the lower leg ROI of three participants (a–c); the treated legs show intramuscular edema in various calf muscles (e.g., gastrocnemius, soleus, fibularis longus, tibialis posterior), whereas all muscles of the control legs appear normal.
16 pages, 33181 KiB  
Article
CamNuvem: A Robbery Dataset for Video Anomaly Detection
by Davi D. de Paula, Denis H. P. Salvadeo and Darlan M. N. de Araujo
Sensors 2022, 22(24), 10016; https://doi.org/10.3390/s222410016 - 19 Dec 2022
Cited by 4 | Viewed by 5807
Abstract
(1) Background: The research area of video surveillance anomaly detection aims to automatically detect the moment when a video surveillance camera captures something that does not fit the normal pattern. This is a difficult task, but it is important to automate, improve, and [...] Read more.
(1) Background: The research area of video surveillance anomaly detection aims to automatically detect the moment when a video surveillance camera captures something that does not fit the normal pattern. This is a difficult task, but automating it is important for improving and lowering the cost of the detection of crimes and other accidents. The UCF-Crime dataset is currently the most realistic crime dataset, containing hundreds of videos distributed across several categories; it includes a robbery category, with videos of people stealing material goods using violence, but this category contains only a few videos. (2) Methods: This work focuses solely on the robbery category, presenting a new weakly labelled dataset that contains 486 new real-world robbery surveillance videos acquired from public sources. (3) Results: We modified and applied three state-of-the-art video surveillance anomaly detection methods to create a benchmark for future studies. We showed that in the best scenario, taking into account only the anomaly videos in our dataset, the best method achieved an AUC of 66.35%. When all anomaly and normal videos were taken into account, the best method achieved an AUC of 88.75%. (4) Conclusion: This result shows that there is a huge research opportunity to create new methods and approaches that can improve robbery detection in video surveillance. Full article
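The benchmark numbers above are AUCs over frame-level anomaly scores; a minimal sketch of how such an AUC is computed with scikit-learn, using made-up labels and scores, is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: 1 for frames inside an annotated robbery, 0 otherwise;
# scores: per-frame anomaly scores from a detector. Values are made up.
y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1])
scores = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.2, 0.3, 0.6])
print(f"frame-level AUC = {roc_auc_score(y_true, scores):.4f}")
```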
Show Figures
Figure 1: CamNuvem dataset statistics. (a) Anomaly percentage of each test video; (b) the number of videos in the dataset.
Figure 2: Two samples with clearly different fields of view: one camera films from a greater height (people captured from above), the other from a lower height (people captured from the front).
Figure 3: Samples in the abnormal category; image elements such as a man crouching or raising his hands indicate a robbery.
Figure 4: RTFM, WSAL, and RADS ROC curves without 10-crop for the CamNuvem dataset. (a) Entire test set; (b) only the anomaly videos.
Figure 5: RTFM, WSAL, and RADS ROC curves with 10-crop for the CamNuvem dataset. (a) Entire test set; (b) only the anomaly videos.
17 pages, 2584 KiB  
Article
Evaluation of the Influence of Machine Tools on the Accuracy of Indoor Positioning Systems
by Till Neuber, Anna-Maria Schmitt, Bastian Engelmann and Jan Schmitt
Sensors 2022, 22(24), 10015; https://doi.org/10.3390/s222410015 - 19 Dec 2022
Cited by 3 | Viewed by 2190
Abstract
In recent years, the use of indoor localization techniques has increased significantly in a large number of areas, including industry and healthcare, primarily for monitoring and tracking reasons. From the field of radio frequency technologies, an ultra-wideband (UWB) system offers comparatively high accuracy [...] Read more.
In recent years, the use of indoor localization techniques has increased significantly in a large number of areas, including industry and healthcare, primarily for monitoring and tracking purposes. Among radio frequency technologies, ultra-wideband (UWB) systems offer comparatively high accuracy and are therefore suitable for use cases with high precision requirements in position determination, for example, localizing an employee interacting with a machine tool on the shopfloor. Indoor positioning systems based on radio signals are influenced by environmental obstacles. Although the influence of building structures such as walls and furniture has already been analysed in the literature, the influence of metal machine tools on the accuracy of position determination has not yet been evaluated. Accordingly, the research question for this article is: To what extent is the positioning accuracy of the UWB system influenced by a metal machine tool? The accuracy was measured in a test setup comprising four scenarios in a production environment. For this purpose, the visual contact between the transmitter and receiver modules of a commercially available indoor positioning system, including further interfering factors, was improved step by step from scenario 1 to 4. A laser tracker was used as the reference measuring device. The data were analysed based on the Type A evaluation of standard uncertainty according to the Guide to the Expression of Uncertainty in Measurement (GUM). An improvement in standard deviation from 87.64 cm ± 32.27 cm to 6.07 cm ± 2.24 cm at a 95% confidence level was shown, which provides guidance for the setup of an indoor positioning system on the shopfloor. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Germany 2022)
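A minimal sketch of the GUM Type A evaluation the analysis relies on, assuming approximately normal deviations so that a coverage factor k = 2 corresponds to roughly 95% confidence; the sample values are illustrative, not the paper's measurements.

```python
import numpy as np

def type_a_uncertainty(deviations_cm, k=2.0):
    """GUM Type A evaluation: sample standard deviation, standard
    uncertainty of the mean, and expanded uncertainty U = k * u."""
    s = np.std(deviations_cm, ddof=1)       # experimental std deviation
    u = s / np.sqrt(len(deviations_cm))     # standard uncertainty of mean
    return s, u, k * u

# Illustrative UWB-vs-laser-tracker position deviations in cm
s, u, U = type_a_uncertainty([6.1, 5.8, 6.4, 6.0, 6.3, 5.9])
```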
Show Figures
Figure 1: UWB measuring method [8].
Figure 2: Overview of measurement deviations [21].
Figure 3: Setup with laser tracker.
Figure 4: Grid measuring points.
Figures 5–8: Visualization of scenarios 1–4 (two, three, and four lines of sight, and four lines of sight in a free environment).
Figures 9–12: Data visualization of scenarios 1–4.
20 pages, 1671 KiB  
Article
Low-Complexity Lossless Coding of Asynchronous Event Sequences for Low-Power Chip Integration
by Ionut Schiopu and Radu Ciprian Bilcu
Sensors 2022, 22(24), 10014; https://doi.org/10.3390/s222410014 - 19 Dec 2022
Cited by 5 | Viewed by 2199
Abstract
The event sensor provides high temporal resolution and generates large amounts of raw event data. Efficient low-complexity coding solutions are required for integration into low-power event-processing chips with limited memory. In this paper, a novel lossless compression method is proposed for encoding the [...] Read more.
The event sensor provides high temporal resolution and generates large amounts of raw event data. Efficient low-complexity coding solutions are required for integration into low-power event-processing chips with limited memory. In this paper, a novel lossless compression method is proposed for encoding event data represented as asynchronous event sequences. The proposed method employs only low-complexity coding techniques, making it suitable for hardware implementation in low-power event-processing chips. The first novel contribution is a low-complexity coding scheme that uses a decision tree to reduce the representation range of the residual error. The decision tree is formed using a triplet threshold parameter that divides the input data range into several coding ranges arranged at concentric distances from an initial prediction, so that the residual error of the true value information is represented using a reduced number of bits. Another novel contribution is an improved representation that divides the input sequence into same-timestamp subsequences, where each subsequence collects the events with the same timestamp in ascending order of the largest dimension of the event spatial information. The proposed same-timestamp representation replaces the event timestamp information with the same-timestamp subsequence length and encodes it together with the event spatial and polarity information into a different bitstream. A further novel contribution is random access to any time window via additional header information. The experimental evaluation on a highly variable event density dataset demonstrates that the proposed low-complexity lossless coding method provides an average improvement of 5.49%, 11.45%, and 35.57% over the state-of-the-art performance-oriented lossless data compression codecs Bzip2, LZMA, and ZLIB, respectively. To our knowledge, this paper proposes the first low-complexity lossless compression method for encoding asynchronous event sequences that is suitable for hardware implementation in low-power chips. Full article
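A toy analogue of the triple-threshold range partition may make the idea concrete: the residual around an initial prediction is classified into one of several concentric ranges via a small decision tree, so small residuals get short codes. The thresholds and bit accounting below are illustrative, not the paper's actual scheme.

```python
def ttp_encode(residual, thresholds=(2, 8, 32)):
    """Toy triple-threshold range partition: walk a small decision tree
    over concentric ranges around the prediction, returning a range id
    (codable with ~2 bits here) plus the offset within that range."""
    mag = abs(residual)
    for range_id, t in enumerate(thresholds):
        if mag <= t:
            return range_id, residual       # offset needs ~log2(2t+1) bits
    return len(thresholds), residual        # escape: full-range fallback

print(ttp_encode(3))    # -> (1, 3): second range, short offset code
print(ttp_encode(100))  # -> (3, 100): escape range
```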
Show Figures
Figure 1: The proposed low-complexity lossless coding framework. The input asynchronous event sequence S_T is first represented as a set of same-timestamp subsequences S^k with timestamp t^k, then encoded losslessly; the per-subsequence bitstreams can be stored directly, or packaged per time window Δ_RA with bitstream-length header information so that the codec provides random access (RA) to any time window of size Δ_RA.
Figure 2: The proposed representation based on the proposed same-timestamp (ST) order (right) compared with the sensor's event-by-event (EE) order (left); the red arrow shows the write-to-file order used to generate the input files fed to the traditional methods.
Figure 3: The proposed low-complexity coding scheme, triple threshold-based range partition (TTP): range partitions and decision trees for TTP, TTP_y, TTP_e, and TTP_L.
Figure 4: Deterministic cases: (a) if x_1 < 1 or x_2 > H, condition (c4) is not checked when building the context tree and one bit is saved; (b) if x ∈ (x_1 − 2^(n_1−1), 2^(n_1−1)], x is represented with one bit fewer than when x ∈ [1, x_1 − 2^(n_1−1)] or x ∈ (2^(n_1−1), x_1].
Figure 5: The encoding workflow of the proposed LLC-ARES method for a 2 μs asynchronous event sequence containing 23 events, regrouped from EE to ST order and encoded into a 316-bit output bitstream stored in 40 bytes.
Figure 6: (a) DSEC sequence time length (s) and event density (Mevps), with sequences sorted in ascending order of acquisition density and time length constrained to the first 10^8 μs (100 s) of captured event data; (b) cumulated number of events (Mev) over the first 10 s of the lowest- (SeqID: 01), medium- (SeqID: 41), and highest-density (SeqID: 82) sequences.
Figures 7–13: Compression ratio, bitrate, encoded event density, time ratio, encoding runtime, decoding runtime, and relative compression for RA over the DSEC dataset [34], with sequences sorted in ascending order of acquisition density.
21 pages, 4936 KiB  
Article
Versatile Confocal Raman Imaging Microscope Built from Off-the-Shelf Opto-Mechanical Components
by Deseada Diaz Barrero, Genrich Zeller, Magnus Schlösser, Beate Bornschein and Helmut H. Telle
Sensors 2022, 22(24), 10013; https://doi.org/10.3390/s222410013 - 19 Dec 2022
Cited by 2 | Viewed by 3298
Abstract
Confocal Raman microscopic (CRM) imaging has evolved to become a key tool for spatially resolved, compositional analysis and imaging, down to the μm-scale, and nowadays one may choose between numerous commercial instruments. That notwithstanding, situations may arise which exclude the use of a [...] Read more.
Confocal Raman microscopic (CRM) imaging has evolved to become a key tool for spatially resolved, compositional analysis and imaging down to the μm-scale, and nowadays one may choose between numerous commercial instruments. That notwithstanding, situations may arise which exclude the use of a commercial instrument, e.g., if the analysis involves toxic or radioactive samples or environments, where one may not wish to render an expensive instrument unusable for other purposes due to contamination. Therefore, custom-designed CRM instrumentation, being adaptable to hazardous conditions and providing operational flexibility, may be beneficial. Here, we describe a CRM setup constructed nearly in its entirety from off-the-shelf optomechanical and optical components. The original aim was to develop a CRM suitable for the investigation of samples exposed to tritium. For increased flexibility, the CRM system incorporates optical fiber coupling to both the Raman excitation laser and the spectrometer. Lateral raster scans and axial profiling of samples are facilitated by a motorized xyz-translation assembly. Besides the description of the construction and alignment of the CRM system, we also provide (i) the experimental evaluation of system performance (such as, e.g., spatial resolution) and (ii) examples of Raman raster maps and axial profiles of selected thin-film samples (such as, e.g., graphene sheets). Full article
(This article belongs to the Section Sensing and Imaging)
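Lateral raster mapping of the kind described reduces, in software, to generating stage coordinates and grabbing one spectrum at each point; a minimal serpentine-scan sketch follows, where `move_xy` and `grab_spectrum` are hypothetical hardware callables standing in for the real stage and spectrometer drivers.

```python
def raster_scan(nx, ny, step_um, move_xy, grab_spectrum):
    """Serpentine raster over an nx-by-ny grid: move the motorized stage
    to each point and record one full Raman spectrum, building the
    hyperspectral stack from which single-wavenumber maps are sliced."""
    stack = []
    for j in range(ny):
        cols = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)
        for i in cols:
            move_xy(i * step_um, j * step_um)
            stack.append(((i, j), grab_spectrum()))
    return stack
```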
Show Figures
Figure 1: Overview of the confocal Raman microscope constructed from Thorlabs cage-system and optical components (the DPSS excitation laser, Raman spectrometer, and motorized sample stage are not shown). Conceptual groups: A: excitation laser coupling and Raman light collection; B: the wide-field imaging arm; C: the confocal light collection arm.
Figure 2: Photograph of the CRM system with the xyz-motorized sample stage and fiber-coupled excitation laser; insert: a GFET sample on the xyz-stage illuminated through the CRM objective (breadboard tapped-hole spacing = 25 mm).
Figure 3: Concept of Raman spectral mapping, exemplified for a graphene sample (GFET-S10, Graphenea): a 50 × 50 μm² device area is raster-scanned (step increment ΔS = 7, ~9 μm), full spectra are recorded at each point, and a hyperspectral slice (~1330 cm⁻¹) generates the Raman map; spectra from the graphene layer, gold contact, and substrate highlight the distinct spectral differences of the sample constituents.
Figure 4: Determination of the laser focal beam diameter on the target through the ×10 objective via an edge scan across a chromium grid line of a silicon SEM-finder grid (EM-Tec FG1, Micro to Nano), using the 2TO spectral line of silicon [25].
Figure 5: Raman image maps of a 200 × 200 μm² GFET device (Graphenea): (a) schematic structure with the ~80 × 80 μm² scanned area marked; (b–d) maps of the 2D-, G-, and D-peak signals at a step increment of ΔS = 2 (ΔS = 1 corresponds to ~1.25 μm).
16 pages, 4632 KiB  
Article
Nanocomposite Based on HA/PVTMS/Cl2FeH8O4 as a Gas and Temperature Sensor
by Sohrab Nasiri, Marzieh Rabiei, Ieva Markuniene, Mozhgan Hosseinnezhad, Reza Ebrahimi-Kahrizsangi, Arvydas Palevicius, Andrius Vilkauskas and Giedrius Janusas
Sensors 2022, 22(24), 10012; https://doi.org/10.3390/s222410012 - 19 Dec 2022
Viewed by 2678
Abstract
In this paper, a novel nanocrystalline composite material of hydroxyapatite (HA)/polyvinyltrimethoxysilane (PVTMS)/iron(II) chloride tetrahydrate (Cl2FeH8O4) with hexagonal structure is proposed for the fabrication of a gas/temperature sensor. Taking into account the sensitivity of HA to high temperatures, to [...] Read more.
In this paper, a novel nanocrystalline composite material of hydroxyapatite (HA)/polyvinyltrimethoxysilane (PVTMS)/iron(II) chloride tetrahydrate (Cl2FeH8O4) with hexagonal structure is proposed for the fabrication of a gas/temperature sensor. Taking into account the sensitivity of HA to high temperatures, a freeze-drying machine was designed and fabricated to prevent the collapse and breakdown of bonds and the leakage of volatiles without damaging the composite structure. X-ray diffraction, FTIR, SEM, EDAX, TEM, absorption, and photoluminescence analyses of the composite are presented. XRD is used to confirm the material structure, and the crystallite size of the composite, calculated by the Monshi–Scherrer method, is 81.60 ± 0.06 nm. The influence of the oxygen environment on the absorption and photoluminescence measurements of the composite, and the influence of vaporized ethanol, N2, and CO on the SiO2/composite/Ag sensor device, are investigated. The sensor with a 30 nm-thick composite layer shows the highest response to vaporized ethanol, N2, and ambient CO. Overall, the composite and sensor exhibit good selectivity in oxygen, vaporized ethanol, N2, and CO environments. Full article
(This article belongs to the Special Issue Recent Advances in Thin Film Gas Sensors)
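The Monshi–Scherrer method linearizes the Scherrer equation as ln β = ln(Kλ/D) + ln(1/cos θ) and extracts the crystallite size D from the intercept of a least-squares fit over several XRD peaks. The sketch below assumes Cu Kα radiation (λ ≈ 0.15406 nm) and a shape factor K = 0.9; the peak widths and angles are made-up values, not the paper's data.

```python
import numpy as np

def monshi_scherrer_size(beta_rad, theta_rad, K=0.9, lam_nm=0.15406):
    """Fit ln(beta) = ln(K*lam/D) + ln(1/cos(theta)) across several XRD
    peaks; the intercept b gives the crystallite size D = K*lam/exp(b)."""
    x = np.log(1.0 / np.cos(np.asarray(theta_rad)))
    y = np.log(np.asarray(beta_rad))
    slope, intercept = np.polyfit(x, y, 1)   # slope should be near 1
    return K * lam_nm / np.exp(intercept)

# beta: peak FWHM in radians; theta: Bragg angles in radians (illustrative)
D_nm = monshi_scherrer_size([0.0021, 0.0024, 0.0028], [0.23, 0.28, 0.35])
```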
Show Figures
Figure 1: Chemical structures of the composite components: (a) HA, (b) PVTMS, and (c) Cl2FeH8O4.
Figure 2: (a) Schematic of the freeze-drying system and (b) the TEC: interior view and connection of its hot side to the fan via a shaft.
Figure 3: (a) Synthesis route of HA, (b) polymerization route of VTMS, and (c) fabrication route of the HA/PVTMS/Cl2FeH8O4 composite.
Figure 4: (a) X-ray diffraction, (b) linear plots of the Monshi–Scherrer equation, and (c) cif file and the view of (211) of the composite.
Figure 5: (a) FTIR spectrum of the composite and (b) schematic of Fe and Cl substitution in the composite.
Figure 6: (a) SEM and (b) TEM images of the composite.
Figure 7: (a) Absorption spectra and (b) oxygen content versus absorption intensity and optical band gap of the vapor-coated composite film.
Figure 8: (a) PL spectra and (b) oxygen content versus PL intensity of the composite in film mode.
Figure 9: Schematic system for measuring the sensor resistance.
Figure 10: (a) Arrhenius curves of sensors with 30 and 60 nm composite layers and (b) device sensitivity versus ethanol concentration.
Figure 11: The response of the sensor in CO and N2 environments.
23 pages, 11988 KiB  
Article
Equipment Identification and Localization Method Based on Improved YOLOv5s Model for Production Line
by Ming Yu, Qian Wan, Songling Tian, Yanyan Hou, Yimiao Wang and Jian Zhao
Sensors 2022, 22(24), 10011; https://doi.org/10.3390/s222410011 - 19 Dec 2022
Cited by 9 | Viewed by 3348
Abstract
Intelligent video surveillance based on artificial intelligence, image processing, and other advanced technologies is a hot topic of research in the upcoming era of Industry 5.0. Currently, low recognition accuracy and low location precision of devices in intelligent monitoring remain a problem in [...] Read more.
Intelligent video surveillance based on artificial intelligence, image processing, and other advanced technologies is a hot research topic in the upcoming era of Industry 5.0. Currently, low recognition accuracy and low localization precision of devices remain problems for intelligent monitoring of production lines. This paper proposes a production line device recognition and localization method based on an improved YOLOv5s model. The proposed method achieves real-time detection and localization of production line equipment such as robotic arms and AGV carts by introducing a coordinate attention (CA) module into the YOLOv5s network architecture, applying the GSConv lightweight convolution and Slim-Neck methods in the Neck layer, and adding a Decoupled Head structure to the Detect layer. The experimental results show that the improved method achieves 93.6% precision, 85.6% recall, and 91.8% mAP@0.5, and tests on the public Pascal VOC2007 dataset show that the improved method effectively increases recognition accuracy. The research results can substantially improve the intelligence level of production lines and provide an important reference for manufacturing industries pursuing intelligent and digital transformation. Full article
(This article belongs to the Topic Modern Technologies and Manufacturing Systems, 2nd Volume)
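The mAP@0.5 metric reported above counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; a minimal IoU sketch for axis-aligned boxes, with made-up coordinates, is:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    At mAP@0.5, a detection counts as a true positive when IoU >= 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~= 0.143
```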
Show Figures
Figure 1: The pixel coordinate system and the camera coordinate system.
Figure 2: Implementation process of the CA attention module.
Figure 3: Structure of the GSConv and VoV-GSCSP modules.
Figure 4: Structure of the Decoupled Head.
Figure 5: Network structure of the production line equipment identification and localization method based on the improved YOLOv5s model.
Figure 6: The dataset labeling process.
Figure 7: Parameters generated during dataset labeling.
Figure 8: Model metrics judgment.
Figure 9: Comparison of performance parameters of YOLOv3, YOLOv5-6.0, YOLOv5-5.0, YOLOv5-Lite, and the improved method over the iteration process.
Figure 10: Comparison of the P–R curves of the improved model and the original YOLOv5-6.0 model.
Figure 11: Representative images showing the training process of the improved method.
Figure 12: Precision rate of YOLOv5-6.0, YOLOv5-5.0, YOLOv5-Lite, and improved method model weights.
Figures 13–16: Comparison of recognition test results between the improved method and YOLOv5-5.0, YOLOv5-6.0, YOLOv3, and YOLOv5-Lite, respectively.
Figure 17: FPS test results of the improved method.
14 pages, 3602 KiB  
Article
Generalized Scale Factor Calibration Method for an Off-Axis Digital Image Correlation-Based Video Deflectometer
by Long Tian, Tong Ding and Bing Pan
Sensors 2022, 22(24), 10010; https://doi.org/10.3390/s222410010 - 19 Dec 2022
Cited by 4 | Viewed by 1695
Abstract
When using off-axis digital image correlation (DIC) for non-contact, remote, and multipoint deflection monitoring of engineering structures, accurate calibration of the scale factor (SF), which converts image displacement to physical displacement for each measurement point, is critical to realize high-quality displacement measurement. In [...] Read more.
When using off-axis digital image correlation (DIC) for non-contact, remote, multipoint deflection monitoring of engineering structures, accurate calibration of the scale factor (SF), which converts image displacement to physical displacement for each measurement point, is critical to achieving high-quality displacement measurement. In this work, based on the distortion-free pinhole imaging model, a generalized SF calibration model is proposed for an off-axis DIC-based video deflectometer. The relationship between the proposed SF calibration method and three commonly used SF calibration methods is then discussed. The accuracy of these SF calibration methods is also compared using indoor rigid-body translation experiments. It is shown that the proposed method reduces to one of the existing calibration methods in most cases, but provides more accurate results under the following four conditions: (1) the camera's pitch angle exceeds 20°, (2) the focal length exceeds 25 mm, (3) the pixel size of the camera sensor exceeds 5 μm, and (4) the image y-coordinate corresponding to the measurement point after deformation is far from the image center. Full article
(This article belongs to the Collection Vision Sensors and Systems in Structural Health Monitoring)
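As a baseline, the distortion-free pinhole model gives a scale factor of SF = Z·p/f (object distance times pixel size over focal length, by similar triangles); the paper's generalized model adds pitch-angle and image-coordinate terms on top of this. The sketch below covers only that baseline, not the authors' full model, and the example numbers are illustrative.

```python
def baseline_scale_factor(distance_m, focal_mm, pixel_um):
    """Distortion-free pinhole scale factor in mm per pixel:
    SF = Z * p / f. The generalized model in the paper corrects this
    for the camera's pitch angle and the image y-coordinate."""
    return (distance_m * 1e3) * (pixel_um * 1e-3) / focal_mm

# e.g., a 50 mm lens with 5 um pixels viewing a target 30 m away
print(baseline_scale_factor(30.0, 50.0, 5.0))  # -> 3.0 mm/pixel
```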
Show Figures
Figure 1: Schematic illustration of the off-axis DIC-based video deflectometer for deflection monitoring.
Figure 2: Imaging model of off-axis DIC: (a) geometric model of off-axis camera imaging; (b) off-axis imaging relation diagram of the measurement point.
Figure 3: Schematic of the two distance representation methods.
Figure 4: Schematic diagram of camera roll angle correction: (a) original model; (b) corrected model.
Figure 5: The video deflectometer and high-precision vertical displacement platform.
Figure 6: Experimental setup of the laboratory verification tests.
Figure 7: Image displacement for two camera lenses with different focal lengths: (a) f = 8 mm, (b) f = 50 mm.
Figure 8: Displacement calculated by three calibration methods and two different lenses.
Figure 9: SF variation with camera and lens parameters: (a) SF–pitch angle curves for different focal lengths; (b) SF–pitch angle curves for different pixel sizes.
Figure 10: Simulated full-field SFs before and after deformation for two calibration methods: (a) the proposed method before deformation, (b) Pan's method before deformation, (c) the proposed method after a 100-pixel vertical translation, and (d) the difference between the proposed and Pan's calibration after translation.
13 pages, 4174 KiB  
Article
Multi-Scale Strengthened Directional Difference Algorithm Based on the Human Vision System
by Yuye Zhang, Ying Zheng and Xiuhong Li
Sensors 2022, 22(24), 10009; https://doi.org/10.3390/s222410009 - 19 Dec 2022
Cited by 2 | Viewed by 1807
Abstract
The human visual system (HVS) mechanism has been successfully introduced into the field of infrared small target detection. However, most of the current detection algorithms based on the mechanism of the human visual system ignore the continuous direction information and are easily disturbed [...] Read more.
The human visual system (HVS) mechanism has been successfully introduced into the field of infrared small target detection. However, most current detection algorithms based on the HVS mechanism ignore continuous direction information and are easily disturbed by highlight noise and object edges. In this paper, a multi-scale strengthened directional difference (MSDD) algorithm is proposed. It consists of two main parts: a local directional intensity measure (LDIM) and a local directional fluctuation measure (LDFM). In the LDIM, an improved window is used to suppress most edge clutter, highlights, and holes and to enhance true targets. In the LDFM, the characteristics of the target area, the background area, and the connection between the target and the background are considered, which further highlights the true target signal and suppresses corner clutter. The MSDD saliency map is then obtained by fusing the LDIM map and the LDFM map. Finally, an adaptive threshold segmentation method is employed to extract true targets. The experiments show that the proposed method achieves better detection performance in complex backgrounds than several classical and widely used methods. Full article
(This article belongs to the Section Intelligent Sensors)
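The final segmentation step described above is typically the mean-plus-k-sigma adaptive threshold applied to the saliency map; a minimal sketch follows, where k is an illustrative constant rather than the paper's tuned value.

```python
import numpy as np

def adaptive_segment(saliency, k=4.0):
    """Threshold a saliency map at T = mean + k * std, the adaptive rule
    commonly used to extract small infrared targets; returns a binary mask."""
    t = saliency.mean() + k * saliency.std()
    return saliency > t
```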
Show Figures
Figure 1: (a) Double-nested detection window; (b) improved double-nested detection window; (c) situations the algorithm needs to handle; (d) Gaussian shape.
Figure 2: Flowchart of the MSDD small target detection method.
Figure 3: Structure newly divided into four directional subblocks: (a) upper left, (b) upper right, (c) lower right, and (d) lower left.
Figure 4: Processing results of different algorithms: (a) original image, (b) LCM, (c) MPCM, (d) TLLCM, (e) RLCM, (f) ADMD, (g) VAR-DIFF, and (h) the proposed method; gray values of all images are normalized to the [0–255] interval.
Figure 5: ROC curves of the four sequences and SIRST.
17 pages, 5321 KiB  
Article
Research on Smart Tourism Oriented Sensor Network Construction and Information Service Mode
by Ruomei Tang, Chenyue Huang, Xinyu Zhao and Yunbing Tang
Sensors 2022, 22(24), 10008; https://doi.org/10.3390/s222410008 - 19 Dec 2022
Cited by 4 | Viewed by 2359
Abstract
Smart tourism is the latest achievement of tourism development at home and abroad. It is also an essential part of the smart city. Promoting the application of computer and sensor technology in smart tourism is conducive to improving the efficiency of public tourism [...] Read more.
Smart tourism is the latest achievement of tourism development worldwide and an essential part of the smart city. Promoting the application of computer and sensor technology in smart tourism helps to improve the efficiency of public tourism services and to guide innovation in the tourism public service mode. In this paper, we propose a new method of using data collected by sensor networks. We developed and deployed sensors to collect data, which are transmitted to a modular cloud platform and combined with clustering technology and an uncertain support vector classifier (A-USVC) location prediction method to assist in emergency events. To capture what attracts tourists, the system also incorporates human trajectory analysis and interaction intensity as factors to validate the spatial dynamics of different interests and to enhance the tourist experience. The system explores how computer technology can boost the development of smart tourism and promote its high-quality development. Full article
(This article belongs to the Special Issue Smart Mobile and Sensing Applications)
Show Figures
Figure 1: Structure diagram of the sensor test platform.
Figure 2: Data-level fusion structure.
Figure 3: Consumption comparison of three algorithms: (a) node death rates and (b) energy consumption rates of the three algorithm models.
Figure 4: Super-brain intelligent security system.
Figure 5: Smart toilet.
Figure 6: Hardware structure diagram of the wireless sensor node.
Figure 7: Steps of the AODV energy-aware routing protocol.
Figure 8: Time synchronization mechanism.
Figure 9: Network node clustering graph.
Figure 10: Adaptive weighted fusion algorithm model.
Figure 11: Complementary system block diagram.
Figure 12: Data collection: (a) round-trip times for requests across network hops; (b) request round-trip time graph.
Figure 13: Data collection and comparison: (a) sensor registration delay; (b) data integration.
16 pages, 5831 KiB  
Article
Design of a High-Efficiency DC-DC Boost Converter for RF Energy Harvesting IoT Sensors
by Juntae Kim and Ickjin Kwon
Sensors 2022, 22(24), 10007; https://doi.org/10.3390/s222410007 - 19 Dec 2022
Cited by 13 | Viewed by 3146
Abstract
In this paper, an optimal design of a high-efficiency DC-DC boost converter is proposed for RF energy harvesting Internet of Things (IoT) sensors. Since the output DC voltage of the RF-DC rectifier for RF energy harvesting varies considerably depending on the RF input [...] Read more.
In this paper, an optimal design of a high-efficiency DC-DC boost converter is proposed for RF energy harvesting Internet of Things (IoT) sensors. Since the output DC voltage of the RF-DC rectifier for RF energy harvesting varies considerably with the RF input power, the DC-DC boost converter following the RF-DC rectifier must achieve high power conversion efficiency (PCE) over a wide input voltage range. Therefore, based on loss analysis and modeling of an inductor-based DC-DC boost converter, an optimal design method for the design parameters, including inductance and peak inductor current, is proposed to obtain the maximum PCE by minimizing the total loss for different input voltages across a wide input voltage range. A high-efficiency DC-DC boost converter for RF energy harvesting applications is designed using a 65 nm CMOS process. The modeled total losses agree well with the circuit simulation results, and the proposed loss modeling accurately predicts the optimal design parameters for maximum PCE. Based on the proposed loss modeling, the optimally designed DC-DC boost converter achieves a power conversion efficiency of 96.5% at a low input voltage of 0.1 V and a peak efficiency of 98.4% at an input voltage of 0.4 V. Full article
(This article belongs to the Section Industrial Sensors)
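In discontinuous conduction mode (DCM), the inductor current is a triangle ramping from zero to the peak value, so its RMS over a switching period follows directly, and conduction loss and PCE can be modeled as below. This is a textbook-style sketch with illustrative numbers, not the paper's full loss model, which also covers switching and control losses.

```python
import numpy as np

def dcm_conduction_loss(i_pk, t_on, t_off, t_period, r_total):
    """Triangular DCM inductor current: I_rms = I_pk * sqrt((t_on + t_off)
    / (3 * T)); conduction loss is I_rms**2 * R (switch + inductor ESR)."""
    i_rms = i_pk * np.sqrt((t_on + t_off) / (3.0 * t_period))
    return i_rms ** 2 * r_total

def pce(p_out, p_loss_total):
    """Power conversion efficiency from output power and modeled losses."""
    return p_out / (p_out + p_loss_total)

# Illustrative SI-unit numbers only
p_cond = dcm_conduction_loss(i_pk=50e-3, t_on=2e-6, t_off=1e-6,
                             t_period=10e-6, r_total=1.5)
print(pce(100e-6, p_cond))
```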
Show Figures
Figure 1: Block diagram of the RF energy harvesting system consisting of an RF-DC rectifier and a DC-DC boost converter.
Figure 2: Schematic of an inductor-based DC-DC boost converter.
Figure 3: DCM operation and inductor current (I_L) waveform of the DC-DC boost converter.
Figures 4–6: Modeling results of each loss component according to inductance, the width of the M1 and M2 switches, and the peak inductor current.
Figure 7: Schematic of the DC-DC boost converter with the control circuits.
Figures 8–11: Modeled total loss and PCE according to inductance and peak inductor current for different input voltages.
Figure 12: Waveforms of the switch control signals V_N and V_P, internal node voltage V_X, inductor current I_L, and output voltage V_OUT for an input voltage of 0.1 V.
Figures 13–18: Total loss and PCE according to inductance, switch width, and peak inductor current for an input voltage of 0.1 V, compared with the modeling results.
Figure 19: PCE according to input voltage, compared with the modeling results.
12 pages, 2817 KiB  
Article
Non-Specific Responsive Nanogels and Plasmonics to Design MathMaterial Sensing Interfaces: The Case of a Solvent Sensor
by Nunzio Cennamo, Francesco Arcadio, Fiore Capasso, Devid Maniglio, Luigi Zeni and Alessandra Maria Bossi
Sensors 2022, 22(24), 10006; https://doi.org/10.3390/s222410006 - 19 Dec 2022
Cited by 3 | Viewed by 2273
Abstract
The combination of non-specific deformable nanogels and plasmonic optical probes provides an innovative solution for specific sensing using a generalistic recognition layer. Soft polyacrylamide nanogels that lack specific selectivity but are characterized by responsive behavior, i.e., shrinking and swelling dependent on the surrounding [...] Read more.
The combination of non-specific deformable nanogels and plasmonic optical probes provides an innovative solution for specific sensing using a generalistic recognition layer. Soft polyacrylamide nanogels that lack specific selectivity but show responsive behavior, i.e., shrinking and swelling depending on the surrounding environment, were grafted onto a gold plasmonic D-shaped plastic optical fiber (POF) probe. The nanogel–POF, cyclically challenged with water or alcoholic solutions, optically reported the reversible solvent-to-phase transitions of the nanomaterial, embodying a primary optical switch. Additionally, the non-specific nanogel–POF interface exhibited further degrees of freedom through which specific sensing was enabled. Real-time monitoring of the refractive index variations due to the time-related volume-to-phase transition effects of the nanogels allowed us to determine the environment's characteristics and broadly classify solvents. Hence, the nanogel–POF interface acted as a descriptor of mathematical functions for substance identification and classification processes. These results epitomize the concept of using responsive non-specific nanomaterials to perform a multiparametric description of the environment, offering a specific set of features for the processing stage that is particularly suitable for machine and deep learning. Thus, soft MathMaterial interfaces provide the ground for devising devices suitable for the next generation of smart intelligent sensing processes. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
Show Figures
Figure 1: Nanogel flexibility monitored as a change in hydrodynamic size upon adding isopropanol to the water suspension (isopropanol:water 1:99, 1:19, and 3:17 v/v); measurements in triplicate, StDv < 10%.
Figure 2: The nanogel–POF interface in wet and dry conditions: AFM surface topography and correlated phase of the probe in a hydrated state (A, B) and of dehydrating nanogels on the POF (C, D).
Figure 3: The nanogel–POF interface behaves as a reversible optical switch: over repeated cycles alternating water and ethanol:water (RI 1.339, 10% w/v) with 10 min incubations, the plasmonic minima shift between the positions typical of each medium.
Figure 4: (A) Shift of the plasmonic minimum (Δλ) versus the refractive index (RI) of the surrounding environment at the initial sample-dropping time (t = 0) for the bare gold POF and nanogel–POFs, showing a linear correlation. (B) Δλ versus RI for water and water:isopropanol mixes at t = 0 and t = 4 min: the bare gold POF responds only to RI, whereas the shrinking effect of isopropanol:water on the nanogels [27] produces a significant drop in the nanogel–POF response, revealing optical effects associated with the softness of the nanogels.
Figure 5: The MathMaterial sensor: at t = 0 the Δλ readout reports the RI of the environment (glycerol:water RI 1.350; isopropanol:water RI 1.349 and 1.339), while longer incubations (t = 4, t = 10 min) monitor nanogel deformation, enabling the nature and concentration of the surrounding solvent to be distinguished.
Figure 6: Outline of the MathMaterial sensing approach.
Figure 7: Scatter plot of the implemented dataset: two solutions with the same RI, with (isopropanol:water) and without (glycerol:water) the alcoholic component.
Figure 8: Confusion matrices for the predictions over a stratified cross-validation procedure.
16 pages, 6263 KiB  
Article
Retrieval of Suspended Sediment Concentration from Bathymetric Bias of Airborne LiDAR
by Xinglei Zhao, Jianfei Gao, Hui Xia and Fengnian Zhou
Sensors 2022, 22(24), 10005; https://doi.org/10.3390/s222410005 - 19 Dec 2022
Cited by 1 | Viewed by 1927
Abstract
In addition to depth measurements, airborne LiDAR bathymetry (ALB) has shown usefulness in suspended sediment concentration (SSC) inversion. However, SSC retrieval using ALB based on waveform decomposition or near-water-surface penetration by green lasers requires access to full-waveform data or infrared laser data, which [...] Read more.
In addition to depth measurements, airborne LiDAR bathymetry (ALB) has shown usefulness in suspended sediment concentration (SSC) inversion. However, SSC retrieval using ALB based on waveform decomposition or near-water-surface penetration by green lasers requires access to full-waveform data or infrared laser data, which are not always available to users. Thus, in this study we propose a new SSC inversion method based on the depth bias of ALB. Artificial neural networks were used to build an empirical inversion model connecting the depth bias and SSC. The proposed method was verified using an ALB dataset collected with Optech coastal zone mapping and imaging LiDAR systems. The results showed that the mean square error of the predicted SSC based on the empirical model of ALB depth bias was less than 2.564 mg/L in the experimental area. The proposed method was compared with the waveform decomposition and regression methods, and its advantages and limitations were analyzed and summarized. The proposed method can effectively retrieve SSC and requires only ALB-derived and sonar-derived water bottom points, eliminating the dependence on green full-waveform data and infrared lasers. This study provides an alternative means of conducting SSC inversion using ALB. Full article
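As a rough illustration of the empirical inversion idea (an artificial neural network mapping ALB depth bias plus acquisition geometry to SSC), here is a minimal Python sketch on synthetic data. The input set follows the quantities shown in Figure 4 below, but all values, the exponential trend, and the network size are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 500
depth_bias = rng.uniform(0.0, 0.5, n)         # m, ALB depth minus sonar depth
depth = rng.uniform(2.0, 15.0, n)             # m
sensor_height = rng.uniform(350.0, 450.0, n)  # m
scan_angle = rng.uniform(15.0, 25.0, n)       # deg

# Assumed monotonic bias-to-SSC relation (the paper fits an exponential trend).
ssc = 5.0 * np.exp(3.0 * depth_bias) + rng.normal(0.0, 0.5, n)  # mg/L

X = np.column_stack([depth_bias, depth, sensor_height, scan_angle])
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=1).fit(X, ssc)
print(model.predict(X[:3]))  # predicted SSC (mg/L) for the first samples
```

In practice, the training pairs would come from matched ALB and sonar bottom points together with in situ SSC samples, rather than from a synthetic generator.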
Figure 1. Illustration of SSC retrieval using ALB: (a) green laser propagation, (b) SSC retrieval using ALB [10,11].
Figure 2. ALB system used in the experiment: (a) Y-12 aircraft, (b) Optech CZMIL.
Figure 3. Locations of the ALB and sonar measurements. (a) Locations and (b) local enlarged view of the research area.
Figure 4. Probability density distribution of the model input and output data. (a) Depth bias, (b) depth, (c) sensor height, (d) beam scanning angle, and (e) SSC.
Figure 5. Relationship of ALB depth bias and SSC and the fitted exponential model.
Figure 6. Structure of the ANN-based SSC model.
Figure 7. Spatial distributions of depth bias and the retrieved SSC. (a) Depth bias of ALB, and (b) retrieved SSC.
Figure 8. Spatial distributions of ocean and land waveforms. (a) Amplitudes of IR waveforms, and (b) laser spot positions of corresponding separated ocean (blue) and land (yellow) waveforms.
Figure 9. Waveform decomposition of a typical waveform and the distribution of VBR amplitudes. (a) Waveform decomposition, and (b) VBR amplitude.
Figure 10. Distribution of VBR amplitude and retrieved SSC. (a) Relationship between SSC and VBR amplitude, and (b) retrieved SSC using the empirical model.
Figure A1. Training state of the SSC model: (a) MSE; (b) gradient, Mu, and validation checks.
Figure A2. Regression of SSC prediction results: (a) training data, (b) validation data, (c) test data, (d) all data.
20 pages, 9260 KiB  
Article
Automated Identification of Overheated Belt Conveyor Idlers in Thermal Images with Complex Backgrounds Using Binary Classification with CNN
by Mohammad Siami, Tomasz Barszcz, Jacek Wodecki and Radoslaw Zimroz
Sensors 2022, 22(24), 10004; https://doi.org/10.3390/s222410004 - 19 Dec 2022
Cited by 5 | Viewed by 2933
Abstract
Mechanical industrial infrastructures in mining sites must be monitored regularly. Conveyor systems are mechanical systems commonly used for the safe and efficient transportation of bulk goods in mines. Regular inspection of conveyor systems is a challenging task for mining enterprises, as conveyor systems' lengths can reach tens of kilometers, where several thousand idlers need to be monitored. Considering the harsh environmental conditions that can affect human health, manual inspection of conveyor systems can be extremely difficult. Hence, the authors proposed an automatic robotics-based inspection for condition monitoring of belt conveyor idlers using infrared images, instead of the vibration and acoustic signals commonly used for condition monitoring applications. The first step in the whole process is to segment the overheated idlers from the complex background. However, classical image segmentation techniques do not always deliver accurate results in the detection of targets in infrared images with complex backgrounds. To improve the quality of the captured infrared images, preprocessing stages are introduced. Afterward, an anomaly detection method based on an outlier detection technique is applied to the preprocessed image for the segmentation of hotspots. Because mining sites contain other thermal sources that can be captured and wrongly identified as overheated idlers, in this research we address overheated idler detection as an image binary classification task. For this reason, a Convolutional Neural Network (CNN) was used for the binary classification of the segmented thermal images. The accuracy of the proposed condition monitoring technique was compared with that of our previous research: the previous methodology reached a precision of 0.4590 and an F1 score of 0.6292, whereas the proposed method reaches a precision of 0.9740 and an F1 score of 0.9782. The proposed classification method considerably improved our previous results in terms of the true identification of overheated idlers in the presence of complex backgrounds. Full article
(This article belongs to the Special Issue Application of Wireless Sensor Networks in Environmental Monitoring)
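The following PyTorch sketch illustrates the kind of small CNN binary classifier described above, separating overheated idlers from other thermal sources in segmented IR patches. The layer sizes, input resolution, and data are illustrative assumptions; the paper's actual network is summarized in its Figure 9.

```python
import torch
import torch.nn as nn

class IdlerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: idler vs. other thermal source
        )

    def forward(self, x):  # x: (batch, 1, 64, 64) segmented IR patches
        return self.classifier(self.features(x))

model = IdlerCNN()
loss_fn = nn.BCEWithLogitsLoss()   # binary classification objective
x = torch.randn(8, 1, 64, 64)      # stand-in for preprocessed thermal patches
y = torch.randint(0, 2, (8, 1)).float()
loss = loss_fn(model(x), y)
loss.backward()
print(float(loss))
```

Treating detection as binary classification of already-segmented hotspots, rather than full-frame detection, is what lets such a compact network suffice.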
Figure 1. The proposed procedure flowchart.
Figure 2. The location of a predefined ROI on an original frame.
Figure 3. Comparison of a preprocessed and an original IR image after modifications through the preprocessing stages.
Figure 4. Comparison of outliers detected by mid and extreme values in a preprocessed ROI.
Figure 5. (a) Mobile robot during inspection. (b) A general picture of the mining site.
Figure 6. Examples of IR sources that are not related to idlers.
Figure 7. Segmentation results of overheated objects that were not related to idlers.
Figure 8. Simplified flowcharts of the binary classification procedure.
Figure 9. Summary of the proposed CNN architecture.
Figure 10. Image annotation process based on the fusion of IR and RGB images. (a) Segmentation of other thermal sources; (b) segmentation of an overheated idler.
Figure 11. Confusion matrix of the binary classification.
Figure 12. Training and validation loss plot for the binary classification model.
Figure 13. Accuracy plot for the binary classification model.
Figure 14. ROC curves of the binary classification model.
24 pages, 14166 KiB  
Article
Common Frame Dynamics for Conically-Constrained Spacecraft Attitude Control
by Arnold Christopher Cruz and Ahmad Bani Younes
Sensors 2022, 22(24), 10003; https://doi.org/10.3390/s222410003 - 19 Dec 2022
Viewed by 2014
Abstract
Attitude control subject to pointing constraints is a requirement for most spacecraft missions carrying sensitive on-board equipment. Pointing constraints can be divided into two categories: exclusion zones, defined for sensitive equipment such as telescopes or cameras that can be damaged by celestial objects, and inclusion zones, defined for communication hardware and solar arrays. This work fully derives common frame dynamics for Modified Rodrigues Parameters and introduces them to an existing technique for constrained spacecraft attitude control that uses a kinematic steering law and a servo sub-system. Lyapunov methods are used to redevelop the steering law and servo sub-system in the common frame for the tracking problem under both static and dynamic conic constraints. A numerical example and a comparison between the original frame and the common frame for the statically constrained tracking problem are presented under both unbounded and limited torque capabilities. Monte Carlo simulations are performed to validate the convergence of the constrained tracking problem for static conic constraints under small perturbations of the initial conditions. The performance of dynamic conic constraints in the tracking problem is addressed and a numerical example is presented. Using common frame dynamics in the constrained problem decreases the control effort required to rotate the spacecraft. Full article
(This article belongs to the Special Issue Attitude Estimation Based on Data Processing of Sensors)
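To make the conic-constraint geometry concrete, the sketch below checks whether a body-fixed boresight, rotated through a Modified Rodrigues Parameter (MRP) attitude, lies inside an exclusion or inclusion cone. The MRP-to-DCM formula is the standard Schaub-Junkins form; the example vectors, angles, and function names are illustrative assumptions, not the paper's steering law.

```python
import numpy as np

def mrp_to_dcm(sigma):
    """[BN] for MRPs (Schaub-Junkins convention): maps inertial vectors to body."""
    s2 = float(sigma @ sigma)
    sx = np.array([[0.0, -sigma[2], sigma[1]],
                   [sigma[2], 0.0, -sigma[0]],
                   [-sigma[1], sigma[0], 0.0]])
    return np.eye(3) + (8.0 * sx @ sx - 4.0 * (1.0 - s2) * sx) / (1.0 + s2) ** 2

def violates_cone(sigma, boresight_body, axis_inertial, half_angle_deg, exclusion=True):
    """True if an exclusion cone is entered (or an inclusion cone is exited)."""
    b_inertial = mrp_to_dcm(sigma).T @ boresight_body  # boresight in inertial frame
    cosang = b_inertial @ axis_inertial / (
        np.linalg.norm(b_inertial) * np.linalg.norm(axis_inertial))
    inside = cosang >= np.cos(np.deg2rad(half_angle_deg))
    return inside if exclusion else not inside

sigma = np.array([0.1, -0.2, 0.3])    # example attitude (MRP set)
sun_hat = np.array([1.0, 0.0, 0.0])   # exclusion axis (e.g., the Sun direction)
camera = np.array([1.0, 0.0, 0.0])    # sensitive boresight in the body frame
print(violates_cone(sigma, camera, sun_hat, half_angle_deg=30.0))
```

A kinematic steering law of the kind referenced above would evaluate such a check (or a smooth penalty built from the same cosine) at every step when shaping the commanded body rates.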
Figure 1. (a) The James Webb Space Telescope, artist rendition (source: European Space Agency). (b) JWST field of regard with two exclusion zones [3].
Figure 2. (a) SAMPEX satellite, artist rendition. (b) Cassini satellite, artist rendition.
Figure 3. (a) European Data Relay System satellite constellation (source: European Space Agency). (b) One of many Starlink satellites, artist rendition (source: SpaceX).
Figure 4. The relative coordinate system, with the inertially fixed Earth axis N, reference trajectory R, and spacecraft body B.
Figure 5. The body and reference frames and angular velocities shown in [24,25].
Figure 6. Control block proposed by [20]. The outer loop consists of the kinematic steering law, while the inner loop consists of the servo sub-system.
Figure 7. Static pointing constraints. The satellite in (a) must keep its sensitive equipment from entering the exclusion cone defined by the Sun, while the satellite in (b) has an inclusion constraint defined by the Sun, keeping its solar array pointed for maximum power absorption.
Figure 8. Dynamic inclusion constraint geometry. A laser device attached to the satellite on the left must point at another satellite's receiver on the right. (a) The initial position of the satellites and (b) an arbitrary position after some specified time.
Figure 9. Unbounded tracking results in the common frame: (a) spacecraft attitude, (b) angular velocity of the spacecraft, (c) angular velocity error in the common frame, (d) attitude error, and (e,f) boresight trajectories with respect to the constraints.
Figure 10. Comparison of Equation (33) in the common frame and the original representation.
Figure 11. Comparison of Equation (34) in the common frame and the original representation.
Figure 12. Transient time of the maneuver for the steering law, where (a) is the overall commanded rates of the steering law and (b) is the error plot of (a).
Figure 13. (a) Control effort norm in the standard and common frames during the transient time. (b) Error plot between the common frame and the original formulation.
Figure 14. Monte Carlo results for sensitive equipment placed on the x-body axis with respect to the exclusion constraints (Boresight 1) and for equipment placed on the y-body axis with respect to the inclusion constraint (Boresight 2).
Figure 15. Angle progression of equipment with respect to the constraints. θ₁ to θ₄ represent the angles between Boresight 1 and the respective exclusion constraints, and θ₅ is the angle between Boresight 2 and the inclusion constraint.
Figure 16. Histograms for the constraints calculated in Equations (33) and (34) under exclusion and inclusion constraints.
Figure 17. Monte Carlo results for sensitive equipment placed on the x-body axis (Boresight 1), with the inclusion constraint removed from the simulation.
Figure 18. Boresight 1 trajectories with respect to the four exclusion constraints with the inclusion constraint removed.
Figure 19. Histogram for the constraints calculated in Equation (33) purely under exclusion constraints.
Figure 20. Tracking performance with the dynamic inclusion constraint described by Equation (54). γ = 1.5 was used for the dynamic constraint, i.e., half of the current angular velocity ω_z. The spacecraft is unbounded in torque. (a) Spacecraft attitude history, (b) angular velocity history, (c) attitude error, (d) angular velocity error, (e) trajectory of Boresight 1, and (f) angle history of Boresight 2.
Figure 21. Boresight 2 trajectory at different times of the simulation; γ = 1.5 is used.
3 pages, 165 KiB  
Editorial
Advanced Fault Diagnosis and Health Monitoring Techniques for Complex Engineering Systems
by Yongbo Li, Bing Li, Jinchen Ji and Hamed Kalhori
Sensors 2022, 22(24), 10002; https://doi.org/10.3390/s222410002 - 19 Dec 2022
Cited by 3 | Viewed by 1767
Abstract
Fault diagnosis and health condition monitoring have always been critical issues in the engineering research community [...] Full article
20 pages, 7847 KiB  
Article
Table-Based Adaptive Digital Phase-Locked Loop for GNSS Receivers Operating in Moon Exploration Missions
by Young-Jin Song and Jong-Hoon Won
Sensors 2022, 22(24), 10001; https://doi.org/10.3390/s222410001 - 19 Dec 2022
Cited by 3 | Viewed by 2118
Abstract
An adaptive digital phase-locked loop (DPLL) continually adjusts the noise bandwidth of the loop filter in global navigation satellite system (GNSS) receivers to track signals by measuring the signal-to-noise ratio and/or dynamic stress. Such DPLLs have relatively high computational complexity compared with the conventional DPLL. A table-based adaptive DPLL is proposed that adjusts the noise bandwidth value by extracting it from a pre-generated table without additional calculations. The values of the noise bandwidth table are computed optimally in consideration of the thermal noise, oscillator phase noise, and dynamic stress error. A method for calculating the proper integration time to maintain the stability of the loop filter is presented. Additionally, a simulation configured using the trajectory analysis results from the Moon exploration mission shows that the proposed algorithm operates stably in harsh environments where a conventional fixed-bandwidth loop cannot. The proposed algorithm has a phase jitter performance similar to existing adaptive DPLL algorithms and an execution time that is approximately 2.4–5.4 times faster. It is verified that the proposed algorithm is computationally efficient while maintaining jitter performance. Full article
(This article belongs to the Special Issue GNSS Signals and Precise Point Positioning)
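The core idea, replacing per-epoch bandwidth optimization with a lookup into a pre-generated (C/N₀, jerk) table, can be sketched in a few lines of Python. The grid spacing and table fill below are placeholders, not the paper's table; only the bandwidth bounds (0.7 Hz and 213.3 Hz, quoted in the simulation figures) echo the source.

```python
import numpy as np

cn0_axis = np.arange(5.0, 58.0, 1.0)     # dB-Hz
jerk_axis = np.arange(0.0, 420.0, 10.0)  # g/s

# Placeholder offline table: in the paper this is computed optimally from
# thermal noise, oscillator phase noise, and dynamic stress error models.
CN0, JERK = np.meshgrid(cn0_axis, jerk_axis, indexing="ij")
bandwidth_table = np.clip(0.7 + 0.2 * (CN0 - 5.0) + 0.5 * JERK, 0.7, 213.3)  # Hz

def lookup_bandwidth(cn0_dbhz, jerk_gps):
    """Nearest-neighbor lookup: no per-epoch optimization, just indexing."""
    i = int(np.argmin(np.abs(cn0_axis - cn0_dbhz)))
    j = int(np.argmin(np.abs(jerk_axis - jerk_gps)))
    return bandwidth_table[i, j]

print(lookup_bandwidth(5.4, 0.0))     # low SNR -> narrow loop (0.7 Hz floor)
print(lookup_bandwidth(45.0, 411.0))  # high dynamics -> wide loop (213.3 Hz cap)
```

Because the run-time cost collapses to two index searches, this is the mechanism that plausibly yields the reported 2.4–5.4x speedup over adaptive loops that re-estimate the bandwidth analytically each epoch.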
Figure 1. Overall Moon exploration mission trajectory in the Earth-centered, Earth-fixed (ECEF) coordinate frame.
Figure 2. Overall information on the Moon exploration spacecraft during the mission trajectory.
Figure 3. Line-of-sight (LOS) carrier-to-noise-density ratio (C/N₀) information of the Moon exploration spacecraft during the mission trajectory. Each color in the figure indicates a different global positioning system (GPS) satellite.
Figure 4. LOS jerk information of the Moon exploration spacecraft during the mission trajectory. Each color in the figure indicates a different GPS satellite.
Figure 5. Example of the measurement error calculation results with respect to the noise bandwidth variation for low signal-to-noise ratio (SNR) and high-dynamics conditions.
Figure 6. Example result of the generated optimal bandwidth table: (a) overall shape; (b) optimal bandwidth with respect to C/N₀; (c) optimal bandwidth with respect to jerk; (d) top view.
Figure 7. Simplified structure of a conventional carrier tracking loop.
Figure 8. Structure of the proposed table-based adaptive digital phase-locked loop (DPLL) algorithm.
Figure 9. Configured simulation scenario information. The C/N₀ ranges over 5.4–57 dB-Hz and the maximum jerk is 411 g/s.
Figure 10. Optimal bandwidth variation during the simulation. The noise bandwidth narrows to 0.7 Hz as the C/N₀ drops to 5.4 dB-Hz and widens to 213.3 Hz as the jerk dynamic stress increases to 411 g/s.
Figure 11. Integration time variation during the simulation, varied in steps of ΔT (20 ms in this study). It increases to 420 ms at a C/N₀ of 5.4 dB-Hz and reduces to the lower-bound value in the high-dynamics region.
Figure 12. Normalized bandwidth variation during the simulation. The normalized bandwidth stays below the target value (0.3) most of the time, except in the high-dynamics region, due to the unavoidable limitation of the integration time by the lower-bound condition.
Figure 13. Carrier tracking results of the proposed table-based adaptive DPLL algorithm: (a) carrier phase; (b) Doppler frequency; (c) Doppler rate. The proposed algorithm stably tracks the signal components for the simulation scenario.
Figure 14. Carrier tracking error in the low-SNR region (210–270 s): (a) proposed algorithm; (b) fixed bandwidth (Bₙ = 15 Hz, T = 20 ms). The fixed-bandwidth loop loses lock at approximately 253 s (C/N₀ = 17 dB-Hz), while the proposed algorithm maintains lock.
Figure 15. Carrier tracking error in the high-dynamics region (500–550 s): (a) proposed algorithm; (b) fixed bandwidth (Bₙ = 15 Hz, T = 20 ms). The fixed-bandwidth loop loses lock at approximately 510 s, immediately after the jerk dynamic stress occurs. The error of the proposed algorithm increases temporarily while the jerk dynamic stress is present and converges to zero immediately after it disappears.
Figure 16. Numerical jitter calculation result with respect to C/N₀. The proposed algorithm performs similarly to the fast adaptive bandwidth (FAB) algorithm; fuzzy logic (FL) and the loop-bandwidth control algorithm (LBCA) perform slightly better.
Figure 17. Numerical jitter calculation result with respect to jerk dynamic stress. All adaptive DPLL algorithms perform similarly.
Figure 18. Execution time measurement results for each adaptive DPLL algorithm. The proposed algorithm runs approximately 2.4–5.4 times faster than the other adaptive DPLL algorithms.
14 pages, 3540 KiB  
Article
A Symbols Based BCI Paradigm for Intelligent Home Control Using P300 Event-Related Potentials
by Faraz Akram, Ahmed Alwakeel, Mohammed Alwakeel, Mohammad Hijji and Usman Masud
Sensors 2022, 22(24), 10000; https://doi.org/10.3390/s222410000 - 19 Dec 2022
Cited by 7 | Viewed by 3020
Abstract
Brain-Computer Interface (BCI) is a technique that allows the disabled to interact with a computer directly from their brain. P300 Event-Related Potentials (ERP) of the brain have been widely used in several BCI applications such as character spelling, word typing, wheelchair control for the disabled, neurorehabilitation, and smart home control. Most of the work done for smart home control relies on an image-flashing paradigm in which six images are flashed randomly and the user selects one of them to control an object of interest. The shortcoming of such a scheme is that the user has only six commands available to control the smart home. This article presents a symbol-based P300-BCI paradigm for controlling home appliances. The proposed paradigm comprises 12 symbols, from which users can choose one to represent their desired command in a smart home. It allows users to control multiple home appliances from signals generated by the brain, and also to make phone calls in a smart home environment. We put our smart home control system to the test with ten healthy volunteers, and the findings show that the proposed system can effectively operate home appliances through BCI. Using the random forest classifier, our participants achieved an average accuracy of 92.25 percent in controlling the home devices. Compared to previous studies on smart home control BCIs, the proposed paradigm gives users more degrees of freedom: they can not only control several home appliances but also dial a phone number and make a call inside the smart home. The proposed symbol-based smart home paradigm, along with the option of making a phone call, can effectively be used to control a home through brain signals, as demonstrated by the results. Full article
(This article belongs to the Special Issue Signal Processing for Brain–Computer Interfaces)
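A minimal sketch of the classification stage: EEG epochs around each symbol flash are flattened into feature vectors and fed to a random forest, the classifier reported above. The channel count, epoch length, sampling rate, and synthetic P300-like data are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_epochs, n_channels, n_samples = 240, 8, 200  # e.g., 800 ms at 250 Hz (assumed)

X = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, n_epochs)    # 1 = target flash (P300 response expected)
X[y == 1, :, 60:90] += 1.0          # synthetic P300-like deflection near 300 ms

features = X.reshape(n_epochs, -1)  # flatten channel x time into one vector
clf = RandomForestClassifier(n_estimators=200, random_state=2)
clf.fit(features[:200], y[:200])
print(clf.score(features[200:], y[200:]))  # held-out accuracy
```

In an online speller, the symbol whose flashes accumulate the highest target scores across repetitions is taken as the user's intended command.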
Figure 1. Block diagram of the proposed system.
Figure 2. Primary display to control the smart home.
Figure 3. Functional symbols on the main interface and their description.
Figure 4. Secondary display for making phone calls.
Figure 5. Electrode placement (highlighted electrodes are used in this study).
Figure 6. Data collection. (a) Preparation for EEG data collection; (b) a participant during data collection.
Figure 7. Waveforms for target and non-target stimuli. (a) Primary symbols-based display for controlling home appliances. (b) Secondary numbers-based display for making phone calls.
Figure 8. Comparison of the ERPs for both displays.
Figure 9. Comparison of the classification accuracies for both displays.
16 pages, 8446 KiB  
Article
Design of an Ultra-Stable Low-Noise Space Camera Based on a Large Target CMOS Detector and Image Data Analysis
by Chao Shen, Caiwen Ma and Wei Gao
Sensors 2022, 22(24), 9991; https://doi.org/10.3390/s22249991 - 18 Dec 2022
Cited by 4 | Viewed by 2436
Abstract
To detect faint target stars of 22nd magnitude and above, an astronomical exploration project requires its space camera's readout noise to be less than 5 e⁻ with long-term working stability. Due to satellite constraints on volume, weight, and power consumption, a traditional CCD-based camera does not meet the requirements. Therefore, a low-noise, ultra-stable camera based on a 9 K × 9 K large-target-surface CMOS detector is designed to meet these needs. For the first time, a low-noise, ultra-stable camera based on a CMOS detector will be applied to space astronomy projects, remote sensing imaging, resource survey, atmospheric and oceanic observation, and other fields. In this paper, the design of the camera is introduced in detail, and the camera is tested over several rounds at −40 °C, with further testing and data analysis. The tests prove its high stability and show that the readout noise is lower than 4.5 e⁻. The dark current, nonlinearity, and PTC indicators meet the requirements of the astronomical exploration project. Full article
(This article belongs to the Special Issue Sensing for Space Applications)
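Readout noise of this kind is commonly estimated with the frame-pair method: subtracting two dark frames taken under identical conditions cancels fixed-pattern structure, and the residual spread divided by √2 gives the per-frame read noise. The sketch below illustrates this on synthetic frames; the conversion gain is an assumed value that would in practice come from the PTC measurement, and nothing here reproduces the authors' test procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
gain_e_per_dn = 1.5  # e-/DN, assumed conversion gain (normally from the PTC)

def readout_noise_e(frame_a, frame_b, gain):
    """Frame-pair read noise: fixed pattern cancels in the difference image."""
    diff = frame_a.astype(float) - frame_b.astype(float)
    return gain * diff.std() / np.sqrt(2.0)  # per-frame read noise in electrons

# Synthetic dark frames (small crop for speed): constant bias plus read noise.
shape = (1024, 1024)
bias = 500.0
a = rng.normal(bias, 3.0, shape)  # 3 DN read noise -> about 4.5 e- at 1.5 e-/DN
b = rng.normal(bias, 3.0, shape)
print(readout_noise_e(a, b, gain_e_per_dn))
```

Repeating the measurement on multiple frame pairs across test rounds, as the dated results below suggest, is what supports the stability claim rather than a single-shot estimate.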
Figure 1. Electronics of the SVOM astronomical camera.
Figure 2. 9 K × 9 K CMOS detector model.
Figure 3. The camera prototype signal flow diagram.
Figure 4. The camera secondary power supply block diagram.
Figure 5. The signal flow block diagram of the CMOS driver board.
Figure 6. Low-noise power supply and low-noise bias block diagram of the CMOS detector.
Figure 7. The ripple of the 6 V input power supply used to generate VDD5A.
Figure 8. The ripple at the positive input of the operational amplifier used to generate the VDCH bias voltage.
Figure 9. The system workflow diagram.
Figure 10. The space camera prototype.
Figure 11. Test at room temperature: a dark and a flat image. (a) Dark image; (b) flat image.
Figure 12. A star map image taken with this CMOS camera.
Figure 13. Temperature control system accuracy test result.
Figure 14. Schematic diagram of the camera test system.
Figure 15. Physical picture of the camera test system.
Figure 16. The readout noise test results of the camera: (a) 28 June 2022; (b) 6 July 2022; (c) 19 July 2022.
Figure 17. The nonlinearity test results of the camera: (a) 28 June 2022; (b) 6 July 2022; (c) 19 July 2022.
Figure 18. The PTC test results of the camera: (a) 28 June 2022; (b) 6 July 2022; (c) 19 July 2022.
Figure 19. The dark current test results of the camera: (a) 28 June 2022; (b) 6 July 2022; (c) 19 July 2022.
11 pages, 1790 KiB  
Communication
Simulation of Rapid Thermal Cycle for Ultra-Fast PCR
by Zhuo Yang, Jiali Zhang, Xin Tong, Wenbing Li, Lijuan Liang, Bo Liu and Chang Chen
Sensors 2022, 22(24), 9990; https://doi.org/10.3390/s22249990 - 18 Dec 2022
Cited by 3 | Viewed by 2530
Abstract
Polymerase chain reaction (PCR) technology is a mainstream detection method used in medical diagnosis, environmental monitoring, and food hygiene and safety. However, the systematic analysis of a compact structure with fast temperature changes for an ultra-fast PCR device convenient for on-site detection is still lacking. To overcome the low heating efficiency and non-portability of currently used PCR devices, a miniaturized PCR system based on a microfluidic chip, i.e., lab-on-chip technology, has been proposed. The main objective of this paper is to explore the feasibility of using a heat resistor that can reach a fast heating rate and good temperature uniformity, combined with air cooling for rapid cooling, and to investigate the influence of various resistor pattern designs and thicknesses on heating rates and temperature uniformity. Additionally, a PCR chip made of materials with different thermal properties, such as surface emissivity, thermal conductivity, mass density, and heat capacity at constant pressure, is analyzed. In addition to the heat loss caused by the natural convection of air, the radiation loss of the simulated object is also considered, which brings the model much closer to the practical situation. Our results provide a useful reference for the design of the heating and cooling modules of ultra-fast PCR protocols, which have great potential in In Vitro Diagnosis (IVD) and the PCR detection of foodborne pathogens and bacteria. Full article
(This article belongs to the Special Issue Advanced Biosensors for Foodborne Pathogens)
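Before running a full 3D simulation, the heating/cooling balance can be bounded with a zero-dimensional (lumped-capacitance) model that includes both the natural-convection and radiation losses mentioned above. The Python sketch below integrates such a model; every parameter value is an illustrative assumption, not a value from the paper.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def step_temperature(T, P_heat, dt, heat_cap=0.1, h=10.0, area=1e-3,
                     emissivity=0.9, T_amb=298.15):
    """Explicit Euler step of C*dT/dt = P - h*A*(T-Ta) - eps*sigma*A*(T^4-Ta^4)."""
    q_conv = h * area * (T - T_amb)                        # natural convection loss
    q_rad = emissivity * SIGMA * area * (T**4 - T_amb**4)  # radiation loss
    return T + dt * (P_heat - q_conv - q_rad) / heat_cap

T = 298.15
for _ in range(2000):  # 20 s of heating at an assumed 2 W input
    T = step_temperature(T, P_heat=2.0, dt=0.01)
print(f"heated: {T - 273.15:.1f} C")  # approaches the balance point (~130 C here)
for _ in range(2000):  # 20 s of passive cooling, heater off
    T = step_temperature(T, P_heat=0.0, dt=0.01)
print(f"cooled: {T - 273.15:.1f} C")
```

Such a lumped model shows directly why forced air cooling is needed: the heating rate scales with P/C, while passive cooling is limited by the convection and radiation terms, which a fan (larger effective h) accelerates.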
Figure 1. (a) The structure of the chip; the surface inside the red circle is the measurement area. (b) Schematic diagram and specific sizes of the three resistor patterns (mm).
Figure 2. (a) The heating rates of different heat resistors at 2 V. (b) X-axis and Y-axis lines were drawn on the surface of the microcavity, and the temperature gradients along these lines were measured to express the temperature uniformity of the microcavity surface as a temperature difference. (c) The influence of resistor thickness on the heating rate.
Figure 3. (a) Comparison of the effect of thermal radiation losses on the simulation results. (b) Relationship between the heating rates of the microcavity surface/external side surface and the thermal conductivity of the materials.
Figure 4. (a) The influence of insulating trenches on the heating rate. (b) Contrast of the simulated temperature distributions. (c) Three air cooling plans: air cooling 1 (green) uses the lower side as the inlet and the upper side as the outlet; air cooling 2 (orange) uses the lower side as the inlet and the horizontal side as the outlet; air cooling 3 (blue) uses the horizontal side as the inlet and the opposite side as the outlet. (d) The cooling rates of air cooling under different conditions.