Next Issue: Volume 23, July-2
Previous Issue: Volume 23, June-2
Sensors, Volume 23, Issue 13 (July-1 2023) – 472 articles

Cover Story: Collaboration between a high-altitude platform (HAP) and several unmanned aerial vehicles (UAVs) is investigated for wireless communication networks. The aim is to maximize the total downlink throughput of the ground users by optimizing the three-dimensional (3D) UAV placement, and UAV–user and HAP–user associations. An optimization problem is formulated, and the proposed solutions include the following: exhaustive search for 3D UAV placement using either random-based or best-SNR-based UAV–user association, and a genetic algorithm-based joint 3D UAV placement and user association. The K-means algorithm is utilized to initialize the UAV placement and reduce the convergence times of the proposed solutions. It is shown that the HAP–UAV collaborative network achieves a higher total throughput compared to a scheme where a single HAP serves all users.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
14 pages, 3629 KiB  
Article
A Fast Algorithm for Intra-Frame Versatile Video Coding Based on Edge Features
by Shuai Zhao, Xiwu Shang, Guozhong Wang and Haiwu Zhao
Sensors 2023, 23(13), 6244; https://doi.org/10.3390/s23136244 - 7 Jul 2023
Cited by 4 | Viewed by 1860
Abstract
Versatile Video Coding (VVC) introduces many new coding technologies, such as the quadtree with nested multi-type tree (QTMT), which greatly improve the efficiency of VVC coding. However, its computational complexity is also higher, which limits the application of VVC in real-time scenarios. Aiming to solve the problem of the high complexity of VVC intra coding, we propose a low-complexity partition algorithm based on edge features. Firstly, the Laplacian of Gaussian (LOG) operator was used to extract the edges in the coding frame, and the edges were divided into vertical and horizontal edges. Then, the coding unit (CU) was equally divided into four sub-blocks in the horizontal and vertical directions to calculate the feature values of the horizontal and vertical edges, respectively. Based on the feature values, we skipped unnecessary partition patterns in advance. Finally, for the CUs without edges, we decided whether to terminate the partition process according to the depth information of neighboring CUs. The experimental results show that, compared with VTM-13.0, the proposed algorithm can save 54.08% of the encoding time on average, while the BDBR (Bjøntegaard delta bit rate) only increases by 1.61%.
(This article belongs to the Special Issue Advances in Image and Video Encoding Algorithm and H/W Design)
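The partition-skipping idea above is inexpensive to prototype. The following Python sketch (an illustration, not the authors' implementation) extracts a Laplacian-of-Gaussian edge map and computes horizontal/vertical edge-feature values for a CU; the four-way sub-block split follows the abstract, while the binarization threshold and the variance-based aggregation are assumptions chosen for clarity.

```python
import numpy as np
import cv2  # pip install opencv-python

def log_edges(frame_gray: np.ndarray, sigma: float = 2.1) -> np.ndarray:
    """Laplacian-of-Gaussian edge map; the binarization threshold is a guess."""
    blurred = cv2.GaussianBlur(frame_gray.astype(np.float64), (0, 0), sigmaX=sigma)
    log = cv2.Laplacian(blurred, cv2.CV_64F)
    return (np.abs(log) > 8.0).astype(np.uint8)

def directional_features(edge_cu: np.ndarray):
    """Split the CU into four horizontal and four vertical strips and use the
    variance of edge-pixel counts as the horizontal/vertical feature values."""
    f_h = float(np.var([s.sum() for s in np.array_split(edge_cu, 4, axis=0)]))
    f_v = float(np.var([s.sum() for s in np.array_split(edge_cu, 4, axis=1)]))
    return f_h, f_v

# A CU whose edges are mostly horizontal yields f_v >> f_h, so vertical
# split modes could be skipped early (and vice versa).
```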
Show Figures

Figure 1: Example of the CU partition structure in VVC. (a) Example of the partition structure obtained after recursive traversal; (b) the corresponding tree structure.
Figure 2: (a) The partition result for Johnny under QP = 32. (b) The edge extraction for Johnny with σ = 0.1. (c) The edge extraction for Johnny with σ = 2.1.
Figure 3: Depth difference ratios under different σ values.
Figure 4: Image differentiation effect. (a) Original edge map. (b) Horizontal edges. (c) Vertical edges.
Figure 5: Adjacent coding units (CUs).
Figure 6: A flowchart of the proposed algorithm.
Figure 7: Comparison of RD performance between the proposed algorithm and the original encoder. (a) RD performance of BasketballDrill. (b) RD performance of BQTerrace.
19 pages, 8806 KiB  
Article
A Compact and Efficient Boost Converter in a 28 nm CMOS with 90 mV Self-Startup and Maximum Output Voltage Tracking ZCS for Thermoelectric Energy Harvesting
by Muhammad Ali, Seneke Chamith Chandrarathna, Seong-Yeon Moon, Mohammad Sami Jana, Arooba Shafique, Hamdi Qraiqea and Jong-Wook Lee
Sensors 2023, 23(13), 6243; https://doi.org/10.3390/s23136243 - 7 Jul 2023
Cited by 2 | Viewed by 2250
Abstract
There are increasing demands for the Internet of Things (IoT), wearable electronics, and medical implants. Wearable devices provide various important daily applications by monitoring real-life human activities. They demand low-cost autonomous operation in a miniaturized form factor, which is challenging to realize using a rechargeable battery. One promising energy source is thermoelectric generators (TEGs), considered the only way to generate a small amount of electric power for the autonomous operation of wearable devices. In this work, we propose a compact and efficient converter system for energy harvesting from TEGs. The system consists of an 83.7% efficient boost converter and a 90 mV self-startup, sharing a single inductor. Innovative techniques are applied to the adaptive maximum power point tracking (A-MPPT) and indirect zero current switching (I-ZCS) controllers for efficient operation. The startup circuit is realized using a gain-boosted tri-state buffer, which achieves 69.8% improved gain at the input VIN = 200 mV compared to the conventional approach. To extract the maximum power, we use an A-MPPT controller based on a simple capacitive divider, achieving 95.2% tracking efficiency. To address the challenge of realizing accurate voltage or current sensors, we propose an I-ZCS controller based on a new concept of maximum output voltage tracking (MOVT). The integrated circuit (IC) is fabricated using a 28 nm CMOS in a compact chip area of 0.03 mm². The compact size, which has not been obtained with previous designs, is suitable for wearable device applications. Measured results show successful startup operation at an ultralow input, VIN = 90 mV. A peak conversion efficiency of 85.9% is achieved for an output of 1.07 mW.
(This article belongs to the Section Internet of Things)
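The headline efficiencies combine by simple multiplication: the harvested output is the TEG's matched-load power scaled first by the tracking efficiency, then by the conversion efficiency. A back-of-the-envelope sketch follows; only the two quoted efficiencies come from the abstract, while the TEG open-circuit voltage and source resistance are invented for illustration (they happen to land near the reported ~1 mW output).

```python
# Back-of-the-envelope harvesting arithmetic (illustrative, not the paper's
# operating point): a TEG with open-circuit voltage v_s and internal
# resistance r_s delivers at most v_s^2 / (4 r_s) into a matched load.
def teg_max_power(v_s: float, r_s: float) -> float:
    return v_s**2 / (4.0 * r_s)

p_avail = teg_max_power(v_s=0.3, r_s=18.0)  # assumed 300 mV open-circuit, 18 ohm
p_in = 0.952 * p_avail                      # 95.2% MPPT tracking efficiency (quoted)
p_out = 0.859 * p_in                        # 85.9% peak conversion efficiency (quoted)
print(f"available {p_avail * 1e3:.2f} mW -> harvested {p_out * 1e3:.2f} mW")
```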
Show Figures

Figure 1: Block diagram of the proposed system.
Figure 2: Waveforms of the converter operation.
Figure 3: Schematic of the proposed self-startup circuit using the gain-boosted tri-state buffer. CS1 = 100 fF; CS2 = 1.5 pF.
Figure 4: (a) Transfer characteristics of the single inverter and the gain-boosted tri-state buffer. Gain comparison at three input values of (b) VIN = 100 mV, (c) VIN = 150 mV, and (d) VIN = 200 mV.
Figure 5: Equivalent circuit of the startup circuit for loss analysis.
Figure 6: Calculated loss and efficiency of the startup circuit: (a) power loss, (b) efficiency as a function of VIN for different RIN, (c) conduction loss, and (d) efficiency as a function of RIN for different VIN.
Figure 7: Equivalent circuit of the boost converter for loss analysis. CIN = 20 nF, RIND = 0.12 Ω, CL = 0.5 μF, and RL = 500 kΩ.
Figure 8: Calculated efficiency of the converter as a function of VS and RS.
Figure 9: (a) Schematic of the A-MPPT controller; (b) simulated waveforms. CD = 80 fF; CM1 = CM2 = 1 nF.
Figure 10: Flowchart of the MPPT operation.
Figure 11: Schematic of the comparator.
Figure 12: (a) Schematic of the I-ZCS controller; (b) logic operation. CZ = 5 fF; RZ = 36 kΩ; C1 = C2 = C3 = 16 fF.
Figure 13: Simulated waveforms of the I-ZCS controller.
Figure 14: (a) Micrograph of the fabricated converter IC; (b) experimental setup.
Figure 15: Measured waveforms of the oscillator. COSC = 22 pF. ROSC = 100 kΩ is used to set the bias current of the oscillator.
Figure 16: (a) Measured waveforms of the self-startup circuit using a power supply (left) and a commercial TEG (right). (b) Measured waveforms of the self-startup for VIN = 90 mV. RS = 30 Ω; CIN = 0.47 μF.
Figure 17: Measured waveforms of the boost converter. CIN = 0.47 μF.
Figure 18: (a) Measured tracking efficiency of the converter; (b) measured conversion efficiency of the converter. VIN = 200 mV; RS = 18 Ω.
Figure 19: Measured end-to-end efficiency of the converter as a function of VS for different RS. RL = 4.7 kΩ.
Figure A1: Inductor current waveform of the boost converter.
17 pages, 14551 KiB  
Article
Identification of Geometric Features of the Corrugated Board Using Images and Genetic Algorithm
by Maciej Rogalka, Jakub Krzysztof Grabski and Tomasz Garbowski
Sensors 2023, 23(13), 6242; https://doi.org/10.3390/s23136242 - 7 Jul 2023
Cited by 3 | Viewed by 2008
Abstract
Corrugated board is a versatile and durable material that is widely used in the packaging industry. Its unique structure provides strength and cushioning, while its recyclability and biodegradability make it an environmentally friendly option. The strength of corrugated board depends on many factors, including the type of individual papers on the flat and corrugated layers, the geometry of the flute, temperature, humidity, etc. This paper presents a new approach to the analysis of the geometric features of corrugated boards. The experimental setup used in this work and the developed software achieve high reliability and measurement precision thanks to an identification procedure based on image analysis and a genetic algorithm. In the applied procedure, the thickness of each layer, the overall corrugated board thickness, the flute height, and the flute center line are calculated. In most cases, the proposed algorithm successfully approximated these parameters.
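The genetic-algorithm fitting step can be conveyed with a toy version. In the sketch below, the flute center line is modeled as a sinusoid fitted to skeleton pixel coordinates; the sinusoidal model, parameter bounds, and GA settings are all illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params, x, y):
    """Negative mean squared error of y = A*sin(2*pi*x/P + phi) + c."""
    a, period, phi, c = params
    return -np.mean((a * np.sin(2 * np.pi * x / period + phi) + c - y) ** 2)

def fit_flute(x, y, pop_size=60, generations=200):
    # Parameter bounds (amplitude, period in pixels, phase, offset) are guesses.
    lo = np.array([0.5, 5.0, -np.pi, y.min()])
    hi = np.array([50.0, 400.0, np.pi, y.max()])
    pop = rng.uniform(lo, hi, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(p, x, y) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, 0.05, kids.shape) * (hi - lo)   # mutation
        pop = np.vstack([parents, np.clip(kids, lo, hi)])
    return pop[np.argmax([fitness(p, x, y) for p in pop])]
```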
Show Figures

Figure 1: Flute types.
Figure 2: Device for corrugated board image acquisition: (a) visualization of the device; (b) layout diagram of the most important components of the device: 1—corrugated board sample; 2—camera; 3—LED strip.
Figure 3: Flowchart of the proposed algorithm.
Figure 4: Results of the preprocessing stage: (a) a gray-scale cropped image of 800 × 800 pixels; (b) blurred image; (c) binary image.
Figure 5: The results of the column-wise image scanning.
Figure 6: The row sum curve and localizations of the vertical positions of liners: (a) the row sum curve (red line) and the smoothed curve (blue line); (b) the upper (red) and lower (green) liners' ranges.
Figure 7: The binary image with external boundaries of the liners (green lines), boundary lines for limiting the flute searching (red lines), and the central line (yellow line).
Figure 8: (a) Results of the skeletonization process; (b) results after the skeletonization process and removing side branches; (c) 3 parallel green lines for period limitations.
Figure 9: (a) The eroded image; (b) an example result of the optimization process using the genetic algorithm (red line).
Figure 10: Schematic representation of the algorithm for determination of the liner and flute thicknesses; the red, blue, and green areas are the regions for determination of the upper liner, lower liner, and flute thicknesses, respectively.
Figure 11: An example of the results obtained by applying the proposed algorithm.
Figure 12: Visualization of the recognized fluting shapes of corrugated board: (a) flute C sample; (b) results obtained for the flute C sample; (c) flute B sample; (d) results obtained for the flute B sample; (e) flute E sample; (f) results obtained for the flute E sample.
Figure 13: Examples of samples difficult to identify: (a) with many jagged edges; (b) results of identification of the sample with many jagged edges; (c) crushed sample; (d) results of identification of the crushed sample.
Figure 14: Flute B example: (a) cardboard without damage; (b) results of identification; (c) damaged sample; (d) results of identification.
Figure 15: Flute E example: (a) cardboard without damage; (b) results of identification; (c) damaged sample; (d) results of identification.
Figure 16: Examples of the corrugated board with damages: (a,b) irregular edges formed at the sample cut; (c,d) damages caused by crushing the corrugated layer.
Figure 17: (a) The original example (weak effect) and (b) the operation on the same image after changing this area.
18 pages, 4674 KiB  
Article
A Wearable Multi-Sensor Array Enables the Recording of Heart Sounds in Homecare
by Noemi Giordano, Samanta Rosati, Gabriella Balestra and Marco Knaflitz
Sensors 2023, 23(13), 6241; https://doi.org/10.3390/s23136241 - 7 Jul 2023
Cited by 7 | Viewed by 2457
Abstract
The home monitoring of patients affected by chronic heart failure (CHF) is of key importance in preventing acute episodes. Nevertheless, no wearable technological solution exists to date. A possibility could be offered by Cardiac Time Intervals extracted from simultaneous recordings of electrocardiographic (ECG) and phonocardiographic (PCG) signals. Nevertheless, the recording of a good-quality PCG signal requires accurate positioning of the stethoscope over the chest, which is unfeasible for a naïve user as the patient. In this work, we propose a solution based on multi-source PCG. We designed a flexible multi-sensor array to enable the recording of heart sounds by inexperienced users. The multi-sensor array is based on a flexible Printed Circuit Board mounting 48 microphones with a high spatial resolution, three electrodes to record an ECG and a Magneto-Inertial Measurement Unit. We validated the usability over a sample population of 42 inexperienced volunteers and found that all subjects could record signals of good to excellent quality. Moreover, we found that the multi-sensor array is suitable for use on a wide population of at-risk patients regardless of their body characteristics. Based on the promising findings of this study, we believe that the described device could enable the home monitoring of CHF patients soon.
(This article belongs to the Special Issue Physiological Sound Acquisition and Processing (Volume II))
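The 20 Hz–100 Hz digital filtering used in the SNR analysis (see Figure 8 below) is straightforward to reproduce in outline. A SciPy sketch follows; the filter order and the window-based SNR definition are assumptions, not the paper's exact processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_pcg(pcg: np.ndarray, fs: float, lo: float = 20.0, hi: float = 100.0):
    """Zero-phase 20-100 Hz band-pass for a PCG channel sampled at fs Hz.
    The 4th-order Butterworth design is an assumed choice."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, pcg)

def segment_snr_db(signal_win: np.ndarray, noise_win: np.ndarray) -> float:
    """SNR of a heart-sound window (e.g., around S1 or S2) against a quiet window."""
    return 10.0 * np.log10(np.mean(signal_win**2) / np.mean(noise_win**2))
```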
Show Figures

Figure 1: Process of decompensation in CHF patients. Adapted from [9].
Figure 2: Comparison between (A) the traditional auscultation areas and (B) the distribution of the sensors in the wearable multi-sensor array.
Figure 3: Architecture of the proposed system.
Figure 4: Dimensioned drawing of the multi-sensor array (only relevant dimensions are provided for readability, i.e., dimensions that are either critical for reproducibility or impossible to determine from other dimensions). Dimensions in millimeters.
Figure 5: Pictures of the implemented multi-sensor array from the chest side (A) and from the top side (B).
Figure 6: Violin plots of the distributions of BMI and thoracic circumference, respectively, over the sample population, divided according to biological gender.
Figure 7: Example of three heartbeats of an ECG signal and three PCG signals, recorded by different channels of the multi-sensor array. No filtering was applied to the signals.
Figure 8: Boxplots of the distributions of the SNR of S1 and S2, respectively, before and after digital filtering (20 Hz–100 Hz).
Figure 9: Maps showing the percentage of recordings with a higher SNR in S1 (panel A) and a higher SNR in S2 (panel B), respectively. Each circle represents a microphone, and the color of the circle represents the percentage of recordings according to the color bars. A comparison between the positioning on the chest of the best channels and the positioning of the traditional auscultation areas of the valves generating the corresponding heart sound is proposed.
Figure 10: Scatter plots of the maximum SNR of S1 and S2, respectively, as a function of the BMI, the thoracic circumference, and the biological sex.
14 pages, 7748 KiB  
Essay
Monitoring and Analysis of the Collapse Process in Blasting Demolition of Tall Reinforced Concrete Chimneys
by Xiaowu Huang, Xianqi Xie, Jinshan Sun, Dongwang Zhong, Yingkang Yao and Shengwu Tu
Sensors 2023, 23(13), 6240; https://doi.org/10.3390/s23136240 - 7 Jul 2023
Cited by 2 | Viewed by 1703
Abstract
Aiming at the problem of the deviation of the collapse direction caused by the impact of a high-rise reinforced concrete chimney during blasting demolition, the impact process of a 180 m high chimney was comprehensively analyzed by combining monitoring methods such as high-speed photography, piezoelectric ceramic sensors, and a blasting vibration monitor. The results show that the chimney experiences multiple 'weight loss' and 'overweight' effects during the sit-down process, inducing compressive stress waves in the chimney. When the sit-down displacement is large, the broken reinforced concrete at the bottom provides a significant buffering effect, and the 'overweight' effect gradually weakens until the sit-down stops. The stresses on the inner and outer sides of the chimney wall differ markedly during collapse and ground impact. The waveform at the piezoelectric ceramic sensor monitoring points divides into three stages, which characterize the evolution of the explosion load and the impact of the chimney. The vibration induced by the explosive detonation is mainly high-frequency vibration above 50 Hz, whereas the vibration induced by the chimney collapse is mainly low-frequency vibration below 10 Hz; their vibration characteristics are thus clearly different. During the blasting demolition and collapse of a high-rise reinforced concrete chimney, the impact of sitting down subjects the wall of the support tube to uneven forces, resulting in a deviation of the collapse direction. In practical engineering, control measures for chimney impact, blasting vibration, and collapse touchdown vibration should be strengthened to ensure the safety of the protection targets around the demolition object.
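The reported frequency separation (detonation vibration mainly above 50 Hz, collapse vibration mainly below 10 Hz) suggests a simple spectral check. The sketch below computes the fraction of signal energy in a chosen band from a sampled velocity record; it is illustrative and not the monitoring system's actual software.

```python
import numpy as np

def band_energy_fraction(v: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Fraction of total spectral energy of velocity record v (sampled at fs Hz)
    that falls between f_lo and f_hi."""
    spec = np.abs(np.fft.rfft(v)) ** 2
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return float(spec[band].sum() / spec.sum())

# Blast-dominated records should give a large fraction for (50, fs/2);
# collapse/touchdown-dominated records for (0, 10).
```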
Show Figures

Figure 1: Blasting plan of the chimney: (a) arrangement of the blasting notch; (b) parameters of the blasting notch.
Figure 2: Schematic diagram of the piezoelectric ceramic sensor arrangement: (a) sensor distribution; (b) sensor distribution position (unit: mm).
Figure 3: Chimney cylinder wall monitoring point arrangement process: (a) drilling of monitoring holes; (b) pre-embedded sensors; (c) plugging and grouting.
Figure 4: Micromate type vibration monitor.
Figure 5: Blasting vibration measurement point layout diagram.
Figure 6: Basic structure of piezoelectric ceramic sensors.
Figure 7: Piezoelectric signal monitoring and analysis system.
Figure 8: Crack propagation process in the support part: (a) 0.5 s; (b) 1.2 s; (c) 2.0 s; (d) 4.5 s.
Figure 9: Displacement-time history curve of the chimney sitting down.
Figure 10: Velocity-time history curve of the chimney sitting down.
Figure 11: Acceleration-time history curve of the chimney sitting down.
Figure 12: Piezoelectric ceramic monitoring point waveform diagram: (a) 1# measuring point; (b) 2# measuring point; (c) 3# measuring point.
Figure 13: Time course curve of vibration velocity at each measurement point: (a) 1# measuring point; (b) 2# measuring point; (c) 3# measuring point.
Figure 14: Response spectrum of parameter variation with damping.
25 pages, 12459 KiB  
Article
Eye-Gaze Controlled Wheelchair Based on Deep Learning
by Jun Xu, Zuning Huang, Liangyuan Liu, Xinghua Li and Kai Wei
Sensors 2023, 23(13), 6239; https://doi.org/10.3390/s23136239 - 7 Jul 2023
Cited by 9 | Viewed by 8374
Abstract
In this paper, we design an intelligent wheelchair with eye-movement control for patients with ALS in a natural environment. The system consists of an electric wheelchair, a vision system, a two-dimensional robotic arm, and a main control system. The smart wheelchair obtains the eye image of the controller through a monocular camera and uses deep learning and an attention mechanism to calculate the eye-movement direction. In addition, starting from the relationship between the trajectory of the joystick and the wheelchair speed, we establish a motion acceleration model of the smart wheelchair, which reduces sudden acceleration during rapid motion and improves the smoothness of the wheelchair's movement. The lightweight eye-movement recognition model is transplanted into an embedded AI controller. The test results show that the accuracy of eye-movement direction recognition is 98.49%, the wheelchair movement speed is up to 1 m/s, and the movement trajectory is smooth, without sudden changes.
(This article belongs to the Section Sensors and Robotics)
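To make the recognition pipeline concrete, here is a deliberately small PyTorch classifier in the spirit of the GazeNet model described above. The layer sizes, the four direction classes, and the 64 × 64 grayscale eye crop are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TinyGazeNet(nn.Module):
    """Minimal eye-direction classifier sketch (hypothetical layer sizes)."""
    def __init__(self, n_classes: int = 4):  # e.g., left/right/forward/stop
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, 1, 64, 64)
        return self.head(self.features(x).flatten(1))

logits = TinyGazeNet()(torch.randn(2, 1, 64, 64))  # smoke test -> shape (2, 4)
```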
Show Figures

Figure 1: General block diagram of the intelligent eye-tracking wheelchair system.
Figure 2: Virtual eye-tracking data acquisition.
Figure 3: Real eye-tracking data acquisition: (a) nine-grid and (b) real scene.
Figure 4: Collecting Dlib key points to extract human eye pictures.
Figure 5: t-SNE visualization effect diagram.
Figure 6: Eye-tracking dataset.
Figure 7: GazeNet structure diagram.
Figure 8: Sensory fields corresponding to two different convolution operations: (a) 5 × 5 convolution kernel and (b) 3 × 3 convolution kernel.
Figure 9: ResBlock module.
Figure 10: Convolutional Block Attention Model.
Figure 11: Hardware design diagram: (a) wheelchair physical picture and (b) hardware data flow diagram.
Figure 12: Modified wheelchair controller: (a) stop, (b) left, (c) forward, and (d) right.
Figure 13: Interactive interface.
Figure 14: Fitting diagram of wheelchair speed and rocker angle.
Figure 15: Polar coordinate diagram of the rocker position.
Figure 16: θ-t diagram.
Figure 17: Variation curve of a with t at different t3.
Figure 18: Key points of the eye.
Figure 19: System flow chart.
Figure 20: GazeNet training process.
Figure 21: Training models for four different networks.
Figure 22: Experimental circuit diagram: (a) first lap, (b) actual route, and (c) second lap.
Figure 23: Actual test diagram.
Figure 24: Variation of deviation with distance travelled: (a) lap 1 and (b) lap 2.
Figure 25: Servo duty cycle variation curve with motion time: (a) servo No. 1 and (b) servo No. 2.
14 pages, 5023 KiB  
Article
Research on Road Scene Understanding of Autonomous Vehicles Based on Multi-Task Learning
by Jinghua Guo, Jingyao Wang, Huinian Wang, Baoping Xiao, Zhifei He and Lubin Li
Sensors 2023, 23(13), 6238; https://doi.org/10.3390/s23136238 - 7 Jul 2023
Cited by 10 | Viewed by 3396
Abstract
Road scene understanding is crucial to the safe driving of autonomous vehicles. Comprehensive road scene understanding requires a visual perception system to deal with a large number of tasks at the same time, which calls for a perception model with a small size, fast speed, and high accuracy. As multi-task learning has evident advantages in performance and computational resources, in this paper, a multi-task model, YOLO-Object, Drivable Area, and Lane Line Detection (YOLO-ODL), based on hard parameter sharing is proposed to realize joint and efficient detection of traffic objects, drivable areas, and lane lines. In order to balance the tasks of YOLO-ODL, a weight balancing strategy is introduced so that the weight parameters of the model can be automatically adjusted during training, and a Mosaic migration optimization scheme is adopted to improve the evaluation indicators of the model. Our YOLO-ODL model performs well on the challenging BDD100K dataset, achieving state-of-the-art accuracy and computational efficiency.
(This article belongs to the Special Issue Research Progress on Intelligent Electric Vehicles-2nd Edition)
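The weight balancing strategy lets the task weights adjust automatically during training. One standard mechanism of this kind, shown here purely as a stand-in since the paper's exact formulation is not given on this page, is homoscedastic-uncertainty weighting with one learnable log-variance per task:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combines per-task losses with learnable log-variances s_i:
    total = sum_i exp(-s_i) * L_i + s_i  (Kendall-style weighting)."""
    def __init__(self, n_tasks: int = 3):  # detection, drivable area, lane line
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses) -> torch.Tensor:
        total = torch.zeros((), device=task_losses[0].device)
        for loss, s in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s) * loss + s  # weight + regularizer
        return total

# Usage sketch: total = UncertaintyWeightedLoss()([l_det, l_area, l_lane])
```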
Show Figures

Figure 1: Our road detection.
Figure 2: Structure of the proposed YOLO-ODL model.
Figure 3: Data augmentation.
Figure 4: Migration optimization.
Figure 5: Detection results of YOLO-ODL in multi-weather road conditions.
Figure 6: Detection results of YOLO-ODL in multi-scenario road conditions.
Figure 7: Comparison of YOLO-ODL and YOLOP object detection results. The red box in the figure is the false detection box, and the yellow box is the missed detection box.
Figure 8: Comparison of YOLO-ODL and YOLOP drivable area detection results. The red circle in the figure is the false detection area, and the yellow circle is the missed detection area.
Figure 9: Comparison of YOLO-ODL and YOLOP lane line detection results. The yellow circle is the missed detection area.
38 pages, 13872 KiB  
Review
Patent Review of Lower Limb Rehabilitation Robotic Systems by Sensors and Actuation Systems Used
by Cristina Floriana Pană, Dorin Popescu and Virginia Maria Rădulescu
Sensors 2023, 23(13), 6237; https://doi.org/10.3390/s23136237 - 7 Jul 2023
Cited by 5 | Viewed by 2779
Abstract
Robotic systems for lower limb rehabilitation are essential for improving patients' physical conditions in lower limb rehabilitation and assisting patients with various locomotor dysfunctions. These robotic systems mainly integrate sensors, actuation, and control systems and combine features from bionics, robotics, control, medicine, and other interdisciplinary fields. Several lower limb robotic systems have been proposed in the patent literature; some are commercially available. This review is an in-depth study of the patents related to robotic rehabilitation systems for lower limbs from the point of view of the sensors and actuation systems used. The patents awarded and published between 2013 and 2023 were investigated, and the temporal distribution of these patents is presented. Our results were obtained by examining the analyzed information from the three public patent databases. The patents were selected so that there were no duplicates after several filters were used in this review. For each patent database, the patents were analyzed according to the category of sensors and the number of sensors used. Additionally, for the main categories of sensors, an analysis was conducted depending on the type of sensors used. Afterwards, the actuation solutions for robotic rehabilitation systems for lower limbs described in the patents were analyzed, highlighting the main trends in their use. The results are presented with a schematic approach so that any user can easily find patents that use a specific type of sensor or a particular type of actuation system, and the sensors or actuation systems recommended for use in some instances are highlighted.
(This article belongs to the Section Sensors and Robotics)
Show Figures

Figure 1: Distribution of patents on the Google Patents platform, according to the number of sensors used.
Figure 2: Distribution of patents on the Google Patents platform, depending on the types of sensors used.
Figure 3: Time distribution of patents that use bio-sensors on the Google Patents platform.
Figure 4: Distribution of patents on the Patent-Scope platform, according to the number of sensors used.
Figure 5: Distribution of patents on the Patent-Scope platform, according to the type of sensors used.
Figure 6: Patents on the Lens platform, distributed according to the number of sensors used.
Figure 7: Distribution of patents on the Lens platform, depending on the type of sensors used.
Figure 8: The distribution of patents on the Google Patents platform, according to the types of actuators used.
Figure 9: The distribution of patents on the Google Patents platform, according to the types of conventional actuators used.
Figure 10: Distribution of patents over time on the Google Patents platform, depending on the types of actuators used.
Figure 11: Distribution of patents over time on the Google Patents platform, depending on the types of electric actuators used.
Figure 12: The distribution of patents over time on the Patent-Scope platform, depending on the types of actuators used.
Figure 13: Distribution of patents on the Patent-Scope platform, depending on the type of electric actuators used.
Figure 14: The distribution of patents on the Lens platform, according to the types of actuators used.
Figure 15: The distribution of patents on the Lens platform, according to the types of conventional actuators used.
Figure 16: The distribution of patents over time on the Lens platform, depending on the type of actuators used.
Figure 17: Distribution of patents on the three platforms, depending on the number of sensors used.
Figure 18: Distribution of the patents, according to the number of pressure sensors used.
Figure 19: Percentage distribution of patents, according to the type of actuation system used.
Figure 20: Distribution of patents on the Google Patents platform, according to the types of sensors used and depending on the actuating system.
Figure 21: Distribution of patents on the Patent-Scope platform, according to the types of sensors used and depending on the actuating system.
Figure 22: Distribution of patents on the Lens platform, according to the types of sensors used and depending on the actuating system.
Figure 23: Distribution of patents on all three platforms, according to the types of sensors used and depending on the actuating system.
Figure 24: Features of sensors and actuation systems and recommendations of use.
20 pages, 3731 KiB  
Article
AI-Assisted Ultra-High-Sensitivity/Resolution Active-Coupled CSRR-Based Sensor with Embedded Selectivity
by Mohammad Abdolrazzaghi, Nazli Kazemi, Vahid Nayyeri and Ferran Martin
Sensors 2023, 23(13), 6236; https://doi.org/10.3390/s23136236 - 7 Jul 2023
Cited by 54 | Viewed by 2837
Abstract
This research explores the application of an artificial intelligence (AI)-assisted approach to enhance the selectivity of microwave sensors used for liquid mixture sensing. We utilized a planar microwave sensor comprising two coupled rectangular complementary split-ring resonators operating at 2.45 GHz to establish a highly sensitive capacitive region. The sensor’s quality factor was markedly improved from 70 to approximately 2700 through the incorporation of a regenerative amplifier to compensate for losses. A deep neural network (DNN) technique is employed to characterize mixtures of methanol, ethanol, and water, using the frequency, amplitude, and quality factor as inputs. However, the DNN approach is found to be effective solely for binary mixtures, with a maximum concentration error of 4.3%. To improve selectivity for ternary mixtures, we employed a more sophisticated machine learning algorithm, the convolutional neural network (CNN), using the entire transmission response as the 1-D input. This resulted in a significant improvement in selectivity, limiting the maximum percentage error to just 0.7% (≈6-fold accuracy enhancement).
(This article belongs to the Special Issue State-of-the-Art Technologies in Microwave Sensors)
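The CNN stage takes the entire transmission response as a 1-D input and regresses the three concentrations. A minimal PyTorch sketch of that idea follows; the layer sizes, the 1001-point sweep length, and the softmax head that forces the three fractions to sum to one are illustrative assumptions.

```python
import torch
import torch.nn as nn

class S21RegressionCNN(nn.Module):
    """Input: a sampled |S21| trace as (batch, 1, n_points); output: three
    mixture fractions (water / ethanol / methanol)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 3),
            nn.Softmax(dim=1),  # assumed head: fractions constrained to sum to 1
        )

    def forward(self, s21: torch.Tensor) -> torch.Tensor:
        return self.net(s21)

fractions = S21RegressionCNN()(torch.randn(4, 1, 1001))  # smoke test -> (4, 3)
```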
Show Figures

Figure 1: (a) Schematic of the coupled CSRR. (b) Coupled resonator modeled by a parallel RLC model. (c) Simplified circuit considering the equivalent coupling capacitors/inductors. (d) Odd mode analysis. (e) Even mode analysis.
Figure 2: (a) Block diagram of the regenerative amplifier for the active resonator. (b) Equivalence of the coupled CSRR with a dual-band transmission profile. (c) Equivalence of the coupled CSRR with the summation of two CSRR blocks.
Figure 3: (a) The permittivity and (b) loss tangent of water, ethanol, and methanol using the Debye function.
Figure 4: (a) Layout of the proposed CCSRR. (b) The proposed active circuit for loss compensation in a regenerative amplification format.
Figure 5: (a) Enhanced sensitivity with the coupled CSRR in the passive mode of the sensors (amplifier not applied). (b) Schematic of the simulation model. (c) Corresponding frequency shift (black) and sensitivity (blue) vs. permittivity.
Figure 6: (a) Impact of bias voltage on measured transmission (S21). (b) Schematic of the resonator hosting a microfluidic channel. (c) Measured S21 (solid line) with VCC = 2.5 V vs. simulated S21 (dashed line) for common chemicals. (d) Corresponding measured frequency shifts (black) and sensitivity (red) vs. permittivity.
Figure 7: Capacitive loading of the CCSRR in ADS simulations (from black (1 fF) to orange (50 fF)) towards obtaining the (a) magnitude and (b) phase of transmission.
Figure 8: (a) Measurement setup with VNA (Vector Network Analyzer). (b) Close-up view of the fabricated active CCSRR.
Figure 9: Sensor response for binary mixtures with respect to (a) |S21| vs. fres, (b) Q-factor vs. |S21|, (c) Q-factor vs. fres, (d) 3D overview of the sensor calibration surface.
Figure 10: (a) Architecture of a deep neural network with a decoder/encoder format. (b) Loss function.
Figure 11: Architecture of the convolutional-neural-network-based regression for the three concentrations V1, V2, V3.
18 pages, 6779 KiB  
Article
Automated Micro-Crack Detection within Photovoltaic Manufacturing Facility via Ground Modelling for a Regularized Convolutional Network
by Damilola Animashaun and Muhammad Hussain
Sensors 2023, 23(13), 6235; https://doi.org/10.3390/s23136235 - 7 Jul 2023
Cited by 4 | Viewed by 2076
Abstract
The manufacturing of photovoltaic cells is a complex and intensive process involving the exposure of the cell surface to high temperature differentials and external pressure, which can lead to the development of surface defects, such as micro-cracks. Currently, domain experts manually inspect the cell surface to detect micro-cracks, a process that is subject to human bias, high error rates, fatigue, and labor costs. To overcome the need for domain experts, this research proposes modelling cell surfaces via representative augmentations grounded in production floor conditions. The modelled dataset is then used as input for a custom ‘lightweight’ convolutional neural network architecture for training a robust, noninvasive classifier, essentially presenting an automated micro-crack detector. In addition to data modelling, the proposed architecture is further regularized using several regularization strategies to enhance performance, achieving an overall F1-score of 85%.
(This article belongs to the Special Issue Intelligent Control and Digital Twins for Industry 4.0)
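As a compact illustration of the two regularizers discussed, batch normalization and dropout (a 60% dropout rate is the combination reported in the figures below), here is a minimal 'lightweight' PyTorch classifier; apart from those two regularizers, every size in it is an invented placeholder.

```python
import torch.nn as nn

def make_crack_classifier(dropout: float = 0.6) -> nn.Sequential:
    """Minimal regularized binary classifier sketch (normal vs. defective cell).
    Channel counts and depth are hypothetical; only the batch-norm + 60%
    dropout combination echoes the article's reported configuration."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Dropout(dropout),  # applied before the classification head
        nn.Linear(16, 2),
    )
```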
Show Figures

Figure 1: Data investigation: (A) normal, (B) defective.
Figure 2: Flip: (A) vertical, (B) horizontal.
Figure 3: Rotation: (A) clockwise, (B) counter-clockwise, (C) 180 degree shift.
Figure 4: 15 degree: (A) vertical shift, (B) horizontal shift.
Figure 5: 15 degree rotation: (A) height shift, (B) width shift.
Figure 6: Brightness: (A) input, (B) output.
Figure 7: Exposure: (A) input, (B) output.
Figure 8: Noise: (A) input, (B) output.
Figure 9: Proposed architecture.
Figure 10: Original data performance.
Figure 11: Confusion matrix for the initial model evaluation metrics with the original dataset.
Figure 12: Augmented dataset performance.
Figure 13: Modified architecture performance.
Figure 14: Performance of the modified architecture with batch normalization.
Figure 15: Comparison of the modified architecture with dropout rates (10% to 60%).
Figure 16: Model performance using a combination of batch normalization and 60% dropout.
17 pages, 8290 KiB  
Article
Compact Wideband Groove Gap Waveguide Bandpass Filters Manufactured with 3D Printing and CNC Milling Techniques
by Clara Máximo-Gutierrez, Juan Hinojosa, José Abad-López, Antonio Urbina-Yeregui and Alejandro Alvarez-Melcon
Sensors 2023, 23(13), 6234; https://doi.org/10.3390/s23136234 - 7 Jul 2023
Cited by 4 | Viewed by 1802
Abstract
This paper presents for the first time a compact wideband bandpass filter in groove gap waveguide (GGW) technology. The structure is obtained by including metallic pins along the central part of the GGW bottom plate according to an n-order Chebyshev stepped impedance synthesis method. The bandpass response is achieved by combining the high-pass characteristic of the GGW and the low-pass behavior of the metallic pins, which act as impedance inverters. This simple structure, together with the rigorous design technique, reduces the manufacturing complexity of high-performance filters. These capabilities are verified by designing a fifth-order GGW Chebyshev bandpass filter with a bandwidth BW = 3.7 GHz and return loss RL = 20 dB in the frequency range of the WR-75 standard, and by implementing it using computer numerical control (CNC) machining and three-dimensional (3D) printing techniques. Three prototypes have been manufactured: one using a CNC milling machine and two others by means of a stereolithography-based 3D printer and a photopolymer resin. One of the two resin-based prototypes has been metallized using a silver vacuum thermal evaporation deposition technique, while for the other a spray coating system has been used. The three prototypes have shown good agreement between the measured and simulated S-parameters, with insertion losses better than IL = 1.2 dB. Reduced size and high-performance frequency responses with respect to other GGW bandpass filters were obtained. These wideband GGW filter prototypes could have great potential for future emerging satellite communications systems.
(This article belongs to the Collection RF and Microwave Communications)
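The synthesis starts from a Chebyshev low-pass prototype. The sketch below computes the standard prototype g-values for n = 5 and converts the quoted RL = 20 dB return loss into the equivalent passband ripple; the GGW-specific mapping of these values onto pin dimensions and impedance inverters, which is the paper's contribution, is not reproduced here.

```python
import math

def chebyshev_g(n: int, ripple_db: float) -> list:
    """Standard Chebyshev low-pass prototype element values g_1..g_n."""
    beta = math.log(1.0 / math.tanh(ripple_db / 17.37))
    gamma = math.sinh(beta / (2 * n))
    a = [math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    b = [gamma**2 + math.sin(k * math.pi / n) ** 2 for k in range(1, n + 1)]
    g = [2 * a[0] / gamma]
    for k in range(2, n + 1):
        g.append(4 * a[k - 2] * a[k - 1] / (b[k - 2] * g[k - 2]))
    return g

# RL = 20 dB return loss corresponds to about 0.0436 dB of passband ripple:
ripple = -10 * math.log10(1 - 10 ** (-20 / 10))
print(chebyshev_g(n=5, ripple_db=ripple))
```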
Show Figures

Figure 1: Structure of the wideband Chebyshev bandpass filter based on GGW technology (with order n = 5).
Figure 2: Dispersion diagram for the GGW dimensions defined in Table 2.
Figure 3: Structure (without the upper plate) and equivalent circuit of the ith section of the GGW filter shown in Figure 1.
Figure 4: |S21| of the simulated ith section (Figure 3) as a function of the dimensions of the metallic pins: txi (tzi = 2 mm and tyi = 4 mm fixed), tyi (txi = tzi = 2 mm fixed), and tzi (txi = 2 mm and tyi = 4 mm fixed).
Figure 5: Equivalent circuits of a fifth-order stepped impedance filter: (a) implementation using stepped impedance transmission lines; (b) implementation using impedance inverters and transmission lines with the same impedance and electrical length.
Figure 6: Theoretical and simulated S-parameters of the fifth-order wideband GGW Chebyshev bandpass filter.
Figure 7: Unassembled fifth-order wideband GGW Chebyshev filter manufactured in aluminum using CNC machining.
Figure 8: Unassembled fifth-order wideband GGW Chebyshev filters manufactured with a stereolithography-based 3D printer and a resin: (a) prototype without metallization; (b) silver metallized prototype using a high-vacuum evaporation technique; (c) conductive paint metallized prototype using a spray system.
Figure 9: Photograph of the in-house high-vacuum evaporation system loaded with the bottom part of the polymerized resin-based GGW filter prototype during the metallization process.
Figure 10: Photograph of the two parts of the conductive paint metallized resin-based GGW filter prototype using a spray system.
Figure 11: EM simulated and measured S-parameters for the aluminum fifth-order wideband GGW Chebyshev filter (Figure 7).
Figure 12: EM simulated and measured S-parameters for the silver metallized resin-based fifth-order wideband GGW Chebyshev filter (Figure 8b).
Figure 13: EM simulated and measured S-parameters for the conductive paint metallized resin-based fifth-order wideband GGW Chebyshev filter (Figure 8c).
20 pages, 4964 KiB  
Article
Surgical Instrument Signaling Gesture Recognition Using Surface Electromyography Signals
by Melissa La Banca Freitas, José Jair Alves Mendes, Jr., Thiago Simões Dias, Hugo Valadares Siqueira and Sergio Luiz Stevan, Jr.
Sensors 2023, 23(13), 6233; https://doi.org/10.3390/s23136233 - 7 Jul 2023
Cited by 5 | Viewed by 4135
Abstract
Surgical Instrument Signaling (SIS) comprises specific hand gestures used in the communication between the surgeon and the surgical instrumentator. With SIS, the surgeon executes signals representing determined instruments in order to avoid errors and communication failures. This work presents the feasibility of an SIS gesture recognition system using surface electromyographic (sEMG) signals acquired from the Myo armband, aiming to build a processing routine that aids telesurgery or robotic surgery applications. Unlike other works that use up to 10 gestures to represent and classify SIS gestures, a database with 14 selected gestures for SIS was recorded from 10 volunteers, with 30 repetitions per user. Segmentation, feature extraction, feature selection, and classification were performed, and several parameters were evaluated. These steps were performed by taking into account a wearable application, for which the complexity of pattern recognition algorithms is crucial. The system was tested offline and verified as to its contribution for all databases and each volunteer individually. An automatic segmentation algorithm was applied to identify muscle activation; thus, 13 feature sets and 6 classifiers were tested. Moreover, 2 ensemble techniques aided in separating the sEMG signals into the 14 SIS gestures. An accuracy of 76% was obtained for the Support Vector Machine classifier over all databases and 88% when analyzing the volunteers individually. The system was demonstrated to be suitable for SIS gesture recognition using sEMG signals for wearable applications.
(This article belongs to the Section Biomedical Sensors)
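A minimal sketch of a time-domain-feature-plus-classifier pipeline of the kind described: the three per-channel features below (mean absolute value, waveform length, zero crossings) are classic sEMG features, but this particular subset and the classifier settings are illustrative choices, not the paper's 13 evaluated feature sets.

```python
import numpy as np
from sklearn.svm import SVC

def emg_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 8) sEMG segment from the Myo armband's 8 channels.
    Returns a 24-dimensional feature vector (3 features x 8 channels)."""
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([mav, wl, zc])

# With X built from segmented activations and y holding gesture labels 1-14:
clf = SVC(kernel="rbf")
#   clf.fit(X_train, y_train)
#   accuracy = clf.score(X_test, y_test)
```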
Show Figures

Figure 1: Gestures for SIS used in this work: (1) Compress, (2) Medical Thread on a Spool, (3) Medical Thread Loose, (4) Plier Backhaus, (5) Hemostatic Forceps, (6) Kelly Hemostatic Forceps, (7) Farabeuf Retractor, (8) Bistouri, (9) Needle Holder, (10) Valve Doyen, (11) Allis Clamp, (12) Anatomical Tweezers, (13) Rat's Tooth Forceps, (14) Scissors.
Figure 2: Data acquisition flow: (a) sEMG signals are acquired from the Myo armband placed on the right forearm; (b) the sEMG signals are sent by Bluetooth from a mobile application; (c) the mobile application on the smartphone saves the signal for each acquisition in a ".csv" file, which is sent to a computer via a USB connection; (d) on the computer, data are processed using machine learning routines in MATLAB.
Figure 3: Processing steps followed in this work: (a) after data acquisition, sEMG signals were segmented using an automatic routine that identified the start and the end of each sEMG activation; (b) features were extracted from segmented signals, building feature sets based on the literature and previous works; (c) feature sets were tested and selected using the classifiers; (d) some additional analyses were performed, such as combining the classifiers as ensembles, evaluating the gestures, and determining feasibility from the volunteers.
Figure 4: Example of sEMG signals obtained from the chosen 14 SIS gestures. The gestures (1) to (14) are the same as presented in Figure 1.
Figure 5: Example of the segmentation applied in this work. The resultant signal is a combination of the four most relevant channels. The threshold signal is used to find the 14 SIS gestures, as demonstrated in Figures 1 and 4 (numbers are highlighted at the top). Moreover, the onsets and offsets automatically found by the developed algorithm are presented by the green and orange lines, respectively.
Figure 6: (a) Distribution of accuracies for each classifier considering the data from all volunteers. The markers at the top of the graph represent the results for the similar distributions to each classifier using the Tukey post hoc test from Friedman's test (p > 0.05). The letters represent each classifier (a—RF, b—SVM, c—MLP, d—LDA, e—QDA, and f—KNN). (b) Hit rates obtained for each feature set from G1 to G13. At the top, the average accuracies for each feature set are presented. The markers (*) indicate feature sets with statistically different distributions, obtained from the Tukey post hoc test from Friedman's test (p > 0.05).
Figure 7: Distributions of the accuracies obtained from the ensemble analysis considering the data from all 10 volunteers from the training and test step. Ens_m is the hit rate from the ensemble method with manual search of classifiers, which was the best classifier for the G13 feature set. Ens_a is the result obtained from the ensemble method with automatic search, considering all the classifiers. At the top, the results of the Tukey post hoc test from Friedman's test are presented (p > 0.05). The letters represent each classifier (a—RF, b—SVM, c—MLP, d—LDA, e—QDA, f—KNN, g—Ens_m, and h—Ens_a).
Figure 8: Confusion matrices for the selected 14 SIS gestures (Figure 1) and the G13 feature set for the SVM classifier (a), and for the manual (b) and automatic (c) ensembles. Null cells represent 0% misclassification. Gestures 1 to 14 refer to the gestures from Figure 1.
Figure 9: (a) Distributions of individual accuracies for the classifiers (using the G13 feature set), considering the data from each volunteer for individual training and test steps. At the top, the results of the Tukey post hoc test from Friedman's test are presented (p > 0.05). The letters represent each classifier (a—RF, b—SVM, c—MLP, d—LDA, e—QDA, f—KNN, g—Ens_m, and h—Ens_a). (b) The accuracy obtained from each volunteer for the algorithms with the best accuracies, i.e., MLP and ensemble with automatic decision.
Figure 10: Confusion matrices for (a) MLP and (b) ensemble with automatic search. Null cells represent less than 1% misclassification. Gestures 1 to 14 refer to the gestures from Figure 1.
30 pages, 10412 KiB  
Review
Applications of Nanosatellites in Constellation: Overview and Feasibility Study for a Space Mission Based on Internet of Space Things Applications Used for AIS and Fire Detection
by Kamel Djamel Eddine Kerrouche, Lina Wang, Abderrahmane Seddjar, Vahid Rastinasab, Souad Oukil, Yassine Mohammed Ghaffour and Larbi Nouar
Sensors 2023, 23(13), 6232; https://doi.org/10.3390/s23136232 - 7 Jul 2023
Cited by 3 | Viewed by 3508
Abstract
In some geographically challenging areas (such as deserts, seas, and forests) where direct connectivity to a terrestrial network is difficult, space communication is the only option. In these remote locations, Internet of Space Things (IoST) applications can also be used successfully. In this paper, the proposed payload for IoST applications demonstrates how an Automatic Identification System (AIS) and a fire detection system can be used effectively. A space mission based on efficient and low-cost communication can use a constellation of nanosatellites to better meet this need. These two applications, which use a constellation of nanosatellites, can provide relevant data at the university level in several countries, serving as an effective policy for space technology transfer within an educational initiative project. To enhance educational participation and interest in space technology, this paper shares the lessons learned from the project feasibility study, based on an in-depth design of a nanosatellite with several analyses (data budget, link budget, power budget, and lifetime estimation). Lastly, this paper demonstrates, through experiments, the development and application of a cost-effective sensor node for fire detection and the use of GPS to enable AIS capabilities in the IoST framework. Full article
(This article belongs to the Section Sensor Networks)
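The link budget analysis mentioned above reduces to summing antenna gains and subtracting losses around the free-space path loss. A minimal sketch follows; every number (transmit power, gains, slant range, sensitivity) is a hypothetical placeholder for illustration, not a value from the paper.

```python
import math

def fspl_db(d_km, f_mhz):
    """Free-space path loss (dB) for distance in km and frequency in MHz."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

# Hypothetical UHF downlink numbers for illustration only.
p_tx_dbm = 30.0             # 1 W transmitter
g_tx_dbi, g_rx_dbi = 2.0, 12.0
slant_range_km = 1500.0     # near-horizon pass for a ~500 km orbit
f_mhz = 437.0               # amateur UHF band
losses_db = 3.0             # pointing, polarization, cabling

p_rx_dbm = p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db(slant_range_km, f_mhz) - losses_db
sensitivity_dbm = -120.0
print(f"Received power: {p_rx_dbm:.1f} dBm, margin: {p_rx_dbm - sensitivity_dbm:.1f} dB")
```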
Figure 1. Diagram of the technology transfer.
Figure 2. AIS architecture.
Figure 3. Autonomous ground segment with sensor network for fire detection.
Figure 4. Ground/space segment interactions for managing maritime transportation systems with the potential inclusion of other developing and emerging countries.
Figure 5. Ground/space segment interactions for the IoST application based on fire detection.
Figure 6. Nanosatellite number vs. orbit parameters [38].
Figure 7. Revisit time versus the number of satellites.
Figure 8. Performance results of each constellation for areas in the South China Sea.
Figure 9. Performance results of each constellation for the forest in the Chinese mainland.
Figure 10. Proposed mission modes for the nanosatellite.
Figure 11. Ground station block diagram.
Figure 12. BUAA Beihang Ground Station implementation: (1) radio station, (2) TNC, (3) polarization switch, (4) mission computer, (5) power supply, (6) VHF/UHF antenna, and (7) antenna pedestal.
Figure 13. Electrical architecture of the proposed nanosatellite platform.
Figure 14. VHF uplink block diagram used as payload.
Figure 15. UHF uplink/UHF downlink transceiver block diagram.
Figure 16. Proposed EPS configuration for the CubeSat with open solar panel structure: (a) power regulation and control unit, and (b) power distribution unit.
Figure 17. OBC block diagram.
Figure 18. ADCS block diagram.
Figure 19. Profiles of power consumption, power generation, and battery capacity.
Figure 20. Altitude decay according to the orbit cycles.
Figure 21. Battery capacity degradation according to the orbit cycles.
Figure 22. Low-cost sensor node architecture.
Figure 23. Temperature and humidity changes observed in the experiment.
Figure 24. Detection of gases at different levels in the experiment.
Figure 25. Flame level detection in the experiment.
Figure 26. (a) Wind speed and (b) wind direction.
Figure 27. Position coordinates obtained by GPS used for AIS.
25 pages, 4374 KiB  
Article
Intelligent Drone Positioning via BIC Optimization for Maximizing LPWAN Coverage and Capacity in Suburban Amazon Environments
by Flávio Henry Cunha da Silva Ferreira, Miércio Cardoso de Alcântara Neto, Fabrício José Brito Barros and Jasmine Priscyla Leite de Araújo
Sensors 2023, 23(13), 6231; https://doi.org/10.3390/s23136231 - 7 Jul 2023
Cited by 2 | Viewed by 1410
Abstract
This paper aims to provide a metaheuristic approach to drone array optimization applied to coverage area maximization of wireless communication systems, with unmanned aerial vehicle (UAV) base stations, in the context of suburban, lightly to densely wooded environments present in cities of the Amazon region. For this purpose, a low-power wireless area network (LPWAN) was analyzed and applied. LPWANs are systems designed to work with low data rates while keeping, or even enhancing, the extensive area coverage provided by high-powered networks. The type of LPWAN chosen is LoRa, which operates at an unlicensed spectrum of 915 MHz and requires users to connect to gateways in order to relay information to a central server; in this case, each drone in the array has a LoRa module installed to serve as a non-fixed gateway. In order to classify and optimize the best positioning for the UAVs in the array, three bioinspired computing (BIC) methods were chosen and run concurrently: cuckoo search (CS), flower pollination algorithm (FPA), and genetic algorithm (GA). Positioning optimization results are then simulated and presented via MATLAB for a high-range IoT-LoRa network. A propagation model for forested environments was empirically adjusted using measurements carried out on a university campus, covering LoRa spreading factors (SF) of 8, 9, 10, and 11. Finally, a comparison was drawn between drone positioning simulation results for a theoretical propagation model for UAVs and the model found by the measurements. Full article
(This article belongs to the Topic IOT, Communication and Engineering)
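The core of such empirical propagation modeling is a log-distance path loss (LDPL) fit. A minimal sketch of how an LDPL model predicts RSSI is shown below; the reference loss, path loss exponent, and transmit power are illustrative placeholders, not the fitted campus values.

```python
import numpy as np

def ldpl_rssi(d_m, d0_m=1.0, pl0_db=40.0, n=3.2, p_tx_dbm=14.0, sigma_db=0.0):
    """Log-distance path loss: RSSI(d) = Ptx - PL(d0) - 10*n*log10(d/d0) + X_sigma.
    pl0_db and n are placeholder values, not the paper's fitted parameters."""
    shadowing = np.random.normal(0.0, sigma_db, size=np.shape(d_m)) if sigma_db else 0.0
    return p_tx_dbm - pl0_db - 10.0 * n * np.log10(np.asarray(d_m) / d0_m) + shadowing

distances = np.array([50, 100, 500, 1000, 2000])  # metres from the drone gateway
print(ldpl_rssi(distances))
```

In a coverage optimization loop, an expression like this would score each candidate drone position by counting the users whose predicted RSSI clears the LoRa receiver sensitivity.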
Figure 1. Drone positioning trigonometric variables and schematic.
Figure 2. Path taken in all measurement campaigns at UFPA.
Figure 3. Setup showing the receiving antenna and drone with LoRa gateway.
Figure 4. Path loss models, expressed in RSSI, for (a) SF 10 at h = 60 m and (b) SF 11 at h = 24 m.
Figure 5. Flowchart of the simulation and optimization processes as implemented in MATLAB.
Figure 6. Average outage probability for all simulations: (a) classical path loss model; (b) LDPL model; (c) CI model.
Figure 7. Average spectral efficiency for all simulations: (a) classical path loss model; (b) LDPL model; (c) CI model.
Figure 8. Cell radius estimates for all path loss models.
Figure 9. Drone positioning based on SINR values (see color bar) for SF 8: (a) classical model; (b) LDPL model; (c) CI model.
Figure 10. Drone positioning based on SINR values (see color bar) for SF 11: (a) classical model; (b) LDPL model; (c) CI model.
24 pages, 4736 KiB  
Article
A Novel No-Reference Quality Assessment Metric for Stereoscopic Images with Consideration of Comprehensive 3D Quality Information
by Liquan Shen, Yang Yao, Xianqiu Geng, Ruigang Fang and Dapeng Wu
Sensors 2023, 23(13), 6230; https://doi.org/10.3390/s23136230 - 7 Jul 2023
Cited by 3 | Viewed by 1530
Abstract
Recently, stereoscopic image quality assessment has attracted a lot of attention. However, compared with 2D image quality assessment, it is much more difficult to assess the quality of stereoscopic images due to the lack of understanding of 3D visual perception. This paper proposes a novel no-reference quality assessment metric for stereoscopic images using natural scene statistics with consideration of both the quality of the cyclopean image and 3D visual perceptual information (binocular fusion and binocular rivalry). In the proposed method, not only is the quality of the cyclopean image considered, but binocular rivalry and other 3D visual intrinsic properties are also exploited. Specifically, in order to improve the objective quality of the cyclopean image, features of the cyclopean images in both the spatial domain and the transformed domain are extracted based on the natural scene statistics (NSS) model. Furthermore, to better comprehend the intrinsic properties of the stereoscopic image, the binocular rivalry effect and other 3D visual properties are also considered in the process of feature extraction. Following adaptive feature pruning using principal component analysis, the proposed method achieves improved metric accuracy. The experimental results show that the proposed metric can achieve a good and consistent alignment with subjective assessment of stereoscopic images in comparison with existing methods, with the highest SROCC (0.952) and PLCC (0.962) scores being acquired on the LIVE 3D database Phase I. Full article
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
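As an illustration of spatial-domain NSS features of the kind described above, the sketch below computes mean-subtracted contrast-normalized (MSCN) coefficients of an image and two of the neighboring-pixel statistics (products and differences). It is a simplified stand-in for the paper's full feature set, and the input is random noise rather than a real cyclopean image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image."""
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.maximum(var, 0.0)) + 1.0)

rng = np.random.default_rng(1)
cyclopean = rng.random((128, 128))          # stand-in for a synthesized cyclopean image
coeffs = mscn(cyclopean)
h_product = coeffs[:, :-1] * coeffs[:, 1:]  # horizontal neighbor products (one of 8 orientations)
v_diff = coeffs[:-1, :] - coeffs[1:, :]     # vertical neighbor differences
features = [coeffs.var(), h_product.mean(), v_diff.var()]
print(features)
```

In the full method, distributions of such coefficients are fitted with parametric models and the fitted parameters (not raw moments) serve as the quality features.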
Figure 1. Framework of the proposed NR-SIQA algorithm.
Figure 2. Disparities estimated by different disparity estimation algorithms: (a) SAD-based algorithm, (b) SSIM-based algorithm, (c) Gaussian average SSIM-based algorithm.
Figure 3. Cyclopean images synthesized from distorted stereo-pairs. (a,b) and (d,e) are left and right views of stereo-pairs, respectively. (c,f) are their respective cyclopean images.
Figure 4. Probability density distributions of (a) neighbor difference, (b) neighbor product and (c) vertical gradient magnitude for six natural cyclopean images synthesized from a reference stereo-pair and its five distorted versions.
Figure 5. Probability density distributions of (a) phase congruency, (b) real part of log-Gabor response, (c) log-Gabor phase response, (d) binocular disparity, (e) binocular disparity matching error and (f) binocular disparity consistency computed from a reference stereo-pair and its five distorted versions.
Figure 6. Various pairwise differences computed to quantify neighboring pixel statistical correlation. Neighboring pixel differences are computed along four orientations—horizontal (H), vertical (V), main(on)-diagonal (D1) and secondary(off)-diagonal (D2).
Figure 7. Various pairwise products computed to quantify neighboring pixels’ statistical correlations. Neighboring pixel products are computed along eight orientations—0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135° and 157.5°—at a distance of 2 pixels from the central pixel.
Figure 8. PLCC of the proposed metric with various feature dimensions on LIVE Phase I and Phase II.
Figure 9. The SVR parameters’ (C, γ) selection process on (a) LIVE 3D Database Phase I and (b) Phase II. The number on the level contour denotes the SROCC value of the cross-validation.
32 pages, 10424 KiB  
Article
Spatiotemporal Thermal Variations in Moroccan Cities: A Comparative Analysis
by Ahmed Derdouri, Yuji Murayama and Takehiro Morimoto
Sensors 2023, 23(13), 6229; https://doi.org/10.3390/s23136229 - 7 Jul 2023
Cited by 8 | Viewed by 2232
Abstract
This study examines the Land Surface Temperature (LST) trends in eight key Moroccan cities from 1990 to 2020, emphasizing the influential factors and disparities between coastal and inland areas. Geographically weighted regression (GWR), machine learning (ML) algorithms, namely XGBoost and LightGBM, and SHapley Additive exPlanations (SHAP) methods are utilized. The study observes that urban areas are often cooler due to the presence of urban heat sinks (UHSs), more noticeably in coastal cities. However, LST is seen to increase across all cities due to urbanization and the degradation of vegetation cover. The increase in LST is more pronounced in inland cities surrounded by barren landscapes. Interestingly, XGBoost frequently outperforms LightGBM in the analyses. ML models and SHAP demonstrate efficacy in deciphering urban heat dynamics despite data quality and model tuning challenges. The study’s results highlight the crucial role of ongoing urbanization, topography, and the existence of water bodies and vegetation in driving LST dynamics. These findings underscore the importance of sustainable urban planning and vegetation cover in mitigating urban heat, thus having significant policy implications. Despite its contributions, this study acknowledges certain limitations, primarily the use of data from only four discrete years, thereby overlooking inter-annual, seasonal, and diurnal variations in LST dynamics. Full article
(This article belongs to the Section Remote Sensors)
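A minimal sketch of the XGBoost-plus-SHAP workflow described above is given below, using synthetic stand-ins for the spectral indices and a toy LST response; the column names mirror the study's variables, but none of the data or hyperparameters are from the paper (requires the xgboost and shap packages).

```python
import numpy as np
import xgboost as xgb
import shap

# Hypothetical LST drivers; names mirror the study's indices but data are synthetic.
rng = np.random.default_rng(42)
X = rng.random((500, 4))
cols = ["NDVI", "NDBI", "NDWI", "DEM"]
y = 30 + 8 * X[:, 1] - 6 * X[:, 0] - 3 * X[:, 2] + rng.normal(0, 0.5, 500)  # toy LST (°C)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)  # global SHAP importance per driver
print(dict(zip(cols, importance.round(3))))
```

The mean absolute SHAP value per feature is the global importance measure behind plots like Figures 8 and 9 below; per-sample SHAP values drive the interaction and dependence plots.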
Figure 1. Map of Morocco Köppen-Geiger climate classification [14] highlighting the target cities and their geographical settings: Casablanca, Tangier, Agadir, Fes, Marrakech, Oujda, Laayoune, Errachidia. Landsat 8 and 9 images produced by the U.S. Geological Survey. Industrial zones and transportation networks data © OpenStreetMap contributors [15].
Figure 2. Methodology flowchart.
Figure 3. Temporal evolution of LST during summer (June–August) for coastal Moroccan cities (Casablanca, Tangier, and Agadir) from 1990 to 2020. The color gradient represents LST intervals from 15 to 60 °C, with blue indicating lower temperatures, yellow medium temperatures, and red higher temperatures.
Figure 4. Temporal evolution of LST during summer (June–August) for inland Moroccan cities (Fes, Marrakech, Oujda, Laayoune, and Errachidia) from 1990 to 2020. The color gradient represents LST intervals from 15 to 60 °C, with blue indicating lower temperatures, yellow medium temperatures, and red higher temperatures.
Figure 5. Spectral indices (NDVI, NDBI, NDWI) plotted against LST over time (1990–2020) for the coastal cities Casablanca, Tangier, and Agadir. The color of the dots represents LST using the jet colormap, with darker blue indicating lower temperatures and reddish colors indicating higher temperatures. Note: data for Agadir in 1990 is unavailable; hence, data from 1995 is used for initial representation.
Figure 6. Spectral indices (NDVI, NDBI, NDWI) plotted against LST over time (1990–2020) for the inland cities Fes, Marrakech, Oujda, Laayoune, and Errachidia. The color of the dots represents LST using the jet colormap, with darker blue indicating lower temperatures and reddish colors indicating higher temperatures.
Figure 7. Bar plots illustrating temporal trends of GWR coefficients for spectral indices and geographical features across the target Moroccan cities for the years 1990, 2000, 2010, and 2020. The table below shows corresponding adjusted R-squared values, indicating model fit for each city and year. Note that DEM, ASPECT, SLOPE, and HILLSHADE are included in the legend for completeness, even though they may not be discernible in the figure due to their values.
Figure 8. SHAP-value-based importance of LST driving factors in coastal Moroccan cities.
Figure 9. SHAP-value-based importance of LST driving factors in inland Moroccan cities.
Figure 10. Heatmap plots illustrating the interactions of different features influencing LST across coastal Moroccan cities from 1990 to 2020. The intensity of color represents the interaction score, with higher values indicating stronger interactions. Notable interactions are discussed in the text.
Figure 11. Heatmap plots illustrating the interactions of different features influencing LST across inland Moroccan cities from 1990 to 2020. The intensity of color represents the interaction score, with higher values indicating stronger interactions. Notable interactions are discussed in the text.
Figure 12. Dependence plots illustrating the most notable interactions between different factors influencing LST across coastal Moroccan cities from 1990 to 2020.
Figure 13. Dependence plots illustrating the most notable interactions between different factors influencing LST across inland Moroccan cities from 1990 to 2020.
16 pages, 5047 KiB  
Article
RFE-UNet: Remote Feature Exploration with Local Learning for Medical Image Segmentation
by Xiuxian Zhong, Lianghui Xu, Chaoqun Li, Lijing An and Liejun Wang
Sensors 2023, 23(13), 6228; https://doi.org/10.3390/s23136228 - 7 Jul 2023
Cited by 4 | Viewed by 1796
Abstract
Although convolutional neural networks (CNNs) have produced great achievements in various fields, many scholars are still exploring better network models, since CNNs have an inherent limitation: the long-range modeling ability of convolutional kernels is restricted. On the contrary, the transformer has been applied by many scholars to the field of vision, and although it has a strong global modeling capability, its close-range modeling capability is mediocre. The foreground information to be segmented in medical images is usually clustered in a small interval of the image, while the distance between different categories of foreground information is uncertain. Therefore, in order to obtain an accurate medical segmentation prediction map, the network should not only have a strong learning ability for local details, but also a certain long-range modeling ability. To address these problems, a remote feature exploration (RFE) module is proposed in this paper. The most important feature of this module is that remote elements can be used to assist in the generation of local features. In addition, in order to better verify the feasibility of the innovation in this paper, a new multi-organ segmentation dataset (MOD) was manually created. While both the MOD and Synapse datasets label eight categories of organs, there are some images in the Synapse dataset that label only a few categories of organs. The proposed method achieved 79.77% and 75.12% DSC on the Synapse and MOD datasets, respectively. Meanwhile, the HD95 (mm) scores were 21.75 on Synapse and 7.43 on the MOD dataset. Full article
(This article belongs to the Section Sensing and Imaging)
Figure 1. Overview of the RFE-UNet.
Figure 2. ResNet layers.
Figure 3. For the feature graph F, N(0,0) … N(3,3) represent the specific element values at given points. Different letters represent different feature blocks.
Figure 4. With feature graph A as the base unit, remote elements B1, C1, and D1 are used to assist A in generating a new feature graph.
Figure 5. Using A as an example, detailed flow diagram of how remote elements assist local feature generation.
Figure 6. Results of qualitative experiments on the Synapse dataset.
Figure 7. Results of qualitative experiments on the MOD dataset.
27 pages, 1600 KiB  
Article
Evaluating the Performance of Pre-Trained Convolutional Neural Network for Audio Classification on Embedded Systems for Anomaly Detection in Smart Cities
by Mimoun Lamrini, Mohamed Yassin Chkouri and Abdellah Touhafi
Sensors 2023, 23(13), 6227; https://doi.org/10.3390/s23136227 - 7 Jul 2023
Cited by 8 | Viewed by 2906
Abstract
Environmental Sound Recognition (ESR) plays a crucial role in smart cities by accurately categorizing audio using well-trained Machine Learning (ML) classifiers. This application is particularly valuable for cities that analyze environmental sounds to gain insights and data. However, deploying deep learning (DL) models on resource-constrained embedded devices, such as Raspberry Pi (RPi) or Tensor Processing Units (TPUs), poses challenges. In this work, an evaluation of an existing pre-trained model for deployment on Raspberry Pi (RPi) and TPU platforms, in addition to a laptop, is proposed. We explored the impact of the retraining parameters and compared the sound classification performance across three datasets: ESC-10, BDLib, and Urban Sound. Our results demonstrate the effectiveness of the pre-trained model for transfer learning in embedded systems. On laptops, the accuracy rates reached 96.6% for ESC-10, 100% for BDLib, and 99% for Urban Sound. On RPi, the accuracy rates were 96.4% for ESC-10, 100% for BDLib, and 95.3% for Urban Sound, while on RPi with Coral TPU, the rates were 95.7% for ESC-10, 100% for BDLib, and 95.4% for Urban Sound. Utilizing pre-trained models reduces the computational requirements, enabling faster inference. Leveraging pre-trained models in embedded systems accelerates the development, deployment, and performance of various real-time applications. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
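Deployment paths like the one evaluated above typically rely on TensorFlow Lite post-training quantization, with full-integer quantization required before compiling for the Coral Edge TPU. The sketch below shows that standard conversion flow on a small stand-in classifier head over fixed-size audio embeddings; the architecture and calibration data are placeholders, not the paper's network.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in classifier head on fixed-size audio embeddings (e.g., 1024-d YAMNet output).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g., ESC-10 classes
])

def representative_dataset():
    # Calibration samples for full-integer quantization; synthetic here.
    for _ in range(100):
        yield [np.random.rand(1, 1024).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # enable PTQ
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # full-integer I/O, needed for Edge TPU compilation
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
open("classifier_int8.tflite", "wb").write(tflite_model)
```

The resulting `.tflite` file would then be passed through the Edge TPU compiler before running on the Coral accelerator.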
Figure 1. Steps involved in the process of audio classification.
Figure 2. The left panel displays a node example in a neural network. The right panel shows an example of an ANN with a single output. The middle nodes within the hidden layers and the last node within the output layer represent neurons.
Figure 3. The process of conducting inferences using YAMNet, derived from a modified source.
Figure 4. Evaluation process outlining the steps executed to ensure an equitable comparison of the DL-based approach.
Figure 5. A graphical illustration depicting the use of the windowing method to gather seven audio segments from a four-second audio file for training purposes.
Figure 6. Tools for embedding deep learning models on diverse embedded devices.
Figure 7. On the left, the general-purpose embedded platform, represented by the RPi 4 B; on the right, the TPU-based platform, represented by the USB Coral TPU.
Figure 8. The process of embedding a model onto the Coral Dev Board can be divided into two fundamental stages: the conventional training methodology, followed by exporting the trained model to the board.
Figure 9. Evaluation of accuracy (F1 micro) and inference time for all datasets using our proposed models.
Figure 10. Evaluation of accuracy (F1 micro) and inference time for all datasets using our proposed models with post-training quantization (PTQ), without quantization.
Figure 11. Evaluation of accuracy (F1 micro) and inference time for all datasets using our proposed models with post-training quantization (PTQ), with quantization.
Figure 12. Evaluation of accuracy (F1 micro) and inference time for all datasets using our proposed models with quantization-aware training (QAT).
Figure 13. Evaluation of accuracy (F1 micro) and inference time for all datasets using our proposed two models with Coral TPU and post-training quantization (PTQ).
Figure 14. Comparison of accuracy (F1 micro score) of the proposed models between PC and RPi.
Figure 15. Comparison of classification time of the proposed models between PC and RPi.
Figure 16. Comparison of accuracy (F1 micro score) of the proposed models between RPi and RPi with Coral TPU.
Figure 17. Comparison of classification time of the proposed models between RPi and RPi with Coral TPU.
16 pages, 1414 KiB  
Article
Gaze Estimation Based on Convolutional Structure and Sliding Window-Based Attention Mechanism
by Yujie Li, Jiahui Chen, Jiaxin Ma, Xiwen Wang and Wei Zhang
Sensors 2023, 23(13), 6226; https://doi.org/10.3390/s23136226 - 7 Jul 2023
Cited by 2 | Viewed by 2342
Abstract
The direction of human gaze is an important indicator of human behavior, reflecting the level of attention and cognitive state towards various visual stimuli in the environment. Convolutional neural networks have achieved good performance in gaze estimation tasks, but their global modeling capability is limited, making it difficult to further improve prediction performance. In recent years, transformer models have been introduced for gaze estimation and have achieved state-of-the-art performance. However, their slicing-and-mapping mechanism for processing local image patches can compromise local spatial information. Moreover, the single down-sampling rate and fixed-size tokens are not suitable for multiscale feature learning in gaze estimation tasks. To overcome these limitations, this study introduces a Swin Transformer for gaze estimation and designs two network architectures: a pure Swin Transformer gaze estimation model (SwinT-GE) and a hybrid gaze estimation model that combines convolutional structures with SwinT-GE (Res-Swin-GE). SwinT-GE uses the tiny version of the Swin Transformer for gaze estimation. Res-Swin-GE replaces the slicing-and-mapping mechanism of SwinT-GE with convolutional structures. Experimental results demonstrate that Res-Swin-GE significantly outperforms SwinT-GE, exhibiting strong competitiveness on the MpiiFaceGaze dataset and achieving a 7.5% performance improvement over existing state-of-the-art methods on the Eyediap dataset. Full article
(This article belongs to the Section Sensing and Imaging)
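To illustrate the idea of replacing the slicing-and-mapping patch embedding with convolutions, the sketch below shows a generic convolutional stem that produces the same 56 × 56 token grid as a 4 × 4 patch embedding for a 224 × 224 input. It is a schematic stand-in under assumed dimensions, not the ResNet-based design used in Res-Swin-GE.

```python
import torch
import torch.nn as nn

class ConvStem(nn.Module):
    """Convolutional front end replacing slicing-and-mapping patch embedding:
    overlapping strided convolutions preserve local spatial detail before
    the windowed-attention stages consume the token sequence."""
    def __init__(self, in_ch: int = 3, embed_dim: int = 96):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):                    # (B, 3, 224, 224)
        x = self.stem(x)                     # (B, 96, 56, 56) — same grid as a 4x4 patch embed
        return x.flatten(2).transpose(1, 2)  # (B, 3136, 96) token sequence

tokens = ConvStem()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 3136, 96])
```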
Figure 1. Basic methods for gaze estimation: (a) Appearance-based gaze estimation. (b) Traditional appearance-based gaze estimation methods, where * represents a specific eye image. (c) Appearance-based deep learning gaze estimation methods.
Figure 2. SwinT-GE network structure for gaze estimation.
Figure 3. Network structure of Res-Swin-GE.
Figure 4. (a) Example face images in MpiiFaceGaze. (b) Example face images in Eyediap.
Figure 5. Data distribution of subjects’ gaze directions in the two public datasets: (a) MpiiFaceGaze [38]. (b) Eyediap [39].
Figure 6. Angular error distribution of prediction results for the two network structures under different gaze directions on the two public datasets. (a,c) Angular error distribution of SwinT-GE and Res-Swin-GE on the MpiiFaceGaze [38] dataset. (b,d) Angular error distribution of SwinT-GE and Res-Swin-GE on the Eyediap [39] dataset.
Figure 7. Improved prediction accuracy of Res-Swin-GE compared to SwinT-GE under different gaze directions on the two public datasets: (a) MpiiFaceGaze [38]. (b) Eyediap [39].
Figure 8. Examples of feature maps extracted by different modules from images in the MpiiFaceGaze [38] dataset: (a) Original input image. (b) Feature maps extracted by the slicing-and-mapping mechanism. (c) Feature maps extracted by shallow ResNet Block convolutions. (d) Feature maps extracted by deep ResNet Block convolutions.
Figure 9. Examples of feature maps extracted by different modules from images in the Eyediap [39] dataset: (a) Original input image. (b) Feature maps extracted by the slicing-and-mapping mechanism. (c) Feature maps extracted by shallow ResNet Block convolutions. (d) Feature maps extracted by deep ResNet Block convolutions.
Figure 10. Angular errors of Res-Swin-GE compared to different networks on the two public datasets.
Figure 11. Angular errors of Res-Swin-GE, ResNet18-Pure, and SwinT-GE on the two publicly available datasets.
18 pages, 14707 KiB  
Article
A Comprehensive Characterization of the TI-LGAD Technology
by Matias Senger, Anna Macchiolo, Ben Kilminster, Giovanni Paternoster, Matteo Centis Vignali and Giacomo Borghi
Sensors 2023, 23(13), 6225; https://doi.org/10.3390/s23136225 - 7 Jul 2023
Cited by 5 | Viewed by 1861
Abstract
Pixelated low-gain avalanche diodes (LGADs) can provide both precision spatial and temporal measurements for charged particle detection; however, electrical termination between the pixels yields a no-gain region, such that the active area or fill factor is not sufficient for small pixel sizes. Trench-isolated LGADs (TI-LGADs) are a strong candidate for solving the fill-factor problem: in the TI-LGAD process, the p-stop termination structure typical of LGADs is replaced by isolating trenches etched in the silicon itself. This modification substantially reduces the size of the no-gain region, thus enabling the implementation of small pixels with an adequate fill factor value. In this article, a systematic characterization of the TI-RD50 production, the first of its kind entirely dedicated to the TI-LGAD technology, is presented. Designs are ranked according to their measured inter-pixel distance, and the time resolution is compared against the regular LGAD technology. Full article
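Two standard relations underpin time resolution measurements of this kind: the electronics jitter scales as the voltage noise over the signal slew rate at the discrimination threshold, and in a two-device coincidence the individual resolutions add in quadrature. A small numerical sketch (toy numbers only, not measured TI-RD50 values):

```python
import numpy as np

def jitter_sigma_t(noise_rms_mv, slew_rate_mv_per_ns):
    """Electronics jitter estimate: sigma_t ≈ sigma_noise / (dV/dt) at threshold."""
    return noise_rms_mv / slew_rate_mv_per_ns  # result in ns

# Toy numbers for illustration.
print(jitter_sigma_t(2.0, 50.0) * 1e3, "ps")  # 40 ps

# Two-device coincidence: the measured spread combines both resolutions in
# quadrature, so with a known reference resolution sigma_ref the DUT gives:
sigma_meas_ps, sigma_ref_ps = 52.0, 35.0
sigma_dut_ps = np.sqrt(sigma_meas_ps**2 - sigma_ref_ps**2)
print(round(sigma_dut_ps, 1), "ps")
```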
Graphical abstract
Figure 1. Schematic representation of the different design parameters explored in the TI-RD50 production. The left panel shows a detailed side view of the trench area and illustrates the pixel border as well as the trench depth. The right panel displays a top view illustrating the number of trenches and the contact type.
Figure 2. Pictures of three devices mounted on the readout boards, showing the three different layouts used during this work.
Figure 3. Schematic representation of the beta setup. A picture of this setup is shown in Figure 4.
Figure 4. Picture of the beta setup utilized in this work. A schematic representation of this setup is shown in Figure 3.
Figure 5. Pictures of the setup at the test beam at CERN.
Figure 6. Example of an LGAD signal analyzed with the developed software.
Figure 7. Microscope picture of a 1×2 device, showing the line along which the laser scan was performed in the TCT setup. The wire bonds can be seen landing on the metallization of the left and right pixels, respectively, as well as on the guard ring. The darker brown region is exposed silicon with no metallization, through which the infrared laser was shined onto the sample. The inset shows a detail of the inter-pixel region; the two darker vertical lines that go through the metallization opening are the trenches (in this case, the device has two trenches).
Figure 8. Example of an IPD measurement.
Figure 9. Jitter calculation example for the determination of the time resolution. This example is from our beta setup. Similar plots are obtained for a single position in the TCT setup, as well as in the test beam setup.
Figure 10. An example showing the Landau distribution of the collected charge in our beta setup. A Langauss fit (a Landau convoluted with a Gaussian) is shown, together with the Landau component. The units of charge are before the conversion to coulombs, i.e., before dividing by the transimpedance of the system.
Figure 11. Inter-pixel distance as a function of the applied bias voltage for all the designs tested (non-irradiated devices).
Figure 12. TI-LGAD designs ranked according to increasing IPD (non-irradiated devices). These values were obtained at −20 °C with a bias voltage of −200 V (see Figure 11, the vertical line labeled working voltage).
Figure 13. Time resolution uniformity for a specific scan with the TCT setup.
Figure 14. Time resolution and collected charge as a function of the bias voltage, measured with the beta setup.
Figure 15. Distribution of events in an amplitude vs. peak time plane, showing how event selection with a threshold on the amplitude leads to no efficiency loss for non-irradiated devices, but not in the case of irradiated devices.
Figure 16. Time resolution as a function of collected charge obtained in our beta setup.
Figure 17. Time resolution and collected charge as a function of the bias voltage, measured with the test beam setup.
Figure 18. Time resolution as a function of collected charge obtained in the test beam setup.
34 pages, 4020 KiB  
Article
A Novel Hybrid Harris Hawk-Arithmetic Optimization Algorithm for Industrial Wireless Mesh Networks
by P. Arun Mozhi Devan, Rosdiazli Ibrahim, Madiah Omar, Kishore Bingi and Hakim Abdulrab
Sensors 2023, 23(13), 6224; https://doi.org/10.3390/s23136224 - 7 Jul 2023
Cited by 9 | Viewed by 1750
Abstract
A novel hybrid Harris Hawk-Arithmetic Optimization Algorithm (HHAOA) for optimizing Industrial Wireless Mesh Networks (WMNs) and real-time pressure process control was proposed in this research article. The proposed algorithm draws inspiration from Harris Hawk Optimization and the Arithmetic Optimization Algorithm to address the position relocation problem, premature convergence, and the poor accuracy faced by existing techniques. The HHAOA algorithm was evaluated on various benchmark functions and compared with other optimization algorithms, namely the Arithmetic Optimization Algorithm, Moth Flame Optimization, the Sine Cosine Algorithm, Grey Wolf Optimization, and Harris Hawk Optimization. The proposed algorithm was also applied to a real-world industrial wireless mesh network simulation and to experimentation on a real-time pressure process control system. All the results demonstrate that the HHAOA algorithm outperforms the other algorithms regarding mean, standard deviation, convergence speed, accuracy, and robustness, and improves client router connectivity and network congestion with a 31.7% reduction in Wireless Mesh Network routers. In the real-time pressure process, the HHAOA-optimized Fractional-order Predictive PI (FOPPI) controller produced a robust and smoother control signal, leading to minimal peak overshoot and, on average, a 53.244% faster settling time. Based on the results, the algorithm enhanced the efficiency and reliability of industrial wireless networks and real-time pressure process control systems, which are critical for industrial automation and control applications. Full article
(This article belongs to the Special Issue Wireless Communication Systems and Sensor Networks)
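As a rough illustration of how a hybrid of the two metaheuristics can work, the sketch below alternates an AOA-style multiplicative exploration step with an HHO-style besiege move toward the current best solution, on a toy objective. It is deliberately simplified; the published HHAOA combines the operators differently and includes additional phases.

```python
import numpy as np

def sphere(x):
    """Toy benchmark objective: f(x) = sum(x_i^2), minimum at the origin."""
    return np.sum(x * x)

def hybrid_hh_aoa(obj, dim=5, pop=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Simplified hybrid sketch, not the paper's exact operator set."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    best = X[np.argmin([obj(x) for x in X])].copy()
    for t in range(iters):
        moa = 0.2 + t * (0.8 / iters)  # AOA math-optimizer-accelerated schedule
        for i in range(pop):
            if rng.random() > moa:     # exploration: AOA-style perturbation around best
                scale = 1e-6 + (1 - t / iters)
                X[i] = best * (1 + scale * (rng.random(dim) - 0.5))
            else:                      # exploitation: HHO-style soft besiege toward best
                E = 2 * rng.random() - 1  # escaping energy in [-1, 1]
                X[i] = best - E * np.abs(best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            if obj(X[i]) < obj(best):
                best = X[i].copy()
    return best, obj(best)

print(hybrid_hh_aoa(sphere))
```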
Figure 1. Hierarchy of the proposed HHAOA optimization algorithm.
Figure 2. Flowchart of the proposed HHAOA technique.
Figure 3. Real-time schematic of the pressure process plant.
Figure 4. P&I diagram of the pressure process plant.
Figure 5. FOPPI controller tuning using HHAOA.
Figure 6. Benchmark function search space plots.
Figure 7. Convergence performance on all the benchmark functions.
Figure 8. WMN convergence for the different algorithms.
Figure 9. WMN network connectivity and coverage area for the various algorithms: (a) initial network; (b) AOA-optimized WMN; (c) MFO-optimized WMN; (d) SCA-optimized WMN; (e) GWO-optimized WMN; (f) WOA-optimized WMN; (g) HHO-optimized WMN; (h) HHAOA-optimized WMN.
Figure 10. Set-point tracking and disturbance rejection analysis for the optimal FOPPI controller.
Figure 11. Zoomed view of Figure 10: (A) initial set-point tracking; (B) disturbance rejection performance; (C) control signal during the initial set-point; (D) control signal during disturbance rejection.
15 pages, 4937 KiB  
Article
Identification and Classification of Human Body Exercises on Smart Textile Bands by Combining Decision Tree and Convolutional Neural Networks
by Bonhak Koo, Ngoc Tram Nguyen and Jooyong Kim
Sensors 2023, 23(13), 6223; https://doi.org/10.3390/s23136223 - 7 Jul 2023
Cited by 4 | Viewed by 1894
Abstract
In recent years, human activity recognition (HAR) has gained significant interest from researchers in the sports and fitness industries. In this study, the authors have proposed a cascaded method with two classifying stages to classify fitness exercises, utilizing a decision tree as the first stage and a one-dimensional convolutional neural network as the second stage. The data acquisition was carried out by five participants performing exercises while wearing an inertial measurement unit (IMU) sensor attached to a wristband on their wrists. However, only data acquired along the z-axis of the IMU accelerometer was used as input to train and test the proposed model, to simplify the model and optimize the training time while still achieving good performance. To examine the efficiency of the proposed method, the authors compared the performance of the cascaded model and the conventional 1D-CNN model. The obtained results showed an overall improvement in the accuracy of exercise classification by the proposed model, which was approximately 92%, compared to 82.4% for the 1D-CNN model. In addition, the authors suggested and evaluated two methods to optimize the clustering outcome of the first stage in the cascaded model. This research demonstrates that the proposed model, with advantages in terms of training time and computational cost, is able to classify fitness workouts with high performance. Therefore, with further development, it can be applied in various real-time HAR applications. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
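A minimal sketch of the cascade's inference logic follows: a decision tree routes each z-axis window to a coarse group, and a small group-specific 1D-CNN resolves the exercise within that group. The filter counts echo the architecture in the figures below, but the routing features, group count, and data here are placeholders, not the paper's exact design.

```python
import numpy as np
import tensorflow as tf
from sklearn.tree import DecisionTreeClassifier

def build_cnn(n_classes, length=200):
    """Tiny per-group 1D-CNN; layer sizes are placeholders."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(length, 1)),
        tf.keras.layers.Conv1D(23, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(46, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

rng = np.random.default_rng(0)
z_axis = rng.standard_normal((120, 200))        # z-axis acceleration windows (synthetic)
groups = rng.integers(0, 3, 120)                # coarse posture groups (stage-1 target)
summary = np.c_[z_axis.mean(1), z_axis.std(1)]  # simple routing features for the tree

stage1 = DecisionTreeClassifier(max_depth=3).fit(summary, groups)
stage2 = {g: build_cnn(n_classes=5) for g in range(3)}  # one CNN per group (untrained here)

def classify(window):
    g = stage1.predict([[window.mean(), window.std()]])[0]
    probs = stage2[g].predict(window[None, :, None], verbose=0)
    return g, int(np.argmax(probs))

print(classify(z_axis[0]))  # (group, within-group exercise index)
```

The appeal of the cascade is that each second-stage network only has to separate a handful of similar exercises, keeping the models small enough for wearable deployment.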
Figure 1. Three axes of the accelerometer on the inertial measurement unit sensor.
Figure 2. The IMU sensor attachment location on the participant’s wrist.
Figure 3. Direction and amplitude of the acceleration vector along the z-axis when (a) the z-axis points toward the ground and there is no movement, (b) the z-axis points toward the ground and the movement is in the opposite direction, (c) the z-axis direction forms an angle of θ ≠ 0 with gravity and there is no movement.
Figure 4. The 14 fitness exercises chosen to be classified: (a) Bench press, (b) Incline bench press, (c) Dumbbell shoulder press, (d) Dumbbell triceps extension, (e) Dumbbell kickback, (f) Dumbbell front raise, (g) Lat pull down, (h) Straight arm lat pull down, (i) Deadlift, (j) Dumbbell bent row, (k) One-arm dumbbell row, (l) EZ-bar curls, (m) Machine preacher curl, (n) Seated dumbbell lateral raise.
Figure 5. The direction of the z-axis of the accelerometer in four types of user forearm positions: (a) the z-axis has the same direction as gravity, (b) the z-axis is perpendicular to gravity, (c) the z-axis forms an angle with gravity larger than 90° but less than 180°, (d) the z-axis forms an angle of 180° with gravity. The red arrow indicates the direction of the z-axis.
Figure 6. Block diagram of the proposed model with two classifying stages.
Figure 7. The 1D-CNN architecture—the second stage of the proposed model. The input is a single time series. The units of the output correspond to the number of classes in each group. Block 1 and Block 2 have the same layers and activation function, but Block 1 has 23 filters in its Conv1D layer while Block 2 has 46.
Figure 8. Training accuracy of the 1D convolutional neural network over 200 epochs with 14 classes.
Figure 9. Accuracy of the 1D-CNN model on the test data with 14 classes. The order of the exercises in this confusion matrix is the same as in Figure 4. The darker the blue, the higher the accuracy; the darker the red, the lower the accuracy.
Figure 10. Classification results of the first stage, with the 14 exercises divided into three groups. The darker the blue, the higher the accuracy; the darker the red, the lower the accuracy.
Figure 11. Classification results of the second stage for the exercises in Group 1 (a), Group 2 (b), and Group 3 (c) formed by Method 1. The order of the exercises in these confusion matrices is the same as in Figure 4. The darker the blue, the higher the accuracy; the darker the red, the lower the accuracy.
Figure 12. Classification results of the first stage, with the 14 exercises divided into four groups. The darker the blue, the higher the accuracy; the darker the red, the lower the accuracy.
Figure 13. Classification results of the second stage for the exercises in Group 2 (a) and Group 3 (b) formed by Method 2. The order of the exercises in this confusion matrix is the same as in Figure 4. The darker the blue, the higher the accuracy; the darker the red, the lower the accuracy.
17 pages, 1673 KiB  
Review
Acoustic Lung Imaging Utilized in Continual Assessment of Patients with Obstructed Airway: A Systematic Review
by Chang-Sheng Lee, Minghui Li, Yaolong Lou, Qammer H. Abbasi and Muhammad Ali Imran
Sensors 2023, 23(13), 6222; https://doi.org/10.3390/s23136222 - 7 Jul 2023
Cited by 4 | Viewed by 1590
Abstract
Smart respiratory therapy is enabled by continual assessment of lung functions. This systematic review provides an overview of the suitability of equipment-to-patient acoustic imaging for the continual assessment of lung conditions. The literature search was conducted using Scopus, PubMed, ScienceDirect, Web of Science, SciELO Preprints, and Google Scholar. Fifteen studies remained for additional examination after the screening process. Two imaging modalities, lung ultrasound (LUS) and vibration response imaging (VRI), were identified. The most common outcome, obtained from eleven studies, was positive observations of changes to the geographical lung area, sound energy, or both, while positive observation of lung consolidation was reported in the remaining four studies. Two different modalities of lung assessment were used in eight studies, with one study comparing VRI against chest X-ray, one study comparing VRI with LUS, two studies comparing LUS to chest X-ray, and four studies comparing LUS with computed tomography. Our findings indicate that the acoustic imaging approach can assess and provide regional information on lung function. No technology has been shown to be better than another for measuring obstructed airways; hence, more research is required on acoustic imaging for detecting obstructed airways regionally in the application of enabling smart therapy. Full article
(This article belongs to the Special Issue Biosensing Technologies: Current Achievements and Future Challenges)
Figure 1. PRISMA 2020 flow of information for study selection and inclusion.
Figure 2. Conceptual working principle of the acoustic imaging systems: (a) lung ultrasound and (b) vibration response imaging.
23 pages, 21159 KiB  
Article
Research on the Forward Solving Method of Defect Leakage Signal Based on the Non-Uniform Magnetic Charge Model
by Pengfei Gao, Hao Geng, Lijian Yang and Yuming Su
Sensors 2023, 23(13), 6221; https://doi.org/10.3390/s23136221 - 7 Jul 2023
Cited by 1 | Viewed by 1365
Abstract
Pipeline magnetic flux leakage (MFL) inspection is widely used to evaluate material defects because it requires no coupling agent and is easy to implement. The quantification of defect size is an important part of magnetic flux leakage testing. Defects of different geometrical dimensions produce signal waveforms with different characteristics after excitation. The key to achieving defect quantification is an accurate description of the relationship between the magnetic leakage signal and the defect size. In this paper, a calculation model for solving the defect leakage field based on a non-uniform magnetic charge distribution of magnetic dipoles is developed. Building on the traditional uniformly distributed magnetic charge model, the magnetic charge density distribution model is improved: considering the variation of magnetic charge density with depth, the triaxial signal characteristics of the defect are obtained by vector synthesis. An excitation pulling experiment was designed in parallel, and the leakage field distribution of rectangular defects with different geometries was analyzed. The experimental results show that a change in defect size affects the area of the defect leakage field distribution, and the greater the defect's length and width, the stronger the effect on the leakage field distribution. The solution model is consistent with the experimentally obtained leakage signal distribution law, and it provides practical guidance for improving the quality of defect evaluation. Full article
(This article belongs to the Section Physical Sensors)
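The dipole picture behind this abstract can be made concrete. The following Python sketch (ours, not the authors' code) models a rectangular surface slot whose walls carry magnetic charge and sums the 2D line-charge fields over depth; a constant density reproduces the classical uniform model, while a depth-dependent density sigma(z) gives a non-uniform variant in the spirit of the paper. All dimensions and the decay profile are illustrative assumptions.

```python
import numpy as np

def leakage_field(x, lift_off, half_len, depth, sigma, n_z=400):
    """2D magnetic charge model of a rectangular slot.

    The slot walls at x = +/- half_len (0 <= z <= depth below the surface)
    carry charge density sigma(z), with opposite signs on the two walls.
    Each element acts as a 2D line charge with field ~ r_hat / (2*pi*r);
    the axial (Hx) and radial (Hy) components on a scan line at height
    `lift_off` are obtained by summing the elements over depth.
    """
    z = np.linspace(0.0, depth, n_z)           # depth samples along the walls
    dz = z[1] - z[0]
    s = sigma(z)[None, :]                      # charge density at each depth
    x = np.atleast_1d(x)[:, None]              # scan positions, shape (nx, 1)
    dy = lift_off + z[None, :]                 # vertical distance to elements

    r2_pos = (x - half_len) ** 2 + dy ** 2     # squared distance to + wall
    r2_neg = (x + half_len) ** 2 + dy ** 2     # squared distance to - wall

    hx = s / (2 * np.pi) * ((x - half_len) / r2_pos - (x + half_len) / r2_neg)
    hy = s / (2 * np.pi) * (dy / r2_pos - dy / r2_neg)
    return hx.sum(axis=1) * dz, hy.sum(axis=1) * dz

# Illustrative comparison: uniform vs. depth-decaying charge density.
x_scan = np.linspace(-20e-3, 20e-3, 401)       # scan line, metres
uniform = lambda z: np.ones_like(z)
decaying = lambda z: 1.0 - 0.8 * z / z.max()   # hypothetical non-uniform profile

hx_u, _ = leakage_field(x_scan, 1e-3, 3e-3, 2.4e-3, uniform)
hx_n, _ = leakage_field(x_scan, 1e-3, 3e-3, 2.4e-3, decaying)
print(f"peak |axial| amplitude: uniform={np.abs(hx_u).max():.3e}, "
      f"non-uniform={np.abs(hx_n).max():.3e}")
```

Reducing the charge density toward the defect bottom lowers the predicted signal amplitude relative to the uniform model, which is the qualitative correction the paper's non-uniform model introduces.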
Show Figures
Figure 1: (a) Schematic diagram of the magnetic field line propagation path when there is no defect; (b) the case with a defect.
Figure 2: Schematic diagram of a magnetic dipole line.
Figure 3: Schematic diagram of a three-dimensional magnetic dipole surface.
Figure 4: Schematic diagram of the magnetic charge distribution.
Figure 5: (a–d) The trend of the magnetic charge distribution on the plane as the plane spacing gradually increases.
Figure 6: Results of the improved magnetic charge model calculation, where the black line shows the radial leakage component and the red line shows the axial leakage component.
Figure 7: Schematic of the leakage field generated under the improved magnetic charge model when defects are present.
Figure 8: Radial distribution characteristics of the leakage field in 3D space as the defect length changes. (a–e) correspond to the energy distribution of the three-dimensional radial leakage signal for defects 4 mm, 6 mm, 8 mm, 10 mm, and 12 mm in length, in that order.
Figure 9: Axial distribution characteristics of the leakage field in 3D space as the defect length changes. (a–e) correspond to the energy distribution of the three-dimensional axial leakage signal for defects 4 mm, 6 mm, 8 mm, 10 mm, and 12 mm in length, in that order.
Figure 10: Projected images of the 3D leakage field on the ground for the maximum and minimum lengths. (a,b) show the projections of the radial leakage component, and (c,d) show the projections of the axial leakage component.
Figure 11: Variation of the leakage field distribution along the center intercept line for defects of different lengths. (a) Radial leakage component; (b) axial leakage component. The peaks and troughs of the radial and axial components are shown in the blue text boxes.
Figure 12: Fitting curves of different lengths against the axial-component leakage field amplitude.
Figure 13: Attenuation amplitude diagram.
Figure 14: Three-dimensional defect leakage field distribution of the radial component as the width increases. (a–e) correspond to the three-dimensional spatial distribution of the radial leakage field for widths of 4 mm, 6 mm, 8 mm, 10 mm, and 12 mm, in that order.
Figure 15: (a–e) correspond to the three-dimensional spatial distribution of the axial leakage field for widths of 4 mm, 6 mm, 8 mm, 10 mm, and 12 mm, in that order.
Figure 16: Projected images of the 3D leakage field on the ground for the maximum and minimum widths. (a,b) show the projections of the radial leakage component, and (c,d) show the projections of the axial leakage component.
Figure 17: Variation of the leakage field distribution along the center intercept line for defects of different widths. (a) Radial leakage component; (b) axial leakage component.
Figure 18: Fitting curves of different widths against the axial-component leakage field amplitude.
Figure 19: Rate of increase of width and defect amplitude at different lengths.
Figure 20: Three-dimensional defect leakage field distribution of the radial component as the depth increases. (a–e) show the variation of the radial 3D leakage field at depths of 1.6 mm, 2.4 mm, 3.2 mm, 4 mm, and 4.8 mm, respectively.
Figure 21: Three-dimensional defect leakage field distribution of the axial component as the depth increases. (a–e) show the variation of the axial 3D leakage field at depths of 1.6 mm, 2.4 mm, 3.2 mm, 4 mm, and 4.8 mm, respectively.
Figure 22: Variation of the leakage field distribution along the center intercept line for defects of different depths. (a) Radial leakage component; (b) axial leakage component.
Figure 23: Fitting curves of different depths against the axial-component leakage field amplitude.
Figure 24: Diagram of the increase.
Figure 25: (a) Schematic diagram of the experimental platform; (b) schematic diagram of the defective specimen; (c) axial signal of the detected defect leakage.
Figure 26: (a) Radial leakage component; (b) axial leakage component.
Figure 27: (a) Comparison between the experimental data and the calculated radial signal amplitude; (b) comparison of the axial signal amplitude.
Figure 28: (a,b) Comparison of the conventional uniform magnetic charge model and the improved non-uniform magnetic charge model against experimental results.
Figure 29: (a) Fitting curve of the width dimension against the axial-component amplitude; (b) fitting curve of the length dimension against the axial-component amplitude.
Full article
12 pages, 1170 KiB  
Article
New Systolic Array Algorithms and VLSI Architectures for 1-D MDST
by Doru Florin Chiper and Arcadie Cracan
Sensors 2023, 23(13), 6220; https://doi.org/10.3390/s23136220 - 7 Jul 2023
Viewed by 1365
Abstract
In this paper, we present two systolic array algorithms for efficient Very-Large-Scale Integration (VLSI) implementations of the 1-D Modified Discrete Sine Transform (MDST) using the systolic array architectural paradigm. The new algorithms decompose the computation of the MDST into modular and regular computational structures called pseudo-circular correlation and pseudo-cycle convolution. The two computational structures have the same form, a feature that can be exploited to significantly reduce the hardware complexity, since both can be computed on the same linear systolic array. Moreover, the second algorithm can be used to further reduce the hardware complexity by replacing the general multipliers of the first algorithm with constant multipliers of significantly lower complexity. The resulting VLSI architectures have all the advantages of cycle-convolution- and circular-correlation-based systolic implementations, such as high speed through concurrency, efficient use of the VLSI technology due to a local and regular interconnection topology, and low I/O cost. Moreover, in both architectures, a cost-effective application of an obfuscation technique can be achieved with low overhead. Full article
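To see why one linear array can serve both structures, note that a circular correlation is a cycle convolution of an index-reversed input with the same taps, so the two computations share one multiply-accumulate form. The sketch below illustrates this identity in plain Python under our own notation; the paper's actual pseudo-circular correlation and pseudo-cycle convolution decompositions of the MDST (cf. Equations (10), (11), (22), and (23) referenced in the figures) are in the full text.

```python
import numpy as np

def cycle_convolution(x, h):
    """y[n] = sum_k x[k] * h[(n - k) mod N]: the multiply-accumulate form
    a linear systolic array computes with circularly rotating taps."""
    N = len(x)
    return np.array([sum(x[k] * h[(n - k) % N] for k in range(N))
                     for n in range(N)])

def circular_correlation(x, h):
    """y[n] = sum_k x[k] * h[(n + k) mod N]."""
    N = len(x)
    return np.array([sum(x[k] * h[(n + k) % N] for k in range(N))
                     for n in range(N)])

# Correlating x with h equals convolving the index-reversed sequence
# x[(-k) mod N] with h, so the same cells and schedule compute both.
rng = np.random.default_rng(0)
N = 8
x, h = rng.standard_normal(N), rng.standard_normal(N)
x_rev = x[(-np.arange(N)) % N]
assert np.allclose(circular_correlation(x, h), cycle_convolution(x_rev, h))
print("circular correlation == cycle convolution of index-reversed input")
```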
Show Figures
Figure 1: The systolic array for Equations (10) and (11) [31] © 2022 IEEE.
Figure 2: The function of a processing element from Figure 1 [31] © 2022 IEEE.
Figure 3: Systolic array that implements Equation (22), and also Equation (23) but with the input sequence x_b instead of x_a.
Figure 4: The function of a processing element (PE) used in the systolic array [33].
Full article
24 pages, 16030 KiB  
Article
A Self-Attention Integrated Learning Model for Landing Gear Performance Prediction
by Lin Lin, Changsheng Tong, Feng Guo, Song Fu, Yancheng Lv and Wenhui He
Sensors 2023, 23(13), 6219; https://doi.org/10.3390/s23136219 - 7 Jul 2023
Cited by 4 | Viewed by 1522
Abstract
The landing gear structure suffers large loads during aircraft takeoff and landing, and an accurate prediction of landing gear performance is beneficial to flight safety. Nevertheless, machine learning-based landing gear performance prediction relies strongly on the dataset, in which the feature dimension and data distribution have a great impact on prediction accuracy. To address these issues, a novel MCA-MLPSA model is developed. First, an MCA (multiple correlation analysis) method is proposed to select key features. Second, a heterogeneous multilearner integration framework is proposed, which makes use of different base learners. Third, an MLPSA (multilayer perceptron with self-attention) model is proposed to adaptively capture the data distribution and adjust the weights of each base learner. Finally, the excellent prediction performance of the proposed MCA-MLPSA is validated by a series of experiments on the landing gear data. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
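As a rough illustration of the third step, the NumPy sketch below treats the predictions of K base learners as tokens, runs one head of self-attention over them, and pools the attention map into a per-learner weight vector, so the ensemble output adapts to each input rather than using fixed averaging. This is our own simplified reading, not the paper's MLPSA architecture: the projection matrices would be trained jointly with the MLP, whereas here they are random placeholders.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_weights(preds, wq, wk):
    """Single-head self-attention over the K base-learner predictions.
    `preds` has shape (n, K); each learner's prediction is one token, and
    the attention scores are pooled into one weight per learner."""
    tokens = preds[..., None]                 # (n, K, 1): scalar token per learner
    q = tokens @ wq                           # (n, K, d) query projections
    k = tokens @ wk                           # (n, K, d) key projections
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(wq.shape[1])  # (n, K, K)
    attn = softmax(scores, axis=-1)           # row-stochastic attention map
    return softmax(attn.mean(axis=1), axis=-1)  # (n, K): per-learner weights

rng = np.random.default_rng(1)
n, K, d = 4, 3, 8                             # 4 samples, 3 base learners
preds = rng.standard_normal((n, K))           # stand-ins for base-learner outputs
wq, wk = rng.standard_normal((1, d)), rng.standard_normal((1, d))
w = self_attention_weights(preds, wq, wk)
y_hat = (w * preds).sum(axis=1)               # weight-adaptive ensemble prediction
print(w.round(3), y_hat.round(3))
```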
Show Figures
Figure 1: Landing gear structure composition.
Figure 2: Research outline.
Figure 3: MCA-MLPSA model.
Figure 4: Changes of landing gear parameters.
Figure 5: Structure of MCA model.
Figure 6: Structure of MLPSA model.
Figure 7: Principle of weight adaptive integration.
Figure 8: Landing gear performance prediction process.
Figure 9: Pearson redundancy analysis.
Figure 10: Spearman redundancy analysis.
Figure 11: Kendall redundancy analysis.
Figure 12: Regression accuracy improvement (Y1).
Figure 13: Regression accuracy improvement (Y2).
Figure 14: Prediction error (Y1).
Figure 15: Prediction error (Y2).
Full article
16 pages, 2668 KiB  
Article
Reflectance Measurements from Aerial and Proximal Sensors Provide Similar Precision in Predicting the Rice Yield Response to Mid-Season N Applications
by Telha H. Rehman, Mark E. Lundy, Andre Froes de Borja Reis, Nadeem Akbar and Bruce A. Linquist
Sensors 2023, 23(13), 6218; https://doi.org/10.3390/s23136218 - 7 Jul 2023
Cited by 1 | Viewed by 1413
Abstract
Accurately detecting nitrogen (N) deficiency and determining the need for additional N fertilizer is a key challenge to achieving precise N management in many crops, including rice (Oryza sativa L.). Many remotely sensed vegetation indices (VIs) have shown promise in this regard; however, it is not well known whether VIs measured from different sensors can be used interchangeably. The objective of this study was to quantitatively test and compare the ability of VIs measured from an aerial and a proximal sensor to predict the crop yield response to top-dress N fertilizer in rice. Nitrogen fertilizer response trials were established across two years (six site-years) throughout the Sacramento Valley rice-growing region of California. At panicle initiation (PI), the unmanned aircraft system (UAS) Normalized Difference Red-Edge Index (NDRE_UAS) and the GreenSeeker (GS) Normalized Difference Vegetation Index (NDVI_GS) were measured and expressed as a sufficiency index (SI) (VI of the N treatment divided by VI of an adjacent N-enriched area). Following the reflectance measurements, each plot was split into subplots with and without top-dress N fertilizer. All metrics evaluated in this study indicated that NDRE_UAS and NDVI_GS performed similarly with respect to predicting the rice yield response to top-dress N at PI. Utilizing SI measurements prior to top-dress N application increased the precision of the rice yield response differentiation by 113% and 69% (for NDRE_UAS and NDVI_GS, respectively) compared to applying top-dress N without SI information. When the SI measured via NDRE_UAS and NDVI_GS at PI was ≤0.97 and ≤0.96, respectively, top-dress N applications resulted in a significant (p < 0.05) yield increase of 0.19 and 0.21 Mg ha−1, respectively. These results indicate that both aerial NDRE_UAS and proximal NDVI_GS have the potential to accurately predict the rice yield response to PI top-dress N fertilizer in this system and could serve as the basis for a decision support tool that informs better N management and improves N use efficiency. Full article
(This article belongs to the Special Issue Sensors and Data-Driven Precision Agriculture)
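A minimal sketch of the resulting decision logic follows, using the thresholds and expected responses reported above; the function and variable names are ours, and a production decision support tool would of course weigh more than a single threshold test.

```python
# Sufficiency-index (SI) decision logic as described in the abstract.
# Thresholds (0.97 for NDRE_UAS, 0.96 for NDVI_GS) come from the study;
# the example VI values are purely illustrative.

def sufficiency_index(vi_plot: float, vi_enriched: float) -> float:
    """SI = VI of the managed plot / VI of the adjacent N-enriched area."""
    return vi_plot / vi_enriched

def topdress_recommended(si: float, sensor: str = "NDRE_UAS") -> bool:
    """Recommend PI top-dress N when the crop looks N-deficient (low SI)."""
    threshold = {"NDRE_UAS": 0.97, "NDVI_GS": 0.96}[sensor]
    return si <= threshold

si = sufficiency_index(vi_plot=0.62, vi_enriched=0.66)   # illustrative values
if topdress_recommended(si, "NDRE_UAS"):
    # expected response reported above: ~0.19 Mg/ha for NDRE_UAS at PI
    print(f"SI = {si:.3f} <= 0.97 -> apply top-dress N at panicle initiation")
```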
Show Figures
Figure 1: Collecting rice canopy reflectance measurements using (a) an aerial MicaSense Red-Edge M sensor mounted to a DJI Matrice 100 and (b) a proximal GreenSeeker sensor.
Figure 2: A georeferenced orthomosaic map of a nitrogen response trial depicting the Normalized Difference Red-Edge Index (NDRE).
Figure 3: The relationship between the pre-plant N rate and rice grain yield without top-dress N for each site-year, as described by a quadratic linear mixed-effects model. The vertical dashed lines and values reported represent the optimal pre-plant N rate for each site (i.e., the N rate required to achieve maximum yields). Note: the relationship did not level off at the Davis-19 site within the range of pre-plant N rates used in this study, and thus no pre-plant N rate is marked.
Figure 4: The average grain yield response to top-dress N fertilizer applied at the panicle initiation (PI) rice growth stage. The error bars represent the standard deviation.
Figure 5: The relationships between the pre-plant N rate and the unmanned aircraft system (UAS) Normalized Difference Red-Edge Index (NDRE_UAS) sufficiency index (SI) and the GreenSeeker (GS) Normalized Difference Vegetation Index (NDVI_GS) SI measured at panicle initiation (PI), as described by quadratic linear mixed-effects models. The solid and dashed vertical lines at 240 and 215 kg N ha−1, respectively, represent the N rate where the relationships saturated (i.e., the vertices of the quadratic models). The dotted vertical line at 200 kg N ha−1 represents the optimal pre-plant N rate for all sites (i.e., the N rate required to achieve maximum yields on average), as derived from Figure S2.
Figure 6: The estimated rice grain yield response to top-dress N fertilizer applied at the panicle initiation (PI) growth stage using a rate of 34 kg N ha−1 (typical grower rate): (a) overall average (averaged across SI 0.70 to 1.00), and (b) for the NDRE_UAS SI and the NDVI_GS SI at specific SI values corresponding to the typical management range (i.e., pre-plant N rates of 150 to 200 kg N ha−1), as estimated by linear mixed-effects models. The error bars represent the standard error around the estimated grain yield response.
Full article
13 pages, 2021 KiB  
Article
Artificial Intelligence Distinguishes Pathological Gait: The Analysis of Markerless Motion Capture Gait Data Acquired by an iOS Application (TDPT-GT)
by Chifumi Iseki, Tatsuya Hayasaka, Hyota Yanagawa, Yuta Komoriya, Toshiyuki Kondo, Masayuki Hoshi, Tadanori Fukami, Yoshiyuki Kobayashi, Shigeo Ueda, Kaneyuki Kawamae, Masatsune Ishikawa, Shigeki Yamada, Yukihiko Aoyagi and Yasuyuki Ohta
Sensors 2023, 23(13), 6217; https://doi.org/10.3390/s23136217 - 7 Jul 2023
Cited by 5 | Viewed by 3696
Abstract
Distinguishing pathological gait is challenging in neurology because of the difficulty of capturing and analyzing total body movement. We aimed to obtain convenient recordings with an iPhone and establish an algorithm based on deep learning. From May 2021 to November 2022, at Yamagata University Hospital, Shiga University, and Takahata Town, patients with idiopathic normal pressure hydrocephalus (n = 48), Parkinson's disease (n = 21), and other neuromuscular diseases (n = 45) comprised the pathological gait group (n = 114), while the control group consisted of 160 healthy volunteers. The iPhone application TDPT-GT, a markerless motion capture system, recorded the subjects walking in a circular path of about 1 m in diameter and generated the three-axis relative coordinates of 27 body points at 30 frames per second (fps). A light gradient boosting machine (LightGBM) with stratified k-fold cross-validation (k = 5) was applied to the gait data, collected for about 1 min per person. The model with median performance was tested on 200 frames of each person's data, resulting in an area under the curve of 0.719. The pathological gait captured by the iPhone could thus be distinguished by artificial intelligence. Full article
(This article belongs to the Special Issue Sensors in 2023)
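The evaluation pipeline described above is straightforward to reproduce in outline. The sketch below uses synthetic stand-ins for the TDPT-GT pose features (27 body points x 3 axes = 81 features per frame) with LightGBM and stratified 5-fold cross-validation; the hyperparameters and the framing of folds are our assumptions, not the paper's settings (in particular, a real study would split folds by subject, not by frame, to avoid leakage).

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 81))          # 81 = 27 body points x 3 axes
y = rng.integers(0, 2, size=1000)            # 1 = pathological gait, 0 = control

aucs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                   random_state=0).split(X, y):
    model = LGBMClassifier(n_estimators=200).fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
# On random synthetic data the AUC hovers near 0.5; real pose features are
# what lift it to the 0.719 reported above.
print(f"mean AUC over 5 folds: {np.mean(aucs):.3f}")
```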
Show Figures
Figure 1: The markerless motion capture was recorded while the participants walked in a circle about 1 m in diameter and about 3 m away from the gait trail so that their whole body fit within the frame.
Figure 2: The receiver operating characteristic (ROC) curves and the area under the curve (AUC) obtained by the five machine learning models.
Figure 3: Applying model 1, the test was performed to distinguish each individual's gait, whether pathological or not, resulting in an AUC of 0.719. The cut-off value (specificity and sensitivity) is shown near the curve.
Figure 4: High feature importance scores in the discrimination of pathological gait.
Full article
25 pages, 7282 KiB  
Article
Towards Topology-Free Programming for Cyber-Physical Systems with Process-Oriented Paradigm
by Vladimir E. Zyubin, Natalia O. Garanina, Igor S. Anureev and Sergey M. Staroletov
Sensors 2023, 23(13), 6216; https://doi.org/10.3390/s23136216 - 7 Jul 2023
Cited by 3 | Viewed by 1633
Abstract
The paper proposes a topology-free specification of distributed control systems by means of the process-oriented programming paradigm. The proposed approach is characterized, on the one hand, by a topologically independent specification of the control algorithm and, on the other hand, by the possibility of using existing formal verification methods, since the semantics of a centralized process-oriented program is preserved. The paper discusses the advantages of a topologically independent specification of distributed control systems, outlines the features of control software, argues why a process-oriented approach is suitable for developing the automation of cyber-physical systems, describes a general scheme for implementing a distributed control system according to a process-oriented specification, and proposes a formal heuristic algorithm for partitioning a sequential process-oriented program into independent clusters. We illustrate the algorithm with bottle-filling and sluice case studies. Full article
(This article belongs to the Special Issue IoT-Based Cyber-Physical System: Challenges and Future Direction)
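The core constraint behind such a partitioning, grouping processes that must share state onto one controller, can be sketched with a union-find pass. This is a simplified illustration of the clustering idea, not the paper's formal heuristic; the process and variable names below echo the bottle-filling case study but are otherwise illustrative.

```python
# Union-find sketch: processes that touch the same (unsafely shared)
# variable, or that control each other, must land in the same cluster;
# the resulting connected groups are independently deployable.
from collections import defaultdict

def cluster_processes(processes: dict[str, set[str]]) -> list[set[str]]:
    """`processes` maps a process name to the shared resources it touches."""
    parent = {p: p for p in processes}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    users = defaultdict(list)                # resource -> processes using it
    for proc, resources in processes.items():
        for r in resources:
            users[r].append(proc)
    for procs in users.values():             # all users of a resource: one cluster
        for other in procs[1:]:
            union(procs[0], other)

    clusters = defaultdict(set)
    for p in processes:
        clusters[find(p)].add(p)
    return list(clusters.values())

example = {
    "MainLoop": {"start_cmd"},
    "TankFilling": {"tank_level", "start_cmd"},
    "BottleFilling": {"bottle_pos"},
    "NextBottle": {"bottle_pos"},
}
print(cluster_processes(example))
# -> two clusters: {MainLoop, TankFilling} and {BottleFilling, NextBottle}
```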
Show Figures
Figure 1: Preserving the hyperprocess semantics while deploying a process-oriented specification on a distributed architecture.
Figure 2: Partitioning algorithm flowchart.
Figure 3: The bottle-filling system.
Figure 4: Process interaction diagram for bottle-filling software.
Figure 5: Flowcharts for the Initialization process (on the left) and the MainLoop process (on the right).
Figure 6: Flowcharts for the TankFilling process (on the left), the ForcedSterilization process (in the center), and the KeepSterilization process (on the right).
Figure 7: Flowcharts for the NextBottle process (on the left) and the BottleFilling process (on the right).
Figure 8: Implementation of the bottle-filling system on four controllers.
Figure 9: The sluice control system.
Figure 10: The result of the proposed partitioning algorithm for the sluice control system: the processes form three clusters, highlighted in magenta, yellow, and orange.
Figure 11: The result of the proposed partitioning algorithm ignoring safely shared variables for the sluice control system: the processes form six clusters, highlighted in magenta, yellow, blue, purple, cyan, and green.
Full article
17 pages, 8821 KiB  
Article
Infrared Image-Enhancement Algorithm for Weak Targets in Complex Backgrounds
by Yingchao Li, Lianji Ma, Shuai Yang, Qiang Fu, Hongyu Sun and Chao Wang
Sensors 2023, 23(13), 6215; https://doi.org/10.3390/s23136215 - 7 Jul 2023
Viewed by 1911
Abstract
Infrared small-target enhancement in complex backgrounds is one of the key technologies for infrared search and tracking systems; the quality of the enhancement directly determines the reliability of the monitoring equipment. To address the low signal-to-noise ratio of small infrared moving targets in complex backgrounds and the poor performance of traditional enhancement algorithms, an accurate enhancement method for small infrared moving targets based on two-channel information is proposed. For a single frame, a modified curvature filter is used in the A channel to weaken the background while an improved PM model is used to enhance the target, and a modified band-pass filter is used in the B channel for coarse enhancement, followed by a local contrast algorithm for fine enhancement; a weighted superposition algorithm then extracts a single-frame candidate target. The experimental data analysis shows that the method offers a good enhancement effect and robustness for small infrared moving-target enhancement in complex backgrounds, outperforming other advanced algorithms by about 43.7% in terms of ROC performance. Full article
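Reading "PM" as the Perona–Malik diffusion model (the usual expansion in this literature, though we are inferring it here), the baseline behaviour that the improved A-channel variant builds on can be sketched as follows; the parameters and the toy frame are illustrative, and the paper's modified filters differ in detail.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, lam=0.2):
    """Baseline Perona-Malik anisotropic diffusion: smooths low-gradient
    background while leaving strong gradients (e.g., a small bright
    target) nearly untouched.  Periodic borders via np.roll, for brevity;
    lam <= 0.25 keeps the 4-neighbour explicit scheme stable."""
    g = lambda d: np.exp(-(d / kappa) ** 2)    # edge-stopping conductance
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u         # differences to 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy frame: a dim point target on weak Gaussian background noise.
rng = np.random.default_rng(3)
frame = 0.05 * rng.standard_normal((64, 64))
frame[32, 32] += 1.0                           # the "small target"
smoothed = perona_malik(frame)

bg = np.ones(frame.shape, dtype=bool)
bg[32, 32] = False                             # background = all but the target
print(f"target/background-std: before={frame[32, 32] / frame[bg].std():.1f}, "
      f"after={smoothed[32, 32] / smoothed[bg].std():.1f}")
```

Because the conductance collapses at the target's strong edges while the weak background gradients keep diffusing, the target-to-clutter ratio rises, which is the effect the two-channel method exploits before the weighted superposition step.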
Show Figures
Figure 1: Flow chart of the enhancement algorithm.
Figure 2: Small-target structure and closed-operator structural elements. (a) Small-target imaging principle; (b) structural elements constructed according to a general small-target projection.
Figure 3: A small target in a complex background. (a) Target in complex background a; (b) target in complex background b.
Figure 4: Classic triangular cut.
Figure 5: Improved triangular cut.
Figure 6: 3D transfer functions of the exponential and band-pass filters. (a) 3D transfer function of the exponential filter; (b) 3D transfer function of the band-pass filter.
Figure 7: Nested structures.
Figure 8: Infrared small-target images against complex backgrounds to be processed. (a) Field ground background and ground-target saliency map; (b) aerial background and aerial-target saliency map.
Figure 9: Wetland-background infrared small-target processing. (a) Original image to be processed; (b) pre-processed image; (c) image after improved PM model processing; (d) image processed using improved curvature filtering; (e) channel A output of small-target positions; (f) band-pass filtering results; (g) double-layer local contrast processing results; (h) channel B processing results; (i) enhanced result; (j) saliency map of the enhanced result.
Figure 10: Processing of shore-side-background infrared small targets. (a) Original image to be processed; (b) pre-processed image; (c) image after improved PM model processing; (d) image processed using improved curvature filtering; (e) channel A output of small-target positions; (f) band-pass filtering results; (g) double-layer local contrast processing results; (h) channel B processing results; (i) enhanced result; (j) saliency map of the enhanced result.
Figure 11: A typical infrared image and its processed image. (a) Original image; (b) target saliency map of the original image; (c) processing result; (d) target saliency map of the processing result.
Figure 12: Noise-free, target-free map.
Figure 13: Adding targets and noise. (a) Target added only; (b) noise variance of 0.01; (c) noise variance of 0.02; (d) noise variance of 0.03.
Figure 14: ROC curves for the different methods.
Full article
Previous Issue
Next Issue