Sensors, Volume 22, Issue 21 (November-1 2022) – 500 articles

Cover Story (view full-size image): In this work, we present for the first time a surface-enhanced Raman spectroscopy (SERS) protocol for the detection of the common antidepressant amitriptyline in dried blood and dried saliva samples. The validated protocol is rapid and non-destructive, with a detection limit of 95 ppb and a linear range covering the therapeutic window of amitriptyline in biological fluids. The ability to rapidly measure amitriptyline is of interest across a variety of disciplines, particularly in clinical settings where therapeutic drug monitoring is required, and also in forensic investigations. In these settings, the analysis of dried biological samples is increasingly popular, and SERS analysis of such samples for amitriptyline offers rapid measurement while eliminating sample pre-treatment.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 7774 KiB  
Article
Attention-Guided Disentangled Feature Aggregation for Video Object Detection
by Shishir Muralidhara, Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker and Muhammad Zeshan Afzal
Sensors 2022, 22(21), 8583; https://doi.org/10.3390/s22218583 - 7 Nov 2022
Cited by 4 | Viewed by 3848
Abstract
Object detection is a computer vision task that involves localisation and classification of objects in an image. Video data implicitly introduces several challenges, such as blur, occlusion and defocus, making video object detection more challenging in comparison to still image object detection, which is performed on individual and independent images. This paper tackles these challenges by proposing an attention-heavy framework for video object detection that aggregates the disentangled features extracted from individual frames. The proposed framework is a two-stage object detector based on the Faster R-CNN architecture. The disentanglement head integrates scale, spatial and task-aware attention and applies it to the features extracted by the backbone network across all the frames. Subsequently, the aggregation head incorporates temporal attention and improves detection in the target frame by aggregating the features of the support frames. These include the features extracted from the disentanglement network along with the temporal features. We evaluate the proposed framework using the ImageNet VID dataset and achieve a mean Average Precision (mAP) of 49.8 and 52.5 using the backbones of ResNet-50 and ResNet-101, respectively. The improvement in performance over the individual baseline methods validates the efficacy of the proposed approach. Full article
(This article belongs to the Section Sensing and Imaging)
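To make the aggregation step concrete, here is a minimal NumPy sketch of attention-weighted temporal feature aggregation: proposal features from support frames are weighted by their similarity to the target-frame features and summed. The array shapes, the dot-product similarity, and the helper names are illustrative assumptions, not the authors' implementation (which builds on SELSA and Temporal RoI Align inside Faster R-CNN).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_target_features(target_feats, support_feats):
    """Attention-weighted aggregation of support-frame features.

    target_feats:  (N, D) proposal features from the target frame.
    support_feats: (M, D) proposal features pooled from support frames.
    Returns (N, D) aggregated features for the target frame.
    """
    # Dot-product similarity between every target and support proposal.
    sim = target_feats @ support_feats.T           # (N, M)
    weights = softmax(sim / np.sqrt(target_feats.shape[1]), axis=1)
    # Each target proposal becomes a weighted sum of support proposals,
    # blended with its own original feature (residual-style).
    return target_feats + weights @ support_feats  # (N, D)

# Toy usage: 5 target proposals, 20 support proposals, 256-d features.
rng = np.random.default_rng(0)
agg = aggregate_target_features(rng.normal(size=(5, 256)),
                                rng.normal(size=(20, 256)))
print(agg.shape)  # (5, 256)
```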
Show Figures

Figure 1. Illustration of challenges in video object detection. Unlike object detection in still images, objects suffer from appearance deterioration in videos caused by several challenges.
Figure 2. Overview of deep learning-based video object detection methods. The surveyed works have been categorized under the corresponding approaches.
Figure 3. Overview of the proposed object detection framework based on the Faster R-CNN architecture. The figure highlights the modules with their components and illustrates the sequence in which the data is processed.
Figure 4. A single residual block in ResNet. Residual blocks are stacked together to form different variants of ResNet.
Figure 5. Overview of the neck implementing the disentanglement head. The individual attention mechanisms are enclosed within dashed boxes, and together, they form a single block represented by the solid box. The directed solid line indicates the sequence of processing, and the circles represent the functions applied within each attention. In task-aware attention, the initial values [α1, β1, α2, β2] = [1, 0, 0, 0] are concatenated with the normalised data, and the maximum is taken between the piecewise functions involving α1, β1, α2, β2 and the input tensor.
Figure 6. Overview of the aggregation head implemented using SELSA and Temporal RoI Align. The figure illustrates leveraging multiple frames as a reference for improving object detection.
Figure 7. Visualising the performance of the proposed model against the baseline on the ImageNet VID dataset. Our model performs better in challenging conditions with fewer misclassifications and false positives.
Figure 8. Video object detection under challenging conditions. The figure demonstrates the robustness of the proposed approach against inherent challenges in videos.
Figure 9. Fail cases of the proposed model: misclassified objects (top), duplicate bounding boxes (middle) and inaccurate or missing predictions (bottom).
18 pages, 1532 KiB  
Article
Consortium Framework Using Blockchain for Asthma Healthcare in Pandemics
by Muhammad Shoaib Farooq, Maryam Suhail, Junaid Nasir Qureshi, Furqan Rustam, Isabel de la Torre Díez, Juan Luis Vidal Mazón, Carmen Lili Rodríguez and Imran Ashraf
Sensors 2022, 22(21), 8582; https://doi.org/10.3390/s22218582 - 7 Nov 2022
Cited by 4 | Viewed by 3020
Abstract
Asthma is a deadly disease that affects the lungs and air supply of the human body. Coronavirus and its variants also affect the airways of the lungs. Asthma patients approach hospitals mostly in a critical condition and require emergency treatment, which creates a burden on health institutions during pandemics. The similar symptoms of asthma and coronavirus create confusion for health workers during patient handling and treatment of disease. The unavailability of patient history to physicians causes complications in proper diagnostics and treatments. Many asthma patient deaths have been reported especially during pandemics, which necessitates an efficient framework for asthma patients. In this article, we have proposed a blockchain consortium healthcare framework for asthma patients. The proposed framework helps in managing asthma healthcare units, coronavirus patient records and vaccination centers, insurance companies, and government agencies, which are connected through the secure blockchain network. The proposed framework increases data security and scalability as it stores encrypted patient data on the Interplanetary File System (IPFS) and keeps data hash values on the blockchain. The patient data are traceable and accessible to physicians and stakeholders, which helps in accurate diagnostics, timely treatment, and the management of patients. The smart contract ensures the execution of all business rules. The patient profile generation mechanism is also discussed. The experiment results revealed that the proposed framework has better transaction throughput, query delay, and security than existing solutions. Full article
(This article belongs to the Section Internet of Things)
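The core data flow described above (encrypt the record, store it off-chain, keep only a hash on-chain) can be sketched in a few lines. The sketch below uses Python's standard hashlib and an in-memory dictionary as a stand-in for IPFS and the ledger; the function names and the encryption step are illustrative assumptions, not the paper's smart-contract code.

```python
import hashlib
import json

ipfs_store = {}   # stand-in for IPFS: content-addressed off-chain storage
ledger = []       # stand-in for the blockchain: append-only list of hashes

def store_patient_record(record: dict, encrypt) -> str:
    """Encrypt a record, store it off-chain, anchor its hash on-chain."""
    ciphertext = encrypt(json.dumps(record, sort_keys=True).encode())
    cid = hashlib.sha256(ciphertext).hexdigest()  # content identifier
    ipfs_store[cid] = ciphertext                  # off-chain payload
    ledger.append({"patient_hash": cid})          # on-chain reference only
    return cid

def verify_record(cid: str) -> bool:
    """Integrity check: recompute the hash of the stored payload."""
    return hashlib.sha256(ipfs_store[cid]).hexdigest() == cid

# Toy usage with a trivial XOR 'encryption' placeholder.
toy_encrypt = lambda b: bytes(x ^ 0x5A for x in b)
cid = store_patient_record({"id": 1, "diagnosis": "asthma"}, toy_encrypt)
print(cid[:16], verify_record(cid))
```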
Show Figures

Figure 1. Stages of asthma.
Figure 2. Asthma prevalence by gender and age in the USA.
Figure 3. Blockchain consortium framework for asthma patient healthcare.
Figure 4. Layered architecture of blockchain consortium.
Figure 5. Peer-to-peer connectivity blockchain consortium framework for asthma patient healthcare.
Figure 6. Workflow of patients' medical records through the APMS.
Figure 7. Smart contract records.
Figure 8. Mapping of hospitals within the network.
Figure 9. Patient record entry with name.
Figure 10. Patient record detail fetching.
Figure 11. Obtaining multiple records from different healthcare units.
Figure 12. Number of patients in healthcare units.
Figure 13. Flowchart of smart contract execution.
Figure 14. The 51% attack.
Figure 15. On-chain block replication on the blockchain network.
Figure 16. Block example with UTXO.
Figure 17. Patient profile generation process.
Figure 18. TPHs of the proposed framework.
Figure 19. System response on user queries.
28 pages, 10325 KiB  
Review
Review of Vibration Control Strategies of High-Rise Buildings
by Mohamed Hechmi El Ouni, Mahdi Abdeddaim, Said Elias and Nabil Ben Kahla
Sensors 2022, 22(21), 8581; https://doi.org/10.3390/s22218581 - 7 Nov 2022
Cited by 20 | Viewed by 7334
Abstract
Since the early ages of human existence on Earth, humans have fought against natural hazards for survival. Among the most dangerous hazards humanity has faced are earthquakes and strong winds, and the challenge of constructing ever-taller buildings that can withstand the forces of nature is ongoing. This paper is a detailed review of various vibration control strategies used to enhance the dynamic response of high-rise buildings. Different control strategies studied and used in civil engineering are presented, with illustrations of real applications where they exist. The main aim of this review paper is to provide a reference-rich document for all contributors to the vibration control of structures and to clarify the applicability of specific control strategies to high-rise buildings. It is worth noting that not all of the studied and investigated methods are applicable to high-rise buildings; a few of them remain limited by parameters such as cost-effectiveness and the practicality of installation and maintenance. Full article
Show Figures

Figure 1. Borj Dubai (Dubai, 2008) [1].
Figure 2. Division for passive damper.
Figure 3. Utah State Capitol building (a) and the seismic dampening widgets (base isolators, (b)).
Figure 4. Los Angeles City Hall (base-isolated).
Figure 5. (a) Tower of Taipei 101 in Taiwan https://upload.wikimedia.org/wikipedia/commons/1/1a/Taipei_101_2009_amk-EditMylius.jpg (accessed on 15 October 2022); (b) TMD installed in the top of the tower https://upload.wikimedia.org/wikipedia/commons/1/15/Taipei_101_Tuned_Mass_Damper.png (accessed on 15 October 2022); (c) zoom on the TMD https://upload.wikimedia.org/wikipedia/commons/4/4a/Tuned_mass_damper_-_Taipei_101_-_Wikimania_2007_0224.jpg (accessed on 15 October 2022).
Figure 6. (a) Tuned viscous mass damper coupled to a chevron bracing equipping a building in Sendai, Japan; (b) tuned viscous mass damper device.
Figure 7. (a) Shin Yokohama Prince Hotel, Japan https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/Shin_Yokohama_Prince_Hotel_20080808-002.jpg/375px-Shin_Yokohama_Prince_Hotel_20080808-002.jpg (accessed on 15 October 2022); (b) One Wall Centre in Canada (TLCD) https://upload.wikimedia.org/wikipedia/commons/thumb/7/73/One_Wall_Centre.jpg/375px-One_Wall_Centre.jpg (accessed on 15 October 2022).
Figure 8. Titanium La Portada Building: https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Titanium_La_Portada_%2838888739395%29.jpg/360px-Titanium_La_Portada_%2838888739395%29.jpg (accessed on 15 October 2022).
Figure 9. Prudential Tower in Tokyo https://upload.wikimedia.org/wikipedia/commons/3/37/Prudential-Tower-Tokyo-01.jpg (accessed on 15 October 2022).
Figure 10. Examples of aerodynamic modifications to square building shapes.
Figure 11. (a) Shanghai World Financial Center https://en.wikipedia.org/wiki/File:%E4%B8%8A%E6%B5%B7%E5%9B%BD%E9%99%85%E9%87%91%E8%9E%8D%E4%B8%AD%E5%BF%83.jpg (accessed on 15 October 2022); (b) Jin Mao towers https://en.wikipedia.org/wiki/File:Jin_Mao_Tower_2007.jpg (accessed on 15 October 2022).
Figure 12. (a) Illustration of outrigger system; (b) Melbourne Tower; (c) illustration of "virtual outrigger" system using belt trusses; (d) Plaza Rakyat tower (Malaysia).
Figure 13. (a) General view of the Great Mosque of Algeria; (b) different structural members with respect to their design behavior.
Figure 14. Block diagram of active control.
Figure 15. (a) Kyobashi Siewa Center (Japan) and (b) its AMD unit; (c) model of a building equipped with an AMD on the top floor.
Figure 16. (a) Active CBC of Harumi Triton Square in Tokyo, Japan; (b) model of two adjacent buildings connected with an active strut.
Figure 17. n-story shear frame equipped with an ABS between the ground and the first floor.
Figure 18. (a) Normandy Bridge equipped with active tendon cable https://upload.wikimedia.org/wikipedia/commons/c/cc/Pontdenormandie.JPG (accessed on 15 October 2022); (b) active tendon mechanism.
Figure 19. Variable-orifice dampers.
Figure 20. (a) Kajima Technical Research Institute with AVS system; (b) control scheme used in the Kajima Technical Research Institute [8].
Figure 21. Controllable-fluid damper.
Figure 22. The National Museum of Emerging Science and Innovation (Tokyo) https://upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Miraikan.jpg/1024px-Miraikan.jpg (accessed on 15 October 2022).
Figure 23. (a) Maxwell magnetic actuator and (b) Lorentz magnetic actuator.
Figure 24. (a) Landmark Tower in Yokohama equipped with two HMDs https://upload.wikimedia.org/wikipedia/commons/0/03/Yokohama_Landmark_Tower_201507.JPG (accessed on 15 October 2022); (b) HMD device https://www.mhi.com/products/infrastructure/images/steelstructures_vibrationcontrol_case07.png (accessed on 15 October 2022).
12 pages, 5793 KiB  
Article
Learning-Based Image Damage Area Detection for Old Photo Recovery
by Tien-Ying Kuo, Yu-Jen Wei, Po-Chyi Su and Tzu-Hao Lin
Sensors 2022, 22(21), 8580; https://doi.org/10.3390/s22218580 - 7 Nov 2022
Cited by 4 | Viewed by 2748
Abstract
Most methods for repairing damaged old photos are manual or semi-automatic. With these methods, the damaged region must first be manually marked so that it can be repaired later either by hand or by an algorithm. However, damage marking is a time-consuming and labor-intensive process. Although there are a few fully automatic repair methods, they perform end-to-end repair, which means they provide no control over damaged-area detection and risk destroying, or failing to fully preserve, valuable historical photos. Therefore, this paper proposes a deep learning-based architecture for automatically detecting damaged areas of old photos. We designed a damage detection model to automatically and correctly mark damaged areas in photos, and this damage can subsequently be repaired using any existing inpainting method. Our experimental results show that the proposed damage detection model can detect complex damaged areas in old photos automatically and effectively. The damage marking time is reduced to less than 0.01 s per photo, substantially speeding up old photo recovery. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
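The restoration pipeline in the abstract (predict a damage mask, then hand the photo and mask to any off-the-shelf inpainting method) can be sketched as below. OpenCV's classical inpainting is used only as a placeholder for "any existing inpainting method", and the threshold value and the predict_damage stub are assumptions for illustration, not the authors' network.

```python
import numpy as np
import cv2

def predict_damage(photo_gray: np.ndarray) -> np.ndarray:
    """Placeholder for the learned damage-detection model: returns a
    per-pixel damage probability map in [0, 1]. Here we fake one by
    flagging extremely bright pixels (scratches are often near-white)."""
    return (photo_gray.astype(np.float32) / 255.0) ** 4

def restore(photo_bgr: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)
    prob = predict_damage(gray)
    mask = (prob > threshold).astype(np.uint8) * 255   # binary damage mask
    # Any inpainting method can consume (photo, mask); Telea is one option.
    return cv2.inpaint(photo_bgr, mask, 3, cv2.INPAINT_TELEA)

# Toy usage on a synthetic image with a white "scratch".
img = np.full((64, 64, 3), 90, np.uint8)
img[30:33, 5:60] = 255
restored = restore(img)
print(restored.shape)
```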
Show Figures

Figure 1. Flow chart of our architecture to automatically repair damaged old photos. By feeding an old damaged photo into our damage detection network, we can generate a damaged area mask. To restore the photo, the damaged photo and the mask are fed together into an arbitrary inpainting algorithm.
Figure 2. Architecture of the damage detection model.
Figure 3. Dataset for damage detection: (a) old damaged photo; (b) corresponding marked ground truth.
Figure 4. Real damaged photos and damaged photos synthesized by texture mask: (a) real damaged photo; (b) our synthesized damaged photo.
Figure 5. The PR curve of U-Net with various modules.
Figure 6. The detection results of different modules: (a) the old damaged photo; (b) labeled ground truth of damaged areas; (c) the detection result of U-Net; (d) the detection result of U-Net with residual block; (e) the detection result of U-Net with dense block; (f) our proposed detection result of U-Net with RDB.
Figure 7. The PR curve of different methods, including [5,16,18,19], and our proposed method.
Figure 8. The results of different detection methods, with yellow boxes indicating areas of performance difference: (a) the old damaged photo; (b) the result of Wan et al. [5]; (c) the result of Liu et al. [16]; (d) the result of Jenkins et al. [18]; (e) the result of Cheng et al. [19]; (f) the result of our proposed method.
Figure 9. Results of different restoration methods on the damaged photo: (a) the old damaged photo; (b) the result of Wan et al. [5]; (c) the result of ours + Yu et al. [27]; (d) the result of ours + gated convolution [4]; (e) the result of ours + partial convolution [28].
Figure 10. Results for different restoration methods on the damaged photo: (a) the old damaged photo; (b) the result of Wan et al. [5]; (c) the result of ours + Yu et al. [27]; (d) the result of ours + gated convolution [4]; (e) the result of ours + partial convolution [28].
Figure 11. The case of failure detection: (a) damaged photo; (b) result of damage detection; (c) result of damage restoration.
18 pages, 9368 KiB  
Article
Characterization of Damage Progress in the Defective Grouted Sleeve Connection Using Combined Acoustic Emission and Ultrasonics
by Lu Zhang, Zhenmin Fang, Yongze Tang, Hongyu Li and Qizhou Liu
Sensors 2022, 22(21), 8579; https://doi.org/10.3390/s22218579 - 7 Nov 2022
Cited by 4 | Viewed by 2098
Abstract
The grouted sleeve connection is one of the most widely used connections for prefabricated buildings (PBs). Usually, its quality can have a significant impact on the safety of the whole PB, especially for the internal flaws that form during sleeve grouting. It is directly related to the mechanical performance and failure behavior of the grouted sleeve. Therefore, it is essential to understand the damage progression of the defective grouted sleeve connection. However, destructive testing is the mainstream measure to evaluate the grout sleeves, which is not applicable for in situ inspection. Therefore, this paper proposes a combined acoustic emission (AE) and ultrasonic testing (UT) method to characterize the damage progress of a grouted sleeve with different degrees of internal flaws under tensile loading. The UT was conducted before loading to evaluate the internal flaws. Additionally, the AE was used as the processing monitoring technique during the tensile testing. Two damage modes were identified: (i) brittle mode associated with the rebar pullout; (ii) ductile mode associated with the rapture of the rebar. The UT energy ratio was selected as the most sensitive feature to the internal flaws, both numerically and experimentally. The AE signatures of different damage phases and different damage modes were determined and characterized. For the brittle and ductile damage modes, two and three phases appeared in the AE activities, respectively. The proposed combined AE and UT method can provide a reliable and convenient nondestructive evaluation of grouted sleeves with internal flaws. Moreover, it can also characterize the damage progress of the grouted sleeve connections in real-time. Full article
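As a rough illustration of the UT energy-ratio feature mentioned above, the sketch below computes a signal envelope with the Hilbert transform and takes the ratio of the received energy on an inspected path to that on a reference (defect-free) path. The envelope-based energy definition and the reference-path normalization are assumptions for illustration; the paper's exact windowing and path selection may differ.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_energy(signal: np.ndarray) -> float:
    """Energy of the signal envelope (Hilbert magnitude), summed over time."""
    env = np.abs(hilbert(signal))
    return float(np.sum(env ** 2))

def energy_ratio(inspected_path: np.ndarray, reference_path: np.ndarray) -> float:
    """UT energy ratio: received energy on the inspected path relative to
    a defect-free reference path; internal flaws attenuate the ratio."""
    return envelope_energy(inspected_path) / envelope_energy(reference_path)

# Toy usage: a 200 kHz tone burst, with the inspected path attenuated by a flaw.
t = np.linspace(0, 1e-3, 4000)
burst = np.sin(2 * np.pi * 200e3 * t) * np.exp(-((t - 2e-4) / 5e-5) ** 2)
print(round(energy_ratio(0.6 * burst, burst), 3))  # ~0.36
```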
Show Figures

Figure 1. Grout defect detection system based on UT wave measurement.
Figure 2. Calculation example: sleeve-plasticine interface.
Figure 3. The details of the grouting sleeve.
Figure 4. The grouting sleeve with different flaws.
Figure 5. Numerical simulation: (a) FEA model and transducer setup; (b) meshing.
Figure 6. The excitation signal (a) in the time domain and (b) in the frequency domain.
Figure 7. The time domain of Path 13: (a) the time domain; (b) the first wave; (c) an example of a Path 13 signal in the time domain indicating the envelope (red line) used to calculate the ultrasonic energy.
Figure 8. The numerical comparison of the UT signal energy ratio among Path 12, Path 13, and Path 14.
Figure 9. (a) Grouting sleeve defect specimen; (b) schematic diagram of internal defects of the sleeve.
Figure 10. The setup of the active grout defect detection system: (a) experimental setup; (b) the excitation signal in the time domain; and (c) the excitation signal in the frequency domain.
Figure 11. The comparison of UT signal energy ratio between the experimental and numerical study from (a) Path 12, (b) Path 13, and (c) Path 14; (d) the experimental comparison of UT signal energy among Path 12, Path 13, and Path 14.
Figure 12. Loading procedure.
Figure 13. Test setup: (a) test setup design; (b) experimental setup.
Figure 14. The two typical failure modes of the grouting sleeve connector: (a) rebar fracture; (b) rebar pullout.
Figure 15. AE cumulative event history: (a) 0% defect; (b) 10% defect; (c) 20% defect; (d) 30% defect; (e) 40% defect; and (f) 50% defect.
Figure 16. AE event distribution versus specimen failure diagram: (a,b) 0% defect; (c,d) 10% defect; (e,f) 20% defect; (g,h) 30% defect; (i,j) 40% defect; and (k,l) 50% defect.
Figure 17. Time-domain histories, frequency-domain histories, and frequency spectra of the AE signals detected by PK151 sensors, where (a–c) are from rebar fracture; (d–f) are from friction among rebar, mortar, and sleeve; and (g–i) are from concrete crushing.
Figure 18. The correlation of the UT damage index, qualitative AE, and quantitative AE.
22 pages, 4729 KiB  
Article
COVIDX-LwNet: A Lightweight Network Ensemble Model for the Detection of COVID-19 Based on Chest X-ray Images
by Wei Wang, Shuxian Liu, Huan Xu and Le Deng
Sensors 2022, 22(21), 8578; https://doi.org/10.3390/s22218578 - 7 Nov 2022
Cited by 4 | Viewed by 2463
Abstract
Recently, the COVID-19 pandemic coronavirus has put a lot of pressure on health systems around the world. One of the most common ways to detect COVID-19 is to use chest X-ray images, which have the advantage of being cheap and fast. However, in the early days of the COVID-19 outbreak, most studies applied pretrained convolutional neural network (CNN) models, and the features produced by the last convolutional layer were directly passed into the classification head. In this study, the proposed ensemble model consists of three lightweight networks, Xception, MobileNetV2 and NasNetMobile, as three original feature extractors; three base classifiers are then obtained by adding a coordinated attention module, an LSTM and a new classification head to each original feature extractor. The classification results from the three base classifiers are then fused by a confidence fusion method. Three publicly available chest X-ray datasets for COVID-19 testing were considered, with ternary (COVID-19, normal and other pneumonia) and quaternary (COVID-19, normal, bacterial pneumonia and viral pneumonia) classification performed on the first two datasets, achieving high accuracy rates of 95.56% and 91.20%, respectively. The third dataset was used to compare the performance of the model with that of other models and to assess its generalization ability on different datasets. We performed a thorough ablation study on the first dataset to understand the impact of each proposed component. Finally, we also performed visualizations. These saliency maps not only explain key prediction decisions of the model, but also help radiologists locate areas of infection. Through extensive experiments, it was found that the results obtained by the proposed method are comparable to the state-of-the-art methods. Full article
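A minimal sketch of the confidence-fusion step is given below: each base classifier outputs a per-class confidence (softmax) vector, the vectors are combined, and the fused class is the argmax. Simple weighted averaging is used here as the combination rule; the paper's exact fusion formula may differ, so treat this only as an illustration.

```python
import numpy as np

def confidence_fusion(prob_list, weights=None):
    """Fuse per-class confidence vectors from several base classifiers.

    prob_list: list of arrays of shape (num_classes,), one per classifier.
    weights:   optional per-classifier weights; defaults to equal weighting.
    Returns (fused_probabilities, predicted_class_index).
    """
    probs = np.stack(prob_list)                   # (n_classifiers, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = weights @ probs                       # weighted average
    fused /= fused.sum()                          # renormalize
    return fused, int(np.argmax(fused))

# Toy usage: three classifiers, three classes (COVID-19, normal, pneumonia).
p_xception  = np.array([0.70, 0.20, 0.10])
p_mobilenet = np.array([0.55, 0.30, 0.15])
p_nasnet    = np.array([0.40, 0.45, 0.15])
fused, label = confidence_fusion([p_xception, p_mobilenet, p_nasnet])
print(fused.round(3), label)   # class 0 wins
```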
Show Figures

Figure 1. Specific details of the proposed architecture.
Figure 2. Coordinated attention.
Figure 3. The internal structure of LSTM.
Figure 4. Schematic diagram of confidence fusion.
Figure 5. The three base classifiers (base classifier 1: Xception + coordinated attention module + LSTM layer + new classification head; base classifier 2: MobileNetV2 + coordinated attention module + LSTM layer + new classification head; base classifier 3: NasNetMobile + coordinated attention module + LSTM layer + new classification head) and the proposed ensemble model COVIDX-LwNet based on confidence fusion, evaluated on the test set of the D1 dataset. The (a) precision, (b) sensitivity, (c) specificity, and (d) F1 score of all three base classifiers and the proposed ensemble method are shown.
Figure 6. The accuracy of the above four models.
Figure 7. Three-class correlation curves for the three base classifiers and the proposed model (COVIDX-LwNet): (a) train loss, (b) train accuracy, (c) P-R, (d) ROC.
Figure 8. Four-class correlation curves for the three base classifiers and the proposed model (COVIDX-LwNet): (a) train loss, (b) train accuracy, (c) P-R, (d) ROC.
Figure 9. Three-class classification results for some sample images of the test set in dataset D1 (three random images for each category, nine images in total); the true label in the leftmost column is COVID, the true label in the middle column is normal, and the ground-truth label for the rightmost column is pneumonia.
Figure 10. Raw chest X-ray images corresponding to Figure 9 and the Grad-CAM visualization generated using the following model: Xception model + coordinated attention module + LSTM layer + new classification head.
Figure 11. (a–c) show the first 16 channel maps of the features extracted by the first three convolutional layers of base classifier 1 and their overlay fusion on the right; some feature maps are clearly specialized to detect areas where the thorax and lungs are present, while others highlight only areas with thickened lung markings, pulmonary fibrosis and bilateral diffuse opacities.
21 pages, 5249 KiB  
Article
Adverse Weather Target Detection Algorithm Based on Adaptive Color Levels and Improved YOLOv5
by Jiale Yao, Xiangsuo Fan, Bing Li and Wenlin Qin
Sensors 2022, 22(21), 8577; https://doi.org/10.3390/s22218577 - 7 Nov 2022
Cited by 20 | Viewed by 3555
Abstract
With the continuous development of artificial intelligence and computer vision technology, autonomous vehicles have developed rapidly. Although self-driving vehicles have achieved good results in normal environments, driving in adverse weather can still pose a challenge to driving safety. To improve the detection ability of self-driving vehicles in harsh environments, we first construct a new color levels offset compensation model to perform adaptive color levels correction on images, which can effectively improve the clarity of targets in adverse weather and facilitate the detection and recognition of targets. Then, we compare several common one-stage target detection algorithms and improve on the best-performing YOLOv5 algorithm. We optimize the parameters of the Backbone of the YOLOv5 algorithm by increasing the number of model parameters and incorporating the Transformer and CBAM into the YOLOv5 algorithm. At the same time, we use the loss function of EIOU to replace the loss function of the original CIOU. Finally, through the ablation experiment comparison, the improved algorithm improves the detection rate of the targets, with the mAP reaching 94.7% and the FPS being 199.86. Full article
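The color-levels idea (re-stretching each channel's input range so that detail hidden by fog, rain or low light regains contrast) can be illustrated with a percentile-based levels stretch. The percentile choice and the per-channel treatment below are assumptions for illustration; the paper's offset-compensation model is more elaborate.

```python
import numpy as np

def auto_color_levels(img: np.ndarray, low_pct=1.0, high_pct=99.0) -> np.ndarray:
    """Per-channel levels correction: clip each channel to its low/high
    percentiles and linearly rescale to the full 0-255 range."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        channel = img[..., c].astype(np.float32)
        lo, hi = np.percentile(channel, [low_pct, high_pct])
        if hi <= lo:                      # flat channel, leave unchanged
            out[..., c] = img[..., c]
            continue
        stretched = (channel - lo) / (hi - lo) * 255.0
        out[..., c] = np.clip(stretched, 0, 255).astype(np.uint8)
    return out

# Toy usage: a hazy, low-contrast image occupying only a narrow value band.
rng = np.random.default_rng(0)
hazy = rng.integers(110, 150, size=(120, 160, 3), dtype=np.uint8)
corrected = auto_color_levels(hazy)
print(hazy.min(), hazy.max(), "->", corrected.min(), corrected.max())
```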
Show Figures

Figure 1. Schematic diagram of the adverse weather dataset.
Figure 2. Schematic diagram of the augmented dataset.
Figure 3. Image after auto color levels algorithm processing.
Figure 4. Comparison of before and after improvement of the auto color levels algorithm.
Figure 5. Improved YOLOv5 network structure diagram.
Figure 6. Comparison chart of the PR curves of YOLOv5 before and after the improvement.
Figure 7. Comparison chart of training curve visualization.
Figure 8. Graph of recognition results before and after image filtering.
Figure 9. Comparison of test results of failed cases.
17 pages, 17815 KiB  
Article
Analysis of Ultrasonic Machining Characteristics under Dynamic Load
by Zhangping Chen, Xinghong Zhao, Shixing Chen, Honghuan Chen, Pengfei Ni and Fan Zhang
Sensors 2022, 22(21), 8576; https://doi.org/10.3390/s22218576 - 7 Nov 2022
Cited by 2 | Viewed by 2371
Abstract
This research focuses on the load characteristics of piezoelectric transducers in the process of longitudinal-vibration ultrasonic welding. We are primarily interested in the impedance characteristics of the piezoelectric transducer during loading, which are studied by leveraging the equivalent circuit theory of piezoelectric transducers. Specifically, we propose a cross-value mapping method that maps the load change in ultrasonic welding to the impedance change, with the aim of obtaining an equivalent model relating impedance and load. The least-squares strategy is used for parameter identification during data fitting. Extensive simulations and physical experiments are conducted to verify the proposed model. We empirically find that the results from our model agree with the impedance characteristics obtained from real-life data measured by the impedance meter, indicating its potential for practical use in controller research and transducer design. Full article
(This article belongs to the Special Issue The Development of Piezoelectric Sensors and Actuators)
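The least-squares identification step mentioned in the abstract can be illustrated with NumPy's linear least-squares solver: given measured load values and a corresponding impedance feature (e.g., a resonance-frequency shift), fit the coefficients of an assumed polynomial mapping. The quadratic model form and the synthetic data below are assumptions for illustration, not the paper's equivalent-circuit model.

```python
import numpy as np

def fit_load_to_feature(load, feature, degree=2):
    """Least-squares fit of feature = sum_k c_k * load**k (k = 0..degree)."""
    A = np.vander(load, degree + 1, increasing=True)   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, feature, rcond=None)
    return coeffs

# Toy usage: synthetic 'resonance frequency shift vs. load' measurements.
rng = np.random.default_rng(1)
load = np.linspace(0, 100, 30)                         # arbitrary load units
true_shift = 5.0 + 0.8 * load + 0.01 * load**2         # hidden ground truth
measured = true_shift + rng.normal(scale=2.0, size=load.size)
c = fit_load_to_feature(load, measured)
print(np.round(c, 3))   # should be close to [5.0, 0.8, 0.01]
```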
Show Figures

Figure 1. Electromechanical equivalent model of the piezoelectric transducer.
Figure 2. The main research content.
Figure 3. The experiment platform.
Figure 4. (a) Piezoelectric transducer impedance analysis: admittance circle; (b) piezoelectric transducer impedance analysis: amplitude-phase curve.
Figure 5. Admittance circle of the piezoelectric transducer under different loads.
Figure 6. Amplitude-phase curve of the piezoelectric transducer under different loads.
Figure 7. (a) Amplitude value characteristics when Z_ft is a resistive load; (b) phase-frequency characteristics when Z_ft is a resistive load.
Figure 8. (a) Amplitude value characteristics when Z_ft is a capacitive load; (b) phase-frequency characteristics when Z_ft is a capacitive load.
Figure 9. (a) Amplitude value characteristics when Z_ft is an inductive load; (b) phase-frequency characteristics when Z_ft is an inductive load.
Figure 10. (a) f_s and Z_ft; (b) f_1 and Z_ft; (c) R and Z_ft.
Figure 11. (a) The intersection mapping of f_s; (b) the intersection mapping of f_1; (c) the intersection mapping of R.
Figure 12. (a) Two-dimensional intersection of f_s; (b) two-dimensional intersection of f_1; (c) two-dimensional intersection of R.
Figure 13. (a) Mapping curve intersection diagram; (b) inscribed circle diagram; (c) circumcircle diagram.
Figure 14. (a) Load and real part by an inscribed circle; (b) load and real part by a circumcircle; (c) load and real part by average.
Figure 15. (a) Load and imaginary part by an inscribed circle; (b) load and imaginary part by a circumcircle; (c) load and imaginary part by average.
Figure 16. (a) f_s^e and f_s^s by inscribed circle; (b) f_s^e and f_s^s by circumcircle; (c) f_s^e and f_s^s by average.
Figure 17. (a) f_1^e and f_1^s by inscribed circle; (b) f_1^e and f_1^s by circumcircle; (c) f_1^e and f_1^s by average.
Figure 18. (a) R^e and R^s by inscribed circle; (b) R^e and R^s by circumcircle; (c) R^e and R^s by average.
17 pages, 5924 KiB  
Article
Compact Camera Fluorescence Detector for Parallel-Light Lens-Based Real-Time PCR System
by Seul-Bit-Na Koo, Yu-Seop Kim, Chan-Young Park and Deuk-Ju Lee
Sensors 2022, 22(21), 8575; https://doi.org/10.3390/s22218575 - 7 Nov 2022
Cited by 1 | Viewed by 2090
Abstract
The polymerase chain reaction is an important technique in biological research. However, it is time consuming and has a number of disadvantages. Therefore, real-time PCR technology that can be used in real-time monitoring has emerged, and many studies are being conducted regarding its use. Real-time PCR requires many optical components and imaging devices such as expensive, high-performance cameras. Therefore, its cost and assembly process are limitations to its use. Currently, due to the development of smart camera devices, small, inexpensive cameras and various lenses are being developed. In this paper, we present a Compact Camera Fluorescence Detector for use in parallel-light lens-based real-time PCR devices. The proposed system has a simple optical structure, the system cost can be reduced, and the size can be miniaturized. This system only incorporates Fresnel lenses without additional optics in order for the same field of view to be achieved for 25 tubes. In the center of the Fresnel lens, one LED and a complementary metal-oxide semiconductor camera were placed in directions that were as similar as possible. In addition, to achieve the accurate analysis of the results, image processing was used to correct them. As a result of an experiment using a reference fluorescent substance and double-distilled water, it was confirmed that stable fluorescence detection was possible. Full article
(This article belongs to the Special Issue I3S 2022 Selected Papers)
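The image-processing correction described above boils down to reading the mean brightness inside a circular region of interest (ROI) for each of the 25 tubes and comparing fluorescent and blank (DDW) plates. The grid spacing, ROI radius and background subtraction below are illustrative assumptions, not the system's calibrated values.

```python
import numpy as np

def roi_mean_brightness(image, centers, radius):
    """Mean pixel brightness inside a circular ROI around each tube center."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    means = []
    for cy, cx in centers:
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        means.append(float(image[mask].mean()))
    return np.array(means)

def tube_centers(origin=(60, 60), pitch=90, rows=5, cols=5):
    """Centers of a 5 x 5 tube grid (row, col pixel coordinates)."""
    oy, ox = origin
    return [(oy + r * pitch, ox + c * pitch) for r in range(rows) for c in range(cols)]

# Toy usage: fluorescence signal = FAM plate brightness minus DDW background.
rng = np.random.default_rng(2)
fam_img = rng.integers(40, 60, size=(500, 500)).astype(np.float32)
ddw_img = rng.integers(20, 30, size=(500, 500)).astype(np.float32)
centers = tube_centers()
signal = roi_mean_brightness(fam_img, centers, 25) - roi_mean_brightness(ddw_img, centers, 25)
print(signal.shape, round(signal.mean(), 1))   # 25 tubes, ~25 counts above background
```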
Show Figures

Figure 1. (a) System structure diagram; (b) system block diagram.
Figure 2. (a) System used in the actual experiment, except for the thermocycler; (b) linear actuator for initial height setting.
Figure 3. (a) Back of the system board with the filter wheel module and LEDs attached; (b) front of the system board with the filter wheel module attached; (c) system structure diagram of the filter wheel module; (d) side view of the filter wheel module and detector module.
Figure 4. (a) Center the camera; (b) center the LED; (c) rotate the camera and LED in the same direction; (d) move the camera and LED in a 45-degree direction.
Figure 5. (a) Image processing for 25 tubes; (b) FAM plate (f0) and DDW plate (d0) images.
Figure 6. (a) FAM plate image f0; (b) DDW plate image d0.
Figure 7. Brightness comparison for the f0 FAM plate image and d0 DDW plate image taken with the camera placed at the center of the Fresnel lens: (a) mean brightness of f0 and d0; (b) difference between the mean brightness of f0 and d0.
Figure 8. d0 DDW plate image with the LED centered and the camera positioned sideways.
Figure 9. Brightness comparison plot for the f0 FAM plate image and d0 DDW plate image taken with the LED placed at the center of the Fresnel lens: (a) mean brightness of f0 and d0; (b) difference between the mean brightness of f0 and d0.
Figure 10. Change in brightness of the d0 plate image when rotating in the direction of quadrant 2 with the camera placed in the center of the lens, the LED placed outside, and the camera board and filter wheel board fixed to each other: (a) brightness when rotated by 4 mm; (b) brightness when rotated by 5 mm; (c) brightness when rotated by 6 mm.
Figure 11. Minimum difference in mean brightness as a function of fluorescence radius for f0,f1 FAM plate image and d0,d1 DDW plate image combinations according to rotation distance: (a) 4 mm; (b) 5 mm; (c) 6 mm.
Figure 12. When the camera and LED are rotated 6 mm toward the second quadrant, fluorescence brightness detected with an ROI radius of 50 for f0,f1 FAM plate images and d0,d1 DDW plate images: (a) mean brightness of f0,f1 and d0,d1; (b) differences in mean brightness according to FAM and DDW plate image combinations.
Figure 13. Images of the d0 plate taken when the camera and LED, arranged in the radial direction at an angle of 45 degrees in the second quadrant, were offset from the center of the lens by the following distances: (a) 3 mm; (b) 4 mm; (c) 4.5 mm; (d) 5 mm; (e) 5.5 mm.
Figure 14. Minimum difference in mean fluorescence brightness according to the radius for FAM and DDW plate image combinations with respect to offset distance: (a) 3 mm; (b) 4 mm; (c) 4.5 mm; (d) 5 mm; (e) 5.5 mm.
Figure 15. Fluorescence brightness detected as an ROI with a radius of 50 for f0,f1 FAM plate images and d0,d1 DDW plate images; the camera and LED, arranged in the radial direction at 45 degrees in the second quadrant, were offset by 4.5 mm from the center of the Fresnel lens: (a) mean brightness of f0, f1, d0, and d1; (b) mean brightness differences between FAM and DDW plate image combinations (d0,f0), (d0,f1), (d1,f0), (d1,f1).
32 pages, 11796 KiB  
Article
Piton: Investigating the Controllability of a Wearable Telexistence Robot
by Abdullah Iskandar, Mohammed Al-Sada, Tamon Miyake, Yamen Saraiji, Osama Halabi and Tatsuo Nakajima
Sensors 2022, 22(21), 8574; https://doi.org/10.3390/s22218574 - 7 Nov 2022
Cited by 4 | Viewed by 4565
Abstract
The COVID-19 pandemic impacted collaborative activities, travel, and physical contact, increasing the demand for real-time interactions with remote environments. However, the existing remote communication solutions provide limited interactions and do not convey a high sense of presence within a remote environment. Therefore, we propose a snake-shaped wearable telexistence robot, called Piton, that can be remotely used for a variety of collaborative applications. To the best of our knowledge, Piton is the first snake-shaped wearable telexistence robot. We explain the implementation of Piton, its control architecture, and discuss how Piton can be deployed in a variety of contexts. We implemented three control methods to control Piton: HM—using a head-mounted display (HMD), HH—using an HMD and hand-held tracker, and FM—using an HMD and a foot-mounted tracker. We conducted a user study to investigate the applicability of the proposed control methods for telexistence, focusing on body ownership (Alpha IVBO), mental and physical load (NASA-TLX), motion sickness (VRSQ), and a questionnaire to measure user impressions. The results show that both the HM and HH provide relevantly high levels of body ownership, had high perceived accuracy, and were highly favored, whereas the FM control method yielded the lowest body ownership effect and was least favored. We discuss the results and highlight the advantages and shortcomings of the control methods with respect to various potential application contexts. Based on our design and evaluation of Piton, we extracted a number of insights and future research directions to deepen our investigation and realization of wearable telexistence robots. Full article
(This article belongs to the Special Issue Challenges and Future Trends of Wearable Robotics)
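A minimal sketch of the position-mapping idea behind the HM/HH/FM control methods is shown below: the tracked device position is normalized inside the calibrated control space (the box the user swept during calibration) and re-scaled into the robot's reachable workspace, which an IK solver would then turn into servomotor angles. The box-to-box linear mapping and the axis conventions are assumptions for illustration, not Piton's actual control code.

```python
import numpy as np

def map_tracker_to_robot(pos, ctrl_min, ctrl_max, robot_min, robot_max):
    """Map a tracked position from the calibrated control space to the
    robot's workspace with a per-axis linear (box-to-box) mapping."""
    pos = np.asarray(pos, dtype=float)
    ctrl_min, ctrl_max = np.asarray(ctrl_min, float), np.asarray(ctrl_max, float)
    robot_min, robot_max = np.asarray(robot_min, float), np.asarray(robot_max, float)
    t = (pos - ctrl_min) / (ctrl_max - ctrl_min)    # normalize to [0, 1]
    t = np.clip(t, 0.0, 1.0)                        # stay inside the workspace
    return robot_min + t * (robot_max - robot_min)  # target for the IK solver

# Toy usage: head position (meters) mapped to the robot neck target (cm).
ctrl_min, ctrl_max = [-0.3, 1.4, -0.3], [0.3, 1.9, 0.3]   # calibrated head box
robot_min, robot_max = [-20, 0, -20], [20, 40, 20]        # robot workspace
print(map_tracker_to_robot([0.0, 1.65, 0.15], ctrl_min, ctrl_max, robot_min, robot_max))
# -> [ 0. 20. 10.]
```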
Show Figures

Figure 1. Design concept of Piton. (a) At the local site, the user uses the HMD to interact with Piton at the remote site. (b) At the remote site, a surrogate user wears Piton.
Figure 2. Piton can be used in everyday usage contexts. For example, Piton can be used to (a) interact with the surrogate, (b) interact with the remote environment, sharing various experiences with remote users or checking merchandise, or (c) enjoy the outdoor scenery or social activities.
Figure 3. Piton can be used for industrial tasks. (a) Piton can support remote knowledge transfer and training, such as instructing remote users during assembly or machinery operation tasks. (b) The flexibility of Piton can be used for inspecting objects or environments, such as by extending around or above the surrogate user.
Figure 4. Overall architecture of our system. The arrows indicate the data flow between the various components, coded in three colors: red for auditory communication, purple for stereoscopic video streaming, and yellow for robot control.
Figure 5. Control methods: (a) HMD to control the position and orientation of Piton; (b) HMD to control Piton's rotation and a hand-held tracker to control its position; (c) HMD to control Piton's rotation and a foot-mounted tracker to control its position.
Figure 6. Visualization of the IK system and robot model, with the green cube representing the target objective for the positional movement: (a) the robot moves to the top position; (b) the robot moves to the right position.
Figure 7. The graphical user interface of our system connects with Gstreamer and the WebSocket server (robot control software), allows enabling/disabling the robot's movements or sending data, and starts the calibration process of the control methods.
Figure 8. Piton robot structure. The robot is composed of eight servomotors interlinked using aluminum and PLA brackets. The end-effector comprises a PLA ZED camera holder.
Figure 9. The robot is mounted on a backpack rack: (a) front view; (b) side view.
Figure 10. The robot control software's UI enables controlling the robot through WebSocket.
Figure 11. HM control calibration procedure for positional movement. The user moves their head to the corners of the control space, as shown in (a,b), thereby forming a tracking area that maps the user's head position to the robot's neck position.
Figure 12. HH control calibration procedure. The user moves their hand to the corners of the control space, as shown in (a,b), thereby forming a tracking area that maps the user's hand position to the robot's neck position.
Figure 13. FM control calibration procedure. The user moves their foot to the corners of the control space, as shown in (a,b), thereby forming a tracking area that maps the user's foot position to the robot's neck position. Dorsiflexing the foot upwards calibrates the vertical positional movement, enabling the user to move Piton within the calibrated workspace.
Figure 14. How each data point is captured from the HMD and trackers and then processed to produce servomotor angles through our control system at the local site. The servomotor angles are then sent to the remote site, where they are executed by the robot control software.
Figure 15. Task 1 (mirroring): (a) a monitor with a web camera is used for the mirroring task; (b) the user can observe the robot's movements by looking at the screen (similar to a mirror).
Figure 16. Task 2 (finding numbers and letters): (a) Uno Stacko block game with randomly set numbers and letters; (b) a user moving Piton to find specifically colored numbers during task 2.
Figure 17. Task 3 (text reading): (a) the text is printed and wrapped around a box, and (b) users have to control Piton to look around the box edges to read the text.
Figure 18. Alpha IVBO questionnaire results: acceptance, change, control.
Figure 19. NASA-TLX results of the 17 participants.
Figure 20. VR Sickness Questionnaire: (a) results of oculomotor score and disorientation score; (b) results of VRSQ total score.
Figure 21. Results of the poststudy questionnaires: (a) results of Q1–4; (b) results of ranking questions (Q5, Q6).
13 pages, 5363 KiB  
Article
Research on a Non-Contact Multi-Electrode Voltage Sensor and Signal Processing Algorithm
by Wenbin Zhang, Yonglong Yang, Jingjing Zhao, Rujin Huang, Kang Cheng and Mingxing He
Sensors 2022, 22(21), 8573; https://doi.org/10.3390/s22218573 - 7 Nov 2022
Cited by 4 | Viewed by 4116
Abstract
Traditional contact voltage measurement requires a direct electrical connection to the system, which is not easy to install and maintain. The voltage measurement based on the electric field coupling plate capacitance structure does not need to be in contact with the measured object or the ground, which can avoid the above problems. However, most of the existing flat-plate structure voltage measurement sensors are not only expensive to manufacture, but also bulky, and when the relative position between the wire under test and the sensor changes, it will bring great measurement errors, making it difficult to meet actual needs. Aiming to address the above problems, this paper proposes a multi-electrode array structure non-contact voltage sensor and signal processing algorithm. The sensor is manufactured by the PCB process, which effectively reduces the manufacturing cost and process difficulty. The experimental and simulation results show that, when the relative position of the wire and the sensor is offset by 10 mm in the 45° direction, the relative error of the traditional single-electrode voltage sensor is 17.62%, while the relative error of the multi-electrode voltage sensor designed in this paper is only 0.38%. In addition, the ratio error of the sensor under the condition of power frequency of 50 Hz is less than ±1% and the phase difference is less than 4°. The experimental results show that the sensor has good accuracy and linearity. Full article
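The reported figures of merit (ratio error below ±1% and phase difference below 4° at 50 Hz) follow the usual definitions for voltage transducers. The sketch below shows one way to compute them from a reference and a measured waveform; the single-bin DFT extraction at 50 Hz and all signal parameters are assumptions made for illustration only.

```python
import numpy as np

def phasor(signal, fs, f0=50.0):
    """Single-bin DFT: amplitude and phase of the f0 component."""
    n = len(signal)
    t = np.arange(n) / fs
    c = np.sum(signal * np.exp(-2j * np.pi * f0 * t)) * 2 / n
    return np.abs(c), np.angle(c)

fs = 10_000                                   # assumed sampling rate [Hz]
t = np.arange(0, 0.2, 1 / fs)                 # 10 full cycles at 50 Hz
u_ref = 230.0 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
u_meas = 0.995 * 230.0 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t - np.deg2rad(1.5))

a_ref, p_ref = phasor(u_ref, fs)
a_meas, p_meas = phasor(u_meas, fs)
ratio_error_pct = (a_meas - a_ref) / a_ref * 100      # about -0.5 %
phase_diff_deg = np.rad2deg(p_meas - p_ref)           # about -1.5 degrees
print(ratio_error_pct, phase_diff_deg)
```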
(This article belongs to the Collection Multi-Sensor Information Fusion)
Show Figures

Figure 1: Measurement principle.
Figure 2: Structural diagram of the sensing electrode unit.
Figure 3: Schematic diagram of the three-dimensional structure of the sensor.
Figure 4: The relationship between anti-interference degree and shielding layer area.
Figure 5: Schematic diagram of the measurement system.
Figure 6: Power frequency experimental platform. (1)—high voltage source, (2)—PC terminal data acquisition, (3)—pico 5443D 16-bit PC oscilloscope, (4)—Agilent 16-bit digital multimeter, (5)—shield cover, (6)—wire to be tested, (7)—sensor, and (8)—9 V dry battery.
Figure 7: Schematic diagram of double parallel AC conductors.
Figure 8: Schematic diagram of the sensor arrangement of the multi-sensor system.
Figure 9: The measured voltage using the prototype and the ratio error characteristics.
Figure 10: Momentary values of input (red) and sensor output (blue) voltage.
13 pages, 4226 KiB  
Article
Chronic and Acute Effects on Skin Temperature from a Sport Consisting of Repetitive Impacts from Hitting a Ball with the Hands
by Jose Luis Sánchez-Jiménez, Robert Tejero-Pastor, María del Carmen Calzadillas-Valles, Irene Jimenez-Perez, Rosa Maria Cibrián Ortiz de Anda, Rosario Salvador-Palmer and Jose Ignacio Priego-Quesada
Sensors 2022, 22(21), 8572; https://doi.org/10.3390/s22218572 - 7 Nov 2022
Cited by 1 | Viewed by 1921
Abstract
Valencian handball consists of hitting the ball with the hands, which may contribute to injury development in the hands. This study aimed to analyze skin temperature asymmetries and recovery after a cold stress test (CST) in professional players of Valencian handball before and after a competition. Thirteen professional athletes and a control group of ten physically active participants were measured. For both groups, infrared images were taken at the baseline condition; the participants then underwent a thermal stress test (pressing the palm of the hand on a metal plate for 2 min), after which recovery images were taken. In athletes, images were also taken after their competition. Athletes at the baseline condition presented lower temperatures (p < 0.05) in the dominant hand compared with the non-dominant hand. There were asymmetries in all regions after their match (p < 0.05). After the CST, a higher recovery rate was found following the game. The regions with the most significant differences in variation, asymmetries, and recovery patterns were the index, middle and ring fingers, and the palm of the dominant hand. Taking into account that lower temperatures and the absence of temperature variation may be the consequence of a vascular adaptation, thermography could be used as a method to prevent injuries in athletes from Valencian handball. Full article
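The analysis rests on two simple quantities: the thermal asymmetry between dominant and non-dominant hands for each region of interest, and the recovery rate after the 2-min cold stress test. A minimal sketch of those computations follows; the ROI names, temperatures, time points, and the particular recovery definition are made-up example values, not the study's exact protocol.

```python
import numpy as np

def asymmetry(temp_dominant, temp_non_dominant):
    """Skin temperature asymmetry per ROI (dominant minus non-dominant), in deg C."""
    return {roi: temp_dominant[roi] - temp_non_dominant[roi] for roi in temp_dominant}

def recovery_rate(baseline, post_cst_series, minutes):
    """Fraction of the CST-induced temperature drop recovered at each minute."""
    drop = baseline - post_cst_series[0]
    return {m: (post_cst_series[i] - post_cst_series[0]) / drop
            for i, m in enumerate(minutes)}

# Illustrative values only
dom = {"index": 31.2, "middle": 31.0, "palm": 32.1}
ndom = {"index": 31.9, "middle": 31.6, "palm": 32.4}
print(asymmetry(dom, ndom))                 # negative values => dominant hand cooler

baseline = 32.0
post = [26.5, 27.8, 29.0, 30.1]             # minutes 0..3 after the CST
print(recovery_rate(baseline, post, minutes=[0, 1, 2, 3]))
```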
Show Figures

Figure 1: Different positions employed during the study. (A) Position to adapt the hands to environmental conditions. (B) Hand position during infrared imaging. (C) Cold stress test on a metal/aluminum plate.
Figure 2: Regions of interest determined. (1) Thumb; (2) Index finger; (3) Middle finger; (4) Ring finger; (5) Little finger; (6) Thenar eminence; (7) Palm without thenar eminence; (8) Wrist.
Figure 3: Example of the thermal images at baseline condition in a member of the control group (A) and a member of the athlete group (B). D = Dominant Hand; ND = Non-Dominant Hand.
Figure 4: Mean and standard deviation of skin temperature variation in the dominant and non-dominant hand by ROIs in the athletes group. Differences were analyzed between the dominant and non-dominant hand (* p < 0.05). S = Small Effect Size.
Figure 5: Mean and standard deviation of the CST variations of the asymmetries for each minute after CST in all the groups. L = Large Effect Size. Difference between Athletes Group—Post Match and Athletes Group—Pre Match († p < 0.05); Difference between Control Group and Athletes Group—Post Match (* p < 0.05; ** p < 0.01).
Figure 6: Mean and standard deviation of the CST variation in the dominant hand for each minute after CST in all the groups. L = Large Effect Size. Difference between Athletes Group—Post Match and Athletes Group—Pre Match († p < 0.05; †† p < 0.01); Difference between Control Group and Athletes Group—Post Match (* p < 0.05; ** p < 0.01); Difference between Control Group and Athletes Group—Pre Match (# p < 0.05; ## p < 0.01).
Figure 7: Mean and standard deviation of the CST variation in the non-dominant hand for each minute after CST in all the groups. L = Large Effect Size. Difference between Athletes Group—Post Match and Athletes Group—Pre Match († p < 0.05; †† p < 0.01); Difference between Control Group and Athletes Group—Post Match (* p < 0.05; ** p < 0.01; *** p < 0.001).
14 pages, 3226 KiB  
Article
Monopole Antenna with Enhanced Bandwidth and Stable Radiation Patterns Using Metasurface and Cross-Ground Structure
by Patrick Danuor, Kyei Anim and Young-Bae Jung
Sensors 2022, 22(21), 8571; https://doi.org/10.3390/s22218571 - 7 Nov 2022
Cited by 3 | Viewed by 3828
Abstract
In this paper, a printed monopole antenna with stable omnidirectional radiation patterns is presented for applications in ocean buoys and the marine Internet of Things (IoT). The antenna is composed of a rectangular patch, a cross-ground structure, and two frequency-selective surface (FSS) unit cells. The cross-ground structure is incorporated into the antenna design to maintain consistent monopole-like radiation patterns over the antenna’s operating band, and the FSS unit cells are placed at the backside of the antenna to improve the antenna gain in the L-band. In addition, the FSS unit cells exhibit resonance characteristics that, when incorporated with the cross-ground structure, result in a broader impedance bandwidth compared to the conventional monopole antenna. To validate the structure, a prototype is fabricated and measured. Good agreement between the simulated and measured results shows that the proposed antenna exhibits an impedance bandwidth of 83.2% from 1.65 to 4 GHz, compared to the conventional printed monopole antenna. The proposed antenna realizes a peak gain of 4.57 dBi and a total efficiency of 97% at 1.8 GHz. Full article
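The quoted 83.2% impedance bandwidth is the standard fractional bandwidth of the measured band (1.65–4 GHz) referred to the band centre. The short check below reproduces that number; the function name is ours.

```python
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """Fractional (percentage) bandwidth relative to the band centre."""
    return 2 * (f_high_ghz - f_low_ghz) / (f_high_ghz + f_low_ghz) * 100

print(round(fractional_bandwidth(1.65, 4.0), 1))  # 83.2 (%)
```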
(This article belongs to the Special Issue Antenna Design and Sensors for Internet of Things)
Show Figures

Figure 1: Illustration of marine IoT for fishing gear automatic identification [10].
Figure 2: Geometry of the conventional monopole antenna: (a) front view and (b) back view.
Figure 3: Geometry of the conventional antenna incorporated with cross-ground structure: (a) front view, (b) back view, and (c) side view.
Figure 4: Simulated results of the azimuth radiation patterns for (a) 1.7, (b) 1.8, (c) 1.86, and (d) 2 GHz.
Figure 5: Simulated reflection coefficient of the conventional monopole only and with cross-ground structure.
Figure 6: Geometry of proposed antenna with both cross-ground structure and FSS unit cells: (a) front view, (b) back view showing FSS unit cells, and (c) side view.
Figure 7: Geometry of proposed FSS unit cell: (a) 2D view and (b) 3D view of simulation setup.
Figure 8: Simulation results of (a) the reflection and transmission coefficient (S11 and S21, respectively) of the proposed FSS unit cell and (b) gain results comparison after incorporation with FSS unit cells.
Figure 9: Illustration of (a) FSS unit cell layer with inductive and capacitive components and (b) equivalent circuit model of proposed antenna structure with both cross-ground and FSS unit cells.
Figure 10: Simulated results of the input reflection coefficient amplitude.
Figure 11: Simulated results of the azimuth and elevation radiation patterns for (a) 1.7, (b) 1.8, (c) 1.86, and (d) 2 GHz.
Figure 12: Photograph of the fabricated monopole antenna: (a) front view, (b) back view, and (c) far-field measurement set-up.
Figure 13: Simulated and measured results of the azimuth and elevation radiation patterns for (a) 1.7 GHz, (b) 1.8 GHz, (c) 1.86 GHz, and (d) 2 GHz.
Figure 14: Simulated and measured results of the input reflection coefficient amplitude.
Figure 15: Results of the simulated and measured gain and antenna efficiency.
19 pages, 3511 KiB  
Article
Binary PSO with Classification Trees Algorithm for Enhancing Power Efficiency in 5G Networks
by Mayada Osama, Salwa El Ramly and Bassant Abdelhamid
Sensors 2022, 22(21), 8570; https://doi.org/10.3390/s22218570 - 7 Nov 2022
Cited by 2 | Viewed by 2064
Abstract
The dense deployment of small cells (SCs) in 5G heterogeneous networks (HetNets) fulfills the demand for vast connectivity and larger data rates. Unfortunately, the power efficiency (PE) of the network is reduced because of the elevated power consumption of the densely deployed SCs and the interference that arises between them. An approach to ameliorate the PE is proposed by switching off the redundant SCs using machine learning (ML) techniques while sustaining the quality of service (QoS) for each user. In this paper, a linearly increasing inertia weight–binary particle swarm optimization (IW-BPSO) algorithm for SC on/off switching is proposed to minimize the power consumption of the network. Moreover, a soft frequency reuse (SFR) algorithm is proposed using classification trees (CTs) to alleviate the interference and elevate the system throughput. The results show that the proposed algorithms outperform the other conventional algorithms, as they reduce the power consumption of the network and the interference among the SCs, ameliorating the total throughput and the PE of the system. Full article
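Binary PSO with a linearly varying inertia weight follows a standard construction: velocities are updated with an inertia term that changes linearly over the iterations, and bits (here, SC on/off states) are set through a sigmoid transfer function. The sketch below shows only that core update; the inertia-weight range, swarm size, and the placeholder fitness function are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Placeholder objective: in the paper this would be network power efficiency
    # subject to per-user QoS; here we simply reward switching SCs off.
    return -x.sum()

n_particles, n_cells, n_iter = 20, 50, 100
w_start, w_end = 0.4, 0.9                      # linearly increasing inertia weight
c1 = c2 = 2.0

x = rng.integers(0, 2, (n_particles, n_cells)).astype(float)   # SC on/off bits
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for it in range(n_iter):
    w = w_start + (w_end - w_start) * it / (n_iter - 1)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1 / (1 + np.exp(-v))).astype(float)  # sigmoid transfer
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print(gbest.sum(), "active SCs in the best configuration found")
```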
(This article belongs to the Special Issue Energy-Efficient Communication Networks and Systems)
Show Figures

Figure 1: General representation of HetNet scenario with densely deployed small cells.
Figure 2: SFR example for hexagonal shaped cells with N_sub = 3.
Figure 3: (a) A real example demonstrating a SC (the purple SC) with the center region (the grey region); (b) the seven used sub-bands.
Figure 4: Classification tree (CT) example for three sub-bands.
Figure 5: Number of active SCs for various numbers of UEs for different values of IW.
Figure 6: Number of active SCs for various numbers of UEs in case of linearly increasing IW.
Figure 7: Total system throughput for various numbers of UEs.
Figure 8: Total system power consumption for various numbers of UEs.
Figure 9: Power efficiency for various numbers of UEs.
Figure 10: Outage probability for various SINR thresholds in the case of 900 UEs.
19 pages, 2299 KiB  
Article
Design and Implementation of a Cloud PACS Architecture
by Jacek Kawa, Bartłomiej Pyciński, Michał Smoliński, Paweł Bożek, Marek Kwasecki, Bartosz Pietrzyk and Dariusz Szymański
Sensors 2022, 22(21), 8569; https://doi.org/10.3390/s22218569 - 7 Nov 2022
Cited by 5 | Viewed by 7339
Abstract
The limitations of the classic PACS (picture archiving and communication system), such as the backward-compatible DICOM network architecture and poor security and maintenance, are well-known. They are challenged by various existing solutions employing cloud-related patterns and services. However, a full-scale cloud-native PACS has not yet been demonstrated. The paper introduces a vendor-neutral cloud PACS architecture. It is divided into two main components: a cloud platform and an access device. The cloud platform is responsible for nearline (long-term) image archive, data flow, and backend management. It operates in multi-tenant mode. The access device is responsible for the local DICOM (Digital Imaging and Communications in Medicine) interface and serves as a gateway to cloud services. The cloud PACS was first implemented in an Amazon Web Services environment. It employs a number of general-purpose services designed or adapted for a cloud environment, including Kafka, OpenSearch, and Memcached. Custom services, such as a central PACS node, queue manager, or flow worker, also developed as cloud microservices, bring DICOM support, external integration, and a management layer. The PACS was verified using image traffic from, among others, computed tomography (CT), magnetic resonance (MR), and computed radiography (CR) modalities. During the test, the system was reliably storing and accessing image data. In following tests, scaling behavior differences between the monolithic Dcm4chee server and the proposed solution are shown. The growing number of parallel connections did not influence the monolithic server’s overall throughput, whereas the performance of cloud PACS noticeably increased. In the final test, different retrieval patterns were evaluated to assess performance under different scenarios. The current production environment stores over 450 TB of image data and handles over 4000 DICOM nodes. Full article
(This article belongs to the Topic Advanced Systems Engineering: Theory and Applications)
Show Figures

Figure 1: PACS diagram (schematic).
Figure 2: Internal elements of the central PACS node.
Figure 3: Connections between the access device and the cloud platform.
Figure 4: Internal queue state and priority visualization (partial screenshot). Green status is assigned to studies fully available in Central PACS, orange to studies during transfer, and gray marks cases waiting for transfer. PP denotes the highest priority.
Figure 5: Access device activation.
Figure 6: Traffic visualization during test days. Each series is marked with a separate bar. Different colors indicate acquisition devices.
Figure 7: Cumulative traffic visualization during test days. Each imaging device’s model activity is composed of segments corresponding to a single series. The colors match Figure 6.
Figure 8: Searching all series created at a specific time. Total elapsed time was measured (since the issue of the request to display results).
Figure 9: Data retrieval. The left part depicts the test data set; most instances occupied several hundred kB. The right part shows the total retrieval time (from issuing the request to finishing the write of the instance in the VM storage). Results are shown on a logarithmic scale for brevity.
Figure 10: Image retrieval test set. On the horizontal axis, a number of parallel workers are depicted (each worker running in a separate thread). On the vertical axis, the time of every single worker in a batch, as well as the median time for each run, is presented. Green and blue boxes show times when retrieving from the Dcm4chee server or the cluster of central PACS nodes, respectively. In each run, the same amount (80) of studies were retrieved (ca. 16 GB). Most of the Dcm4chee-related operations finished in a similar time (hence the boxes are flattened), while the cluster-related operations were time-varying, with the average decreasing with the growing number of workers (throughput increases).
23 pages, 1630 KiB  
Article
Detection of Physical Activity Using Machine Learning Methods Based on Continuous Blood Glucose Monitoring and Heart Rate Signals
by Lehel Dénes-Fazakas, Máté Siket, László Szilágyi, Levente Kovács and György Eigner
Sensors 2022, 22(21), 8568; https://doi.org/10.3390/s22218568 - 7 Nov 2022
Cited by 11 | Viewed by 3141
Abstract
Non-coordinated physical activity may lead to hypoglycemia, which is a dangerous condition for diabetic people. Decision support systems related to type 1 diabetes mellitus (T1DM) still lack the capability of automated therapy modification by recognizing and categorizing the physical activity. Further, this desired adaptive therapy should be achieved without increasing the administrative load, which is already high for the diabetic community. These requirements can be satisfied by using artificial intelligence-based solutions, signals collected by wearable devices, and relying on the already available data sources, such as continuous glucose monitoring systems. In this work, we focus on the detection of physical activity by using a continuous glucose monitoring system and a wearable sensor providing the heart rate—the latter is accessible even in the cheapest wearables. Our results show that the detection of physical activity is possible based on these data sources, even if only low-complexity artificial intelligence models are deployed. In general, our models achieved approximately 90% accuracy in the detection of physical activity. Full article
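The detection task amounts to classifying windows of CGM and heart-rate samples with low-complexity models. The sketch below shows the general shape of such a pipeline with scikit-learn; the window length, the hand-crafted features, the synthetic stand-in data, and the logistic-regression classifier are illustrative choices, not the authors' exact feature set or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def window_features(cgm, hr, width=12):
    """Per-window features: glucose level/trend and heart-rate statistics."""
    feats = []
    for i in range(len(cgm) - width):
        g, h = cgm[i:i + width], hr[i:i + width]
        feats.append([g.mean(), g[-1] - g[0], np.diff(g).mean(), h.mean(), h.max()])
    return np.array(feats)

# Synthetic stand-in data: 5-min CGM samples with a matching heart-rate signal.
rng = np.random.default_rng(1)
n = 2000
exercise = (rng.random(n) < 0.1).astype(int)
hr = 70 + 40 * exercise + rng.normal(0, 5, n)
cgm = 140 - 5 * np.convolve(exercise, np.ones(6), mode="same") + rng.normal(0, 8, n)

X = window_features(cgm, hr)
y = exercise[:len(X)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```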
(This article belongs to the Special Issue Recent Advances in Digital Healthcare and Applications)
Show Figures

Figure 1: The ways of data extraction from the Ohio T1DM dataset for a given patient. Black arrows indicate 24 h long blocks according to the time stamps, midnight to midnight. Legend: blue blocks—CGM data available, exercise reported, no HR data available; green blocks—CGM data available, exercise reported, HR data available; transparent orange area—self-reported exercise, no CGM data available; transparent red area—probably an exercise event happened, but it was not reported.
Figure 2: Abstraction of feature extraction during operation. The v, vp, vpp, and ap features originate from the d-kind features; thus, these are not listed here. In the case of the dp_i, only the first four values are represented as a demonstration; however, all sampled values are considered from the window during operation.
Figure 3: ROC curve of tested ML models for Ohio T1DM dataset using glucose features only.
Figure 4: ROC curves of various ML models obtained on D1namo dataset using glucose features only.
Figure 5: ROC curves of models on Ohio T1DM dataset using both blood glucose and heart rate features.
Figure 6: ROC curves of models obtained on D1namo dataset with blood glucose and heart rate features.
Figure 7: ROC curve of models where Ohio T1DM is the training data and D1namo is the test set, with glucose and heart rate features.
Figure 8: AUC of models in all use-cases—test results.
19 pages, 6345 KiB  
Article
Rice Crop Counting Using Aerial Imagery and GIS for the Assessment of Soil Health to Increase Crop Yield
by Syeda Iqra Hassan, Muhammad Mansoor Alam, Muhammad Yousuf Irfan Zia, Muhammad Rashid, Usman Illahi and Mazliham Mohd Su’ud
Sensors 2022, 22(21), 8567; https://doi.org/10.3390/s22218567 - 7 Nov 2022
Cited by 11 | Viewed by 3985
Abstract
Rice is one of the vital foods consumed in most countries throughout the world. To estimate the yield, crop counting is used to indicate improper growth, identify loam land, and control weeds. It is becoming necessary to grow crops healthily, precisely, and efficiently as the demand for food supplies increases. Traditional counting methods have numerous disadvantages, such as long delay times and high sensitivity, and they are easily disturbed by noise. In this research, rice plants are detected and counted using an unmanned aerial vehicle (UAV), aerial images, and a geographic information system (GIS). The technique is implemented over forty acres of rice crop in Tando Adam, Sindh, Pakistan. To validate the performance of the proposed system, the obtained results are compared with standard plant count techniques and were approved by an agronomist after testing the soil and monitoring the rice crop count in each acre of the rice crop. From the results, it is found that the proposed system is precise, detects rice crops accurately, differentiates them from other objects, and estimates soil health based on the plant counting data; however, in the case of clusters, the counting is performed in semi-automated mode. Full article
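As a simplified illustration of the counting step, the snippet below segments vegetation in an RGB orthophoto tile with an excess-green index and counts connected components as plant candidates. The paper's actual pipeline combines deep learning, GIS point demarcation, and semi-automated handling of clusters; the index, threshold, minimum blob size, and file name used here are assumptions.

```python
import cv2
import numpy as np

def count_plants(image_bgr, min_area_px=30):
    """Rough plant count: vegetation mask via excess-green, then blob counting."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(image_bgr)]
    exg = 2 * g - r - b                               # excess-green vegetation index
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; ignore blobs smaller than min_area_px.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area_px)

# img = cv2.imread("orthophoto_tile.png")             # hypothetical UAV tile
# print(count_plants(img))
```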
(This article belongs to the Section Smart Agriculture)
Show Figures

Figure 1: Preparation of dataset.
Figure 2: Seedling plantation in region 1 and region 2.
Figure 3: Training samples for extracting features.
Figure 4: Block diagram of the research work.
Figure 5: Mechanical and manual plantation in region 1.
Figure 6: Point demarcation of region 1.
Figure 7: Image shows the mechanical plantation of region 2.
Figure 8: Point demarcation of region 2.
Figure 9: Deep learning architecture.
Figure 10: Workflow chart.
Figure 11: Identification of gaps in the mechanical plantation at scale 1:30 in region 1.
Figure 12: Areas of interest are labeled as (1, 2, and 3) in region 2.
Figure 13: Sowing plantation pattern at areas (1, 2, and 3) in region 2.
Figure 14: Plantation is uniform in area 1.
Figure 15: Plantation is uniform in area 2.
Figure 16: Plantation has extreme gapping in area 3.
18 pages, 2498 KiB  
Article
A Hybrid Spider Monkey and Hierarchical Particle Swarm Optimization Approach for Intrusion Detection on Internet of Things
by Sandhya Ethala and Annapurani Kumarappan
Sensors 2022, 22(21), 8566; https://doi.org/10.3390/s22218566 - 7 Nov 2022
Cited by 17 | Viewed by 2379
Abstract
The Internet of Things (IoT) network integrates physical objects such as sensors, networks, and electronics with software to collect and exchange data. Physical objects with a unique IP address communicate with external entities over the internet to exchange data in the network. Due to a lack of security measures, these network entities are vulnerable to severe attacks. To address this, an efficient security mechanism for dealing with the threat and detecting attacks is necessary. The proposed hybrid optimization approach combines Spider Monkey Optimization (SMO) and Hierarchical Particle Swarm Optimization (HPSO) to handle the huge amount of intrusion data classification problems and improve detection accuracy by minimizing false alarm rates. After finding the best optimum values, the Random Forest Classifier (RFC) was used to classify attacks from the NSL-KDD and UNSW-NB 15 datasets. The SVM model obtained accuracy of 91.82%, DT of 98.99%, and RFC of 99.13%, and the proposed model obtained 99.175% for the NSL-KDD dataset. Similarly, SVM obtained accuracy of 85.88%, DT of 88.87%, RFC of 91.65%, and the proposed model obtained 99.18% for the UNSW NB-15 dataset. The proposed model achieved accuracy of 99.175% for the NSL-KDD dataset which is higher than the state-of-the-art techniques such as DNN of 97.72% and Ensemble Learning at 85.2%. Full article
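After the hybrid SMO-HPSO optimizer selects a feature subset, classification is performed with a Random Forest. The snippet below sketches that final stage given a binary feature mask; the dataset loading, the mask, and the hyperparameters are placeholders rather than the paper's configuration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_subset(X, y, feature_mask, n_trees=100):
    """Train and evaluate a Random Forest on the features chosen by a binary mask
    (e.g., the g-best position returned by the optimizer)."""
    cols = X.columns[np.asarray(feature_mask, dtype=bool)]
    X_tr, X_te, y_tr, y_te = train_test_split(X[cols], y, test_size=0.3,
                                              random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Hypothetical usage with a numerically preprocessed NSL-KDD dataframe:
# df = pd.read_csv("nsl_kdd_preprocessed.csv")
# X, y = df.drop(columns=["label"]), df["label"]
# mask = np.ones(X.shape[1], dtype=int)      # stand-in for the optimizer's output
# print(evaluate_subset(X, y, mask))
```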
(This article belongs to the Special Issue Advanced Management of Fog/Edge Networks and IoT Sensors Devices)
Show Figures

Figure 1: Pictorial representation of attackers towards IoT devices.
Figure 2: The block diagram of the proposed SMO-HPSO feature selection method.
Figure 3: Steps involved in the hybrid feature selection algorithm.
Figure 4: Pictorial representation of the proposed hybrid SMO-HPSO for selecting the g-best feature.
Figure 5: Evaluation of performances for the SVM, DT, RFC, and the proposed method for the NSL-KDD dataset.
Figure 6: Performance evaluation for the proposed SMO-HPSO for the NSL-KDD dataset with multiple attack classes.
Figure 7: Performance evaluation for the proposed SMO-HPSO for the UNSW NB-15 dataset.
20 pages, 4045 KiB  
Article
A Generic Pixel Pitch Calibration Method for Fundus Camera via Automated ROI Extraction
by Tengfei Long, Yi Xu, Haidong Zou, Lina Lu, Tianyi Yuan, Zhou Dong, Jiqun Dong, Xin Ke, Saiguang Ling and Yingyan Ma
Sensors 2022, 22(21), 8565; https://doi.org/10.3390/s22218565 - 7 Nov 2022
Cited by 12 | Viewed by 2730
Abstract
Pixel pitch calibration is an essential step in making the fundus structures in a fundus image quantitatively measurable, which is important for the diagnosis and treatment of many diseases, e.g., diabetes, arteriosclerosis, hereditary optic atrophy, etc. The conventional calibration approaches require the specific parameters of the fundus camera or several specially shot images of a chess board, but these are generally not accessible, and the calibration results cannot be generalized to other cameras. Based on automated ROI (region of interest) and optic disc detection, the diameter ratio of ROI and optic disc (ROI–disc ratio) is quantitatively analyzed for a large number of fundus images. With prior knowledge of the average diameter of the optic disc in the fundus, the pixel pitch can be statistically estimated from a large number of fundus images captured by a specific camera without the availability of chess board images or detailed specifics of the fundus camera. Furthermore, for fundus cameras with a fixed FOV (field of view), the pixel pitch of a fundus image of 45° FOV can be directly estimated according to the automatically measured diameter of the ROI in pixels. The average ROI–disc ratio is approximately constant, i.e., 6.404 ± 0.619, according to 40,600 fundus images of 45° FOV captured by different cameras. In consequence, the pixel pitches of Canon CR2, Topcon NW400, Zeiss Visucam 200, and Newvision RetiCam 3100 cameras are estimated to be 6.825 ± 0.666 μm, 6.625 ± 0.647 μm, 5.793 ± 0.565 μm, and 5.884 ± 0.574 μm, respectively. Compared with the manually measured pixel pitches based on the method of ISO 10940:2009, i.e., 6.897 μm, 6.807 μm, 5.693 μm, and 6.050 μm, respectively, the bias of the proposed method is less than 5%. Since our method doesn’t require chess board images or detailed specifics, the fundus structures in the fundus image can be measured accurately, according to the pixel pitch obtained by this method, without knowing the type and parameters of the camera. Full article
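Once the ROI diameter is measured in pixels, the pixel pitch follows from the reported average ROI–disc ratio (6.404 for 45° FOV images) and a prior value for the physical optic-disc diameter. The sketch below shows that calculation; the assumed disc diameter of 1.85 mm is our illustrative prior, not necessarily the value used in the paper.

```python
def pixel_pitch_um(roi_diameter_px, roi_disc_ratio=6.404, disc_diameter_mm=1.85):
    """Estimate the pixel pitch (micrometres) of a 45-degree-FOV fundus image."""
    disc_diameter_px = roi_diameter_px / roi_disc_ratio
    return disc_diameter_mm * 1000.0 / disc_diameter_px

# Example: a fundus camera whose ROI spans about 1735 pixels -> roughly 6.8 um/px.
print(round(pixel_pitch_um(1735), 2))
```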
(This article belongs to the Collection Biomedical Imaging & Instrumentation)
Show Figures

Figure 1: Geometric bases of the eye and the fundus camera model. (a) shows the reduced emmetropic eye model, (b) shows the simplified optical process of the fundus camera, and (c) shows the geometric model of the FOV of a fundus camera.
Figure 2: The image of a checkerboard captured by a fundus camera.
Figure 3: The workflow of estimating the pixel pitch of a fundus image.
Figure 4: Automated ROI measurement and optic disc measurement. (a–d) are the steps of detecting the ROI, (e) shows the optic disc position target detection box annotation, (f) is the detected region of the optic disc, (g–i) are the steps of disc edge detection in the polar coordinate system, and (j,k) show the circumscribed circle fitting of the optic disc.
Figure 5: ROI of fundus image. (a–c) are examples of ROI regions that are smaller, roughly the same size as, and larger than the fundus image, respectively.
Figure 6: Diagram of pixel pitch measurement according to ISO 10940:2009. (a,b) are the schematic diagrams when shooting (place the ruler 1000 mm away from the pupil and ensure that the ruler scale in the center of the image can be seen clearly during shooting), (c) is the image acquired by actual measurement, and (d) is a partial enlarged view of (c).
Figure 7: Statistics results of diameter of ROI and ROI–disc ratio for Canon (a,b), Topcon (c,d), Zeiss (e,f), Newvision (g,h), and all the cameras (i,j).
Figure 8: Distribution of axial ametropia of 4682 individuals (9364 eyes).
Figure 9: Fundus images of a myopic patient in 2012 and 2016. (a,c) are the fundus images taken by Canon cameras of different models in 2012 and 2016, respectively. (b,d) are the labeled versions of (a,c), respectively, and show the regions of the atrophic arc highlighted in yellow. The areas of the atrophic arc regions are 17,586 px and 16,521 px in (b,d), respectively.
21 pages, 5892 KiB  
Review
Smart Home Privacy Protection Methods against a Passive Wireless Snooping Side-Channel Attack
by Mohammad Ali Nassiri Abrishamchi, Anazida Zainal, Fuad A. Ghaleb, Sultan Noman Qasem and Abdullah M. Albarrak
Sensors 2022, 22(21), 8564; https://doi.org/10.3390/s22218564 - 7 Nov 2022
Cited by 10 | Viewed by 3935
Abstract
Smart home technologies have attracted more users in recent years due to significant advancements in their underlying enabler components, such as sensors, actuators, and processors, which are spreading in various domains and have become more affordable. However, these IoT-based solutions are prone to data leakage; this privacy issue has motivated researchers to seek a secure solution to overcome this challenge. In this regard, wireless signal eavesdropping is one of the most severe threats that enables attackers to obtain residents’ sensitive information. Even if the system encrypts all communications, some cyber attacks can still steal information by interpreting the contextual data related to the transmitted signals. For example, a “fingerprint and timing-based snooping (FATS)” attack is a side-channel attack (SCA) developed to infer in-home activities passively from a remote location near the targeted house. An SCA is a sort of cyber attack that extracts valuable information from smart systems without accessing the content of data packets. This paper reviews the SCAs associated with cyber–physical systems, focusing on the proposed solutions to protect the privacy of smart homes against FATS attacks in detail. Moreover, this work clarifies shortcomings and future opportunities by analyzing the existing gaps in the reviewed methods. Full article
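Several of the reviewed defences (e.g., the ConstRate scheme and fake packet injection) hide event timing by transmitting at a fixed cadence and sending dummy packets whenever no real sensor event is queued. A minimal sketch of that idea follows; the packet size, interval, and function names are illustrative, and a real deployment would encrypt and pad packets so that dummies are indistinguishable from real traffic.

```python
import os
import queue
import time

def const_rate_sender(event_queue, send, interval_s=1.0, stop=lambda: False):
    """Transmit one packet every interval_s seconds: a pending (encrypted) sensor
    event if one exists, otherwise a dummy packet of the same size."""
    while not stop():
        tick = time.monotonic()
        try:
            payload = event_queue.get_nowait()       # real sensor event
        except queue.Empty:
            payload = os.urandom(32)                 # dummy payload, same length
        send(payload)
        time.sleep(max(0.0, interval_s - (time.monotonic() - tick)))

# Hypothetical usage (run in its own thread):
# q = queue.Queue()
# threading.Thread(target=const_rate_sender, args=(q, radio.send), daemon=True).start()
```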
(This article belongs to the Collection IoT and Smart Homes)
Show Figures

Figure 1: Smart home applications.
Figure 2: Types of home data and their related security measures.
Figure 3: Side-channel attacks on a smart device.
Figure 4: Taxonomy of the side-channel attacks on cyber–physical systems.
Figure 5: Overview of the FATS attack processes.
Figure 6: Flowchart of FATS attacks.
Figure 7: Temporal manipulation in forwarding actual data packets.
Figure 8: Fake data packet injections.
Figure 9: A hybrid of temporal manipulation and fake packet injection techniques.
Figure 10: The ConstRate scheme.
Figure 11: The ProbRate scheme.
Figure 12: The FitProbRate scheme.
Figure 13: Privacy-preserving method based on events’ behavioral semantics.
Figure 14: Actual activity mimicking method.
Figure 15: Triad of evaluation metrics for protection methods.
20 pages, 4348 KiB  
Article
Robust Estimation and Optimized Transmission of 3D Feature Points for Computer Vision on Mobile Communication Network
by Jin-Kyum Kim, Byung-Seo Park, Woosuk Kim, Jung-Tak Park, Sol Lee and Young-Ho Seo
Sensors 2022, 22(21), 8563; https://doi.org/10.3390/s22218563 - 7 Nov 2022
Cited by 1 | Viewed by 2495
Abstract
Due to the amount of transmitted data and the security of personal or private information in wireless communication, there are cases where the information for a multimedia service should be transferred directly from the user’s device to the cloud server without the captured original images. This paper proposes a new method to generate three-dimensional (3D) keypoints based on a user’s mobile device with a commercial RGB camera in a distributed computing environment such as a cloud server. The images are captured with a moving camera, and 2D keypoints are extracted from them. After feature extraction between continuous frames, disparities are calculated between frames using the relationships between matched keypoints. The physical distance of the baseline is estimated by using the motion information of the camera, and the actual distance is calculated by using the calculated disparity and the estimated baseline. Finally, 3D keypoints are generated by combining the extracted 2D keypoints with the calculated distance. A keypoint-based scene change detection method is proposed as well. Due to the similarity between continuous frames captured from a camera, not all 3D keypoints are transferred and stored, only the new ones. Compared with the ground truth of the TUM dataset, the average error of the estimated 3D keypoints was measured as 5.98 mm, which shows that the proposed method has relatively good performance considering that it uses a commercial RGB camera on a mobile device. Furthermore, the number of transferred 3D keypoints was reduced to about 73.6%. Full article
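The depth of each matched keypoint follows the usual stereo relation Z = f·B/d, with the baseline B estimated from the phone's motion and d the disparity between matched keypoints. A small sketch of that back-projection is given below; the camera intrinsics are placeholder values, and the 55 mm baseline is only an example figure.

```python
import numpy as np

def backproject(u, v, disparity_px, fx, fy, cx, cy, baseline_mm):
    """Turn a 2D keypoint plus disparity into a 3D point (mm, camera frame)."""
    z = fx * baseline_mm / disparity_px          # depth from disparity: Z = f*B/d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Placeholder intrinsics for a 640x480 image and a 55 mm estimated baseline.
print(backproject(u=400, v=260, disparity_px=12.5,
                  fx=525.0, fy=525.0, cx=320.0, cy=240.0, baseline_mm=55.0))
```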
Show Figures

Figure 1: Stereo matching: (a) stereo camera configuration and depth definition and (b) disparity calculation.
Figure 2: The steps of the SIFT framework.
Figure 3: 3D feature extraction algorithm.
Figure 4: Generation process of 3D keypoints.
Figure 5: Process of 3D keypoint generation.
Figure 6: Parallax (or disparity) analysis between stereo images: (a) stereo images, (b) parallax analysis result, (c) keypoint generation.
Figure 7: Keypoint matching and overlapped keypoint detection between continuous frames.
Figure 8: Keypoint update algorithm.
Figure 9: 3D keypoint database update.
Figure 10: Distance estimation result of the baseline: (a) accelerating value from the gyrosensor, (b) speed, and (c) estimated distance.
Figure 11: Keypoint-based stereo matching result: (a) RGB image, (b) keypoints of the previous frame (the left image), (c) keypoints of the current frame (right image), (d) 3D keypoints result plotted in a 3D space with RGB information.
Figure 12: Keypoint update algorithm: keypoint information of the first frame (left top), keypoint information after 55 mm movement (right top), matched keypoint information (left bottom), and newly generated keypoint information (right bottom).
Figure 13: Scene change detection result: (a) 74.57% overlapped keypoints, (b) 5.76% overlapped keypoints, indicating a scene change.
Figure 14: TUM dataset: (a) depth map, (b) RGB, (c) point cloud and 3D keypoints.
Figure 15: Processing time reduction through scene change detection and duplicate keypoint removal.
Figure 16: Estimation result of corresponding points (a) without search range and (b) with search range of 200 pixels.
Figure 17: 3D keypoint results according to baseline length (top view) of (a) 10 mm, (b) 55 mm, and (c) 150 mm.
12 pages, 2762 KiB  
Article
A Multi-AUV Maritime Target Search Method for Moving and Invisible Objects Based on Multi-Agent Deep Reinforcement Learning
by Guangcheng Wang, Fenglin Wei, Yu Jiang, Minghao Zhao, Kai Wang and Hong Qi
Sensors 2022, 22(21), 8562; https://doi.org/10.3390/s22218562 - 7 Nov 2022
Cited by 21 | Viewed by 3107
Abstract
Target search for moving and invisible objects has always been considered a challenge, as the floating objects drift with the flows. This study focuses on target search by multiple autonomous underwater vehicles (AUV) and investigates a multi-agent target search method (MATSMI) for moving and invisible objects. In the MATSMI algorithm, based on the multi-agent deep deterministic policy gradient (MADDPG) method, we add spatial and temporal information to the reinforcement learning state and set up specialized rewards in conjunction with a maritime target search scenario. Additionally, we construct a simulation environment to simulate a multi-AUV search for the floating object. The simulation results show that the MATSMI method has about 20% higher search success rate and about 70 steps shorter search time than the traditional search method. In addition, the MATSMI method converges faster than the MADDPG method. This paper provides a novel and effective method for solving the maritime target search problem. Full article
(This article belongs to the Special Issue Sensors, Modeling and Control for Intelligent Marine Robots)
Show Figures

Figure 1: The execution process of MATSMI.
Figure 2: The training process of MATSMI.
Figure 3: (a) ASSR curve, (b) AST curve, and (c) Reward curve for the MATSMI algorithm.
Figure 4: Comparison of ASSR and AST curves of the MATSMI and MADDPG algorithms.
Figure 5: Search paths for the MATSMI algorithm.
Figure 6: Search paths for the MADDPG algorithm.
Figure 7: Search paths for the spiral search algorithm.
20 pages, 3127 KiB  
Article
YPD-SLAM: A Real-Time VSLAM System for Handling Dynamic Indoor Environments
by Yi Wang, Haoyu Bu, Xiaolong Zhang and Jia Cheng
Sensors 2022, 22(21), 8561; https://doi.org/10.3390/s22218561 - 7 Nov 2022
Cited by 8 | Viewed by 2986
Abstract
Aiming at the problem that simultaneous localization and mapping (SLAM) is greatly disturbed by the many dynamic elements present in real environments, this paper proposes a real-time visual SLAM (VSLAM) algorithm to deal with dynamic indoor environments. Firstly, a lightweight YoloFastestV2 deep learning model combined with the NCNN and Mobile Neural Network (MNN) inference frameworks is used to obtain preliminary semantic information from the images. Dynamic feature points are removed according to the epipolar constraint and the dynamic properties of objects between consecutive frames. Since the reduced number of feature points after rejection affects the pose estimation, this paper innovatively incorporates Cylinder and Plane Extraction (CAPE) planar detection. We generate planes from depth maps and then introduce planar and in-plane point constraints into the nonlinear optimization of SLAM. Finally, the algorithm is tested on the publicly available TUM (RGB-D) dataset, and the average improvement in localization accuracy over ORB-SLAM2, DS-SLAM, and RDMO-SLAM is about 91.95%, 27.21%, and 30.30% under dynamic sequences, respectively. The single-frame tracking time of the whole system is only 42.68 ms (a 44.1% improvement), and the processing speed is 14.6–34.33% higher than that of DS-SLAM, RDMO-SLAM, and RDS-SLAM, respectively. The system that we propose significantly increases processing speed, performs better in real time, and is easily deployed on various platforms. Full article
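The dynamic-point test relies on the epipolar constraint: for a static scene point, its match in the current frame should lie close to the epipolar line induced by the previous frame, and points that violate this (especially inside detected dynamic objects) are rejected. A compact OpenCV sketch of that distance test is shown below; the pixel threshold and RANSAC parameters are assumed values, not the paper's settings.

```python
import cv2
import numpy as np

def dynamic_point_mask(pts_prev, pts_curr, thresh_px=1.0):
    """Flag matches whose current-frame point lies far from its epipolar line."""
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    lines = cv2.computeCorrespondEpilines(pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    pts_h = np.hstack([pts_curr, np.ones((len(pts_curr), 1))])
    dist = np.abs(np.sum(lines * pts_h, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)
    return dist > thresh_px          # True => likely dynamic point

# pts_prev, pts_curr: Nx2 float32 arrays of matched ORB keypoint coordinates
# from the previous and current frames, respectively.
```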
(This article belongs to the Section Navigation and Positioning)
Show Figures

Figure 1: Framework of the YPD-SLAM system.
Figure 2: The schematic diagram of dynamic point checking; the left diagram indicates the previous frame and the right diagram indicates the current frame.
Figure 3: Main flow of planar segmentation [8].
Figure 4: Factor graph optimization.
Figure 5: Dynamic point rejection, from left to right: (a) original image; (b) target detection; (c) dynamic feature point detection for dynamic objects; (d) rejection of dynamic feature points.
Figure 6: Planar detection, from top to bottom, for the highly dynamic sequences fr3/w_xyz, fr3/w_rpy, and fr3/w_half.
Figure 7: Points in the 3D plane that have been matched. Their matching relationship is expressed in RGB images in the form of two-dimensional coordinate lines.
Figure 8: Global map.
Figure 9: Error plot for ATE. Black represents the groundtruth, blue represents the estimated trajectory, and red represents the gap between the estimated trajectory and the real trajectory.
Figure 10: The RPE results of ORB-SLAM2 and YPD-SLAM in fr3/w_xyz, w_rpy, and w_half.
12 pages, 3686 KiB  
Article
A Fissure-Aided Registration Approach for Automatic Pulmonary Lobe Segmentation Using Deep Learning
by Mengfan Xue, Lu Han, Yiran Song, Fan Rao and Dongliang Peng
Sensors 2022, 22(21), 8560; https://doi.org/10.3390/s22218560 - 7 Nov 2022
Cited by 4 | Viewed by 2250
Abstract
The segmentation of pulmonary lobes is important in clinical assessment, lesion location, and surgical planning. Automatic lobe segmentation is challenging, mainly due to the incomplete fissures or the morphological variation resulting from lung disease. In this work, we propose a learning-based approach that incorporates information from the local fissures, the whole lung, and priori pulmonary anatomy knowledge to separate the lobes robustly and accurately. The prior pulmonary atlas is registered to the test CT images with the aid of the detected fissures. The result of the lobe segmentation is obtained by mapping the deformation function on the lobes-annotated atlas. The proposed method is evaluated in a custom dataset with COPD. Twenty-four CT scans randomly selected from the custom dataset were segmented manually and are available to the public. The experiments showed that the average dice coefficients were 0.95, 0.90, 0.97, 0.97, and 0.97, respectively, for the right upper, right middle, right lower, left upper, and left lower lobes. Moreover, the comparison of the performance with a former learning-based segmentation approach suggests that the presented method could achieve comparable segmentation accuracy and behave more robustly in cases with morphological specificity. Full article
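Segmentation quality is reported as per-lobe Dice coefficients between the mapped atlas labels and the manual annotations. The helper below computes that overlap measure between a predicted and a ground-truth label volume; the label values assigned to each lobe are assumptions for illustration.

```python
import numpy as np

def dice_per_label(pred, gt, labels):
    """Dice coefficient for each lobe label in two integer label volumes."""
    scores = {}
    for lab in labels:
        p, g = (pred == lab), (gt == lab)
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores

# Hypothetical label convention: 1=RUL, 2=RML, 3=RLL, 4=LUL, 5=LLL
# print(dice_per_label(pred_volume, gt_volume, labels=[1, 2, 3, 4, 5]))
```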
Show Figures

Figure 1: Overview of the fissure-aided lung registration method for lobe segmentation.
Figure 2: Illustration of the fissure-based registration method.
Figure 3: Illustration of fissure segmentation.
Figure 4: An example of the results of the experiment. (a) Original scan; (b) ground-truth; (c) FRV-Net; and (d) proposed method.
Figure 5: An example of the results of the experiment, which is largely different from the training dataset. (a) Original scan; (b) ground-truth; (c) FRV-Net; and (d) proposed method.
20 pages, 9391 KiB  
Article
Design and Implementation of Embedded-Based Vein Image Processing System with Enhanced Denoising Capabilities
by Jongwon Lee, Incheol Jeong, Kapyol Kim and Jinsoo Cho
Sensors 2022, 22(21), 8559; https://doi.org/10.3390/s22218559 - 7 Nov 2022
Cited by 4 | Viewed by 3015
Abstract
In general, it is very difficult to visually locate blood vessels for intravenous injection or surgery. In addition, if vein detection fails, the patient suffers physical and mental pain, and the hospital incurs financial losses. To prevent this problem, NIR-based vein detection technology has been developed. The proposed study combines vein detection with digital hair removal to eliminate body hair, a source of noise that hinders detection accuracy, improving the performance of the entire algorithm by about 10.38% over the existing system. In addition, for patients without body hair, vein detection performance was 5.04% higher than that of the existing system, which verifies the proposed approach. It is expected that devices to which the proposed study is applied will provide more accurate vascular maps in general situations. Full article
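The hair-removal stage described in the figures follows a classic inpainting recipe: locate dark hair strokes with a black-hat morphological filter, grow the resulting mask, fill it with Telea inpainting, and then enhance vessel contrast. A compact OpenCV sketch under those assumptions is given below; the kernel sizes and threshold are ours, and CLAHE stands in here for the histogram-equalization step used by the authors.

```python
import cv2
import numpy as np

def remove_hair_and_enhance(nir_gray):
    """Digital hair removal followed by contrast enhancement for NIR vein images."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(nir_gray, cv2.MORPH_BLACKHAT, kernel)   # hair strokes
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)    # amplify strokes
    cleaned = cv2.inpaint(nir_gray, mask, 5, cv2.INPAINT_TELEA)         # fill hair pixels
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(cleaned)                                         # vein contrast

# img = cv2.imread("forearm_nir.png", cv2.IMREAD_GRAYSCALE)   # hypothetical capture
# enhanced = remove_hair_and_enhance(img)
```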
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1. Graph of the absorption rate of water, melanin, and hemoglobin according to the wavelength of light.
Figure 2. Results of applying morphological operations to the outline: (A) the result of extracting body hair positions (Blackhat); (B) the result of removing noise and amplifying body hair by applying binary morphological dilate and open operations; (C) the image after histogram equalization; (D) the image obtained by applying an average filter and grayscale morphological operations to remove noise and enhance the veins.
Figure 3. Vein image restored by Telea inpainting: (A) an image showing body hair; (B) the image with Telea inpainting applied.
Figure 4. Vein images with histogram normalization and equalization applied: (A) the original image; (B) the result of applying histogram equalization. The contrast of the venous vessels became clear after equalization.
Figure 5. Structure of the embedded vein projector: (A) the structure of the VeinVu-100 used in the proposed study; (B) an image of the device used.
Figure 6. Proposed vein image processing system scenario.
Figure 7. Graph of light wavelength measured by the NIR camera.
Figure 8. Flowchart of the proposed vein image processing algorithm.
Figure 9. Comparison of image processing results with and without lighting removal: (A) the original image; (B) with the unbalanced lighting removed, the subsequent operations work as intended and strengthen the veins; (C) without removal, the unbalanced illumination is darker than the veins, so the operations act on the darkly illuminated area and the vein contrast is lost.
Figure 10. (A) The original image; (B) the background approximated by blurring; (C) the inverted image of (B); (D) the sum of (A) and (C), resulting in an image with the lighting removed.
Figure 11. Hair removal algorithm scenario.
Figure 12. Vein image processing algorithm scenario.
Figure 13. Images obtained by applying the vein image processing algorithm: (A) the original image taken with the NIR camera; (B) the resultant image obtained through the proposed algorithm.
Figure 14. Algorithm verification method using the dataset for verification of the vein image processing algorithm: (A) the original image; (B) an artificial blood vessel layer added for verification; (C) the original image with 85% transparency applied; (D) a composite of (A)–(C); (E) the result image obtained through the algorithm; (F) the result of comparing (B) and (E) with SSIM.
Figure 15. Image operation results with body hair noise: (A) a test image created for algorithm verification; (B) the artificial blood vessel layer used to create the image; (C) the resultant image of the proposed algorithm including hair noise removal; (D) the resultant image of the VeinVu-100 algorithm.
Figure 16. Result of applying SSIM to the outputs of the proposed and VeinVu-100 algorithms: (A,B) binary images obtained by applying the proposed and VeinVu-100 algorithms, respectively; (C,D) the results of comparing (A) and (B) with the vascular layer using SSIM.
Figure 17. Proposed Algorithm Operation SSIM Measurement Results.
Figure 18. Image operation results without body hair noise: (A) a test image created for algorithm verification; (B) the artificial blood vessel layer used to create the image; (C) the resultant image of the proposed vein image processing algorithm; (D) the resultant image of the VeinVu-100 algorithm.
Figure 19. Results of applying SSIM to the outputs of the proposed and VeinVu-100 algorithms: (A,B) binary images of the proposed and VeinVu-100 results, respectively; (C,D) the results of comparing (A) and (B) with the vessel layer using SSIM.
Figure 20. Proposed Algorithm Operation SSIM Measurement Results.
Figure 21. Proposed Algorithm Operation Speed Measurement Results.
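Figures 14–20 above describe verifying the algorithm by comparing its binary output against an artificial blood vessel layer with the structural similarity index (SSIM). Below is a minimal sketch of that kind of comparison using scikit-image; the file names and the binarization threshold are assumptions for illustration, not the authors' verification setup.

```python
import cv2
from skimage.metrics import structural_similarity

# Placeholder file names; the actual verification images are not reproduced here.
vessel_layer = cv2.imread("artificial_vessel_layer.png", cv2.IMREAD_GRAYSCALE)
algo_output = cv2.imread("algorithm_output.png", cv2.IMREAD_GRAYSCALE)

# Binarize both images before comparison (a threshold of 127 is an assumption).
_, vessel_bin = cv2.threshold(vessel_layer, 127, 255, cv2.THRESH_BINARY)
_, output_bin = cv2.threshold(algo_output, 127, 255, cv2.THRESH_BINARY)

# SSIM score close to 1 means the two binary maps are nearly identical;
# full=True also returns a per-pixel similarity map.
score, ssim_map = structural_similarity(vessel_bin, output_bin, full=True, data_range=255)
print(f"SSIM between vessel layer and algorithm output: {score:.3f}")
```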
12 pages, 4560 KiB  
Article
Imaging and Deep Learning Based Approach to Leaf Wetness Detection in Strawberry
by Arth M. Patel, Won Suk Lee and Natalia A. Peres
Sensors 2022, 22(21), 8558; https://doi.org/10.3390/s22218558 - 7 Nov 2022
Cited by 4 | Viewed by 2174
Abstract
The Strawberry Advisory System (SAS) is a tool developed to help Florida strawberry growers determine the risk of common fungal diseases and the need for fungicide applications. Leaf wetness duration (LWD) is one of the important parameters in SAS disease risk modeling. By [...] Read more.
The Strawberry Advisory System (SAS) is a tool developed to help Florida strawberry growers determine the risk of common fungal diseases and the need for fungicide applications. Leaf wetness duration (LWD) is one of the important parameters in SAS disease risk modeling. By accurately measuring LWD, disease risk can be better assessed, leading to less fungicide use and greater economic benefit to farmers. This research aimed to develop and test a more accurate leaf wetness detection system than traditional leaf wetness sensors. A leaf wetness detection system was developed and tested using color imaging of a reference surface and a convolutional neural network (CNN), an artificial-intelligence-based learning method. The system was placed at two separate field locations during the 2021–2022 strawberry-growing season. The results from the developed system were compared against manual observation to determine its accuracy. The AI- and imaging-based system was found to detect wetness on the reference surface with high accuracy. The developed system can be used in SAS to determine accurate disease risks and fungicide recommendations for strawberry production, and it allows the system to be expanded to multiple locations. Full article
(This article belongs to the Special Issue AI-Based Sensors and Sensing Systems for Smart Agriculture)
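The abstract above describes classifying color images of a reference surface as wet or dry with a CNN. The paper's exact network (Figure 9 below) is not reproduced here; the following is a minimal Keras sketch of a binary wet/dry image classifier, where the input size, layer widths, and dataset path are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal binary classifier for "dry" vs. "wet" reference-surface images.
# Input size and layer widths are illustrative assumptions, not the paper's design.
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that the surface is wet
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would use labeled "dry"/"wet" image folders, e.g. via
# tf.keras.utils.image_dataset_from_directory("wetness_dataset", image_size=(128, 128)).
```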
Show Figures

Figure 1. System to monitor a reference surface using an RGB camera at UF PSREU, Citra.
Figure 2. System to monitor a reference surface using an RGB camera at UF GCREC, Wimauma.
Figure 3. Block diagram of the system to monitor a reference surface and to detect wetness from images of the reference surface.
Figure 4. Reference surface and camera enclosure with the RGB camera at UF PSREU, Citra.
Figure 5. Example images of the reference surface: (a) color image acquired during normal daylight conditions and (b) color image acquired during the nighttime with the help of artificial illumination.
Figure 6. Example nighttime images of the reference surface using artificial illumination: (a) with large water droplets formed due to rain and (b) with tiny water droplets formed due to dew.
Figure 7. Example images of the reference surface: (a) the original image of the reference surface, (b) the image with corrected barrel distortion, and (c) the cropped image, which represents a 7.6 × 5 cm surface.
Figure 8. Examples of images used in the training and test datasets: (a) “dry” class image and (b) “wet” class image.
Figure 9. Details of the convolutional neural network layers used for the classification of the images of the reference surface into two classes.
Figure 10. Training and validation accuracy trend.
Figure 11. Example images of the reference surface with tiny water droplets during the dew onset period, shown in a red square.
Figure 12. Example images of the reference surface with one water droplet, shown in a red circle.
12 pages, 23180 KiB  
Article
Sagnac with Double-Sense Twisted Low-Birefringence Standard Fiber as Vibration Sensor
by Héctor Santiago-Hernández, Anuar Benjamín Beltrán-González, Azael Mora-Nuñez, Beethoven Bravo-Medina and Olivier Pottiez
Sensors 2022, 22(21), 8557; https://doi.org/10.3390/s22218557 - 7 Nov 2022
Cited by 1 | Viewed by 1947
Abstract
In this work, we study a double-sense twisted low-birefringence Sagnac loop structure as a sound/vibration sensing device. We study the relation between the adjustment of a wave retarder inside the loop (which allows controlling the transmission characteristic to deliver 10, 100, and 300 [...] Read more.
In this work, we study a double-sense twisted low-birefringence Sagnac loop structure as a sound/vibration sensing device. We study the relation between the adjustment of a wave retarder inside the loop (which allows controlling the transmission characteristic to deliver 10, 100, and 300 μW average power at the output of the system) and the response of the Sagnac sensor to vibration frequencies ranging from 0 to 22 kHz. For a 300 m loop Sagnac, two sets of experiments were carried out: playing all the sound frequencies mixed together for ∼1 s, and playing a sweep of frequencies over 30 s. In both cases, the time- and frequency-domain transmission amplitudes are larger for an average power of 10 μW and smaller for an average power of 300 μW. For mixed frequencies, the Fourier analysis shows that the Sagnac response is larger for low frequencies (from 0 to ∼5 kHz) than for high frequencies (from ∼5 kHz to ∼22 kHz). For a sweep of frequencies, the results reveal that the interferometer perceives all frequencies; however, beyond ∼2.5 kHz, harmonics appear every ∼50 Hz, revealing that some resonances are present. The results on the influence of the polarizer transmission and of the laser diode (LD) output power on the Sagnac interferometer response at high frequencies reveal that our system is robust, that the results are highly reproducible, and that the harmonics do not depend on the state of polarization at the input of the Sagnac interferometer. Furthermore, increasing the LD output power from 5 mW to 67.5 mW allows us to eliminate noisy signals at the system output. In our setup, the minimum sound level detected was 56 dB. On the other hand, the experimental results of a 10 m loop OFSI reveal that the response at low frequencies (1.5 kHz to 5 kHz) is smaller than that of the 300 m loop OFSI. The response at high frequencies is also low but still enables detection, offering the possibility of tuning the response of the vibration sensor by varying the length of the Sagnac loop. Full article
(This article belongs to the Special Issue Advances in Fiber Laser Sensors)
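The frequency-domain responses discussed above come from Fourier analysis of the recorded time-domain transmission. As a simple illustration of that step, here is a minimal NumPy sketch that converts a sampled photodetector trace into a one-sided amplitude spectrum; the sampling rate and file name are assumptions, not the authors' acquisition settings.

```python
import numpy as np

# Load a sampled photodetector trace (placeholder file name); one value per sample.
signal = np.loadtxt("sagnac_output_trace.txt")
fs = 48_000  # assumed sampling rate in Hz, high enough to cover 0-22 kHz

# Remove the DC offset, then compute the one-sided amplitude spectrum.
signal = signal - signal.mean()
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# Report the strongest few spectral components (e.g. driving tones or harmonics).
top = np.argsort(spectrum)[-5:][::-1]
for idx in top:
    print(f"{freqs[idx]:8.1f} Hz  amplitude {spectrum[idx]:.4g}")
```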
Show Figures

Figure 1. Schematic diagram of the OFSI.
Figure 2. Dependence of the Sagnac transmission on (a) the angle α of the QWR in the Sagnac loop and (b) the loop length (for α = 0). The traces are calculated using Equation (1), with q = 14π, h = 0.13, and n = 1.45.
Figure 3. Dependence of the Sagnac transmission on the refractive index in the loop, for different values of the QWR angle α. Double arrows illustrate the amplitude of the transmission variations resulting from a given index fluctuation. The symbol (*) denotes multiplication.
Figure 4. All frequencies detected during ∼1 s by the OFSI for several average power values adjusted by the QWR angle α (see Figure 1).
Figure 5. All frequencies detected in our setup for several average power values adjusted by the QWR angle α (see Figure 1). The inset shows a zoom on the frequencies around 2 kHz.
Figure 6. Low-response region for frequencies detected during 1 s in our setup for several average power values measured at the output and adjusted by the QWR angle α (see Figure 1).
Figure 7. The OFSI response decays gradually at high frequencies. Specifically, the signal disappears at ∼17, ∼16.5, and ∼16 kHz for the blue, red, and black traces, respectively.
Figure 8. Response of the OFSI for a sweep of frequencies from 0 kHz to 23 kHz over 30 s for several average power values adjusted by the QWR angle α (see Figure 1).
Figure 9. All frequencies detected in our setup for several values of average power adjusted by the QWR angle α (see Figure 1). The inset shows a zoom on the frequencies around 0.26 kHz.
Figure 10. Frequency-dependent response of the system for several average power values adjusted by the QWR angle α (see Figure 1): (a) in the 2 kHz region, (b) around 11.2 kHz, and (c) at high frequencies.
Figure 11. Influence on the transmission response of (a) the polarizer transmission and LD output power in the 12 kHz region, and (b) the LD output power at high frequencies.
Figure 12. Temporal response of a 10 m loop OFSI. Blue trace = 10 μW, red trace = 30 μW, and black trace = 60 μW of average power at the output.
Figure 13. Response of a 10 m loop OFSI: (a) all frequencies detected during ∼2.5 s for several average output power values adjusted by the QWR (see Figure 1); the inset shows a close-up around 9 kHz; (b) close-up on the region from 15 kHz to 20 kHz. Black trace = 10 μW, red trace = 30 μW, blue trace = 60 μW.
6 pages, 198 KiB  
Editorial
From Sensor Data to Educational Insights
by José A. Ruipérez-Valiente, Roberto Martínez-Maldonado, Daniele Di Mitri and Jan Schneider
Sensors 2022, 22(21), 8556; https://doi.org/10.3390/s22218556 - 7 Nov 2022
Cited by 7 | Viewed by 2070
Abstract
Technology is gradually becoming an integral part of learning at all levels of educational [...] Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
16 pages, 3943 KiB  
Article
Reliability and Validity of Inertial Sensor Assisted Reaction Time Measurement Tools among Healthy Young Adults
by Brent Harper, Michael Shiraishi and Rahul Soangra
Sensors 2022, 22(21), 8555; https://doi.org/10.3390/s22218555 - 6 Nov 2022
Cited by 3 | Viewed by 2643
Abstract
Movement reaction time (RT) is a valuable sideline biomarker for mild TBI or concussion. However, such assessments typically require controlled laboratory environments, which may not be feasible for sideline testing during a game. Body-worn wearable devices are advantageous [...] Read more.
Movement reaction time (RT) is a valuable sideline biomarker for mild TBI or concussion. However, such assessments typically require controlled laboratory environments, which may not be feasible for sideline testing during a game. Body-worn wearable devices are advantageous as they are cost-effective, easy to don and use, transmit data wirelessly, and do not hinder movement. This study aimed to develop a Drop-stick Test System (DTS) with a wireless inertial sensor and confirm its reliability across standing conditions (Foam versus No Foam), task types (Single versus Dual), and postures (Standing versus Sitting). Fourteen healthy young participants (seven females, seven males; age 24.7 ± 2.6 years) took part in this study. The participants were asked to catch a falling stick attached to the sensor during a drop test. Reaction times (RTs) were calculated for each trial from the DTS and from a laboratory camera system (gold standard). Intraclass correlation coefficients (ICC 3,k) were computed to determine inter-instrument reliability. The RT measurements from the camera system and the sensor-based DTS showed moderate to good inter-instrument reliability, with an overall ICC of 0.82 (95% CI 0.78–0.85). Bland–Altman plots and 95% limits of agreement revealed a bias whereby the DTS underestimated RT by approximately 50 ms. Full article
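For readers unfamiliar with the agreement analysis mentioned above, the following is a minimal NumPy sketch of a Bland–Altman computation (mean bias and 95% limits of agreement) between paired camera-system and DTS reaction times; the values in the example arrays are made up for illustration.

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean bias and the 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                    # per-trial difference (e.g. camera RT - DTS RT)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Made-up paired reaction times in seconds (camera system vs. sensor-based DTS).
camera_rt = [0.31, 0.28, 0.35, 0.30, 0.27, 0.33]
dts_rt    = [0.26, 0.23, 0.29, 0.25, 0.22, 0.28]

bias, (lo, hi) = bland_altman(camera_rt, dts_rt)
print(f"bias = {bias * 1000:.0f} ms, 95% limits of agreement = [{lo * 1000:.0f}, {hi * 1000:.0f}] ms")
```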
Show Figures

Figure 1. (a) Initial hand position before the drop, (b) a drop-stick catch, (c) the tripod-assisted initial position, and (d) a zoomed-out view showing the infrared markers and the inertial sensor.
Figure 2. Schematic diagram showing how the computer algorithm was created for the dual-tasking condition.
Figure 3. Experimental setup for the dual-task condition. The participant sat and waited for instructions from the monitor, which randomly and instantaneously turned green or red after the drop. The participant was to catch the stick if the monitor screen turned green, or let the stick fall (no catch) if the screen turned red.
Figure 4. (a) Vertical marker position during a stick fall; (b) vertical acceleration trajectory during a stick fall.
Figure 5. Five stick fall events: (i) fall start (SX1), (ii) free fall start (SX2), (iii) free fall stop (SX3), (iv) peak deceleration (SX4), and (v) minimum after peak deceleration (SX5).
Figure 6. Differences between the camera- and sensor-system RTs versus the mean of the two measurements during (a) standing and (b) sitting. The camera system reported a longer mean RT (bias) during standing by 0.05 s and sitting by 0.06 s.
Figure 7. Differences between the camera- and sensor-system RTs versus the mean of the two measurements during (a) foam surface standing and (b) firm surface (no foam) standing. The camera system reported a longer mean RT (bias) of 0.05 s in both conditions.
Figure 8. Differences between the camera- and sensor-system RTs versus the mean of the two measurements over all trials and conditions. Overall, the camera system had a longer mean RT (bias) by 0.05 s.
Figure 9. Differences between the camera- and sensor-system RTs versus the mean of the two measurements during (a) single tasking and (b) dual tasking. During the single task the RT bias was 0.06 s, and during dual tasking the bias was 0.03 s.
19 pages, 5962 KiB  
Article
Integrated Video and Acoustic Emission Data Fusion for Intelligent Decision Making in Material Surface Inspection System
by Andrey V. Chernov, Ilias K. Savvas, Alexander A. Alexandrov, Oleg O. Kartashov, Dmitry S. Polyanichenko, Maria A. Butakova and Alexander V. Soldatov
Sensors 2022, 22(21), 8554; https://doi.org/10.3390/s22218554 - 6 Nov 2022
Cited by 4 | Viewed by 2873
Abstract
In the field of intelligent surface inspection systems, particular attention is paid to decision making problems based on data from different sensors. The combination of such data helps to make an intelligent decision. In this research, an approach to intelligent decision making based [...] Read more.
In the field of intelligent surface inspection systems, particular attention is paid to decision making problems based on data from different sensors. The combination of such data helps to make an intelligent decision. In this research, an approach to intelligent decision making based on a data integration strategy that raises awareness of the controlled object is used. In this article, the approach is considered in the context of making reasoned decisions when detecting defects on the surface of welds that arise after metal pipe welding. The main data types were RGB images, RGB-D images, and acoustic emission signals. The fusion of such multimodal data, which mimics the eyes and ears of an experienced inspector through computer vision and digital signal processing, provides more concrete and meaningful information for intelligent decision making. The main results of this study include an overview of the system architecture with a detailed description of its parts, methods for acquiring data from various sensors, pseudocode for the data processing algorithms, and an approach to data fusion meant to improve the efficiency of decision making when detecting defects on the surface of various materials. Full article
(This article belongs to the Section Internet of Things)
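The figures below mention a ResNet-18-based DCNN used to fuse image data with scaleograms of the acoustic emission signals. As a rough sketch of one common way to fuse two image-like modalities (not necessarily the authors' exact architecture), here is a minimal PyTorch two-stream model that concatenates ResNet-18 features from an RGB frame and an AE scaleogram before a defect/no-defect classifier; the input resolution and class count are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamFusionNet(nn.Module):
    """Late-fusion sketch: one ResNet-18 per modality, features concatenated."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.rgb_backbone = models.resnet18(weights=None)
        self.ae_backbone = models.resnet18(weights=None)
        # Drop the classification heads; keep the 512-dim feature vectors.
        self.rgb_backbone.fc = nn.Identity()
        self.ae_backbone.fc = nn.Identity()
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, rgb: torch.Tensor, scaleogram: torch.Tensor) -> torch.Tensor:
        # Both inputs are expected as (batch, 3, H, W) image tensors.
        fused = torch.cat([self.rgb_backbone(rgb), self.ae_backbone(scaleogram)], dim=1)
        return self.classifier(fused)

# Quick shape check with random stand-in data.
model = TwoStreamFusionNet()
rgb = torch.randn(4, 3, 224, 224)          # RGB frames of the weld surface
scaleogram = torch.randn(4, 3, 224, 224)   # scaleogram images of AE signals
print(model(rgb, scaleogram).shape)        # torch.Size([4, 2])
```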
Show Figures

Figure 1. The block diagram of the proposed study.
Figure 2. The raw data streams used and the principles of their fusion.
Figure 3. Parameters of a guaranteed identification area above the steel pipe surface.
Figure 4. A 3D mechanical prototype model with an installed RGB-depth camera and magnetic wheels.
Figure 5. Positioning of the AE sensors on the pipe surface near the weld seam: (a) geometric parameters; (b) AE waves spreading from the AE generator to the AE sensor.
Figure 6. A sample pair of images in the collected dataset: (a) an RGB image; (b) the corresponding depth image.
Figure 7. A sample scaleogram image: (a) an input AE time series; (b) the corresponding scaleogram image.
Figure 8. The proposed pipeline for integrated data fusion with the decision making procedure.
Figure 9. Experimental setup installed on the pipe.
Figure 10. The core detail and layers of the DCNN for data fusion based on ResNet-18.
Figure 11. An example of the training results.
Figure 12. An example of the training results for early fusion.
Figure 13. Classification results for different approaches: (a) a test sample without a defect; (b) a test sample with a defect.