Search Results (1,619)

Search Parameters:
Keywords = pipeline detection

22 pages, 6779 KiB  
Article
Acoustic Emission-Based Pipeline Leak Detection and Size Identification Using a Customized One-Dimensional DenseNet
by Faisal Saleem, Zahoor Ahmad, Muhammad Farooq Siddique, Muhammad Umar and Jong-Myon Kim
Sensors 2025, 25(4), 1112; https://doi.org/10.3390/s25041112 - 12 Feb 2025
Abstract
Effective leak detection and leak size identification are essential for maintaining the operational safety, integrity, and longevity of industrial pipelines. Traditional methods often suffer from high noise sensitivity, limited adaptability to non-stationary signals, and excessive computational costs, which limit their feasibility for real-time monitoring applications. This study presents a novel acoustic emission (AE)-based pipeline monitoring approach, integrating Empirical Wavelet Transform (EWT) for adaptive frequency decomposition with a customized one-dimensional DenseNet architecture to achieve precise leak detection and size classification. The methodology begins with EWT-based signal segmentation, which isolates meaningful frequency bands to enhance leak-related feature extraction. To further improve signal quality, adaptive thresholding and denoising techniques are applied, filtering out low-amplitude noise while preserving critical diagnostic information. The denoised signals are processed by a DenseNet-based deep learning model, which combines convolutional layers with densely connected feature propagation to extract fine-grained temporal dependencies, ensuring accurate classification of leak presence and severity. Experimental validation was conducted on real-world AE data collected under controlled leak and non-leak conditions at varying pressure levels. The proposed model achieved an exceptional leak detection accuracy of 99.76%, demonstrating its ability to reliably differentiate between normal operation and multiple leak severities. The method reduces computational costs while maintaining robust performance across diverse operating environments.
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors 2025)
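The adaptive thresholding and denoising step described in the abstract can be illustrated with a generic sketch. The paper's exact EWT decomposition and DenseNet model are not reproduced here; the universal soft-threshold rule and the `soft_threshold_denoise` helper below are standard stand-ins, not necessarily the authors' choices:

```python
import numpy as np

def soft_threshold_denoise(signal: np.ndarray) -> np.ndarray:
    """Soft-threshold a signal using the universal threshold, with the
    noise level estimated via the median absolute deviation (MAD)."""
    sigma = np.median(np.abs(signal - np.median(signal))) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # Shrink every sample toward zero; samples below the threshold vanish
    return np.sign(signal) * np.maximum(np.abs(signal) - thr, 0.0)

# Example: a leak-like transient buried in low-amplitude background noise
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 1024)
x[500] = 5.0  # strong burst at sample 500
y = soft_threshold_denoise(x)
```

The threshold adapts to the estimated noise floor, so the burst survives while most background samples are zeroed out.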
32 pages, 5118 KiB  
Review
A Review of Recent Advances in Unidirectional Ultrasonic Guided Wave Techniques for Nondestructive Testing and Evaluation
by Ali Abuassal, Lei Kang, Lucas Martinho, Alan Kubrusly, Steve Dixon, Edward Smart, Hongjie Ma and David Sanders
Sensors 2025, 25(4), 1050; https://doi.org/10.3390/s25041050 - 10 Feb 2025
Viewed by 360
Abstract
Unidirectional ultrasonic guided waves (UGWs) play a crucial role in nondestructive testing and evaluation (NDT&E), offering unique advantages in detecting material defects, evaluating structural integrity, and improving the accuracy of thickness measurements. This paper surveys the state of the art of unidirectional UGWs, beginning with their foundational mathematical principles and then covering recent advances in methodology and application. It introduces ultrasonic guided waves and their modes before examining mode excitability and selectivity, signal excitation, and the mechanisms used to generate and receive guided waves unidirectionally. It then outlines applications that demonstrate the technique's effectiveness, for instance in measuring thickness and in identifying defects such as cracks and corrosion in pipelines. The paper also discusses the challenges associated with unidirectional UGW generation and use, such as multiple modes and side lobes, and reviews the literature on mitigating them. Finally, it highlights promising future directions for the technique. This review aims to be a useful resource for researchers and practitioners seeking to understand the capabilities, challenges, and prospects of unidirectional ultrasonic guided waves in NDT&E applications.
(This article belongs to the Special Issue Exploring the Sensing Potential of Acoustic Wave Devices)
Show Figures

Figure 1: Propagation of the SH wave mode in a plate of thickness 2d, where propagation is along x and the material particle displacements are along y.
Figure 2: SH-mode phase velocity dispersion curves for a flat 5 mm aluminium plate with bulk shear wave velocity c_T = 3.1 mm/μs over a frequency–thickness product range of 0–14 MHz·mm; SH0 is the only non-dispersive mode, while SH1–SH8 are dispersive. Generated with Dispersion Calculator software (v3.0) [68].
Figure 3: Lamb wave dispersion curves for the symmetric (S0–S6) and anti-symmetric (A0–A6) modes of the same 5 mm aluminium plate over 0–14 MHz·mm; A0 and S0 are the fundamental modes. Generated with Dispersion Calculator software (v3.0) [68].
Figure 4: Principle of unidirectional guided wave generation: two sources emit simultaneously so that controlled destructive interference weakens one side and controlled constructive interference enhances the other. Redrawn based on [104].
Figure 5: Operating region of a unidirectional transducer generating the SH1 mode with a base signal centred at 577.7 kHz; ideal destructive interference occurs where the minimum interference locus aligns with the SH1 dispersion curve. Redrawn based on [103].
Figure 6: Side lobes in the transmitter's radiation pattern reflect from the plate edge and reach the receiver along with the main lobe, complicating signal interpretation. Redrawn based on [129].
Figure 7: Side-shifted EMATs as the number of PPM rows and coils increases from one to four rows per array. Redrawn based on [135].
Figure 8: Configuration of a unidirectional wideband SH guided wave phased-array magnetostrictive patch transducer, with the static field from the patches perpendicular to the dynamic field from the coils. Redrawn based on [27].
Figure 9: Time-delayed layer-based piezoelectric transducer with delay layers H1 and H2 of different heights and two rectangular thickness-shear (d15)-mode piezoelectric wafers. Redrawn based on [108].
Figure 10: Operating principle of a side-shifted unidirectional dual-PPM EMAT: two racetrack coils carrying currents I1 and I2 excited 90° apart, side-shifted by a distance d and longitudinally shifted by a quarter-wavelength (λ/4). Redrawn based on [44].
Figure 11: Photograph of the multiple-row side-shifted PPM EMAT of [135] with the PCB dual coil of [136]: (a) the two coils carrying I1 and I2; (b) the rows of the PPM array.
Figure 12: Wave propagation through a defect in one direction and reflection from the defect in the opposite direction, from transmitter Tx to receiver Rx. Redrawn based on [102].
24 pages, 1713 KiB  
Article
A Performance Analysis of You Only Look Once Models for Deployment on Constrained Computational Edge Devices in Drone Applications
by Lucas Rey, Ana M. Bernardos, Andrzej D. Dobrzycki, David Carramiñana, Luca Bergesio, Juan A. Besada and José Ramón Casar
Electronics 2025, 14(3), 638; https://doi.org/10.3390/electronics14030638 - 6 Feb 2025
Viewed by 377
Abstract
Advancements in embedded systems and Artificial Intelligence (AI) have enhanced the capabilities of Unmanned Aerial Vehicles (UAVs) in computer vision. However, the integration of AI techniques onboard drones is constrained by their processing capabilities. This study therefore evaluates the deployment of object detection models (YOLOv8n and YOLOv8s) on both resource-constrained edge devices and cloud environments. The objective is a comparative performance analysis using a representative real-time UAV image processing pipeline. Specifically, the NVIDIA Jetson Orin Nano, Jetson Orin NX, and Raspberry Pi 5 (RPI5) were tested to measure detection accuracy, inference speed, energy consumption, and the effects of post-training quantization (PTQ). The results show that YOLOv8n surpasses YOLOv8s in inference speed, achieving 52 FPS on the Jetson Orin NX and 65 FPS with INT8 quantization. Conversely, the RPI5 failed to satisfy real-time processing needs despite its suitability for low-energy applications. An analysis of cloud-based and edge-based end-to-end processing times showed that communication latencies hindered real-time cloud applications, revealing a trade-off between edge processing (low latency) and cloud processing (faster inference). Overall, these findings provide recommendations and optimization strategies for deploying AI models on UAVs.
Show Figures

Figure 1: Diagram of the indoor controlled flight environment equipped with OptiTrack sensors, drones with access points, and a TurtleBot target on the ground.
Figure 2: Examples of dataset images with the target object manually annotated, captured from different angles and heights and with different background environments.
Figure 3: Quantization and deployment process of YOLOv8 models on the Jetson Orin Nano, Jetson Orin NX, and Raspberry Pi 5.
Figure 4: Mean iteration times of each model (YOLOv8s or YOLOv8n) with different quantization versions (FP32, FP16, or INT8) on each device (Orin NX, Orin Nano, or Raspberry Pi 5).
Figure 5: FPS test results of each isolated model with different quantization versions on each device.
Figure 6: Energy consumption by model and device.
Figure 7: General system deployment architecture.
Figure 8: Data flow for real-time video processing and predictions at the edge.
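FPS figures like those reported above are typically obtained with a warm-up-then-time loop. A minimal, model-agnostic sketch follows; the `infer` callable is a hypothetical stand-in for a real forward pass (e.g. a quantized YOLOv8n call), not code from the paper:

```python
import time

def benchmark_fps(infer, n_warmup: int = 5, n_iters: int = 50) -> float:
    """Measure sustained throughput (iterations per second) of a callable,
    discarding warm-up iterations so one-time setup costs are excluded."""
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Stand-in workload in place of a real model inference call
fps = benchmark_fps(lambda: sum(i * i for i in range(10_000)))
```

Using a monotonic clock (`time.perf_counter`) and averaging over many iterations smooths out scheduler jitter, which matters on small edge boards.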
23 pages, 602 KiB  
Article
The Scalable Detection and Resolution of Data Clumps Using a Modular Pipeline with ChatGPT
by Nils Baumgartner, Padma Iyenghar, Timo Schoemaker and Elke Pulvermüller
Software 2025, 4(1), 3; https://doi.org/10.3390/software4010003 - 2 Feb 2025
Viewed by 550
Abstract
This paper explores a modular pipeline architecture that integrates ChatGPT, a Large Language Model (LLM), to automate the detection and refactoring of data clumps, a prevalent type of code smell that complicates software maintainability. Data clumps are clusters of code elements that recur together and should ideally be refactored to improve code quality. The pipeline leverages ChatGPT's capabilities to understand context and generate structured outputs, making it suitable for complex software refactoring tasks. Through systematic experimentation, our study addresses the research questions outlined and demonstrates that the pipeline can accurately identify data clumps, excelling particularly in cases that require semantic understanding, where localized clumps are embedded within larger codebases. While the solution significantly enhances the refactoring workflow and facilitates the management of clumps distributed across multiple files, it also presents challenges such as occasional compiler errors and high computational costs. Feedback from developers underscores the usefulness of LLMs in software development but also highlights the essential role of human oversight in correcting inaccuracies. These findings demonstrate the pipeline's potential to enhance software maintainability, offering a scalable and efficient solution for addressing code smells in real-world, large-scale projects.
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
Show Figures

Figure 1: Visualization of communication with services for one pipeline step.
Figure 2: Exemplary lifecycles of the context in the pipeline.
Figure 3: Flowcharts comparing both experiments.
Figure 4: Data clump file path distance.
Figure 5: Data clump field distances.
Figure 6: Percentage of detected data clumps per project.
Figure 7: Influence of input format on surety.
Figure 8: Removed data clumps by margin.
Figure 9: Likert-scale results of the developer feedback survey (8 participants).
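For intuition, one classic data-clump signal, the same group of parameters recurring across function signatures, can be flagged statically. This heuristic sketch is illustrative only and is unrelated to the paper's ChatGPT-based pipeline; all names in it are invented:

```python
import ast
from collections import defaultdict

def find_parameter_clumps(source: str, min_size: int = 3, min_repeats: int = 2):
    """Return parameter-name tuples of length >= min_size that appear
    verbatim in >= min_repeats function signatures, with the functions."""
    groups = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            names = tuple(a.arg for a in node.args.args)
            if len(names) >= min_size:
                groups[names].append(node.name)
    return {g: fns for g, fns in groups.items() if len(fns) >= min_repeats}

code = """
def draw(x, y, color): ...
def move(x, y, color): ...
def reset(mode): ...
"""
clumps = find_parameter_clumps(code)  # {('x', 'y', 'color'): ['draw', 'move']}
```

A refactoring step would then bundle such a group into a parameter object; detecting semantically related but non-identical groups is exactly where an LLM adds value over this kind of syntactic check.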
31 pages, 6413 KiB  
Article
Noise-to-Convex: A Hierarchical Framework for SAR Oriented Object Detection via Scattering Keypoint Feature Fusion and Convex Contour Refinement
by Shuoyang Liu, Ming Tong, Bokun He, Jiu Jiang and Chu He
Electronics 2025, 14(3), 569; https://doi.org/10.3390/electronics14030569 - 31 Jan 2025
Viewed by 388
Abstract
Oriented object detection has become a hot topic in SAR image interpretation. Due to the unique imaging mechanism, SAR objects are represented as clusters of scattering points surrounded by coherent speckle noise, leading to blurred outlines and increased false alarms in complex scenes. To address these challenges, we propose a novel noise-to-convex detection paradigm with a hierarchical framework based on the scattering-keypoint-guided diffusion detection transformer (SKG-DDT), which consists of three levels. At the bottom level, the strong-scattering-region generation (SSRG) module constructs the spatial distribution of strong scattering regions via a diffusion model, enabling the direct identification of approximate object regions. At the middle level, the scattering-keypoint feature fusion (SKFF) module dynamically locates scattering keypoints across multiple scales, capturing their spatial and structural relationships with the attention mechanism. Finally, the convex contour prediction (CCP) module at the top level refines the object outline by predicting fine-grained convex contours. Furthermore, we unify the three-level framework into an end-to-end pipeline via a detection transformer. The proposed method was comprehensively evaluated on three public SAR datasets: HRSID, RSDD-SAR, and SAR-Aircraft-v1.0. The experimental results show that the proposed method attains an AP50 of 86.5%, 92.7%, and 89.2% on these three datasets, respectively, increases of 0.7%, 0.6%, and 1.0% over the existing state-of-the-art method. These results indicate that our approach outperforms existing algorithms across multiple object categories and diverse scenes.
(This article belongs to the Section Artificial Intelligence)
Show Figures

Figure 1: (a) Traditional object-outline representations: the horizontal bounding box, the oriented bounding box, and adaptive RepPoints. (b) The proposed method adaptively captures object-scattering keypoints and fuses their features to predict a precise convex contour, giving a fine-grained description of the object's blurred outline.
Figure 2: The hierarchical framework. Bottom level: approximate object regions are generated by learning the spatial distribution of strong scattering regions. Middle level: scattering keypoints are located from the bottom-level region and their features combined for geometric insight. Top level: the convex contour is refined using both the region and the keypoint features.
Figure 3: Architecture: a feature extractor (backbone plus channel mapper for multi-scale features) and a detection transformer head comprising a 6-layer transformer encoder and the SSRG, SKFF, and CCP modules.
Figure 4: Designs of the denoising-decoder layer integrating the time-embedding condition: (a) concatenation; (b) all-layer adaptive layer normalization; (c) single-layer adaptive layer normalization.
Figure 5: Predicted keypoints at one scale of the SKFF module for different mechanisms: (a) attention; (b) deformable attention; (c) SKFF.
Figure 6: Structure of the SKFF module: given the query feature and the reference region from the SSRG module, it dynamically localizes scattering keypoints at multiple scales, generates their attention matrix via an MLP, and fuses the keypoint features accordingly.
Figure 7: Convex contour prediction: starting from the object region, offset predictions (Δx_n(θ), Δy_n(θ)) generate a set of convex points Ĉ(θ), which the Jarvis march algorithm J(·) refines into the final contour, supervised by the dynamically weighted CIoU (DW-CIoU) loss for robust alignment with ground truth.
Figure 8: PR curves of various oriented detectors on HRSID (first two rows) and RSDD-SAR (last two rows): (a–h) our method, SSADet, ReDet, Roi-Transformer, Oriented RepPoints, FADet, FPDDet, and OEGR-DETR; (i–p) likewise.
Figure 9: PR curves of various detectors on SAR-Aircraft-v1.0.
Figure 10: Detection results of various oriented detectors on HRSID for a complex inshore scene and a densely distributed scene: (a) ground truth; (b) ReDet; (c) Roi-Transformer; (d) Oriented RepPoints; (e) CFA; (f) SSADet; (g) Oriented RCNN; (h) Gliding Vertex; (i) FPDDet; (j) OG-BBAV; (k) FADet; (l) our method.
Figure 11: Detection results of the same oriented detectors on RSDD-SAR for a complex inshore scene and a densely distributed scene.
Figure 12: Detection results on SAR-Aircraft-v1.0 for a complex interference scene and a densely distributed scene: (a) ground truth; (b) Deformable DETR; (c) Cascade RCNN; (d) RepPoints; (e) EBDet; (f) our method.
Figure 13: PR curves of the successive improvements in the proposed method.
Figure 14: Predicted convex contours for ships and planes: (a) ground truth; (b) the approximate object region (from the SSRG module), predicted convex points, and contour; (c) the contour and its minimal rectangle as OBBs (above) or HBBs (below).
Figure 15: Predicted speckle and scattering keypoints in the SKFF module: (a) keypoints integrated from multiple feature levels; (b) fused keypoints of all levels; (c–f) keypoints of feature levels 0–3.
Figure 16: Ablation over query numbers (100 to 900) on the HRSID dataset, comparing AP50 and AP75.
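The contour refinement step described above relies on the classic Jarvis march (gift-wrapping) convex hull algorithm. A minimal standalone implementation of that textbook algorithm, independent of the paper's learned components:

```python
def jarvis_march(points):
    """Gift-wrapping convex hull: starting from the lowest point, repeatedly
    wrap to the extreme remaining point until the hull closes (CCW order)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    hull, start = [], pts[0]
    p = start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # take r if it lies clockwise of the current candidate q,
            # or is collinear with q but farther from p
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            break
    return hull

hull = jarvis_march([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

Its O(nh) cost (n points, h hull vertices) is negligible for the few dozen convex points predicted per object, which is presumably why a simple exact hull suffices at the top of the pipeline.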
28 pages, 5895 KiB  
Article
Multiple Co-Infecting Caliciviruses in Oral Fluid and Enteric Samples of Swine Detected by a Novel RT-qPCR Assay and a 3′RACE-PCR-NGS Method
by Zoltán László, Péter Pankovics, Péter Urbán, Róbert Herczeg, Gyula Balka, Barbara Igriczi, Attila Cságola, Mihály Albert, Fruzsina Tóth, Gábor Reuter and Ákos Boros
Viruses 2025, 17(2), 193; https://doi.org/10.3390/v17020193 - 30 Jan 2025
Viewed by 382
Abstract
Caliciviruses, including the noro- and sapoviruses of the family Caliciviridae, are important enteric human and swine pathogens, while others, like the valoviruses, are less well known. In this study, we developed a detection and typing pipeline for the most prevalent swine enteric caliciviruses: sapovirus GIII (Sw-SaV), norovirus GII (Sw-NoV), and valovirus GI (Sw-VaV). The pipeline integrates triplex RT-qPCR, 3′RACE semi-nested PCR, and next-generation sequencing (NovaSeq, Illumina). A small-scale epidemiological investigation was conducted on archived enteric samples and, for the first time, on oral fluid/saliva samples of diarrheic and asymptomatic swine of varying ages from Hungary and Slovakia. In enteric samples, Sw-SaV was the most prevalent, detected in 26.26% of samples, primarily in diarrheic pigs with low Cq values, followed by Sw-NoV (2.53%) in nursery pigs. In oral fluid samples, Sw-NoV predominated (7.46%), followed by Sw-SaV (4.39%). Sw-VaV was found sporadically in both sample types. A natural, asymptomatic Sw-SaV outbreak was retrospectively detected in which transient virus shedding lasted less than two weeks. Complete capsid sequences (n = 59: 43 Sw-SaV, 13 Sw-NoV, and 3 Sw-VaV), including multiple (up to five) co-infecting variants, were identified. The Sw-SaV sequences belong to seven genotypes, while the Sw-NoV and Sw-VaV strains clustered into distinct sub-clades, highlighting the complex diversity of these enteric caliciviruses in swine.
(This article belongs to the Special Issue Porcine Viruses 2024)
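The standard-curve statistics reported in Figure 2 (slope, Y-intercept and amplification efficiency E) follow from an ordinary least-squares fit of Cq against log10 template copies. A minimal sketch of that conventional calculation (the function name and the dilution/Cq numbers below are invented for illustration, not the study's data; a slope near -3.32 corresponds to 100% efficiency):

```python
def std_curve_stats(log10_copies, cq_values):
    """Ordinary least-squares fit Cq = slope * log10(copies) + intercept,
    plus the derived amplification efficiency E = 10**(-1/slope) - 1."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(cq_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 means 100% efficiency
    return slope, intercept, efficiency

# Illustrative dilution series: 10^8 .. 10^1 copies/reaction, with an
# ideal assay losing ~3.32 Cq per 10-fold dilution (perfect doubling).
log_copies = [8, 7, 6, 5, 4, 3, 2, 1]
cqs = [10 + (8 - d) * 3.3219 for d in log_copies]
slope, intercept, efficiency = std_curve_stats(log_copies, cqs)
```

With ideal data the fitted slope recovers -3.3219 and the efficiency is essentially 1.0 (100%), matching the interpretation of the E values the software reports.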
Figure 1
<p>Nucleotide (nt) sequence alignments of the junctions of the polymerase (Pol) and capsid (CAP) encoding genome regions of swine norovirus GII (Sw-NoV, (<b>A</b>)), swine sapovirus GIII (Sw-SaV, (<b>B</b>)) and swine valovirus GI (Sw-VaV, (<b>C</b>)) sequences with the binding sites of oligonucleotide primers (green) and probes (magenta) of the RT-qPCR assay used in this study. Note that only those Sw-SaV-GIII sequences which show a difference at the primer/probe binding region were included in this representative alignment. The binding sites of the F1 and F2 forward primers (which have sequences identical to the forward and probe oligonucleotides of the qPCR assays, see <a href="#viruses-17-00193-t001" class="html-table">Table 1</a>) used for the 3′RACE semi-nested RT-PCR reactions are also indicated with dark and light blue boxes, respectively. The locations of the aligned regions, including the primer/probe binding sites, are marked with dotted lines in the schematic genome maps and black boxes in the alignment, respectively. The identity graphs above the alignments show identical (green bars) and moderately variable (pale yellow bars) nts. Only nts different from the consensus sequence are shown as letters with base-specific colours (C = blue, A = red, T = green and G = pale yellow) in the alignments.</p>
Full article ">Figure 2
<p>Logarithmic amplification plots and standard curves of singleplex swine sapovirus (<b>A</b>), norovirus (<b>B</b>), valovirus (<b>C</b>) and triplex assays (<b>D</b>) using 10-fold serial dilutions of mixed viral RNA standards as templates (1 × 10<sup>8</sup>/1 × 10<sup>7</sup> to 1 × 10<sup>1</sup> copies/reaction). Horizontal lines in the amplification plots indicate the arbitrary thresholds of 500, 400 and 150 for Sw-SaV/6-FAM, Sw-NoV/SUN and Sw-VaV/Cy5, respectively. Each dilution was run in technical triplicate. The amplification efficiency (E), correlation coefficient (R<sup>2</sup>), slope and Y-intercept (y-int) values were calculated automatically by the Bio-Rad CFX Maestro 2.2 ver. 5.2.008.0222 software. RFU: relative fluorescence units.</p>
Full article ">Figure 3
<p>Box plot of measured Cq values of oral fluid samples (OF, grey boxes) and enteric samples (Ent., clear boxes) of swine sapovirus (SaV), norovirus (NoV) and valovirus (VaV). Centre lines show the medians; box limits indicate the 25th and 75th percentiles as determined by R software version 3.1; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; outliers are represented by dots; crosses represent sample means; width of the boxes is proportional to the square root of the sample size; data points are plotted as black (OF) or coloured (Ent.) circles. The number of sample points (N = x) was found below the X-axis. Cq values from enteric samples of diarrheic and non-diarrheic animals are marked with pink and green dots, respectively. The horizontal dashed line indicates the position of the Cq value of 23.00.</p>
Full article ">Figure 4
<p>Swine sapovirus (Sw-SaV) positivity (in percentages) was detected with the triplex RT-qPCR assay in enteric samples of <span class="html-italic">n</span> = 42 pigs as a part of a follow-up study. D: days of age. Insert Box plot of measured Cq values of Sw-SaV in the enteric samples. Full circles: Cq values from D70 samples, empty circles: Cq values of D77 samples. Centre lines show the medians; box limits indicate the 25th and 75th percentiles as determined by R software; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; outliers are represented by dots; crosses represent sample means. The number of sample points (n = x) was found above the X-axis.</p>
Full article ">Figure 5
<p>Phylogenetic analysis of sapovirus (SaV) VP1 nucleotide sequences. The Neighbour-Joining phylogenetic tree (Jukes-Cantor method, 1000 bootstrap/BS replicates, BS values less than 50 were eliminated from the tree) contains all the SaV VP1 sequences (<span class="html-italic">n</span> = 43) determined in this study (marked with coloured circles) together with its most closely related strains identified by BLASTn search as well as representatives of the known SaV genogroups (GI-GXIX). Horizontal dashed lines indicate the borders between presumed (based on only phylogenetic separation) genotypes. VP1 sequences from the same farm were marked with circles of identical colour. VP1 sequences from the same sample could be identified by the same sample ID found [between square brackets] in the study strain names. Examples of various VP1 sequences found in a single sample were marked with red, green or blue fonts of the strain names. Sequence variants with more than 83.1% nt identity were marked with double arrowheads. The scale bar represents the number of substitutions per site, indicating genetic distance between taxa.</p>
Full article ">Figure 6
<p>Phylogenetic analysis of full-length norovirus (NoV) VP1 nucleotide sequences. The Neighbour-Joining phylogenetic tree (Jukes-Cantor method, 1000 bootstrap/BS replicates, BS values less than 50 were eliminated from the tree) contains all the NoV VP1 sequences (<span class="html-italic">n =</span> 13) determined in this study (in bold) together with its most closely related strains identified by BLASTn search (including all the known swine NoV VP1 sequences) as well as representative sequences of NoV GII genotypes (GII.1-GII.26). A valovirus sequence was used as an outgroup. A main NoV lineage which contains all the known swine NoVs of genotypes GII.11, GII.18 and GII.19 was marked with a green background. Farm names in italics were found next to the study sequences. Farms where multiple types of NoVs were detected are underlined. Sequences from the same sample were marked with identical circles. Two main sub-clades (sc-1 and sc-2) of GII.11 are marked with red and blue lines. OF: oral fluid. The scale bar represents the number of substitutions per site, indicating genetic distance between taxa.</p>
Full article ">Figure 7
<p>Phylogenetic analysis of full-length swine valovirus (VaV) VP1 nucleotide sequences. The Neighbour-Joining phylogenetic tree (Jukes-Cantor method, 1000 bootstrap/BS replicates, BS values less than 50 were eliminated from the tree) contains all the VaV VP1 sequences (<span class="html-italic">n =</span> 3) determined in this study (in bold) together with all the known swine valovirus VP1 sequences. A marmot norovirus sequence was used as an outgroup. Study sequences from enteric or oral fluid/OF samples were marked with black and empty circles, respectively. Two main sub-clades (sc-1 and sc-2) are marked with green and purple lines. The scale bar represents the number of substitutions per site, indicating genetic distance between taxa.</p>
Full article ">
22 pages, 7687 KiB  
Article
Water Pipeline Leak Detection Method Based on Transfer Learning
by Jian Cheng, Zhu Jiang, Hengyu Wu and Xiang Zhang
Water 2025, 17(3), 368; https://doi.org/10.3390/w17030368 - 28 Jan 2025
Abstract
In order to improve the accuracy of leakage detection in water pipelines, this paper proposes a novel method based on the Transformer and transfer learning. A laboratory test platform was established to obtain datasets with rich leakage characteristics. An enhanced feature extraction technique using a shift-window input method mapped the NPW sequences into embedding vectors, effectively capturing fine-grained features while reducing the sequence length, thereby enhancing the Transformer’s retention of sequence details. An improved Transformer encoder was pre-trained on the experimental pipeline dataset and refined with limited leakage data from real pipelines for accurate detection. Additionally, a novel signal difference-based method was introduced for precise leak localization. The pressure signal was denoised, and the inflection points were identified by subtracting two signals. The points between the inflection and lowest signal points were traversed, with slope calculations optimizing the time delay computations. A leakage simulation test was conducted on a section of a raw water pipeline in Shanghai, and the test results confirmed the effectiveness of these methods. A 100% detection rate, zero false alarms, and a relative positioning error of less than 3.14% were achieved on a test set of 45 instances. Full article
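The shift-window input described in the abstract shortens the NPW sequence by grouping consecutive samples into overlapping windows before they are embedded. A toy sketch of that segmentation step (the window and stride values are illustrative; the paper's actual parameters are not given here):

```python
def shift_windows(signal, win, stride):
    """Map a 1-D pressure sequence into overlapping windows ("tokens"),
    shortening the sequence a Transformer must attend over while keeping
    local fine-grained structure inside each window."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, stride)]

# A 10-sample toy signal becomes 4 overlapping windows of length 4.
sig = list(range(10))
tokens = shift_windows(sig, win=4, stride=2)
```

Each window would then be projected to an embedding vector before entering the encoder; here only the windowing itself is shown.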
Figure 1
<p>General framework diagram.</p>
Full article ">Figure 2
<p>Layout of raw water pipeline test section.</p>
Full article ">Figure 3
<p>Installation of the experimental device.</p>
Full article ">Figure 4
<p>Pressure fluctuation diagram of the raw water pipeline. The red rectangle highlights the leakage signal, while the green rectangle indicates normal fluctuations resembling leakage.</p>
Full article ">Figure 5
<p>Laboratory equipment. (<b>a</b>) Overview diagram of the experimental pipeline system; (<b>b</b>) actual image of the experimental pipeline system.</p>
Full article ">Figure 6
<p>NPW signals. (<b>a</b>) NPW signal collected from laboratory pipeline; (<b>b</b>) NPW signal collected from raw water pipeline.</p>
Full article ">Figure 7
<p>Encoder architecture.</p>
Full article ">Figure 8
<p>Shift window input.</p>
Full article ">Figure 9
<p>Parameter-based transformer–TL training.</p>
Full article ">Figure 10
<p>Positioning schematic diagram.</p>
Full article ">Figure 11
<p>Simulation signal positioning results.</p>
Full article ">Figure 12
<p>Delay caused by different causes.</p>
Full article ">Figure 13
<p>Confusion matrices. (<b>a</b>) Confusion matrix of the model on the experimental pipeline dataset; (<b>b</b>) confusion matrix of the model on the raw water pipeline dataset.</p>
Full article ">Figure 14
<p>The inflection point located by the proposed method.</p>
Full article ">
12 pages, 4045 KiB  
Article
Analysis of Short Tandem Repeat Expansions in a Cohort of 12,496 Exomes from Patients with Neurological Diseases Reveals Variable Genotyping Rate Dependent on Exome Capture Kits
by Clarissa Rocca, David Murphy, Chris Clarkson, Matteo Zanovello, Delia Gagliardi, Queen Square Genomics, Rauan Kaiyrzhanov, Javeria Alvi, Reza Maroofian, Stephanie Efthymiou, Tipu Sultan, Jana Vandrovcova, James Polke, Robyn Labrum, Henry Houlden and Arianna Tucci
Genes 2025, 16(2), 169; https://doi.org/10.3390/genes16020169 - 28 Jan 2025
Abstract
Background/Objectives: Short tandem repeat expansions are the most common cause of inherited neurological diseases. These disorders are clinically and genetically heterogeneous, such as in myotonic dystrophy and spinocerebellar ataxia, and they are caused by different repeat motifs in different genomic locations. Major advances in bioinformatic tools used to detect repeat expansions from short read sequencing data in the last few years have led to the implementation of these workflows into next generation sequencing pipelines in healthcare. Here, we aimed to evaluate the clinical utility of analysing repeat expansions through exome sequencing in a large cohort of genetically undiagnosed patients with neurological disorders. Methods: We analysed 27 disease-causing DNA repeats found in the coding, intronic and untranslated regions in 12,496 exomes in patients with a range of neurogenetic conditions. Results: We identified—and validated by polymerase chain reaction—29 repeat expansions across a range of loci, 48% (n = 14) of which were diagnostic. We then analysed the genotyping performance across all repeat loci and found that, despite high coverage in most repeats in coding regions, some loci had low genotyping rates, such as those that cause spinocerebellar ataxia 2 (ATXN2, 0.1–8.4%) and Huntington disease (HTT, 0.2–58.2%), depending on the capture kit. Conversely, while most intronic repeats were not genotyped, we found a high genotyping rate in the intronic locus that causes spinocerebellar ataxia 36 (NOP56, 30.1–98.3%) and in the one that causes myotonic dystrophy type 1 (DMPK). Conclusions: We show that the key factors that influence the genotyping rate of repeat expansion loci analysis are the sequencing read length and exome capture kit. These results provide important information about the performance of exome sequencing as a genetic test for repeat expansion disorders. Full article
Figure 1
<p>Schematic overview of the study workflow.</p>
Full article ">Figure 2
<p>Cohort overview and study design. The map illustrates the global distribution of 12,496 cases included in the cohort, with participant numbers represented by coloured circles: Europe (N = 8649, blue), East Asia (N = 1602, yellow), Africa (N = 404, red), America (N = 334, dark red), and South Asia (N = 68, green). The right panel provides the demographic information and diagnostic categories included in the analysis. The study design is summarised in the blue boxes at the bottom.</p>
Full article ">Figure 3
<p>Total number of repeat expansions identified by EH, visual inspection and PCR validation. (<b>A</b>) 365 repeat expansions identified by EH with the visual inspection outcome. Loci are divided into three groups: coding, intron and UTR. Green bars represent calls that passed visual inspection, yellow bars are for calls that were categorised in the “borderline” group and red bars indicate samples that failed visual inspection. Loci that do not have a bar next to them did not have any expanded calls predicted by EH. (<b>B</b>) The outcome of PCR-tested samples. The light blue bars indicate samples that tested positive for PCR, while the pink bars represent samples that tested negative. Stripes indicate cases that were in the visual inspection “Pass” category, whereas dots represent cases that were “borderline” after visual inspection.</p>
Full article ">Figure 4
<p>Pedigree of SCA3 family and MRI scan of proband. The red arrow shows the proband. (<b>A</b>) Square = male; circle = female; black filled symbol = affected individual; white symbols = unaffected individuals; diagonal line = deceased individual. Double lines indicate consanguinity. (<b>B</b>) MRI scan of patient IV.8. The red arrow indicates cerebellar atrophy.</p>
Full article ">Figure 5
<p>Targeted loci and coverage according to the four most used exome sequencing kits in this cohort. (<b>A</b>) The RED loci are categorised based on their genomic location: coding, intron and UTR. Target (purple): the specific region of the gene is targeted by the exome kit. Not target (yellow): the region of interest is not covered by the exome kit. The percentage indicates how much of the region is not covered. For example, in <span class="html-italic">ATN1</span>, 60% of the region of interest is not covered by the SureSelect V4 kit. When not specified, the percentage of target or not target is 0%. The exome sequencing kits are represented by different bars: SureSelect V6, SureSelect V4, Nextera and TruSeq. The dashed lines under each group indicate the total number of RED loci analysed in each category: 12 coding, 7 intronic and 8 UTRs. (<b>B</b>) Heatmap showing the coverage of the analysed RED loci across different genomic regions. Coverage is represented by the number of sequencing reads mapping to each locus, as indicated by the colour scale. (<b>C</b>) 3D plots of the genotyping rate for EH-generated calls by read length and sequencing kit. The three plots show EH calls in coding, intron and UTR loci. In each plot, calls are divided by locus and read length. The four different colours represent the different exome capture kits used.</p>
Full article ">
17 pages, 3899 KiB  
Article
Evaluating Pipeline Inspection Technologies for Enhanced Corrosion Detection in Mining Water Transport Systems
by Víctor Tuninetti, Matías Huentemilla, Álvaro Gómez, Angelo Oñate, Brahim Menacer, Sunny Narayan and Cristóbal Montalba
Appl. Sci. 2025, 15(3), 1316; https://doi.org/10.3390/app15031316 - 27 Jan 2025
Abstract
Water transport pipelines in the mining industry face significant corrosion challenges due to extreme environmental conditions, such as arid climates, temperature fluctuations, and abrasive soils. This study evaluates the effectiveness of three advanced inspection technologies—Guided Wave Ultrasonic Testing (GWUT), Metal Magnetic Memory (MMM), and In-Line Inspection (ILI)—in maintaining pipeline integrity under such conditions. A structured methodology combining diagnostic assessment, technology research, and comparative evaluation was applied, using key performance indicators like detection capability, operational impact, and feasibility. The results show that GWUT effectively identifies surface anomalies and wall thinning over long pipeline sections but faces depth and diameter limitations. MMM excels at detecting early-stage stress and corrosion in inaccessible locations, benefiting from minimal preparation and strong market availability. ILI provides comprehensive internal and external assessments but requires piggable pipelines and operational adjustments, limiting its use in certain systems. A case study of critical aqueducts of mining site water supply illustrates real-world technology selection challenges. The findings underscore the importance of an integrated inspection approach, leveraging the complementary strengths of these technologies to ensure reliable pipeline integrity management. Future research should focus on quantitative performance metrics and cost-effectiveness analyses to optimize inspection strategies for mining infrastructure. Full article
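The comparative evaluation combines key performance indicators (detection capability, operational impact, feasibility) into a ranking of GWUT, MMM and ILI. One conventional way to do this is a weighted scoring matrix; the weights and scores below are invented purely for illustration and are not the study's actual ratings:

```python
def weighted_scores(criteria_weights, tech_scores):
    """Aggregate per-technology KPI scores into a single weighted
    figure of merit (higher is better)."""
    return {
        tech: sum(criteria_weights[c] * s for c, s in scores.items())
        for tech, scores in tech_scores.items()
    }

# Hypothetical weights and 1-5 ratings, for illustration only.
weights = {"detection": 0.5, "operational_impact": 0.3, "feasibility": 0.2}
scores = {
    "GWUT": {"detection": 3, "operational_impact": 4, "feasibility": 4},
    "MMM":  {"detection": 3, "operational_impact": 5, "feasibility": 4},
    "ILI":  {"detection": 5, "operational_impact": 2, "feasibility": 3},
}
ranked = weighted_scores(weights, scores)
```

Changing the weights shifts the ranking, which is why a structured, documented weighting step matters when selecting inspection technologies for a specific site.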
Figure 1
<p>Flowchart for Selecting and Evaluating Pipeline Inspection Technologies.</p>
Full article ">Figure 2
<p>Different corrosion mechanisms detected in mining water pipelines.</p>
Full article ">Figure 3
<p>Overview of Pipeline Inspection Technologies for Mining Water Systems. (<b>a</b>) Guided Wave Ultrasonic Testing and (<b>b</b>) Metal Magnetic Memory (MMM). In-Line Inspection (ILI) technologies: (<b>c</b>) smart balls and robotic solutions based on Magnetic Flux Leakage combined with (<b>d</b>) ultrasonic and (<b>e</b>) electromagnetic acoustic transducer.</p>
Full article ">Figure 4
<p>The summary of the resulting assessment indicates that Guided Wave Ultrasonic Testing scored moderately for above-ground sections but has limited reach for buried pipelines. Metal Magnetic Memory (MMM) performs well in terms of non-intrusive inspection and coverage, though it is restricted to ferromagnetic pipelines and may be influenced by nearby parallel lines. In-Line Inspection (ILI) demonstrated high effectiveness for comprehensive internal analysis but requires significant infrastructure setup.</p>
Full article ">
16 pages, 986 KiB  
Article
Research on Detection Methods for Gas Pipeline Networks Under Small-Hole Leakage Conditions
by Ying Zhao, Lingxi Yang, Qingqing Duan, Zhiqiang Zhao and Zheng Wang
Sensors 2025, 25(3), 755; https://doi.org/10.3390/s25030755 - 26 Jan 2025
Abstract
Gas pipeline networks are vital urban infrastructure, susceptible to leaks caused by natural disasters and adverse weather, posing significant safety risks. Detecting and localizing these leaks is crucial for mitigating hazards. However, existing methods often fail to effectively model the time-varying structural data of pipelines, limiting their detection capabilities. This study introduces a novel approach for leak detection using a spatial–temporal attention network (STAN) tailored for small-hole leakage conditions. A graph attention network (GAT) is first used to model the spatial dependencies between sensors, capturing the dynamic patterns of adjacent nodes. An LSTM model is then employed for encoding and decoding time series data, incorporating a temporal attention mechanism to capture evolving changes over time, thus improving detection accuracy. The proposed model is evaluated using Pipeline Studio software and compared with state-of-the-art models on a gas pipeline simulation dataset. Results demonstrate competitive precision (91.7%), recall (96.5%), and F1-score (0.94). Furthermore, the method effectively identifies sensor statuses and temporal dynamics, reducing leakage risks and enhancing model performance. This study highlights the potential of deep learning techniques in addressing the challenges of leak detection and emphasizes the effectiveness of spatial–temporal modeling for improved detection accuracy. Full article
(This article belongs to the Section Industrial Sensors)
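The GAT component of STAN weights each neighbouring sensor by an attention score before aggregating their features. Stripped of the learned scoring parameters, the core aggregation reduces to a softmax-weighted sum; the sketch below uses hand-picked scores and tiny feature vectors (all values illustrative, not the paper's model):

```python
import math

def attention_weights(scores):
    """Numerically stable softmax over neighbour attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate(neighbour_feats, scores):
    """Attention-weighted sum of neighbour sensor features
    (one head of GAT-style aggregation; learned scoring omitted)."""
    w = attention_weights(scores)
    dim = len(neighbour_feats[0])
    return [sum(wi * f[k] for wi, f in zip(w, neighbour_feats))
            for k in range(dim)]

# Two neighbour sensors with orthogonal feature vectors; equal scores
# make the aggregation a plain average of the two.
feats = [[1.0, 0.0], [0.0, 1.0]]
out = aggregate(feats, scores=[0.0, 0.0])
```

In the full model the scores themselves are learned from pairs of node features, which is what lets the spatial correlation between sensors vary over time.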
Figure 1
<p>Complex spatiotemporal correlation. (<b>a</b>) Gas network structure: There are four sensors and one leak point, and supply is represented by the gas supply station. (<b>b</b>) Dynamic spatial correlation: Adjacent sensors are not always highly correlated. For example, the correlation between sensors 1 and 2 in the figure weakens over time. Dynamic temporal correlation: The network flow represented by the current sensor may be more correlated with the flow when the leak occurs at a distant time.</p>
Full article ">Figure 2
<p>Architecture of the proposed STAN method.</p>
Full article ">Figure 3
<p>Gas network leakage model for experiments.</p>
Full article ">Figure 4
<p>Gas network leakage model for experiments.</p>
Full article ">Figure 5
<p>Transient operating conditions of each pipeline downstream flow rate with a 20 mm leakage aperture in the pipe network structure.</p>
Full article ">Figure 6
<p>Transient working conditions of each pipeline pressure with a 20 mm leakage hole in the pipe network structure.</p>
Full article ">Figure 7
<p>Comparison of the performance of different models in terms of accuracy under the conditions of leak aperture of 20 mm, 40 mm, and 60 mm.</p>
Full article ">
45 pages, 20140 KiB  
Article
Development and Experimental Validation of a Sense-and-Avoid System for a Mini-UAV
by Marco Fiorio, Roberto Galatolo and Gianpietro Di Rito
Drones 2025, 9(2), 96; https://doi.org/10.3390/drones9020096 - 26 Jan 2025
Abstract
This paper provides an overview of the three-year effort to design and implement a prototypical sense-and-avoid (SAA) system based on a multisensory architecture leveraging data fusion between optical and radar sensors. The work was carried out within the context of the Italian research project named TERSA (electrical and radar technologies for remotely piloted aircraft systems) undertaken by the University of Pisa in collaboration with its industrial partners, aimed at the design and development of a series of innovative technologies for remotely piloted aircraft systems of small scale (MTOW < 25 kgf). The system leverages advanced computer vision algorithms and an extended Kalman filter to enhance obstacle detection and tracking capabilities. The “Sense” module processes environmental data through a radar and an electro-optical sensor, while the “Avoid” module utilizes efficient geometric algorithms for collision prediction and evasive maneuver computation. A novel hardware-in-the-loop (HIL) simulation environment was developed and used for validation, enabling the evaluation of closed-loop real-time interaction between the “Sense” and “Avoid” subsystems. Extensive numerical simulations and a flight test campaign demonstrate the system’s effectiveness in real-time detection and the avoidance of non-cooperative obstacles, ensuring compliance with UAV aeromechanical and safety constraints in terms of minimum separation requirements. The novelty of this research lies in (1) the design of an innovative and efficient visual processing pipeline tailored for SWaP-constrained mini-UAVs, (2) the formulation of an EKF-based data fusion strategy integrating optical data with a custom-built Doppler radar, and (3) the development of a unique HIL simulation environment with realistic scenery generation for comprehensive system evaluation. The findings underscore the potential for deploying such advanced SAA systems in tactical UAV operations, significantly contributing to the safety of flight in non-segregated airspaces. Full article
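For two aircraft flying at constant velocity, the geometric collision check in an “Avoid”-style module typically reduces to a closest-point-of-approach (CPA) computation on the relative state. The sketch below is a generic version of that test, not the paper's exact algorithm, and the encounter numbers are illustrative:

```python
def closest_point_of_approach(p_rel, v_rel):
    """Time and miss distance of closest approach for two constant-velocity
    vehicles, given relative position p_rel and relative velocity v_rel
    (any consistent units)."""
    vv = sum(v * v for v in v_rel)
    if vv == 0.0:                     # no relative motion: distance is constant
        return 0.0, sum(p * p for p in p_rel) ** 0.5
    t_cpa = -sum(p * v for p, v in zip(p_rel, v_rel)) / vv
    t_cpa = max(t_cpa, 0.0)           # closest approach cannot lie in the past
    d_cpa = sum((p + v * t_cpa) ** 2
                for p, v in zip(p_rel, v_rel)) ** 0.5
    return t_cpa, d_cpa

# Intruder 1000 m ahead with a 30 m lateral offset, closing at 50 m/s.
t_cpa, d_cpa = closest_point_of_approach((1000.0, 30.0), (-50.0, 0.0))
```

Comparing d_cpa against the minimum separation requirement, with t_cpa against the time needed to complete an evasive maneuver, is the usual trigger condition for commanding an avoidance turn.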
Figure 1
<p>Reference TERSA aircraft.</p>
Full article ">Figure 2
<p>Minimum detection range analytical computation [<a href="#B30-drones-09-00096" class="html-bibr">30</a>].</p>
Full article ">Figure 3
<p>(<b>a</b>) Minimum turn radius as a function of aerodynamic, structural, and propulsive constraints for TERSA aircraft. (<b>b</b>) Minimum detection required range as a function of intruder aircraft airspeed.</p>
Full article ">Figure 4
<p>Maximum attainable flight path angle @ <math display="inline"><semantics> <mrow> <mi>h</mi> <mo>=</mo> <mn>1000</mn> <mo> </mo> <mi mathvariant="normal">m</mi> </mrow> </semantics></math> for stationary climb.</p>
Full article ">Figure 5
<p>SAA typical system integration high-level scheme.</p>
Full article ">Figure 6
<p>High-level schematics of the proposed SAA system.</p>
Full article ">Figure 7
<p>Intruder aircraft azimuth and elevation angle reconstruction.</p>
Full article ">Figure 8
<p>Sense module computer vision pipeline.</p>
Full article ">Figure 9
<p>Visual representation of the pyramidal expansion of the algorithm.</p>
Full article ">Figure 10
<p>Example of KLT feature matcher output.</p>
Full article ">Figure 11
<p>Flowchart of KLT algorithm.</p>
Full article ">Figure 12
<p>High-level scheme of the finite state machine handling the conflict detection and resolution problem.</p>
Full article ">Figure 13
<p>Radar signal processing architecture.</p>
Full article ">Figure 14
<p>(<b>a</b>) Example of a range-Doppler map, (<b>b</b>) CFAR technique, (<b>c</b>) CA-CFAR technique.</p>
Full article ">Figure 15
<p>(<b>a</b>) Bi-dimensional CFAR; (<b>b</b>) binary mask showing detected tracks.</p>
Full article ">Figure 16
<p>Phase monopulse configuration.</p>
Full article ">Figure 17
<p>Metrics used for the evaluation of avoidance maneuvers (<b>a</b>); initial position of the intruder aircraft (<b>b</b>).</p>
Full article ">Figure 18
<p>Results of evasive maneuver for different initial position and velocity states of the intruder aircraft; (<b>a</b>) minimum distance; (<b>b</b>) maximum normal deviation with respect to the original trajectory; (<b>c</b>) UAV’s 2D trajectory; (<b>d</b>) UAV aileron deflection.</p>
Full article ">Figure 19
<p>Intruder position state (<b>a</b>–<b>c</b>) and velocity state (<b>d</b>–<b>f</b>) reconstruction by the EKF for a starboard encounter; <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi>u</mi> <mi>a</mi> <mi>v</mi> </mrow> </msub> <mo>=</mo> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi>i</mi> <mi>t</mi> <mi>n</mi> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>22</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>/</mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>. The first dashed vertical line indicates the moment the intruder is detectable (within radar range); the second dashed line indicates the moment where the EKF has reached convergence, and its output is fed into avoidance algorithms.</p>
Full article ">Figure 20
<p>High-level scheme of the complete simulation framework.</p>
Full article ">Figure 21
<p>Video stream send (<b>a</b>) and receive (<b>b</b>) pipeline schematics.</p>
Full article ">Figure 22
<p>SAA camera viewpoint as rendered in Flight Gear (flying over the city of Pisa).</p>
Full article ">Figure 23
<p>Results of a HIL collision scenario in the complete simulation environment. Intruder approaching from starboard side. <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi>U</mi> <mi>A</mi> <mi>V</mi> </mrow> </msub> <mo>=</mo> <mn>22</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>/</mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>V</mi> </mrow> <mrow> <mi>i</mi> <mi>n</mi> <mi>t</mi> <mi>r</mi> </mrow> </msub> <mo>=</mo> <mn>50</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>/</mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math> Three-dimensional trajectory (<b>a</b>), Euler angles (<b>b</b>), speed components in NED reference frame (<b>c</b>), load factors (<b>d</b>).</p>
Figure 24
<p>Comparison between elevation (<b>a</b>) and azimuth (<b>b</b>) measurements with ground truth for the complete simulation framework.</p>
Figure 25
<p>Flight Gear rendering of a starboard collision scenario on a coastal landscape with intruder position highlighted with a red bounding box as detected by sense algorithms within the complete simulation framework. Subfigures (<b>a</b>–<b>f</b>) show the evolution of the evasive maneuver.</p>
Figure 26
<p>SAA system prototype.</p>
Figure 27
<p>(<b>a</b>) Antenna placement within the nose mockup; (<b>b</b>) camera sensor chosen for the system implementation; (<b>c</b>) Jetson Nano vision processing unit.</p>
Figure 28
<p>SAA system position and UAV trajectory during flight test at Lucca-Tassignano Airport.</p>
Figure 29
<p>Reconstructed UAV position states vs. telemetry in base reference frame: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>x</mi> </mrow> </semantics></math> position state, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>y</mi> </mrow> </semantics></math> position state, (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>z</mi> </mrow> </semantics></math> position state.</p>
Figure 30
<p>Relative azimuth (<b>a</b>,<b>c</b>) and elevation (<b>b</b>,<b>d</b>) angles between target UAV and SAA reference frame for two different flight phases.</p>
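The figures above refer to an EKF reconstructing intruder position and velocity states from sensor measurements. As a purely illustrative sketch of the same predict/update structure (not the paper's filter; all noise values and the one-dimensional constant-velocity model are assumptions), a linear Kalman filter tracking position and velocity from noisy position-only measurements looks like this:

```python
import numpy as np

def kalman_cv(zs, dt=0.1, q=0.5, r=4.0):
    """Estimate position and velocity along one axis from noisy
    position-only measurements with a constant-velocity Kalman filter.
    zs: 1-D array of position measurements; returns an Nx2 array of
    [position, velocity] estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
    H = np.array([[1.0, 0.0]])                  # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],   # process noise
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                         # measurement noise
    x = np.array([[zs[0]], [0.0]])              # initial state guess
    P = np.eye(2) * 10.0                        # initial covariance
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.ravel().copy())
    return np.array(out)
```

As in the captions, the velocity state is not measured directly and takes a number of filter iterations to converge before it is usable by downstream avoidance logic.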
16 pages, 2038 KiB  
Article
Enhancing Colony Detection of Microorganisms in Agar Dishes Using SAM-Based Synthetic Data Augmentation in Low-Data Scenarios
by Kim Mennemann, Nikolas Ebert, Laurenz Reichardt and Oliver Wasenmüller
Appl. Sci. 2025, 15(3), 1260; https://doi.org/10.3390/app15031260 - 26 Jan 2025
Viewed by 411
Abstract
In many medical and pharmaceutical processes, continuous hygiene monitoring relies on manual detection of microorganisms in agar dishes by skilled personnel. While deep learning offers the potential for automating this task, it often faces limitations due to insufficient training data, a common issue [...] Read more.
In many medical and pharmaceutical processes, continuous hygiene monitoring relies on manual detection of microorganisms in agar dishes by skilled personnel. While deep learning offers the potential for automating this task, it often faces limitations due to insufficient training data, a common issue in colony detection. To address this, we propose a simple yet efficient SAM-based pipeline for Copy-Paste data augmentation to enhance detection performance, even with limited data. This paper explores a method where annotated microbial colonies from real images were copied and pasted into empty agar dish images to create new synthetic samples. These new samples inherited the annotations of the colonies inserted into them so that no further labeling was required. The resulting synthetic datasets were used to train a YOLOv8 detection model, which was then fine-tuned on just 10 to 1000 real images. The best fine-tuned model, trained on only 1000 real images, achieved an mAP of 60.6, while a base model trained on 5241 real images achieved 64.9. Although far fewer real images were used, the fine-tuned model performed comparably well, demonstrating the effectiveness of the SAM-based Copy-Paste augmentation. This approach matches or even exceeds the performance of the current state of the art in synthetic data generation in colony detection and can be expanded to include more microbial species and agar dishes. Full article
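A minimal sketch of the Copy-Paste step the abstract describes (function names and layout are illustrative, not the authors' code): segmented colony pixels are pasted onto an empty dish image, and each paste directly yields a bounding-box label, so no manual annotation is needed.

```python
import numpy as np

def copy_paste(dish, colonies, rng=None):
    """Paste segmented colony crops onto an empty agar dish image.

    dish     : HxWx3 uint8 image of an empty agar plate
    colonies : list of (patch, mask) pairs; patch is hxwx3 uint8,
               mask is hxw bool marking colony pixels
    returns  : (augmented image, list of (x, y, w, h) boxes)
    """
    rng = rng or np.random.default_rng(0)
    out = dish.copy()
    boxes = []
    H, W = dish.shape[:2]
    for patch, mask in colonies:
        h, w = mask.shape
        # random top-left corner that keeps the patch inside the image
        y = int(rng.integers(0, H - h + 1))
        x = int(rng.integers(0, W - w + 1))
        region = out[y:y + h, x:x + w]
        region[mask] = patch[mask]        # copy only colony pixels
        boxes.append((x, y, w, h))        # label inherited "for free"
    return out, boxes
```

The returned boxes can be written out in the detector's annotation format, which is what lets the synthetic samples inherit labels without further human effort.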
(This article belongs to the Special Issue Research on Machine Learning in Computer Vision)
Show Figures

Figure 1
<p>Overview of the proposed pipeline. First, colonies are segmented using a pre-trained and frozen Segment Anything Model [<a href="#B15-applsci-15-01260" class="html-bibr">15</a>]. Next, poor segmentations are filtered out to avoid introducing artifacts into the synthetic images. The segmented colonies are then inserted onto new, empty agar plates. Finally, YOLOv8 [<a href="#B48-applsci-15-01260" class="html-bibr">48</a>] is pre-trained on the synthetic data and fine-tuned on real data to achieve optimal accuracy.</p>
Figure 2
<p>Examples of good and bad segmentations of colonies from the AGAR dataset [<a href="#B4-applsci-15-01260" class="html-bibr">4</a>] with SAM [<a href="#B15-applsci-15-01260" class="html-bibr">15</a>].</p>
Figure 3
<p>Examples of generated data (f.l.t.r.): real image from the AGAR dataset, generated image where the colonies match the background, and generated image where the colonies do not match the background.</p>
Figure 4
<p>Comparison of the mAP of YOLOv8-Nano [<a href="#B48-applsci-15-01260" class="html-bibr">48</a>] after pre-training on synthetic images and fine-tuning on real images across all classes in the AGAR dataset [<a href="#B4-applsci-15-01260" class="html-bibr">4</a>]. The synthetic images utilize various opacity values for the inpainted colonies.</p>
Figure 5
<p>Comparison of different sizes of fine-tuning datasets of YOLOv8-Nano [<a href="#B48-applsci-15-01260" class="html-bibr">48</a>] on the AGAR dataset [<a href="#B4-applsci-15-01260" class="html-bibr">4</a>]. (<b>a</b>) Mean Average Precision (mAP). (<b>b</b>) Average Precision at an IoU threshold of 0.5 (AP<sup>50</sup>).</p>
25 pages, 3057 KiB  
Review
Next-Generation Sequencing Methods to Determine the Accuracy of Retroviral Reverse Transcriptases: Advantages and Limitations
by Javier Martínez del Río and Luis Menéndez-Arias
Viruses 2025, 17(2), 173; https://doi.org/10.3390/v17020173 - 26 Jan 2025
Viewed by 580
Abstract
Retroviruses, like other RNA viruses, mutate at very high rates and exist as genetically heterogeneous populations. The error-prone activity of viral reverse transcriptase (RT) is largely responsible for the observed variability, most notably in HIV-1. In addition, RTs are widely used in biotechnology [...] Read more.
Retroviruses, like other RNA viruses, mutate at very high rates and exist as genetically heterogeneous populations. The error-prone activity of viral reverse transcriptase (RT) is largely responsible for the observed variability, most notably in HIV-1. In addition, RTs are widely used in biotechnology to detect RNAs and to clone expressed genes, among many other applications. The fidelity of retroviral RTs has been traditionally analyzed using enzymatic (gel-based) or reporter-based assays. However, these methods are laborious and have important limitations. The development of next-generation sequencing (NGS) technologies opened the possibility of obtaining reverse transcription error rates from a large number of sequences, although appropriate protocols had to be developed. In this review, we summarize the developments in this field that allowed the determination of RNA-dependent DNA synthesis error rates for different RTs (viral and non-viral), including methods such as PRIMER IDs, REP-SEQ, ARC-SEQ, CIR-SEQ, SMRT-SEQ and ROLL-SEQ. Their advantages and limitations are discussed. Complementary DNA (cDNA) synthesis error rates obtained in different studies, using RTs and RNAs of diverse origins, are presented and compared. Future improvements in methodological pipelines will be needed for the precise identification of mutations in the RNA template, including modified bases. Full article
(This article belongs to the Section Animal Viruses)
Show Figures

Figure 1
<p>HIV-1 RT structure. (<b>A</b>) Crystal structure of HIV-1 RT bound to a DNA primer-template and an incoming deoxythymidine triphosphate (dTTP) (PDB ID: 1RTD). Subdomains of the p66 DNA polymerase domain are shown as red (fingers), green (palm), orange (thumb) and yellow (connection). The RNase H domain is shown as mint green. The p51 subunit is colored gray. The template strand is represented in dark blue, the primer strand in light blue, and the incoming dTTP is shown as purple. Coordinating metal ions (Mg<sup>2+</sup>) in the DNA polymerase and RNase H active sites are represented as dark gray spheres. (<b>B</b>) Detailed view of the DNA polymerase active site, showing the location of the catalytic triad Asp110, Asp185 and Asp186, along with key residues Lys65, Arg72 and Gln151, the incoming dTTP, the template-primer and the two metal ions. (<b>C</b>) Detailed view of the RNase H active site, showing the location of Asp443, Glu478, Asp498 and Asp549 (DEDD motif) interacting with the metal ions and the template-primer.</p>
Figure 2
<p>Overview of forward mutation assays with the <span class="html-italic">lacZ</span>α gene. The initial steps (outlined with dashed lines) differ depending on the desired measurement. For RNA-dependent DNA polymerase fidelity assessment, the RT uses an RNA molecule with the <span class="html-italic">lacZ</span>α sequence as a template, and the obtained DNA is hybridized with a gapped DNA plasmid carrying the complementary sequence of the <span class="html-italic">lacZ</span>α gene. In the case of DNA-dependent DNA polymerase fidelity, the gapped DNA plasmid serves as template for the RT to produce the complementary <span class="html-italic">lacZ</span>α gene strand. Following the initial steps, <span class="html-italic">E. coli</span> is transformed with the resulting plasmids, and errors are identified as light blue or colorless colonies.</p>
Figure 3
<p>Rationale of consensus sequencing. (<b>A</b>) Error correction of consensus sequencing. Depending on the method, multiple copies of an original molecule (DNA or RNA) are obtained and sequenced, or the molecule is sequenced several times. Sequences that derive from the same molecule are aligned, and a consensus is established. True mutations from the original molecule, which are expected to be present in all copies, are added to the consensus. On the other hand, artifacts, which are expected to be inconsistent across the copies, are discarded. (<b>B</b>) Formula to calculate error rates. Once consensus sequences are obtained, an accurate error rate can be calculated. If we consider that the consensus sequences shown on the left have a length of 100 nt, the error rate of the original molecules would be 5 × 10<sup>−3</sup> errors/base.</p>
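The error-rate formula in the caption above can be written directly as a small helper (illustrative, not code from the paper): total mismatches between the consensus sequences and the reference, divided by the total number of bases.

```python
def error_rate(consensus_seqs, reference):
    """Errors per base: mismatches between each consensus sequence
    and the reference, divided by the total bases in the consensus
    sequences."""
    errors = sum(a != b
                 for seq in consensus_seqs
                 for a, b in zip(seq, reference))
    bases = sum(len(seq) for seq in consensus_seqs)
    return errors / bases
```

For example, ten 100-nt consensus sequences carrying five true mutations in total give 5/1000 = 5 × 10<sup>−3</sup> errors/base, matching the figure.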
Figure 4
<p>Barcode-based methods for reverse transcription error rate measurements. Brown crosses represent RNA errors, green crosses depict RT errors and yellow crosses represent artifacts (i.e., other errors, such as sequencing or PCR errors). (<b>A</b>) PRIMER IDS [<a href="#B49-viruses-17-00173" class="html-bibr">49</a>]. Reverse transcription is performed using primers with barcodes at their 5′ ends. The labeled cDNAs are amplified by PCR and sequenced. Combined RNA and reverse transcription errors are expected to be in all sequences sharing a barcode. (<b>B</b>) Replicated sequencing (REP-SEQ) [<a href="#B57-viruses-17-00173" class="html-bibr">57</a>]. RNA molecules are tagged with barcodes and attached to beads. Reverse transcription is performed, and cDNAs are washed away multiple times. The resulting cDNAs are then sequenced. Transcription errors are expected to be present in all reads sharing a barcode and are distinguished from RT errors and artifacts. (<b>C</b>) Accurate RNA consensus sequencing (ARC-SEQ) [<a href="#B58-viruses-17-00173" class="html-bibr">58</a>]. Barcoded RNA molecules are circularized and subjected to rolling circle reverse transcription, yielding tandem-repeated cDNA copies. These multimeric cDNAs are restricted to cDNA monomers, which are then tagged with barcodes, amplified through PCR and sequenced. Transcription errors are expected to be present in all sequences sharing an RNA barcode, while reverse transcription errors should be shared only among sequences with the same cDNA barcode.</p>
Figure 5
<p>Barcode-free methods for reverse transcription error rate measurements. Brown crosses represent RNA errors, green crosses depict RT errors and yellow crosses represent artifacts (i.e., other errors, such as sequencing or PCR errors). (<b>A</b>) Circular sequencing (CIR-SEQ) [<a href="#B63-viruses-17-00173" class="html-bibr">63</a>]. Circularized DNA fragments (for example, cDNAs obtained after reverse transcription) are obtained and used as a template for rolling circle amplification reactions to obtain multimer DNAs. After sequencing, the repeated units of each multimer cDNA are identified and aligned. Transcription errors are expected to be found in all the aligned sequences from the same multimer cDNA molecule. (<b>B</b>) Single molecule real-time sequencing (SMRT-SEQ) [<a href="#B64-viruses-17-00173" class="html-bibr">64</a>]. RNA is used as template for reverse transcription. The obtained cDNAs are used as templates for second-strand DNA synthesis. PacBio SMRT adapters are attached to the dsDNA and subjected to SMRT repeated sequencing. Each read of the original cDNA strand is aligned. Combined RNA transcription and reverse transcription errors are expected to be in all the aligned sequences. Additionally, second-strand DNA synthesis errors could be identified by aligning the reads of the complementary strand. (<b>C</b>) Rolling circle sequencing (ROLL-SEQ) [<a href="#B65-viruses-17-00173" class="html-bibr">65</a>]. Rolling circle reverse transcription reactions are conducted to obtain multimeric cDNAs. The complementary DNA strand is then synthesized, and PacBio SMRT adapters are attached. SMRT repeated sequencing is performed, and the repeated monomers of the cDNA strand are identified for each read. RNA errors are expected to be in all the monomers of the same read and in all reads, while RT errors are expected to be only in the aligned reads of the same monomer.</p>
Figure 6
<p>Sensitivity in cDNA barcoding methods. (<b>A</b>) shows an example where sensitivity is affected by the introduction of an artifact during PCR or sequencing at a position already containing an RT mutation. (<b>B</b>) illustrates a scenario where sensitivity is impacted by assigning the same barcode to sequences originating from different cDNAs. This can occur if barcodes with identical sequences are used by chance during reverse transcription. Alternatively, different barcodes may initially be assigned, but a mutation introduced during library preparation can generate a new barcode sequence that matches another barcode already present in the library pool. Sensitivity is calculated as the percentage of original mutations found in the consensus sequences relative to all mutations present in the original cDNAs that underwent PCR amplification and sequencing.</p>
Figure 7
<p>Effect of mutations in barcodes on the generation of offspring barcodes. (<b>A</b>) Impact of introducing an artifact in a barcode during the early PCR cycles. (<b>B</b>) Impact of introducing an artifact in the later PCR cycles or during sequencing. (<b>C</b>) Barcode frequency distribution in a typical NGS experiment. Ordinate values are represented on a logarithmic scale. Adapted from [<a href="#B56-viruses-17-00173" class="html-bibr">56</a>].</p>
16 pages, 13461 KiB  
Article
Wi-Filter: WiFi-Assisted Frame Filtering on the Edge for Scalable and Resource-Efficient Video Analytics
by Lawrence Lubwama, Jungik Jang, Jisung Pyo, Joon Yoo and Jaehyuk Choi
Sensors 2025, 25(3), 701; https://doi.org/10.3390/s25030701 - 24 Jan 2025
Viewed by 480
Abstract
With the growing prevalence of large-scale intelligent surveillance camera systems, the burden on real-time video analytics pipelines has significantly increased due to continuous video transmission from numerous cameras. To mitigate this strain, recent approaches focus on filtering irrelevant video frames early in the [...] Read more.
With the growing prevalence of large-scale intelligent surveillance camera systems, the burden on real-time video analytics pipelines has significantly increased due to continuous video transmission from numerous cameras. To mitigate this strain, recent approaches focus on filtering irrelevant video frames early in the pipeline, at the camera or edge device level. In this paper, we propose Wi-Filter, an innovative filtering method that leverages Wi-Fi signals from wireless edge devices, such as Wi-Fi-enabled cameras, to optimize filtering decisions dynamically. Wi-Filter utilizes channel state information (CSI) readily available from these wireless cameras to detect human motion within the field of view, adjusting the filtering threshold accordingly. The motion-sensing models in Wi-Filter (Wi-Fi assisted Filter) are trained using a self-supervised approach, where CSI data are automatically annotated via synchronized camera feeds. We demonstrate the effectiveness of Wi-Filter through real-world experiments and prototype implementation. Wi-Filter achieves motion detection accuracy exceeding 97.2% and reduces false positive rates by up to 60% while maintaining a high detection rate, even in challenging environments, showing its potential to enhance the efficiency of video analytics pipelines. Full article
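As a rough sketch of the idea in the abstract (the threshold values and the variance cut-off are invented for illustration, not the paper's trained model): a window of CSI amplitudes with high temporal variance suggests human motion in the field of view, so the frame filter drops to a more permissive detection threshold to avoid missed detections.

```python
import numpy as np

def select_threshold(csi_window, base=0.7, low=0.3, var_cut=1e-3):
    """Pick a detector confidence threshold from a window of CSI
    samples (time x subcarriers). High per-subcarrier variance over
    time suggests motion, so the filter lowers the threshold.
    All numeric values here are illustrative assumptions."""
    amp = np.abs(np.asarray(csi_window, dtype=complex))
    motion_score = amp.var(axis=0).mean()  # temporal variance, averaged
    return low if motion_score > var_cut else base
```

In the paper this decision is made by a learned CNN classifier rather than a fixed variance cut-off; the sketch only shows where the Wi-Fi side channel plugs into the filtering pipeline.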
Show Figures

Figure 1
<p>Limitations of static, predefined threshold selection. (<b>a</b>) In bright conditions, YOLOv4 accurately identified the target regardless of the threshold value. (<b>b</b>) Under low-light conditions, YOLOv4 struggled with detection failures at a high threshold value (<math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>) but correctly identified the target with a lower threshold (<math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>). (<b>c</b>) A very low threshold may lead to false positives, such as misidentifying a chair as a human, resulting in unnecessary video frame forwarding.</p>
Figure 2
<p>CSI spectrogram obtained for two different states, no person in the room and one person walking in the room, under (<b>a</b>) bright and (<b>b</b>) dark conditions.</p>
Figure 3
<p>Wi-Filter architecture.</p>
Figure 4
<p>CSI auto-labeling architecture for Wi-Filter.</p>
Figure 5
<p>Overview of the real-time human presence detection process and lightweight CNN-based binary classifier architecture for the threshold selector.</p>
Figure 6
<p>Accuracy of threshold selector for motion sensing for various window sizes.</p>
Figure 7
<p>Comparison of the filtering performances of the static threshold technique (two settings) and Wi-Filter. (<b>a</b>) True positives in four different places under bright (experiment IDs 1 and 2) and dark conditions (IDs 3 and 4); (<b>b</b>) false positives.</p>
Figure 8
<p>Average computing resources of the static threshold technique (two settings) and Wi-Filter: (<b>a</b>) CPU utilization, and (<b>b</b>) network transmission rate.</p>
17 pages, 478 KiB  
Review
Automated Machine Learning in Dentistry: A Narrative Review of Applications, Challenges, and Future Directions
by Sohaib Shujaat
Diagnostics 2025, 15(3), 273; https://doi.org/10.3390/diagnostics15030273 - 24 Jan 2025
Viewed by 637
Abstract
The adoption of automated machine learning (AutoML) in dentistry is transforming clinical practices by enabling clinicians to harness machine learning (ML) models without requiring extensive technical expertise. This narrative review aims to explore the impact of AutoML in dental applications. A comprehensive search [...] Read more.
The adoption of automated machine learning (AutoML) in dentistry is transforming clinical practices by enabling clinicians to harness machine learning (ML) models without requiring extensive technical expertise. This narrative review aims to explore the impact of AutoML in dental applications. A comprehensive search of PubMed, Scopus, and Google Scholar was conducted without time and language restrictions. Inclusion criteria focused on studies evaluating AutoML applications and performance for dental tasks. Exclusion criteria included non-dental studies, single-case reports, and conference abstracts. This review highlights multiple promising applications of AutoML in dentistry. Diagnostic tasks showed high accuracy, such as 95.4% precision in dental implant classification and 92% accuracy in paranasal sinus disease detection. Predictive tasks also demonstrated promise, including 84% accuracy for ICU admissions due to dental infections and 93.9% accuracy in orthodontic extraction predictions. AutoML frameworks like Google Vertex AI and H2O AutoML emerged as key tools for these applications. AutoML shows great promise in transforming dentistry by facilitating data-driven decision-making and improving patient care quality through accessible, automated solutions. Future advancements should focus on enhancing model interpretability, developing large and annotated datasets, and creating pipelines tailored to dental tasks. Educating clinicians on AutoML and integrating domain-specific knowledge into automated platforms could further bridge the gap between complex ML technology and practical dental applications. Full article
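In spirit, an AutoML framework automates the train-evaluate-select loop that the abstract describes. This toy sketch (no real AutoML library; all names and the two trivial classifiers are invented for illustration) picks the better of two candidate models by holdout accuracy:

```python
import numpy as np

def nearest_centroid(Xtr, ytr):
    """Fit a nearest-centroid classifier; returns a predict function."""
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    labels = np.array(sorted(cents))
    C = np.stack([cents[c] for c in labels])
    return lambda X: labels[np.argmin(
        ((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)]

def one_nn(Xtr, ytr):
    """Fit a 1-nearest-neighbour classifier; returns a predict function."""
    return lambda X: ytr[np.argmin(
        ((X[:, None, :] - Xtr[None]) ** 2).sum(-1), axis=1)]

def auto_select(X, y, candidates, holdout=0.3, seed=0):
    """Tiny AutoML-style loop: train each candidate on a random split
    and keep the one with the best holdout accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * (1 - holdout))
    tr, va = idx[:cut], idx[cut:]
    best = None
    for name, fit in candidates.items():
        predict = fit(X[tr], y[tr])
        acc = (predict(X[va]) == y[va]).mean()
        if best is None or acc > best[1]:
            best = (name, acc, predict)
    return best  # (name, holdout accuracy, fitted predictor)
```

Real platforms such as those named in the abstract additionally automate preprocessing, feature engineering, and hyperparameter search, but the selection loop above is the core idea that lets clinicians obtain a usable model without hand-tuning.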
Show Figures

Figure 1
<p>Workflow of automated machine learning, in which one or more steps can be automated, unlike conventional manual machine learning, which requires expert oversight.</p>