
Search Results (1,461)

Search Parameters:
Keywords = pipeline network

14 pages, 998 KiB  
Article
TExCNN: Leveraging Pre-Trained Models to Predict Gene Expression from Genomic Sequences
by Guohao Dong, Yuqian Wu, Lan Huang, Fei Li and Fengfeng Zhou
Genes 2024, 15(12), 1593; https://doi.org/10.3390/genes15121593 - 12 Dec 2024
Viewed by 282
Abstract
Background/Objectives: Understanding the relationship between DNA sequences and gene expression levels is of significant biological importance. Recent advancements have demonstrated the ability of deep learning to predict gene expression levels directly from genomic data. However, traditional methods are limited by basic word encoding techniques, which fail to capture the inherent features and patterns of DNA sequences. Methods: We introduce TExCNN, a novel framework that integrates the pre-trained models DNABERT and DNABERT-2 to generate word embeddings for DNA sequences. We partitioned the DNA sequences into manageable segments and computed their respective embeddings using the pre-trained models. These embeddings were then utilized as inputs to our deep learning framework, which was based on a convolutional neural network. Results: TExCNN outperformed current state-of-the-art models, achieving an average R² score of 0.622, compared to the 0.596 score achieved by the DeepLncLoc model, which is based on the Word2Vec model and a text convolutional neural network. Furthermore, when the sequence length was extended from 10,500 bp to 50,000 bp, TExCNN achieved an even higher average R² score of 0.639. The prediction accuracy improved further when additional biological features were incorporated. Conclusions: Our experimental results demonstrate that the use of pre-trained models for word embedding generation significantly improves the accuracy of gene expression prediction. The proposed TExCNN pipeline performs optimally with longer DNA sequences and is adaptable for both cell-type-independent and cell-type-dependent predictions.
(This article belongs to the Section Bioinformatics)
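The segment-then-embed strategy the abstract describes can be sketched as follows. The k-mer size, segment length, embedding width, and the random embedding table are all hypothetical stand-ins for the DNABERT/DNABERT-2 embeddings the paper actually uses; only the overall flow (partition the sequence, embed each segment, stack the results for a CNN head) follows the description:

```python
import numpy as np

K = 6            # k-mer size (hypothetical; DNABERT tokenizes DNA into k-mers)
EMB_DIM = 16     # embedding width (stand-in for the pre-trained model's hidden size)
SEG_LEN = 500    # segment length in bp (hypothetical choice)

rng = np.random.default_rng(0)
kmer_table = {}  # stand-in embedding table; TExCNN would query DNABERT instead

def embed_segment(seg: str) -> np.ndarray:
    """Average the (stand-in) k-mer embeddings of one DNA segment."""
    vecs = []
    for i in range(len(seg) - K + 1):
        kmer = seg[i:i + K]
        if kmer not in kmer_table:
            kmer_table[kmer] = rng.normal(size=EMB_DIM)
        vecs.append(kmer_table[kmer])
    return np.mean(vecs, axis=0)

def sequence_to_matrix(seq: str) -> np.ndarray:
    """Partition the sequence into segments and embed each one.

    The resulting (n_segments, EMB_DIM) matrix is the kind of input a
    convolutional head would consume to regress the expression level.
    """
    segs = [seq[i:i + SEG_LEN] for i in range(0, len(seq), SEG_LEN)]
    return np.stack([embed_segment(s) for s in segs if len(s) >= K])

seq = "".join(rng.choice(list("ACGT"), size=2000))
mat = sequence_to_matrix(seq)   # 2000 bp -> 4 segments of 500 bp
```

A 2,000 bp toy sequence yields a 4 × 16 matrix here; with real DNABERT embeddings the second dimension would be the model's hidden size.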
Graphical abstract
Figure 1: The datasets used in this study. The features include DNA sequences, half-life data, and transcription factor target data. The labels correspond to gene expression values.
Figure 2: Performance comparison between DeepLncLoc and TExCNN for (a) DNA sequence features only, (b) DNA sequence and mRNA half-life features, and (c) DNA sequence, mRNA half-life, and TF features. The horizontal axis shows the minimum (Min), average (Avg), and maximum (Max) R² values over each model's 10 independent runs.
Figure 3: Evaluation of feature combinations (DNA; DNA + half-life; DNA + half-life + TF) on the extended dataset with 50,000-bp sequences, comparing TExCNN models with (a) 10,500-bp and (b) 50,000-bp input lengths.
Figure 4: The R² values of predicting gene expression levels on 57 cells and tissues.
Figure 5: Evaluation of TExCNN trained on four different datasets.
Figure A1: Comparison of different pre-processing methods.
Figure A2: Ablation experiment of TExCNN.
Figure A3: Results of the 10-fold validation test for the three feature configurations, comparing (a) 10,500-bp and (b) 50,000-bp input lengths.
Figure A4: Results on the independent dataset for the three feature configurations, comparing (a) 10,500-bp and (b) 50,000-bp input lengths.
24 pages, 6943 KiB  
Article
Multi-Channel Fusion Decision-Making Online Detection Network for Surface Defects in Automotive Pipelines Based on Transfer Learning VGG16 Network
by Jian Song, Yingzhong Tian and Xiang Wan
Sensors 2024, 24(24), 7914; https://doi.org/10.3390/s24247914 - 11 Dec 2024
Viewed by 286
Abstract
Although approaches for the online surface detection of automotive pipelines exist, they face three major problems: low defect area rates, small-sample and long-tailed data, and the difficulty of detection caused by the variable morphology of defects. To solve these problems, this study combines traditional visual detection methods with deep neural network technology and proposes a transfer learning multi-channel fusion decision network that does not significantly increase the number of network layers or the structural complexity. Each channel of the network is designed according to the characteristics of different types of defects. Dynamic weights are assigned to achieve decision-level fusion, using a matrix of indicators that evaluates each channel's recognition ability. To improve detection efficiency and reduce the amount of data transmission and processing, an improved ROI detection algorithm for surface defects is proposed, enabling the rapid screening of target surfaces for the high-quality and rapid acquisition of surface defect images. On an automotive pipeline surface defect dataset, the detection accuracy of the multi-channel fusion decision network with transfer learning was 97.78% and its detection speed was 153.8 FPS. The experimental results indicate that the multi-channel fusion decision network can simultaneously meet the needs of real-time detection and accuracy, combine the advantages of different network structures, and avoid the limitations of single-channel networks.
(This article belongs to the Section Communications)
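The dynamic-weight, decision-level fusion idea can be illustrated with a toy example. The per-channel metric values and class probabilities below are invented for illustration (the paper derives its weights from a matrix of evaluation indicators); only the mechanism, weighting each channel's output by its measured recognition ability and averaging, follows the description:

```python
import numpy as np

# Hypothetical per-channel validation scores (e.g., per-channel F1),
# normalized into dynamic fusion weights.
channel_scores = np.array([0.90, 0.95, 0.85])
weights = channel_scores / channel_scores.sum()

# Per-channel class-probability outputs for one image (3 channels x 4 classes).
probs = np.array([
    [0.10, 0.70, 0.10, 0.10],
    [0.05, 0.80, 0.10, 0.05],
    [0.20, 0.50, 0.20, 0.10],
])

# Decision-level fusion: weighted average of the channels' probabilities,
# then an argmax over classes for the final decision.
fused = weights @ probs
pred = int(np.argmax(fused))   # class index chosen by the fused decision
```

Because the weights sum to one and each channel's probabilities sum to one, the fused vector is itself a valid probability distribution.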
Figure 1: Overview of the structural framework for online detection and offline training deployment of the multi-channel fusion decision model.
Figure 2: Improved surface defect ROI detection algorithm flow.
Figure 3: Structure of the VGG16 transfer learning network.
Figure 4: Structure of the residual maximum average pooling feature-extraction module.
Figure 5: Structure of the residual minimum spatial pyramid pooling feature-extraction module.
Figure 6: Structure of the residual full dilated convolutional feature-extraction module.
Figure 7: Multi-channel fusion decision network based on the transfer learning VGG16.
Figure 8: Pipe surface image acquisition system.
Figure 9: Pipeline defect categories.
Figure 10: Network accuracy and loss error.
Figure 11: Confusion matrix for the network.
Figure 12: Improved transfer learning VGG16 network structure diagram.
Figure 13: Confusion matrix performance comparison for each network.
Figure 14: Multi-channel fusion decision network based on transfer learning VGG16.
Figure 15: Confusion matrix for the fused network.
12 pages, 1808 KiB  
Article
Implementation of Automatic Segmentation Framework as Preprocessing Step for Radiomics Analysis of Lung Anatomical Districts
by Alessandro Stefano, Fabiano Bini, Nicolò Lauciello, Giovanni Pasini, Franco Marinozzi and Giorgio Russo
BioMedInformatics 2024, 4(4), 2309-2320; https://doi.org/10.3390/biomedinformatics4040125 - 11 Dec 2024
Viewed by 343
Abstract
Background: The advent of artificial intelligence has significantly impacted radiology, with radiomics emerging as a transformative approach that extracts quantitative data from medical images to improve diagnostic and therapeutic accuracy. This study aimed to enhance the radiomic workflow by applying deep learning, through transfer learning, for the automatic segmentation of lung regions in computed tomography scans as a preprocessing step. Methods: Leveraging a pipeline articulated in (i) patient-based data splitting, (ii) intensity normalization, (iii) voxel resampling, (iv) bed removal, (v) contrast enhancement and (vi) model training, a DeepLabV3+ convolutional neural network (CNN) was fine-tuned to perform whole-lung-region segmentation. Results: The trained model achieved high accuracy, with Dice coefficient (0.97) and BF (93.06%) scores, and it effectively preserved lung region areas while removing confounding anatomical regions such as the heart and the spine. Conclusions: This study introduces a deep learning framework for the automatic segmentation of lung regions in CT images, leveraging an articulated pipeline and demonstrating excellent model performance, effectively isolating lung regions while excluding confounding anatomical structures. Ultimately, this work paves the way for more efficient, automated preprocessing tools in lung cancer detection, with the potential to significantly improve clinical decision making and patient outcomes.
(This article belongs to the Section Imaging Informatics)
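The Dice coefficient reported for the lung model (0.97) is a standard overlap metric for segmentation masks, defined as 2|A∩B| / (|A| + |B|). A minimal sketch on toy binary masks (the mask shapes are invented for the example):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Toy masks: predicted region of 4 pixels, ground truth of 6, overlap of 4.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt   = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True

score = dice(pred, gt)   # 2*4 / (4 + 6) = 0.8
```

A score of 1.0 means perfect overlap; the paper's 0.97 indicates near-complete agreement with the manual lung contours.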
Figure 1: Semantic segmentation architecture of the pretrained DeepLabV3+.
Figure 2: (a) Slice of a CT volume; (b) resampled slice; (c) slice after thresholding; (d) slice after bed removal; (e) slice cropped to the thoracic region; and (f) slice after CLAHE.
Figure 3: Comparison of manual (a) and automatic (b) segmentation.
Figure 4: Image (a) before and (b) after the automated deep learning whole lung district segmentation.
18 pages, 6618 KiB  
Article
A Convolutional Graph Neural Network Model for Water Distribution Network Leakage Detection Based on Segment Feature Fusion Strategy
by Xuan Li and Yongqiang Wu
Water 2024, 16(24), 3555; https://doi.org/10.3390/w16243555 - 10 Dec 2024
Viewed by 400
Abstract
In this study, an innovative leak detection model based on Convolutional Graph Neural Networks (CGNNs) is proposed to enhance response speed during pipeline bursts and to improve detection accuracy. By integrating node features into pipe segment features, the model effectively combines the CGNN with water distribution networks, achieving leak detection at the pipe segment level. Optimizing the receptive field and convolutional layers ensures high detection performance even with sparse monitoring device density. Applied to two representative water distribution networks in City H, China, the model was trained on synthetic leak data generated by EPANET simulations and validated using real-world leak events. The experimental results show that the model achieves 90.28% accuracy in high-density monitoring areas, and over 85% accuracy within three pipe segments of actual leaks in low-density areas (10–20%). The impact of feature engineering on model performance is also analyzed, and strategies are suggested for optimizing monitoring point placement to further improve detection efficiency. This research provides valuable technical support for the intelligent management of water distribution networks under resource-limited conditions.
(This article belongs to the Section Urban Water Management)
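The core idea of fusing node features into pipe segment features can be sketched with a single averaging step over each segment's endpoint nodes. The toy network and feature values below are hypothetical, and the real model expands each segment's receptive field over several convolutional layers rather than using one hop:

```python
import numpy as np

# Toy network: 4 monitoring nodes, 3 pipe segments (edges),
# 2 pressure-derived features per node (values invented).
node_feats = np.array([
    [1.0, 0.5],
    [2.0, 1.0],
    [3.0, 1.5],
    [4.0, 2.0],
])
edges = [(0, 1), (1, 2), (2, 3)]  # segment i connects nodes edges[i]

# One fusion step: a segment's feature vector is the mean of its two
# endpoint node features -- a minimal stand-in for the receptive-field
# expansion described in the paper, which lets leaks be classified at
# the pipe-segment level rather than the node level.
seg_feats = np.array([(node_feats[u] + node_feats[v]) / 2 for u, v in edges])
```

Stacking more such layers lets each segment aggregate pressure information from nodes further away, which is what keeps detection viable when sensors are sparse.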
Figure 1: Flowchart of the leak localization model process.
Figure 2: Eight warning events in the SCADA system of City H.
Figure 3: The process of fusing node features into segment features through convolutional layers: (a) pipe segment 5 in the water supply network; (b) convolutional layer connectivity; (c) receptive field expansion of node 5; (d) receptive field expansion of node 6; (e) final receptive field of pipe segment 5.
Figure 4: Schematic overview of the entire Convolutional Graph Neural Network (CGNN) process.
Figure 5: Water distribution network of City H.
Figure 6: On-site photos of pressure sensors in City H.
Figure 7: Water supply network map of Ring A Water Plant.
Figure 8: Zonal analysis of the water distribution network in City H.
23 pages, 106560 KiB  
Article
RLUNet: Overexposure-Content-Recovery-Based Single HDR Image Reconstruction with the Imaging Pipeline Principle
by Yiru Zheng, Wei Wang, Xiao Wang and Xin Yuan
Appl. Sci. 2024, 14(23), 11289; https://doi.org/10.3390/app142311289 - 3 Dec 2024
Viewed by 611
Abstract
With the growing popularity of High Dynamic Range (HDR) display technology, consumer demand for HDR images is increasing. Since HDR cameras are expensive, reconstructing HDR images from traditional Low Dynamic Range (LDR) images is crucial. However, existing HDR image reconstruction algorithms often fail to recover fine details and do not adequately address the fundamental principles of the LDR imaging pipeline. To overcome these limitations, the Reversing Lossy UNet (RLUNet) is proposed, aiming to effectively balance dynamic range expansion and the recovery of overexposed areas through a deeper understanding of the LDR imaging pipeline. The RLUNet model comprises the Reverse Lossy Network, which is designed according to the LDR–HDR framework and reconstructs HDR images by recovering overexposed regions, dequantizing, linearizing the mapping, and suppressing compression artifacts. This framework, grounded in the principles of the LDR imaging pipeline, is designed to reverse the operations involved in lossy image processing. Furthermore, the integration of the Texture Filling Module (TFM) block with the Recovery of Overexposed Regions (ROR) module enhances the visual quality and detail texture of the overexposed areas in the reconstructed HDR image. The experiments demonstrate that the proposed RLUNet model outperforms various state-of-the-art methods on different test sets.
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)
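Two of the lossy operations RLUNet learns to reverse, quantization and the gamma (camera response) mapping, have simple closed-form counterparts. A sketch assuming a fixed gamma of 2.2; this hand-coded version only mirrors the dequantization and linearization subtasks, which the paper's learned modules replace, and it cannot recover clipped highlights, which is exactly what the ROR module is for:

```python
import numpy as np

GAMMA = 2.2  # assumed display gamma; the learned mapping replaces this

def ldr_to_linear(ldr8: np.ndarray) -> np.ndarray:
    """Dequantize an 8-bit LDR image and invert a fixed gamma mapping.

    Overexposed pixels (255) stay clipped at 1.0 -- the information the
    overexposure-recovery stage must reconstruct from context.
    """
    x = ldr8.astype(np.float64) / 255.0   # dequantization to [0, 1]
    return x ** GAMMA                     # inverse of the gamma curve

ldr = np.array([0, 128, 255], dtype=np.uint8)
linear = ldr_to_linear(ldr)
```

Note that this inverse is only exact for the assumed gamma; real camera pipelines also include tone curves and compression, which is why the paper learns the reversal instead.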
Figure 1: Existing HDR reconstruction methods struggle with overexposed content and dynamic range. Our approach reverses the lossy operations, achieving more accurate HDR results and reducing artifacts (second and fourth columns) while reconstructing finer textures (first and third columns). Test images courtesy of [24]; all HDR images are tone-mapped [23].
Figure 2: Principles of the LDR imaging pipeline and lossy image-quality operations, with visual illustrations of the resulting degradation.
Figure 3: Reconstruction of HDR images from LDR images is typically divided into several subtasks. RLUNet reconstructs HDR images via Recovering Overexposed Regions (ROR), Dequantization (DQ), Reconstruction of Linearized Mappings (RLM), and the Compression Artifact Adjustment Module (CAAM), focusing on reversing operations that cause a loss of image quality.
Figure 4: Overall structure of the proposed Reverse Lossy UNet (RLUNet).
Figure 5: Scheme of the proposed Texture Filling Module (TFM) block.
Figure 6: Composition of the Imaging Pipeline Module (IPM) block.
Figure 7: Qualitative comparison with state-of-the-art methods on the NTIRE test set [24]. The first and last images are the LDR input and the ground-truth HDR image; the remaining images are results from HDRCNN [38], ExpandNet [42], FHDR [41], DeepHDR [39], SingleHDR [36], HDRUNet [23], KUNet [44], CEVR [34], and our RLUNet. Red boxes zoom in on overexposed regions; green boxes on underexposed regions.
Figure 8: Qualitative comparison on the HDR-Real test set [36] with the same methods. Red boxes magnify artifact suppression in overexposed areas; green boxes magnify color and detail reconstruction of the overexposed area.
Figure 9: Visual presentation of ablation studies on different modules.
Figure 10: Reconstruction results of RLUNet compared with the same state-of-the-art algorithms on a dataset of real-world images.
18 pages, 5787 KiB  
Article
Numerical Simulation Study on Reverse Source Tracing for Heating Pipeline Network Leaks Based on Adjoint Equations
by Jie Wang, Yue Zhu, Songyu Zou, Shuai Xue, Le Chen, Weilong Hou, Shengwei Xin, Jinglan Li and Zhongyan Liu
Processes 2024, 12(12), 2710; https://doi.org/10.3390/pr12122710 - 1 Dec 2024
Viewed by 458
Abstract
In order to identify the leak source in complex heating pipeline networks, a timely and effective simulation of the leakage process was conducted. The open-source computational fluid dynamics software OpenFOAM 5.0 was combined with the PISO algorithm to simulate the pressure during leakage in water supply networks, transforming the reverse source tracing problem into the solution of an adjoint equation. The transient adjoint equation for single-phase flow was validated through simulation, and the solved pressure wave change graph at the moment of the network leakage was consistent with the experimental results. Using OpenFOAM 5.0, the positioning accuracy of pipeline leak points can be controlled within the range of 92% to 96%. Based on the pressure wave change graph, the position of the leak source in the complex network was determined using the reverse source tracing method combined with the second correlation theory. The results show that the calculation speed of the PISO algorithm combined with the adjoint equation is significantly better than that of the individual SIMPLE and PISO algorithms, thereby demonstrating the superiority of the adjoint method.
(This article belongs to the Special Issue Model Predictive Control of Heating and Cooling Systems)
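Independent of the adjoint formulation, correlation-based time-delay estimation of the kind the paper combines with reverse source tracing can be sketched as follows. The signal length, sample rate, and delay are invented for the example; the lag that maximizes the cross-correlation R_s1s2(τ) between two pressure sensors, together with the wave speed, locates the leak between them:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true_delay = 37    # samples (invented for the example)

# A pressure transient recorded at sensor 1 arrives at sensor 2 delayed.
s1 = rng.normal(size=n)
s2 = np.concatenate([np.zeros(true_delay), s1[:-true_delay]])

# Cross-correlation: the lag maximizing R_s1s2(tau) estimates the delay.
# np.correlate's 'full' output index i corresponds to lag i - (n - 1).
corr = np.correlate(s2, s1, mode="full")
lag = int(np.argmax(corr) - (n - 1))   # estimated delay in samples
```

Multiplying the estimated delay by the sampling interval and the pressure-wave propagation speed gives the distance offset of the leak from the midpoint between the two sensors.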
Figure 1: Experimental system.
Figure 2: Distribution of pressure measurement points and leak points on the experimental rig.
Figure 3: Pipeline network experimental bench: (a) front view; (b) side view.
Figure 4: 4 × 4 and 2 × 2 mesh division.
Figure 5: Principle diagram of the novel time delay estimation method.
Figure 6: Schematic layout diagram of the pipeline network.
Figure 7: Pressure and correlation at leak point 1 before and after the leak: (a) P over time; (b) R_s1s2(τ) over time.
Figure 8: Pressure and correlation at leak point 2 before and after the leak: (a) P over time; (b) R_s1s2(τ) over time.
Figure 9: Pressure and correlation at leak point 3 before and after the leak: (a) P over time; (b) R_s1s2(τ) over time.
Figure 10: Leak detection and localization flowchart.
Figure 11: Schematic diagram of potential leak locations.
Figure 12: Comparison of computational speeds of different methods.
Figure 13: Pressure variation: (a) experimental data; (b) simulation data.
Figure 14: Comparison of experimental localization accuracy at different hole diameters.
Figure 15: Comparison of pipeline localization accuracy at different leak points.
22 pages, 17763 KiB  
Article
Computer Vision Technology for Short Fiber Segmentation and Measurement in Scanning Electron Microscopy Images
by Evgenii Kurkin, Evgenii Minaev, Andrey Sedelnikov, Jose Gabriel Quijada Pioquinto, Vladislava Chertykovtseva and Andrey Gavrilov
Technologies 2024, 12(12), 249; https://doi.org/10.3390/technologies12120249 - 29 Nov 2024
Viewed by 742
Abstract
Computer vision technology for the automatic recognition and geometric characterization of carbon and glass fibers in scanning electron microscopy images is proposed. The proposed pipeline, combining the SAM and DeepLabV3+ models, provides the generalizability and accuracy of the foundational SAM model and the ability to train quickly on a small amount of data via the DeepLabV3+ model. The pipeline was trained several times more rapidly, with lower computing resource requirements, than fine-tuning the SAM model, with comparable inference time. On the basis of this pipeline, an end-to-end technology for processing electron microscopy images of fibers was developed, whose input is images with metadata and whose output is statistics on the distribution of the geometric characteristics of the fibers. This is of great practical importance for modeling the physical characteristics of materials. This paper proposes a few-shot training procedure for the DeepLabV3+/SAM pipeline, combining the training of the DeepLabV3+ model weights and the SAM model parameters; it allows effective training of the pipeline using only 37 real labeled images. The pipeline was then adapted to a new type of fiber and background using 15 additional real labeled images. A method for generating synthetic data for training neural network models is also proposed, which improves segmentation quality by the IoU and PixAcc metrics from 0.943 and 0.949 to 0.953 and 0.959, i.e., by 1% on average. The developed pipeline significantly reduces the time required to evaluate fiber length in scanning electron microscope images.
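A simple way to turn a segmented fiber mask into a physical length, in the spirit of the pipeline's measurement stage, is to project the mask pixels onto their principal axis and scale by the pixel size read from the SEM metadata. The pixel size and mask below are hypothetical, and this straight-line approximation ignores fiber curvature, which a production measurement would need to handle:

```python
import numpy as np

PIXEL_SIZE_UM = 0.5  # µm per pixel; normally parsed from the SEM metadata

def fiber_length_um(mask: np.ndarray) -> float:
    """Approximate a fiber's length as the extent of its mask pixels
    along the principal axis (PCA), converted to micrometers."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)
    # Principal axis = eigenvector of the covariance with largest eigenvalue.
    w, v = np.linalg.eigh(np.cov(pts.T))
    axis = v[:, np.argmax(w)]
    proj = pts @ axis
    return (proj.max() - proj.min()) * PIXEL_SIZE_UM

# A horizontal 1-pixel-wide fiber spanning 40 columns (extent 39 px).
mask = np.zeros((10, 50), dtype=bool)
mask[5, 5:45] = True
length = fiber_length_um(mask)   # 39 px * 0.5 µm/px = 19.5 µm
```

Aggregating such per-fiber lengths and thicknesses over an image set yields the distribution statistics the pipeline reports.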
Figure 1: Examples of the artificial images used to train the Mask R-CNN neural network [29].
Figure 2: Performance of Mask R-CNN when detecting short fibers in (a) an image with few fibers and (b) an image with more fibers and overlapping sets.
Figure 3: Heating of composite material in a muffle furnace.
Figure 4: A sample subjected to heating in the muffle furnace.
Figure 5: Obtaining images of short fibers with the Tescan Vega scanning electron microscope.
Figure 6: Examples of images taken with the Tescan Vega scanning electron microscope.
Figure 7: Fiber labeling of two real images.
Figure 8: Example of an image created by the NX API.
Figure 9: The proposed two-stage pipeline combining SAM and DeepLabV3+.
Figure 10: DeepLabV3+ architecture.
Figure 11: SAM architecture.
Figure 12: Example of metadata.
Figure 13: A sample of the database of real images.
Figure 14: A sample of the database of artificial images.
Figure 15: A sample of the extra database of real glass fiber images.
Figure 16: Fiber and background label masks produced by DeepLabv3+ trained with different datasets.
Figure 17: Fiber and background label masks produced by DeepLabv3+ trained on the expanded dataset for different fiber types.
Figure 18: SAM model hyperparameter search.
Figure 19: SAM model results with (a) default parameters and (b) optimized parameters.
Figure 20: Qualitative comparison: (a) input image, (b) manual segmentation, (c) DeepLabv3+, (d) DeepLabv3+/Hough, (e) SAM, (f) DeepLabv3+/SAM.
Figure 21: Histograms of the distribution of lengths, thicknesses, and length-to-thickness ratios (blue: automatic segmentation; green: manual segmentation).
Figure 22: Glass fiber automatic segmentation with fiber length detection in micrometers.
Figure 23: Manual glass fiber measurement.
Figure 24: Challenging conditions for fiber segmentation: (a) intersecting fibers, (b) high fiber density, (c) large amounts of foreign matter.
27 pages, 8948 KiB  
Article
Defect Detection and 3D Reconstruction of Complex Urban Underground Pipeline Scenes for Sewer Robots
by Ruihao Liu, Zhongxi Shao, Qiang Sun and Zhenzhong Yu
Sensors 2024, 24(23), 7557; https://doi.org/10.3390/s24237557 - 26 Nov 2024
Viewed by 630
Abstract
Detecting defects in complex urban sewer scenes is crucial for urban underground structure health monitoring. However, most image-based sewer defect detection models are complex, have high resource consumption, and fail to provide detailed damage information. To increase defect detection efficiency, visualize pipelines, and [...] Read more.
Detecting defects in complex urban sewer scenes is crucial for urban underground structure health monitoring. However, most image-based sewer defect detection models are complex, have high resource consumption, and fail to provide detailed damage information. To increase defect detection efficiency, visualize pipelines, and enable deployment on edge devices, this paper proposes a computer vision-based robotic defect detection framework for sewers. The framework encompasses positioning, defect detection, model deployment, 3D reconstruction, and the measurement of realistic pipelines. A lightweight Sewer-YOLO-Slim model is introduced, which reconstructs the YOLOv7-tiny network by adjusting its backbone, neck, and head. Channel pruning is applied to further reduce the model’s complexity. Additionally, a multiview reconstruction technique is employed to build a 3D model of the pipeline from images captured by the sewer robot, allowing for accurate measurements. The Sewer-YOLO-Slim model achieves reductions of 60.2%, 60.0%, and 65.9% in model size, parameters, and floating-point operations (FLOPs), respectively, while improving the mean average precision (mAP) by 1.5%, reaching 93.5%. Notably, the pruned model is only 4.9 MB in size. Comprehensive comparisons and analyses are conducted with 12 mainstream detection algorithms to validate the superiority of the proposed model. The model is deployed on edge devices with the aid of TensorRT for acceleration, and the detection speed reaches 15.3 ms per image. For a real section of the pipeline, the maximum measurement error of the 3D reconstruction model is 0.57 m. These results indicate that the proposed sewer inspection framework is effective, with the detection model exhibiting advanced performance in terms of accuracy, low computational demand, and real-time capability. The 3D modeling approach offers valuable insights for underground pipeline data visualization and defect measurement. Full article
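The channel pruning used to slim the detection model is typically driven by a magnitude criterion; below is a minimal sketch of L1-norm channel selection. The filter weights and keep ratio are illustrative assumptions, and the paper's exact pruning criterion may differ.

```python
# Hypothetical sketch of L1-norm channel pruning, a common criterion for the
# kind of channel pruning applied to compress a detector like Sewer-YOLO-Slim.

def l1_channel_importance(weights):
    """Score each output channel by the L1 norm of its filter weights.

    `weights` is a list of filters, one per output channel; each filter is a
    flat list of floats.
    """
    return [sum(abs(w) for w in filt) for filt in weights]

def prune_channels(weights, keep_ratio):
    """Return indices of the channels to keep, strongest L1 norm first."""
    scores = l1_channel_importance(weights)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_keep = max(1, int(len(weights) * keep_ratio))
    return sorted(order[:n_keep])

# Illustrative 4-channel layer; keep the 2 channels with the largest L1 norm.
filters = [[0.1, -0.2], [0.9, 0.8], [0.0, 0.05], [-0.6, 0.4]]
kept = prune_channels(filters, keep_ratio=0.5)
```

Dropping low-magnitude channels this way is what yields the reported reductions in parameters and FLOPs while a short fine-tuning pass recovers accuracy.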
Show Figures

Figure 1
<p>Pipeline defect detection and 3D scene reconstruction framework.</p>
Fig">
Figure 2
<p>Three views of the amphibious wheeled pipe inspection robot.</p>
Figure 3
<p>Pipeline detection and data collection process.</p>
Figure 4
<p>Five types of defect examples and labels.</p>
Figure 5
<p>YOLOv7-tiny network structure.</p>
Figure 6
<p>The technical process of the proposed method. (<b>a</b>) The improved structure of YOLOv7-tiny. (<b>b</b>) Channel pruning is applied to the improved model. (<b>c</b>) The process of Sewer-YOLO-Slim being deployed on the edge detection device.</p>
Figure 7
<p>The architecture of FasterNet and its improvements.</p>
Figure 8
<p>The structure of the GSCConv module.</p>
Figure 9
<p>The structure of the VoVGSCCSP module.</p>
Figure 10
<p>The DyHead structure. (<b>a</b>) Description of a three-dimensional tensor from a general view, where L, C, and S represent the number of layers, channels, and the height and width of the feature map, respectively. (<b>b</b>) Three attention mechanisms (scale-aware, spatial, and channel) are applied in sequence. (<b>c</b>) The specific application of DyHead in the detection framework.</p>
Figure 11
<p>Schematic diagram of the channel pruning algorithm.</p>
Figure 12
<p>The localization process of the amphibious wheeled pipeline inspection robot.</p>
Figure 13
<p>Three-dimensional reconstruction process of the pipeline scene.</p>
Figure 14
<p>Channel changes before and after pruning.</p>
Figure 15
<p>The improved YOLOv7-tiny’s training loss and the precision–recall curve. (<b>a</b>) Loss curves at three different stages during the entire training process of the improved model. (<b>b</b>) Precision–recall curve of the improved algorithm.</p>
Figure 16
<p>Comparison of different models on the USDID dataset. (<b>a</b>) Detection of leaks. (<b>b</b>) Detection of misplacements and fouling. (<b>c</b>) Detection of obstacles and misplacements.</p>
Figure 17
<p>Onsite video inspection results.</p>
Figure 18
<p>SIFT feature extraction and matching effect diagram.</p>
Figure 19
<p>Motion trajectory tracking of the amphibious wheeled robot.</p>
Figure 20
<p>Positioning of the amphibious wheeled robot during image collection.</p>
Figure 21
<p>Textured 3D model of the sewer scene.</p>
Figure 22
<p>Three views of the textured 3D model of the inspection well.</p>
Figure 23
<p>Three defects in the 3D model.</p>
Figure 24
<p>Three defects inside the 3D model.</p>
Figure 25
<p>Inspection well bore diameter measurements.</p>
Figure 26
<p>Textured 3D model of the drainage pipe based on bidirectional data.</p>
Figure 27
<p>Defect location and measurement of the axial pipeline 3D model.</p>
Figure 28
<p>Three-dimensional measurement of internal pipeline defect damage. (<b>a</b>) Measurement of Defect 1. (<b>b</b>) Measurement of Defect 2.</p>
19 pages, 7127 KiB  
Article
Refinement of Control Strategies for Wheel-Fan Systems in High-Speed Air-Floating Vehicles Operating in Atmospheric Pressure Pipelines
by Kun Zhang, Bin Jiao, Yuliang Bian, Zeming Liu, Tiehua Ma and Changxin Chen
Aerospace 2024, 11(12), 974; https://doi.org/10.3390/aerospace11120974 - 26 Nov 2024
Viewed by 416
Abstract
This study explored the optimization of control systems for atmospheric pipeline air-floating vehicles traveling at ground level by introducing a novel composite wheel-fan system that integrates both wheels and fans. To evaluate the control impedance, the system simulates road conditions like inclines, uneven [...] Read more.
This study explored the optimization of control systems for atmospheric pipeline air-floating vehicles traveling at ground level by introducing a novel composite wheel-fan system that integrates both wheels and fans. To evaluate the control impedance, the system simulates road conditions like inclines, uneven surfaces, and obstacles by using fixed, random, and high torque settings. The hub motor of the wheel fan is managed through three distinct algorithms: PID, fuzzy PID, and the backpropagation neural network (BP). Each algorithm’s control strategy is outlined, and tracking experiments were conducted across straight, circular, and curved trajectories. Analysis of these experiments supports a hybrid control approach: initiating with fuzzy PID, employing the PID algorithm on straight paths, and utilizing the BP neural network for sinusoidal and circular paths. The adaptive capacity of the BP neural network suggests its potential to eventually supplant the PID algorithm in straight path scenarios over extended testing and operation, ensuring improved control performance. Full article
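Of the three controllers compared (PID, fuzzy PID, BP neural network), the baseline discrete PID update can be sketched as follows. The gains, timestep, and speed setpoint are illustrative assumptions, not the paper's tuned values.

```python
# Minimal discrete PID controller, the baseline algorithm driving the hub motor.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (integral term)
        self.prev_error = 0.0    # last error (for the derivative term)

    def step(self, setpoint, measurement):
        """One control update: returns the command for the hub motor."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative gains and a 10-unit speed error at a 10 ms control period.
pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.01)
u = pid.step(setpoint=100.0, measurement=90.0)
```

The fuzzy PID variant replaces the fixed gains with gains scheduled from fuzzy rules on the error and its rate, and the BP network learns the mapping from error signals to the control command directly.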
(This article belongs to the Section Aeronautics)
Show Figures

Figure 1
<p>Schematic diagram of the position of the bottom wheel fan.</p>
Fig">
Figure 2
<p>Schematic diagram of the wheel-fan structure.</p>
Figure 3
<p>Equivalent coordinate relationship between the center of the experimental mobile platform and the center of the wheel fan.</p>
Figure 4
<p>Plot of wheel fan versus hub motor coordinates.</p>
Figure 5
<p>Flowchart of PID parameter optimization using the genetic algorithm.</p>
Figure 6
<p>Diagram of the n–t curve comparison of the original parameters and the first set of optimized parameters.</p>
Figure 7
<p>Comparison T–t curve diagram of the original parameters and the first set of optimized parameters.</p>
Figure 8
<p>The curve relationship diagram of the original parameters Iabc-t.</p>
Figure 9
<p>The curve relationship diagram of the first set of optimization parameters Iabc-t.</p>
Figure 10
<p>The n–t curve comparison chart of the original parameters and the second set of optimized parameters.</p>
Figure 11
<p>The T–t curve comparison chart of the original parameters and the second set of optimized parameters.</p>
Figure 12
<p>The curve relationship diagram of the second set of optimized parameters Iabc-t.</p>
Figure 13
<p>PID control algorithm schematic diagram.</p>
Figure 14
<p>PID algorithm embedded software design flow.</p>
Figure 15
<p>Fuzzy adaptive PID block diagram.</p>
Figure 16
<p>Fuzzy controller block diagram.</p>
Figure 17
<p>Fuzzy PID operation flowchart of the hub motor wheel-to-wheel experimental mobile platform.</p>
Figure 18
<p>Schematic of neural network.</p>
Figure 19
<p>Flow of BP neural network algorithm in the embedded system.</p>
Figure 20
<p>Test plot of impedance for two parameters.</p>
Figure 21
<p>Comparison of the three algorithms tested in the driving condition of the straight line test.</p>
Figure 22
<p>Comparison of the test results of the three algorithms in the driving condition of the circular test.</p>
Figure 23
<p>Comparison of the test results of the three algorithms under the driving condition of the curve test.</p>
Figure 24
<p>Comparison of the time spent by the three algorithms.</p>
10 pages, 2220 KiB  
Article
Prediction of Blast Vibration Velocity of Buried Steel Pipe Based on PSO-LSSVM Model
by Hongyu Zhang, Shengwu Tu, Senlin Nie and Weihua Ming
Sensors 2024, 24(23), 7437; https://doi.org/10.3390/s24237437 - 21 Nov 2024
Viewed by 307
Abstract
In order to ensure the safe operation of adjacent buried pipelines under blast vibration, it is of great practical engineering significance to accurately predict the peak vibration velocity of buried pipelines under blasting loads. Relying on the test results of the buried steel pipe [...] Read more.
In order to ensure the safe operation of adjacent buried pipelines under blast vibration, it is of great practical engineering significance to accurately predict the peak vibration velocity of buried pipelines under blasting loads. Relying on the results of a buried steel pipe blast model test, a sensitivity analysis of the relevant influencing factors was carried out using the gray correlation analysis method. A least squares support vector machine (LS-SVM) model was established to predict the peak vibration velocity of the pipeline, and the best parameter combination for the LS-SVM model was determined through particle swarm optimization (PSO). The predictions of the resulting PSO-LSSVM model were compared with those of a BP neural network model and Sa’s empirical formula. The results show that the fitting correlation coefficient (R2), root mean square error (RMSE), average relative error (MRE), and Nash coefficient (NSE) of the PSO-LSSVM model for the prediction of pipeline peak vibration velocity are 91.51%, 2.95%, 8.69%, and 99.03%, respectively, indicating that the PSO-LSSVM model has higher prediction accuracy and better generalization ability, which provides a new idea for the vibration velocity prediction of buried pipelines under complex blasting conditions. Full article
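The reported evaluation metrics (RMSE, MRE, and the Nash coefficient NSE) have compact definitions; a minimal sketch, with toy data that are illustrative only:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mre(y_true, y_pred):
    """Mean (average) relative error."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def nse(y_true, y_pred):
    """Nash–Sutcliffe efficiency: 1 minus residual variance over data variance."""
    mean_t = sum(y_true) / len(y_true)
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - num / den

# Illustrative measured vs. predicted peak vibration velocities.
y_true, y_pred = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]
scores = (rmse(y_true, y_pred), mre(y_true, y_pred), nse(y_true, y_pred))
```

An NSE close to 1 means the model explains almost all of the observed variance, which is how the paper's 99.03% figure should be read.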
Show Figures

Figure 1
<p>Processing flow of PSO-LSSVM model.</p>
Fig">
Figure 2
<p>Model test layout diagram.</p>
Figure 3
<p>Installation diagram of blast vibration meter.</p>
Figure 4
<p>Fitness curve of PSO-LSSVM model.</p>
Figure 5
<p>A comparison between the true value and the predicted value of the training sample of the PSO-LSSVM model.</p>
Figure 6
<p>Comparison of prediction results of different models.</p>
22 pages, 9902 KiB  
Article
Analytical Fragility Surfaces and Global Sensitivity Analysis of Buried Operating Steel Pipeline Under Seismic Loading
by Gersena Banushi
Appl. Sci. 2024, 14(22), 10735; https://doi.org/10.3390/app142210735 - 20 Nov 2024
Viewed by 473
Abstract
The structural integrity of buried pipelines is threatened by the effects of Permanent Ground Deformation (PGD), resulting from seismic-induced landslides and lateral spreading due to liquefaction, requiring accurate analysis of the system performance. Analytical fragility functions allow us to estimate the likelihood of [...] Read more.
The structural integrity of buried pipelines is threatened by the effects of Permanent Ground Deformation (PGD), resulting from seismic-induced landslides and lateral spreading due to liquefaction, requiring accurate analysis of the system performance. Analytical fragility functions allow us to estimate the likelihood of seismic damage along the pipeline, supporting design engineers and network operators in prioritizing resource allocation for mitigative or remedial measures in spatially distributed lifeline systems. To efficiently and accurately evaluate the seismic fragility of a buried operating steel pipeline under longitudinal PGD, this study develops a new analytical model, accounting for the asymmetric pipeline behavior in tension and compression under varying operational loads. This validated model is further implemented within a fragility function calculation framework based on the Monte Carlo Simulation (MCS), allowing us to efficiently assess the probability of the pipeline exceeding the performance limit states, conditioned to the PGD demand. The evaluated fragility surfaces showed that the probability of the pipeline exceeding the performance criteria increases for larger soil displacements and lengths, as well as cover depths, because of the greater mobilized soil reaction counteracting the pipeline deformation. The performed Global Sensitivity Analysis (GSA) highlighted the influence of the PGD and soil–pipeline interaction parameters, as well as the effect of the service loads on structural performance, requiring proper consideration in pipeline system modeling and design. Overall, the proposed analytical fragility function calculation framework provides a useful methodology for effectively assessing the performance of operating pipelines under longitudinal PGD, quantifying the effect of the uncertain parameters impacting system response. Full article
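Each point of an MCS-based fragility surface can be sketched as the probability that the PGD demand exceeds a sampled capacity. The lognormal capacity parameters below are assumptions for illustration, not the paper's calibrated model.

```python
import math
import random

def fragility_point(demand, capacity_sampler, n_samples=10000, seed=0):
    """Monte Carlo estimate of P(demand > capacity): one point of a fragility surface."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(n_samples) if demand > capacity_sampler(rng))
    return exceed / n_samples

# Illustrative lognormal displacement capacity (median 0.5 m, dispersion 0.4).
def sampler(rng):
    return 0.5 * math.exp(0.4 * rng.gauss(0.0, 1.0))

# Exceedance probability for a 0.6 m ground displacement demand.
p_exceed = fragility_point(demand=0.6, capacity_sampler=sampler)
```

Sweeping the demand over a grid of soil displacements and block lengths, and repeating per limit state (NOL, PIL), produces the fragility surfaces the paper reports.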
(This article belongs to the Section Civil Engineering)
Show Figures

Graphical abstract
Fig">
Figure 1
<p>Pipeline subjected to longitudinal PGD: (<b>a</b>) 3D view; (<b>b</b>) 2D schematic representation.</p>
Figure 2
<p>Pipeline response to longitudinal PGD according to analytical model in [<a href="#B11-applsci-14-10735" class="html-bibr">11</a>], assuming symmetric material behavior for tension and compression: (<b>a</b>) case I; (<b>b</b>) case II.</p>
Figure 3
<p>Schematic representation of operating pipeline response subjected to longitudinal PGD: (<b>a</b>) pipeline displacement subjected to longitudinal soil block movement (case II); (<b>b</b>) soil–pipeline system behaving like a pull-out test under tension (region I) and compression (region IV).</p>
Figure 4
<p>Schematic representation of the axial constitutive behavior of the steel pipe material, defined within the associated von Mises plasticity with isotropic hardening [<a href="#B30-applsci-14-10735" class="html-bibr">30</a>].</p>
Figure 5
<p>The comparison between the numerical, the conventional [<a href="#B8-applsci-14-10735" class="html-bibr">8</a>,<a href="#B11-applsci-14-10735" class="html-bibr">11</a>,<a href="#B13-applsci-14-10735" class="html-bibr">13</a>], and the proposed analytical models, evaluating the pipeline performance under longitudinal PGD (<span class="html-italic">L<sub>b</sub></span> = 300 m) in terms of maximum tensile and compressive pipe strain as a function of the ground displacement <span class="html-italic">δ</span>.</p>
Figure 6
<p>The variation of the critical soil block length, <span class="html-italic">L<sub>cr</sub></span> = (<span class="html-italic">F<sub>t,max</sub></span> − <span class="html-italic">F<sub>c,max</sub></span>)/<span class="html-italic">f<sub>s</sub></span>, as a function of the ground displacement <span class="html-italic">δ</span>, with an indication of the critical values (<span class="html-italic">δ<sub>cr,i</sub></span>, <span class="html-italic">L<sub>cr,i</sub></span>) associated with the achievement of the pipeline performance limit states.</p>
Figure 7
<p>The peak axial strain magnitude in the pressurized pipeline (<span class="html-italic">P<sub>i</sub></span>/<span class="html-italic">P<sub>max</sub></span> = 0.75, Δ<span class="html-italic">T</span> = 50 °C) as a function of the PGD length <span class="html-italic">L<sub>b</sub></span> and displacement <span class="html-italic">δ</span> for (<b>a</b>) tension and (<b>b</b>) compression. The dashed horizontal curves represent the strain isolines corresponding to the NOL and PIL performance limit states.</p>
Figure 8
<p>The peak axial strain magnitude in the unpressurized pipeline (<span class="html-italic">P<sub>i</sub></span>/<span class="html-italic">P<sub>max</sub></span> = 0, Δ<span class="html-italic">T</span> = 0 °C) as a function of the PGD length <span class="html-italic">L<sub>b</sub></span> and displacement <span class="html-italic">δ</span> for (<b>a</b>) tension and (<b>b</b>) compression. The dashed horizontal curves represent the strain isolines corresponding to the NOL and PIL performance limit states.</p>
Figure 9
<p>Fragility surface of buried pipeline (<span class="html-italic">H<sub>c</sub></span> = 1.5 m) for (<b>a</b>) Normal Operability Limit (NOL) and (<b>b</b>) Pressure Integrity Limit (PIL).</p>
Figure 10
<p>Schematic representation of the performance assessment of the buried pipeline subjected to the PGD demand (<span class="html-italic">δ</span>, <span class="html-italic">L<sub>b</sub></span>), using the deterministic and fragility analysis framework.</p>
Figure 11
<p>Fragility surface of buried pipeline for different cover depths and performance limit states: (<b>a</b>) <span class="html-italic">H<sub>c</sub></span> = 1.0 m, NOL; (<b>b</b>) <span class="html-italic">H<sub>c</sub></span> = 1.0 m, PIL and (<b>c</b>) <span class="html-italic">H<sub>c</sub></span> = 2.0 m, NOL; and (<b>d</b>) <span class="html-italic">H<sub>c</sub></span> = 2.0 m, PIL.</p>
Figure 12
<p>The comparison of the first-order and total-order sensitivity indices of the system input parameters for the (<b>a</b>) NOL and (<b>b</b>) PIL performance limit states.</p>
Figure A1
<p>Response of the pressurized pipeline (<span class="html-italic">P<sub>i</sub></span>/<span class="html-italic">P<sub>max</sub></span> = 0.75, Δ<span class="html-italic">T</span> = 50 °C) to longitudinal PGD with block length <span class="html-italic">L<sub>b</sub></span> = 200 m (case I): (<b>a</b>) pipe axial force; (<b>b</b>) pipe axial stress; (<b>c</b>) soil friction; (<b>d</b>) ground displacement; (<b>e</b>) pipe axial displacement; (<b>f</b>) pipe axial strain vs. distance from tension crack.</p>
Figure A2
<p>Response of the unpressurized pipeline (<span class="html-italic">P<sub>i</sub></span>/<span class="html-italic">P<sub>max</sub></span> = 0, Δ<span class="html-italic">T</span> = 0 °C) to longitudinal PGD with block length <span class="html-italic">L<sub>b</sub></span> = 200 m (case I): (<b>a</b>) pipe axial force; (<b>b</b>) pipe axial stress; (<b>c</b>) soil friction; (<b>d</b>) ground displacement; (<b>e</b>) pipe axial displacement; (<b>f</b>) pipe axial strain vs. distance from tension crack.</p>
Figure A3
<p>Response of the pressurized pipeline (<span class="html-italic">P<sub>i</sub></span>/<span class="html-italic">P<sub>max</sub></span> = 0.75, Δ<span class="html-italic">T</span> = 50 °C) to longitudinal PGD with block length <span class="html-italic">L<sub>b</sub></span> = 300 m (case II): (<b>a</b>) pipe axial force; (<b>b</b>) pipe axial stress; (<b>c</b>) soil friction; (<b>d</b>) ground displacement; (<b>e</b>) pipe axial displacement; (<b>f</b>) pipe axial strain vs. distance from tension crack.</p>
Figure A4
<p>Response of the unpressurized pipeline (<span class="html-italic">P<sub>i</sub></span>/<span class="html-italic">P<sub>max</sub></span> = 0, Δ<span class="html-italic">T</span> = 0 °C) to longitudinal PGD with block length <span class="html-italic">L<sub>b</sub></span> = 300 m (case II): (<b>a</b>) pipe axial force; (<b>b</b>) pipe axial stress; (<b>c</b>) soil friction; (<b>d</b>) ground displacement; (<b>e</b>) pipe axial displacement; (<b>f</b>) pipe axial strain vs. distance from tension crack.</p>
27 pages, 28012 KiB  
Article
A Model Development Approach Based on Point Cloud Reconstruction and Mapping Texture Enhancement
by Boyang You and Barmak Honarvar Shakibaei Asli
Big Data Cogn. Comput. 2024, 8(11), 164; https://doi.org/10.3390/bdcc8110164 - 20 Nov 2024
Viewed by 545
Abstract
To address the challenge of rapid geometric model development in the digital twin industry, this paper presents a comprehensive pipeline for constructing 3D models from images using monocular vision imaging principles. Firstly, a structure-from-motion (SFM) algorithm generates a 3D point cloud from photographs. [...] Read more.
To address the challenge of rapid geometric model development in the digital twin industry, this paper presents a comprehensive pipeline for constructing 3D models from images using monocular vision imaging principles. Firstly, a structure-from-motion (SFM) algorithm generates a 3D point cloud from photographs. The feature detection methods scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and KAZE are compared across six datasets, with SIFT proving the most effective (matching rate higher than 0.12). Using K-nearest-neighbor matching and random sample consensus (RANSAC), refined feature point matching and 3D spatial representation are achieved via epipolar geometry. Then, the Poisson surface reconstruction algorithm converts the point cloud into a mesh model. Additionally, texture images are enhanced by leveraging a visual geometry group (VGG) network-based deep learning approach. Content images from a dataset provide geometric contours via higher-level VGG layers, while textures from style images are extracted using the lower-level layers. These are fused to create texture-transferred images, where the image quality assessment (IQA) metrics SSIM and PSNR are used to evaluate texture-enhanced images. Finally, texture mapping integrates the enhanced textures with the mesh model, improving the scene representation with enhanced texture. The method presented in this paper surpassed a LiDAR-based reconstruction approach by 20% in terms of point cloud density and number of model facets, while the hardware cost was only 1% of that associated with LiDAR. Full article
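Of the IQA metrics used to score the texture-enhanced images, PSNR has a compact definition; a minimal sketch over images given as flat lists of 8-bit pixel values (SSIM, the companion metric, is omitted here):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images (flat pixel lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Illustrative 2-pixel "images": a uniform offset of 10 gray levels.
score = psnr([0.0, 0.0], [10.0, 10.0])
```

Higher PSNR means the stylized texture stays closer to the content image pixel-wise; in practice it is read alongside SSIM, which tracks structural similarity rather than raw error.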
Show Figures

Figure 1
<p>Samples from Dataset 1 (Source: <a href="https://github.com/Abhishek-Aditya-bs/MultiView-3D-Reconstruction/tree/main/Datasets" target="_blank">https://github.com/Abhishek-Aditya-bs/MultiView-3D-Reconstruction/tree/main/Datasets</a> accessed on 18 November 2024) and samples from Dataset 2.</p>
Fig">
Figure 2
<p>Demonstration of Dataset 3.</p>
Figure 3
<p>Diagram of SFM algorithm.</p>
Figure 4
<p>Camera imaging model.</p>
Figure 5
<p>Coplanarity condition of photogrammetry.</p>
Figure 6
<p>Process of surface reconstruction.</p>
Figure 7
<p>Demonstration of isosurface.</p>
Figure 8
<p>Demonstration of VGG network.</p>
Figure 9
<p>Demonstration of Gram matrix.</p>
Figure 10
<p>Style transformation architecture.</p>
Figure 11
<p>Texture mapping process.</p>
Figure 12
<p>Demonstration of the three kinds of feature descriptors used on Dataset 1 and Dataset 2.</p>
Figure 13
<p>Matching rate fitting of three kinds of image descriptors.</p>
Figure 14
<p>SIFT point matching for <span class="html-italic">CNC1</span> object under different thresholds.</p>
Figure 15
<p>SIFT point matching for <span class="html-italic">Fountain</span> object under different thresholds.</p>
Figure 16
<p>Matching result of Dataset 2 using RANSAC method.</p>
Figure 17
<p>Triangulation presentation of feature points obtained from objects in Dataset 1.</p>
Figure 18
<p>Triangulation presentation of feature points obtained from objects in Dataset 2.</p>
Figure 19
<p>Point cloud data of objects in Dataset 1.</p>
Figure 20
<p>Point cloud data of objects in Dataset 2.</p>
Figure 21
<p>Normal vector presentation of the points set obtained from objects in Dataset 1.</p>
Figure 22
<p>Normal vector of the points set obtained from objects in Dataset 2.</p>
Figure 23
<p>Poisson surface reconstruction results of objects in Dataset 1.</p>
Figure 24
<p>Poisson surface reconstruction results of objects in Dataset 2.</p>
Figure 25
<p>Style transfer result of <span class="html-italic">Statue</span> object.</p>
Figure 26
<p>Style transfer result of <span class="html-italic">Fountain</span> object.</p>
Figure 27
<p>Style transfer result of <span class="html-italic">Castle</span> object.</p>
Figure 28
<p>Style transfer result of <span class="html-italic">CNC1</span> object.</p>
Figure 29
<p>Style transfer result of <span class="html-italic">CNC2</span> object.</p>
Figure 30
<p>Style transfer result of <span class="html-italic">Robot</span> object.</p>
Figure 31
<p>Training loss in style transfer for <b>CNC1</b> object.</p>
Figure 32
<p>IQA assessment for <b>CNC1</b> images after style transfer.</p>
Figure 33
<p>Results of texture mapping for Dataset 1.</p>
Figure 34
<p>Results of texture mapping for Dataset 2.</p>
Figure A1
<p>Results of camera calibration.</p>
19 pages, 7807 KiB  
Article
Harnessing Risks with Data: A Leakage Assessment Framework for WDN Using Multi-Attention Mechanisms and Conditional GAN-Based Data Balancing
by Wenhong Wu, Jiahao Zhang, Yunkai Kang, Zhengju Tang, Xinyu Pan and Ning Liu
Water 2024, 16(22), 3329; https://doi.org/10.3390/w16223329 - 19 Nov 2024
Viewed by 487
Abstract
Assessing leakage risks in water distribution networks (WDNs) and implementing preventive monitoring for high-risk pipelines has become a widely accepted approach for leakage control. However, existing methods face significant data barriers between Geographic Information System (GIS) and leakage prediction systems. These barriers hinder [...] Read more.
Assessing leakage risks in water distribution networks (WDNs) and implementing preventive monitoring for high-risk pipelines has become a widely accepted approach for leakage control. However, existing methods face significant data barriers between Geographic Information System (GIS) and leakage prediction systems. These barriers hinder traditional pipeline risk assessment methods, particularly when addressing challenges such as data imbalance, poor model interpretability, and lack of intuitive prediction results. To overcome these limitations, this study proposes a leakage assessment framework for water distribution networks based on multiple attention mechanisms and a generative model-based data balancing method. Extensive comparative experiments were conducted using water distribution network data from B2 and B3 District Metered Areas in Zhengzhou. The results show that the proposed model, optimized with a balanced data method, achieved a 40.76% improvement in the recall rate for leakage segment assessments, outperforming the second-best model using the same strategy by 1.7%. Furthermore, the strategy effectively enhanced the performance of all models, further proving that incorporating more valid data contributes to improved assessment results. This study comprehensively demonstrates the application of data-driven models in the field of “smart water management”, providing practical guidance and reference cases for advancing the development of intelligent water infrastructure. Full article
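The headline result is a recall improvement on leakage segments, i.e., the fraction of actual leaking pipes the model flags; a minimal sketch of that metric, with illustrative labels:

```python
def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives (leakage segments) that were correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

# 1 = leakage segment, 0 = healthy segment (illustrative ground truth/predictions).
r = recall([1, 1, 1, 0, 0], [1, 0, 1, 0, 1])  # 2 of 3 leaks recovered
```

Recall is the natural target here because a missed leak (false negative) is far costlier than monitoring a healthy pipe, which is also why balancing the scarce leakage class with a conditional GAN pays off.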
Show Figures

Figure 1
<p>Framework.</p>
Fig">
Figure 2
<p>Data Enhancement Program.</p>
Figure 3
<p>Pipeline Risk Prediction Modeling Framework.</p>
Figure 4
<p>Enhanced sample balance results for leakage sample data: (<b>a</b>) Original Data; (<b>b</b>) SMOTE Data Enhancement Method; (<b>c</b>) Conditional GAN Data Augmentation Method. (The horizontal and vertical axes represent the two-dimensional vector values obtained by dimensionality reduction of the high-dimensional representation of the samples, serving only as markers).</p>
Figure 5
<p>SHAP analysis results: Overall Ranking Analysis of SHAP Risk Factors.</p>
Figure 6
<p>SHAP analysis results: SHAP Single Case Analysis-Case 10850.</p>
Figure 7
<p>SHAP analysis results: SHAP Single Example Analysis-Case 486.</p>
Figure 8
<p>Pipeline age leakage rate analysis: (<b>a</b>) Age distribution of pipelines assessed as a level of risk, (<b>b</b>) Leakage ratios for pipelines of different ages.</p>
Figure 9
<p>Visualization Platform: (<b>a</b>) Leakage Point; (<b>b</b>) Pipeline Location; (<b>c</b>) B2, B3 DMA location; (<b>d</b>) Total Pipeline leakage risk status statistics; (<b>e</b>) Classified pipeline leakage risk status statistics and positioning; (<b>f</b>) Individual pipeline leakage risk status information; (<b>g</b>) Region-specific leakage risk status statistics.</p>
18 pages, 3406 KiB  
Article
Design and Visual Implementation of a Regional Energy Risk Superposition Model for Oil Tank Farms
by Yufeng Yang, Xixiang Zhang, Shuyi Xie, Shanqi Qu, Haotian Chen, Qiming Xu and Guohua Chen
Energies 2024, 17(22), 5775; https://doi.org/10.3390/en17225775 - 19 Nov 2024
Viewed by 457
Abstract
Ensuring the safety of oil tank farms is essential to maintaining energy security and minimizing the impact of potential accidents. This paper develops a quantitative regional risk model designed to assess both individual and societal risks in oil tank farms, with particular attention [...] Read more.
Ensuring the safety of oil tank farms is essential to maintaining energy security and minimizing the impact of potential accidents. This paper develops a quantitative regional risk model designed to assess both individual and societal risks in oil tank farms, with particular attention to energy-related risks such as leaks, fires, and explosions. The model integrates factors like day–night operational variations, weather conditions, and risk superposition to provide a comprehensive and accurate evaluation of regional risks. It accounts for the cumulative effects of multiple hazards, including those tied to energy dynamics, and its stability and validity are verified through Monte Carlo simulations and a case application. The results show that the model enhances the reliability of traditional risk assessment methods, making it more applicable to oil tank farm safety concerns. Furthermore, this study introduces a practical tool that simplifies the risk assessment process, allowing operators and decision-makers to evaluate risks without requiring in-depth technical expertise. The methodology improves the ability to safeguard oil tank farms, ensuring the stability of energy supply chains and contributing to broader energy security efforts. This study provides a valuable method for researchers and engineers seeking to enhance regional risk calculation efficiency, with a specific focus on energy risks. Full article
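Risk superposition at a grid point is commonly computed by summing, over all accident scenarios, the event frequency times the conditional probability of lethality; a minimal sketch, where the frequencies and lethality probabilities are illustrative assumptions rather than values from the paper:

```python
def individual_risk(scenarios):
    """Superpose scenario contributions at one grid point:
    IR = sum of (event frequency per year) * (conditional probability of lethality).
    """
    return sum(freq * p_lethal for freq, p_lethal in scenarios)

# Illustrative leak/fire/explosion scenarios affecting this grid point:
# (frequency in events/year, probability of lethality at this location).
ir = individual_risk([(1e-4, 0.3), (5e-5, 0.9)])  # per-year individual risk
```

Repeating this sum over every grid point yields the individual risk contours, while aggregating fatalities per scenario against their frequencies produces the societal F-N curve.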
(This article belongs to the Special Issue Advances in the Development of Geoenergy: 2nd Edition)
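The risk superposition and Monte Carlo stability check described in the abstract can be sketched as follows. This is a generic illustration, not the paper's model: the scenario set, annual frequencies, and lethality ranges below are invented placeholders.

```python
import random

def individual_risk(scenarios):
    """Superpose per-scenario contributions at one grid point:
    IR = sum over scenarios of (annual frequency * probability of lethality)."""
    return sum(freq * p_lethal for freq, p_lethal in scenarios)

def monte_carlo_max_ir(n_samples=1000, seed=42):
    """Sample uncertain lethality probabilities to study the stability of
    the maximum individual risk (cf. the paper's CDF of maxima over 1000
    samples). The three scenarios below are hypothetical examples."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_samples):
        scenarios = [(1e-4, rng.uniform(0.1, 0.3)),   # e.g. pool fire
                     (5e-5, rng.uniform(0.4, 0.8)),   # e.g. explosion
                     (2e-4, rng.uniform(0.0, 0.1))]   # e.g. small leak
        maxima.append(individual_risk(scenarios))
    return max(maxima)
```

In a real assessment the frequencies and lethality probabilities would come from leak-hole-size, weather, and day–night event trees rather than uniform draws.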
Show Figures

Figure 1: Procedure for calculating individual risk IR at grid points.
Figure 2: Interpolation between radiation ellipses and the corresponding probabilities of lethality.
Figure 3: F-N curve plotting process.
Figure 4: Individual risk matrix calculation.
Figure 5: Societal risk matrix calculation.
Figure 6: Calculation interface and contour distribution of individual risk for an oil tank farm.
Figure 7: Calculation interface and F-N curve of societal risk for an oil tank farm.
Figure 8: CDF of maximum individual values (1000 samples).
Figure 9: Individual risk outcomes in different spill scenarios. (a) 75 mm hole diameter leak scenarios; (b) 100 mm hole diameter leak scenarios.
Figure 10: Social risk outcomes in different spill scenarios.
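The societal-risk F-N curve used in this entry relates each fatality count N to the cumulative frequency F of all scenarios causing at least N fatalities. A minimal sketch of that construction, with hypothetical (frequency, fatalities) pairs standing in for the tank-farm model's output:

```python
def fn_curve(scenarios):
    """Build F-N curve points from (annual frequency, fatalities) pairs.
    For each observed fatality count N, F is the summed frequency of all
    scenarios with at least N fatalities, so F is non-increasing in N."""
    ns = sorted({n for _, n in scenarios})
    return [(n, sum(f for f, m in scenarios if m >= n)) for n in ns]
```

Plotting these points on log-log axes against a societal-risk criterion line gives the usual acceptable/ALARP/unacceptable judgment.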
18 pages, 1211 KiB  
Article
Unleashing the Power of AI for Intraoperative Neuromonitoring During Carotid Endarterectomy
by Roaa Hindi and George Pappas
Electronics 2024, 13(22), 4542; https://doi.org/10.3390/electronics13224542 - 19 Nov 2024
Viewed by 476
Abstract
This research investigates the use of a 1D Convolutional Neural Network (CNN) to classify electroencephalography (EEG) signals into four categories of ischemia severity: normal, mild, moderate, and severe. The model’s accuracy was lower in moderate instances (75%) and severe cases (65%) compared to [...] Read more.
This research investigates the use of a 1D Convolutional Neural Network (CNN) to classify electroencephalography (EEG) signals into four categories of ischemia severity: normal, mild, moderate, and severe. The model’s accuracy was lower in moderate cases (75%) and severe cases (65%) than in normal cases (95%) and mild cases (85%). The preprocessing pipeline now incorporates Power Spectral Density (PSD) analysis, and segment lengths of 32, 64, and 128 s are thoroughly examined. The work highlights the potential of the model to identify ischemia in real time during carotid endarterectomy (CEA) to prevent perioperative stroke. The 1D-CNN effectively captures both the temporal and spatial features of EEG signals, providing a combination of processing efficiency and accuracy when compared to existing approaches. To enhance the identification of moderate and severe ischemia, future studies should prioritize integrating more complex datasets, specifically for severe ischemia, as well as expanding the current dataset. Our contributions in this study include implementing a novel 1D-CNN model that achieves a classification accuracy of over 93%, improving feature extraction by utilizing Power Spectral Density (PSD), automating the ischemia detection procedure, and enhancing model performance using a well-balanced dataset. Full article
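The PSD features mentioned in the abstract can be approximated with a Welch-style averaged periodogram over fixed-length segments. This is a generic sketch rather than the paper's exact preprocessing; the rectangular window, sampling rate, and segment length below are illustrative assumptions.

```python
import numpy as np

def psd_features(signal, fs, seg_len):
    """Average the periodogram over non-overlapping segments (Welch-style,
    rectangular window): a simple stand-in for the PSD features extracted
    from 32-128 s EEG windows. Returns (frequencies in Hz, mean PSD)."""
    n_segs = len(signal) // seg_len
    segs = np.asarray(signal)[:n_segs * seg_len].reshape(n_segs, seg_len)
    spec = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / (fs * seg_len)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, spec.mean(axis=0)
```

For a pure 10 Hz sine sampled at 128 Hz with 128-sample segments, the returned PSD peaks at the 10 Hz bin; on real EEG, band powers (delta, theta, alpha, beta) summed from such a PSD are the typical inputs to a classifier.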
Show Figures

Figure 1: 1D-CNN architecture.
Figure 2: Neural network process for EEG ischemic stroke detection during CEA.
Figure 3: Classes accuracy.
Figure 4: Power Spectral Density (PSD) examples.