Search Results (334)

Search Parameters:
Keywords = node labeling

18 pages, 800 KiB  
Article
Open-World Semi-Supervised Learning for fMRI Analysis to Diagnose Psychiatric Disease
by Chang Hu, Yihong Dong, Shoubo Peng and Yuehan Wu
Information 2025, 16(3), 171; https://doi.org/10.3390/info16030171 - 25 Feb 2025
Abstract
Due to the incomplete nature of cognitive testing data and human subjective biases, accurately diagnosing mental disorders from functional magnetic resonance imaging (fMRI) data is a challenging task. In the clinical diagnosis of mental disorders, labeled data are often scarce because of large data volumes and cumbersome labeling processes, so unlabeled data may contain new classes, which can result in misdiagnosis. In the context of graph-based mental disorder classification, open-world semi-supervised learning for node classification aims to classify unlabeled nodes into known classes or potentially new classes, a practical yet underexplored problem in the graph community. To improve open-world semi-supervised representation learning and classification for fMRI under low-label settings, we propose a novel open-world semi-supervised learning approach tailored for fMRI analysis, termed Open-World Semi-Supervised Learning for fMRI Analysis (OpenfMA). Specifically, we employ spectral-augmentation self-supervised learning and dynamic concept contrastive learning to achieve open-world graph learning guided by pseudo-labels, and we construct hard positive sample pairs to enhance the network's focus on potential positive pairs. Experiments on public datasets validate the superior performance of this method for open-world psychiatric disease diagnosis.
Graphical abstract

Figure 1: An overview of the proposed OpenfMA model.
Figure 2: Parameter sensitivity analysis of OpenfMA. (a) Accuracy under different embedding dimensions. (b)-(f) Accuracy under different values of α, β, η, λ, and μ, respectively.
22 pages, 29613 KiB  
Article
Self-Supervised Three-Dimensional Ocean Bottom Node Seismic Data Shear Wave Leakage Suppression Based on a Dual Encoder Network
by Zhaolin Zhu, Zhihao Chen, Bangyu Wu and Lin Chen
Sensors 2025, 25(3), 682; https://doi.org/10.3390/s25030682 - 23 Jan 2025
Abstract
Ocean Bottom Node (OBN) acquisition is a seismic data acquisition technique in which each node comprises a hydrophone and a three-component geophone. In practice, the vertical component is susceptible to high-amplitude, low-velocity, low-frequency shear wave noise, which negatively impacts the subsequent processing of dual-sensor data. The most commonly used remedy is adaptive matching subtraction, which estimates the shear wave noise in the vertical component by solving an optimization problem. Neural networks, as robust nonlinear fitting tools, offer superior performance in resolving nonlinear mapping relationships and are computationally efficient. In this paper, we introduce a self-supervised shear wave suppression approach for 3D OBN seismic data that uses a neural network in place of the traditional adaptive matching subtraction operator. The method takes the horizontal components as the network input and compares the output with the noisy vertical component to establish a loss function for training; the network output is the predicted shear wave noise. To better balance signal leakage and noise suppression, the loss function incorporates a local normalized cross-correlation regularization term. As a self-supervised method, it needs no clean data to serve as labels, avoiding the tedious work of obtaining clean field data. Extensive experiments on both synthetic and field data demonstrate the effectiveness of the proposed method for shear wave noise suppression in 3D OBN seismic data.
(This article belongs to the Section Environmental Sensing)
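As a rough illustration of the loss described in this abstract, the sketch below combines a residual data term with a local normalized cross-correlation (LNCC) penalty. It is a simplified 1-D, pure-Python reading of the abstract, not the paper's implementation: the function names, the window size, and the choice to penalize correlation between the denoised trace and the predicted noise are all assumptions.

```python
import math

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    return num / den if den > 0 else 0.0

def lncc_regularized_loss(vertical, predicted_noise, lam=1e-3, win=64):
    """Data term: the noise estimate should explain the noisy vertical trace.
    Regularizer (assumed form): the denoised trace should be locally
    uncorrelated with the predicted noise, limiting signal leakage."""
    denoised = [v - n for v, n in zip(vertical, predicted_noise)]
    mse = sum(d * d for d in denoised) / len(denoised)
    reg, count = 0.0, 0
    for s in range(0, len(denoised) - win + 1, win):
        reg += ncc(denoised[s:s + win], predicted_noise[s:s + win]) ** 2
        count += 1
    return mse + lam * reg / max(count, 1)
```

In a training loop, `predicted_noise` would be the network output for one trace; here plain lists stand in for tensors.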
Figure 1: Field data demonstration of the shear wave suppression problem. (a) Noisy vertical component. (b) Denoised vertical component. (c) Predicted shear wave noise. (d) Horizontal component.
Figure 2: The dual-encoder-based network structure used for the shear wave suppression problem, consisting of downsample blocks, upsample blocks, and a convolutional block attention module (CBAM); the CBAM contains CAM and SAM. The inputs to the network are the two horizontal components, i.e., the X- and Y-components; the output is the predicted noise.
Figure 3: 3D synthetic data example. (a) Clean P-wave signal. (b) Shear wave noise predicted by the traditional method. (c) Noisy P-wave signal.
Figure 4: 3D synthetic data shear wave suppression results. (a) Clean data. Denoised data using the (b) Unet, (e) Unet-LNCC, (h) Unet-CBAM, and (k) proposed method. Removed shear wave noise using the (c) Unet, (f) Unet-LNCC, (i) Unet-CBAM, and (l) proposed method. (d,g,j,m) Error between the denoised and clean data.
Figure 5: Synthetic data crossline profile shear wave suppression results. (a) Clean data. Denoised data using the (b) Unet, (c) Unet-LNCC, (d) Unet-CBAM, and (e) proposed method. (f) Field shear wave noise. Removed shear wave noise using the (g) Unet, (h) Unet-LNCC, (i) Unet-CBAM, and (j) proposed method. (k-n) Difference between the denoised and clean data.
Figure 6: Synthetic data inline profile shear wave suppression results. (a) Clean data. Denoised data using the (b) Unet, (c) Unet-LNCC, (d) Unet-CBAM, and (e) proposed method. (f) Field shear wave noise. Shear wave noise removed using the (g) Unet, (h) Unet-LNCC, (i) Unet-CBAM, and (j) proposed method. (k-n) Difference between the denoised and clean data.
Figure 7: 3D field data. (a) Vertical Z-component data. (b) Horizontal X-component data. (c) Horizontal Y-component data.
Figure 8: 3D field data shear wave suppression results. (a) Noisy data. Denoised data using the (b) commercial software, (c) Unet, (g) Unet-LNCC, (h) Unet-CBAM, and (i) proposed method. (d) Raw horizontal XY-component. Removed shear wave noise using the (e) commercial software, (f) Unet, (j) Unet-LNCC, (k) Unet-CBAM, and (l) proposed method.
Figure 9: Field data crossline profile shear wave suppression results. (a) Noisy data. Denoised data using the (b) commercial software, (c) Unet, (g) Unet-LNCC, (h) Unet-CBAM, and (i) proposed method. (d) Raw horizontal XY-component. Shear wave noise removed using the (e) commercial software, (f) Unet, (j) Unet-LNCC, (k) Unet-CBAM, and (l) proposed method.
Figure 10: Field data inline profile shear wave suppression results. (a) Noisy data. Denoised data using the (b) commercial software, (c) Unet, (g) Unet-LNCC, (h) Unet-CBAM, and (i) proposed method. (d) Raw horizontal XY-component. Shear wave noise removed using the (e) commercial software, (f) Unet, (j) Unet-LNCC, (k) Unet-CBAM, and (l) proposed method.
Figure 11: 3D field data. The top row maps the local similarities between the crossline profile predicted noise and the horizontal XY-component; the bottom row maps the local similarities between the inline profile predicted noise and the horizontal XY-component. (a,f) Data denoised using the commercial method; (b,g) the Unet; (c,h) the Unet-LNCC; (d,i) the Unet-CBAM; (e,j) the proposed method.
Figure 12: Two-dimensional field data shear wave suppression results. (a) Noisy data. Data denoised using the (b) dual network, (c) dual channel, and (d) dual encoder. Shear wave noise removed using the (e) dual network, (f) dual channel, and (g) dual encoder. The computation times for the dual-network, dual-channel, and dual-encoder methods were 710.128 s, 551.148 s, and 447.314 s, respectively.
Figure 13: Loss function curves during network training. (a) Synthetic data (λ = 1 × 10⁻³). (b) Field data (λ = 1 × 10⁻⁶).
16 pages, 302 KiB  
Review
Nuclear Medicine and Molecular Imaging in Urothelial Cancer: Current Status and Future Directions
by Sam McDonald, Kevin G. Keane, Richard Gauci and Dickon Hayne
Cancers 2025, 17(2), 232; https://doi.org/10.3390/cancers17020232 - 13 Jan 2025
Abstract
Background: The role of molecular imaging in urothelial cancer is less well defined than in other cancers, and its utility remains controversial due to limitations such as high urinary tracer excretion, which complicates primary tumour assessment in the bladder and upper urinary tract. This review explores the current landscape of PET imaging in the clinical management of urothelial cancer, with special emphasis on potential future advancements, including emerging non-18F FDG PET agents, novel PET radiopharmaceuticals, and PET-MRI applications. Methods: We conducted a comprehensive literature search of the PubMed database using keywords such as "PET", "PET-CT", "PET-MRI", "FDG PET", "Urothelial Cancer", and "Theranostics". Studies were screened for relevance, focusing on imaging modalities and advances in PET tracers for urothelial carcinoma. Non-English-language papers, off-topic papers, and case reports were excluded, resulting in 80 articles selected for discussion. Results: 18F FDG PET-CT has demonstrated superior sensitivity over conventional imaging, such as contrast-enhanced CT and MRI, for detecting lymph node metastasis and distant disease. Despite these advantages, FDG PET-CT is limited for T-staging of primary urothelial tumours due to high urinary excretion of the tracer. Emerging evidence supports the role of PET-CT in assessing response to neoadjuvant chemotherapy and in identifying recurrence, with high diagnostic accuracy reported in several studies. Novel PET tracers, such as 68Ga-labelled FAPI, have shown promising results in targeting cancer-associated fibroblasts, providing higher tumour-to-background ratios and detecting lesions missed by traditional imaging. Antibody-based PET tracers, like those targeting Nectin-4, CAIX, and uPAR, are under investigation for their diagnostic and theranostic potential, and initial studies indicate that these agents may offer advantages over conventional imaging and FDG PET.
Conclusions: Molecular imaging is a rapidly evolving field in urothelial cancer, offering improved diagnostic and prognostic capabilities. While 18F FDG PET-CT has shown utility in staging, further prospective research is needed to establish and refine standardised protocols and validate new tracers. Advances in theranostics and precision imaging may revolutionise urothelial cancer management, enhancing the ability to tailor treatments and improve patient outcomes.
(This article belongs to the Special Issue Advances in Management of Urothelial Cancer)
13 pages, 4901 KiB  
Article
A New Deep Learning-Based Method for Automated Identification of Thoracic Lymph Node Stations in Endobronchial Ultrasound (EBUS): A Proof-of-Concept Study
by Øyvind Ervik, Mia Rødde, Erlend Fagertun Hofstad, Ingrid Tveten, Thomas Langø, Håkon O. Leira, Tore Amundsen and Hanne Sorger
J. Imaging 2025, 11(1), 10; https://doi.org/10.3390/jimaging11010010 - 5 Jan 2025
Abstract
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a cornerstone of minimally invasive thoracic lymph node sampling. In lung cancer staging, precise assessment of lymph node position is crucial for clinical decision-making. This study aimed to demonstrate a new deep learning method to classify thoracic lymph nodes based on their anatomical location using EBUS images. Bronchoscopists labeled lymph node stations in real time according to the Mountain-Dressler nomenclature. EBUS images were then used to train and test a deep neural network (DNN) model, with the intraoperative labels as ground truth. In total, 28,134 EBUS images were acquired from 56 patients. The model achieved an overall classification accuracy of 59.5 ± 5.2%. The highest precision, sensitivity, and F1 score were observed for station 4L: 77.6 ± 13.1%, 77.6 ± 15.4%, and 77.6 ± 15.4%, respectively. The lowest precision, sensitivity, and F1 score were observed for station 10L. The average processing and prediction time for a sequence of ten images was 0.65 ± 0.04 s, demonstrating the feasibility of real-time applications. In conclusion, the new DNN-based model can classify lymph node stations from EBUS images. The method's performance was promising, with potential for clinical use.
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
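The per-station precision, sensitivity (recall), and F1 scores reported above all derive from the model's confusion matrix. The helper below is a generic illustration of that computation, not code from the study; note that when precision equals recall for a class, as for station 4L (77.6% each), the F1 score takes the same value.

```python
def per_class_metrics(confusion):
    """confusion[i][j] = count of samples with true class i predicted as j.
    Returns (precision, recall, f1) lists, one entry per class."""
    k = len(confusion)
    precision, recall, f1 = [], [], []
    for c in range(k):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(k)) - tp   # column minus diagonal
        fn = sum(confusion[c]) - tp                        # row minus diagonal
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precision.append(p)
        recall.append(r)
        f1.append(f)
    return precision, recall, f1
```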
Figure 1: Regional mediastinal and hilar lymph node stations, including major blood vessels (left), and examples of EBUS images from the eight lymph node stations included in this study (right) [52].
Figure 2: Intraoperative setup with the laptop computer used for video recording and labeling during EBUS-TBNA (left). A schematic diagram of the postoperative steps, including a patient-wise split of the dataset, preprocessing, and the training and evaluation of the DNN (right).
Figure 3: The number of images for each lymph node station, indicating the balance of stations in the dataset.
Figure 4: Confusion matrices for the classification model in stateless (a) and stateful (b) modes, illustrating the results for the iteration that achieved the highest performance on its test set.
15 pages, 664 KiB  
Article
Few-Shot Graph Anomaly Detection via Dual-Level Knowledge Distillation
by Xuan Li, Dejie Cheng, Luheng Zhang, Chengfang Zhang and Ziliang Feng
Entropy 2025, 27(1), 28; https://doi.org/10.3390/e27010028 - 1 Jan 2025
Abstract
Graph anomaly detection is crucial in many high-impact applications across diverse fields. In anomaly detection tasks, collecting sufficient annotated data tends to be costly and laborious. As a result, few-shot learning has been explored to address the issue, requiring only a few labeled samples to achieve good performance. However, conventional few-shot models may not fully exploit the information within auxiliary sets, leading to suboptimal performance. To tackle these limitations, we propose a dual-level knowledge distillation-based approach for graph anomaly detection, DualKD, which leverages two distinct distillation losses to improve generalization. In our approach, we first train a teacher model to generate prediction distributions as soft labels, capturing the uncertainty in the data. These soft labels are then used to construct the corresponding loss for training a student model, which can capture more detailed node features. In addition, we introduce two representation distillation losses, short and long representation distillation, to effectively transfer knowledge from the auxiliary set to the target set. Comprehensive experiments on four datasets verify that DualKD remarkably outperforms advanced baselines, highlighting its effectiveness in enhancing identification performance.
(This article belongs to the Special Issue Robustness of Graph Neural Networks)
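DualKD's specific short and long representation distillation losses are not spelled out in the abstract; the sketch below shows only the generic soft-label term that teacher-to-student distillation builds on, with the teacher's temperature-softened prediction distribution used as the target. The function names and the temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution (soft labels)
    and the student's softened prediction, as in classic knowledge distillation."""
    soft_targets = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(p) for t, p in zip(soft_targets, student_probs))
```

The loss is minimized (down to the soft targets' entropy) when the student reproduces the teacher's distribution exactly.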
Figure 1: Overall framework of DualKD.
Figure 2: Detection performance of DualKD and its variants.
Figure 3: Sensitivity analysis for the number of auxiliary networks P and the weights α, β, and γ.
21 pages, 10348 KiB  
Article
A Learning Resource Recommendation Method Based on Graph Contrastive Learning
by Jiu Yong, Jianguo Wei, Xiaomei Lei, Jianwu Dang, Wenhuan Lu and Meijuan Cheng
Electronics 2025, 14(1), 142; https://doi.org/10.3390/electronics14010142 - 1 Jan 2025
Abstract
Existing learning resource recommendation systems suffer from data sparsity and missing data labels, leading to insufficient mining of the correlation between users and courses. To address these issues, we propose a learning resource recommendation method based on graph contrastive learning, which uses graph contrastive learning to construct an auxiliary recommendation task combined with a main recommendation task, achieving joint recommendation of learning resources. First, the user-course interaction bipartite graph is fed into a lightweight graph convolutional network, and an embedded representation of each node in the graph is obtained after encoding. Then, for the input user-course interaction bipartite graph, noise vectors are randomly added to each node in the embedding space to perturb the node embeddings produced by the graph encoder, forming perturbed embedding representations that augment the data. Subsequently, the graph contrastive learning method is used to construct the auxiliary recommendation task. Finally, the main recommendation supervision task and the constructed graph contrastive learning auxiliary task are learned jointly to alleviate data sparsity. Experimental results show that, compared with node enhancement methods, the proposed method improves Recall@5 by 5.7% and 11.2% and NDCG@5 by 0.1% and 6.4% on the MOOCCube and Amazon-Book datasets, respectively. The proposed method can therefore significantly improve the mining of users and courses through the graph contrastive auxiliary recommendation task, and it shows better noise immunity and robustness.
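The perturbation-and-contrast step described above can be sketched as follows: add small random noise to each node embedding to form an augmented view, then apply an InfoNCE-style loss in which each node's perturbed view is its positive and the other perturbed views are negatives. This is a generic illustration of embedding-space contrastive augmentation, not the paper's code; the noise distribution, temperature, and function names are assumptions.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity of two vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def perturb(embedding, scale=0.1, rng=random):
    """The augmentation step: add small random noise to one node embedding."""
    return [x + rng.uniform(-scale, scale) for x in embedding]

def info_nce(embeddings, perturbed, tau=0.2):
    """Each node's perturbed view is the positive; all other perturbed
    views in the batch serve as negatives."""
    loss = 0.0
    for i, z in enumerate(embeddings):
        sims = [math.exp(cosine(z, zp) / tau) for zp in perturbed]
        loss += -math.log(sims[i] / sum(sims))
    return loss / len(embeddings)
```

With lightly perturbed views, each node stays closest to its own augmentation, so the loss is small; misaligned positives drive it up.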
Figure 1: Sequential data masking.
Figure 2: Sequence data deletion.
Figure 3: Sequence data rearrangement.
Figure 4: Sequence data replacement.
Figure 5: Sequence data addition.
Figure 6: Deleting edges/nodes.
Figure 7: Overall process framework of learning resource recommendation based on graph contrastive learning.
Figure 8: Flowchart of the main recommendation supervision task.
Figure 9: Several groups of user interaction diagrams.
Figure 10: Lightweight graph convolutional network for aggregating user and course neighbors.
Figure 11: Process of graph contrastive learning auxiliary recommendation.
Figure 12: Noise disturbance process.
Figure 13: Connectivity analysis on the MOOCCube dataset.
Figure 14: Connectivity analysis on the Amazon-Book dataset.
Figure 15: Analysis of noise immunity (Recall@5).
Figure 16: Analysis of noise immunity (NDCG@5).
24 pages, 5160 KiB  
Article
Payload State Prediction Based on Real-Time IoT Network Traffic Using Hierarchical Clustering with Iterative Optimization
by Hao Zhang, Jing Wang, Xuanyuan Wang, Kai Lu, Hao Zhang, Tong Xu and Yan Zhou
Sensors 2025, 25(1), 73; https://doi.org/10.3390/s25010073 - 26 Dec 2024
Abstract
IoT (Internet of Things) networks are vulnerable to network viruses and botnets and face serious network security issues. Predicting payload states in IoT networks can detect network attacks, enabling early warning and rapid response to prevent potential threats. Because communication between victim network nodes is unstable and subject to packet loss, the protocol state machines constructed by existing state prediction schemes are inaccurate. In this paper, we propose a network payload predictor called IoTGuard, which predicts payload states in IoT networks from real-time IoT network traffic. Briefly, IoTGuard works as follows: First, the application-layer payloads exchanged between nodes are extracted by a network payload separation module. Second, the payload states within network flows are classified by a payload extraction module. Finally, the payload state predictor is trained on a payload set carrying state labels. Experimental results on the Mozi botnet dataset show that IoTGuard predicts the state of payloads in IoT networks more accurately while maintaining execution efficiency: it achieves an accuracy of 86% in network payload prediction, 8% higher than the state-of-the-art method NetZob, while reducing training time by 52.8%.
(This article belongs to the Special Issue IoT Network Security (Second Edition))
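The clustering stage named in the title can be illustrated with a bare-bones agglomerative (single-linkage) procedure over payload feature vectors: repeatedly merge the two closest clusters until no pair is nearer than a distance threshold. This is a textbook sketch, not IoTGuard's implementation; the distance metric, linkage, and threshold are assumptions, and a real pipeline would cluster autoencoder feature vectors rather than raw points.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, threshold):
    """Agglomerative clustering: repeatedly merge the two clusters with the
    smallest closest-pair distance until no pair is nearer than `threshold`.
    Returns a list of clusters, each a list of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(euclidean(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        if best[0] > threshold:
            break  # remaining clusters are well separated
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters
```

The threshold plays the role of the tunable hierarchical-clustering parameter whose effect on the ARI coefficient the paper analyzes.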
Figure 1: TCP message state machine.
Figure 2: The two clusters formed after payload state clustering.
Figure 3: Timing sequence of UDP datagrams from a pair of Mozi zombie nodes. The load types marked in red are out-of-order fields affected by network jitter.
Figure 4: Wireshark parsing of the Mozi communication payload. The red box is the datagram-type UDP packet.
Figure 5: Communication process between Mozi nodes when the network is poor.
Figure 6: State machine generated under bad network conditions. The serial numbers in the diagram label the state transitions in chronological order.
Figure 7: Mozi communication payload clustering results.
Figure 8: P2P network payload status prediction scheme framework. The serial numbers in the figure mark the steps in chronological order.
Figure 9: Application-layer network payload extraction process.
Figure 10: Payload status extraction process. The serial numbers in the figure mark the steps in chronological order.
Figure 11: Payload clustering process.
Figure 12: Process of generating the payload status predictor. The serial numbers in the figure mark the steps in chronological order.
Figure 13: Clustering effect of adjusting the relationship between hierarchical clustering parameters and the ARI coefficient.
Figure 14: Training time using autoencoders with different numbers of layers.
Figure 15: The effect of feature vector extraction using autoencoders with different numbers of layers.
Figure 16: Relationship between the initial parameters of hierarchical clustering and the ARI coefficient.
19 pages, 33216 KiB  
Article
System Design for a Prototype Acoustic Network to Deter Avian Pests in Agriculture Fields
by Destiny Kwabla Amenyedzi, Micheline Kazeneza, Ipyana Issah Mwaisekwa, Frederic Nzanywayingoma, Philibert Nsengiyumva, Peace Bamurigire, Emmanuel Ndashimye and Anthony Vodacek
Agriculture 2025, 15(1), 10; https://doi.org/10.3390/agriculture15010010 - 24 Dec 2024
Abstract
Crop damage attributed to pest birds is an important problem, particularly in low-income countries. This paper describes a prototype system for pest bird detection using a Conv1D neural network model, followed by scaring actions to reduce the presence of pest birds on farms. Acoustic recorders were deployed on farms for data collection, supplemented by acoustic libraries. The sounds of pest bird species were identified and labeled, and the labeled data were used in Edge Impulse to train a tinyML Conv1D model to detect the birds of interest. The model was deployed on Arduino Nano 33 BLE Sense (nodes) and XIAO (base station) microcontrollers to detect the pest birds and, on detection, play scaring sounds to deter them. The model achieved an accuracy of 96.1% during training and 92.99% during testing. The testing F1 score was 0.94 and the ROC score was 0.99, signifying good discriminatory ability. The prototype was able to make inferences in 53 ms using only 14.8 KB of peak RAM and 43.8 KB of flash memory to store the model. Field deployment of the prototype demonstrated successful detection, triggered scaring actions, and SMS message notifications. Further development of this novel, integrated, and sustainable solution will add another tool for dealing with pest birds.
(This article belongs to the Special Issue Smart Agriculture Sensors and Monitoring Systems for Field Detection)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Farmhands rattling sounds to repel pest birds from the beans farm at the University of Rwanda-Busogo campus (1°33′42.9″ S, 29°33′12.0″ E). (<b>b</b>) Acoustic monitoring deployment. Photos: D.K. Amenyedzi.</p>
Full article ">Figure 2
<p>Spectrograms illustrating species and environmental sounds. Panels (<b>A</b>–<b>G</b>) are spectrograms for several pest species, namely Chubb’s cisticola, common bulbul, common waxbill, red-billed quelea, village weaver, white-browed robin-chat, and yellow-fronted canary, respectively. Panel (<b>H</b>) is a beneficial bird species, hadada ibis, and panels (<b>I</b>–<b>K</b>) are examples of ambient noise, i.e., car horn, children talking, and rattling sounds, respectively.</p>
Full article ">Figure 3
<p>Visual representation of the same audio in the three feature selection techniques. (<b>a</b>) Spectrogram feature. (<b>b</b>) MFCC feature. (<b>c</b>) MFE feature.</p>
Figure 4
<p>Conv1D network architecture.</p>
Figure 5
<p>(<b>a</b>) The prototype setup on the PCB board minus speaker. (<b>b</b>) Deployment in the field with solar power to recharge the battery.</p>
Figure 6
<p>Prototype system flowchart.</p>
Figure 7
<p>Confusion matrix describing the performance of the MFE feature with the best Conv1D model.</p>
Figure 8
<p>ROC curve for the MFE feature with the best Conv1D model.</p>
Figure 9
<p>On-device result displayed on Arduino IDE serial monitor.</p>
Figure 10
<p>Screenshots from a smartphone of SMS messages delivered from the base station and nodes. (<b>a</b>) SMS to the farmer from base station A indicating bird detections. (<b>b</b>) SMS from node B indicating bird detection. (<b>c</b>) SMS to the farmer from node C indicating a security threat.</p>
17 pages, 6175 KiB  
Article
Multivariate, Automatic Diagnostics Based on Insights into Sensor Technology
by Astrid Marie Skålvik, Ranveig N. Bjørk, Enoc Martínez, Kjell-Eivind Frøysa and Camilla Saetre
J. Mar. Sci. Eng. 2024, 12(12), 2367; https://doi.org/10.3390/jmse12122367 - 23 Dec 2024
Abstract
With the rapid development of smart sensor technology and the Internet of Things, ensuring data accuracy and system reliability is paramount. As the number of sensors increases with the demand for high-resolution, high-quality input to decision-making systems, models and digital twins, manual quality control of sensor data is no longer an option. In this paper, we leverage insights into sensor technology, environmental dynamics and the correlation between data from different sensors for the automatic diagnostics of a sensor node. We propose a method for combining the results of automatic quality control of individual sensors with tests for detecting simultaneous anomalies across sensors. Building on both sensor and application knowledge, we develop a diagnostic logic that can automatically explain and diagnose instead of just labeling the individual sensor data as “good” or “bad”. This approach enables us to provide diagnostics that offer a deeper understanding of the data and their quality and of the health and reliability of the measurement system. Our algorithms are adapted for real-time, in situ operation on the sensor node. We demonstrate the diagnostic power of the algorithms on high-resolution measurements of temperature and conductivity from the OBSEA observatory, about 50 km south of Barcelona, Spain. Full article
(This article belongs to the Special Issue Progress in Sensor Technology for Ocean Sciences)
Figure 1
<p>Schematic overview of the proposed method.</p>
Figure 2
<p>Algorithm for detecting symptoms in individual variables. <math display="inline"><semantics> <msub> <mi>y</mi> <mi>i</mi> </msub> </semantics></math> refers to the measurement of variable <span class="html-italic">y</span> at timestep <span class="html-italic">i</span>. <span class="html-italic">N</span> refers to the length of the running window. <span class="html-italic">k</span> is a multiplier used to set dynamic thresholds based on the standard deviation. <math display="inline"><semantics> <mrow> <mi>f</mi> <mi>i</mi> <mi>l</mi> <mi>t</mi> <mi>e</mi> <mi>r</mi> <mi>e</mi> <mi>d</mi> <mi>y</mi> </mrow> </semantics></math> refers to the <span class="html-italic">N</span> recent data points evaluated as valid and used for calculating statistics such as the mean <math display="inline"><semantics> <msub> <mi>y</mi> <mrow> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </semantics></math> and the standard deviation <math display="inline"><semantics> <msub> <mi>σ</mi> <mi>N</mi> </msub> </semantics></math>. <math display="inline"><semantics> <msub> <mi>y</mi> <mrow> <mi>d</mi> <mi>i</mi> <mi>f</mi> <mi>f</mi> <mo>,</mo> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </semantics></math> refers to the absolute threshold set for detecting high rates of change. <math display="inline"><semantics> <msub> <mi>σ</mi> <mrow> <mi>n</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </semantics></math> is the minimum standard deviation that should be accepted, due to natural variations in the environment the sensor is located in.</p>
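The per-variable symptom test described in this caption can be sketched as follows; the window length <em>N</em>, multiplier <em>k</em>, rate-of-change threshold and natural-variability floor below are illustrative values, not the paper's settings. A sample is flagged for a high rate of change against an absolute threshold, or as a spike when it deviates from the running mean by more than <em>k</em> standard deviations (floored at σ_nat), and only unflagged samples enter the running window:

```python
import statistics

def detect_symptoms(ys, N=30, k=4.0, ydiff_max=0.05, sigma_nat=0.005):
    """Flag each sample as 'spike', 'high_rate', or 'ok'."""
    valid = []                                       # the "filtered y": recent valid samples
    flags = []
    for y in ys:
        flag = "ok"
        if valid and abs(y - valid[-1]) > ydiff_max:
            flag = "high_rate"                       # rate-of-change test
        if len(valid) >= 2:
            y_avg = statistics.fmean(valid)
            sigma = max(statistics.stdev(valid), sigma_nat)
            if abs(y - y_avg) > k * sigma:
                flag = "spike"                       # dynamic-threshold spike test
        flags.append(flag)
        if flag == "ok":
            valid = (valid + [y])[-N:]               # running window of valid samples
    return flags

temps = [18.0, 18.01, 18.02, 18.0, 19.5, 18.01, 18.02]
flags = detect_symptoms(temps)                       # the 19.5 jump is flagged as a spike
```

Because flagged samples never enter the window, the running statistics are not contaminated by the anomaly itself, which matters for in situ operation on a sensor node.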
Figure 3
<p>Schematic overview of how different symptoms detected in individual variables can be combined with tests for covariance between the variables, into different diagnoses, through a diagnostic logic module.</p>
Figure 4
<p>Photo of the SeaBird SBE37SMP sensor node upon installation at the OBSEA observatory.</p>
Figure 5
<p>Absolute sensitivity coefficients for (<b>a</b>) salinity and (<b>b</b>) conductivity with respect to temperature. Calculated for every 1000th data point (approximately every 3 h) in the OBSEA CTD data for year 2020. The dashed lines indicate the 99 percent confidence levels of the predicted sensitivity coefficients using a quadratic (<b>a</b>) and linear (<b>b</b>) model built on the calculated sensitivity coefficients, excluding extreme outliers.</p>
Figure 6
<p>Rolling co-variation between temperature and conductivity over 15 min, 1 h and 2 h, for 5–6 August 2020.</p>
Figure 7
<p>Diagnostic plot for OBSEA CTD temperature, conductivity and salinity data 5–6 August 2020. (<b>a</b>–<b>c</b>) Result of the rate of change and spike tests from each sensor individually. (<b>d</b>) shows the running covariance between temperature and conductivity. (<b>e</b>) Salinity data, diagnosed based on the combination of (<b>a</b>–<b>d</b>) and the logic described in <a href="#jmse-12-02367-t002" class="html-table">Table 2</a>. Threshold for high rate of change for temperature: 0.05 °C. Threshold for high rate of change for conductivity and salinity: calculated from temperature thresholds and sensitivity coefficients. Running window for statistics: 30 min. Number of standard deviations for detecting spikes: 4. The spike detection thresholds are indicated as blue-gray dotted lines in (<b>a</b>–<b>c</b>).</p>
Figure 8
<p>Distribution of diagnoses per month for the OBSEA CTD temperature, conductivity and salinity data for the year 2020.</p>
Figure 9
<p>Cross-correlation calculated for the differences in temperature and conductivity for each timestep in the year 2020. A weak shift towards positive lags is observed, suggesting a delay in conductivity compared with temperature measurement data.</p>
18 pages, 1899 KiB  
Article
Adaptive Centroid-Connected Structure Matching Network Based on Semi-Supervised Heterogeneous Domain
by Zhoubao Sun, Yanan Tang, Xin Zhang and Xiaodong Zhang
Mathematics 2024, 12(24), 3986; https://doi.org/10.3390/math12243986 - 18 Dec 2024
Abstract
Heterogeneous domain adaptation (HDA) utilizes the knowledge of the source domain to model the target domain. Although the two domains are semantically related, the problem of feature and distribution differences in heterogeneous data still needs to be solved. Most existing HDA methods only consider the feature or distribution problem but do not consider the geometric semantic information similarity between the domain structures, which weakens adaptive performance. To solve this problem, a centroid-connected structure matching network (CCSMN) approach is proposed, which first maps the heterogeneous data into a shared public feature subspace to solve the problem of feature differences. Second, it promotes the overlap of domain centers and nodes of the same category between domains to reduce the positional distribution differences in the internal structure of the data. Then, the supervised information is utilized to generate target domain nodes, and the geometric structural and semantic information are utilized to construct a centroid-connected structure with a reasonable inter-class distance. During the training process, progressive and integrated pseudo-labeling is utilized to select samples with high-confidence labels and improve the classification accuracy for the target domain. Extensive experiments are conducted on text-to-image and image-to-image HDA tasks, and the results show that the CCSMN outperforms several state-of-the-art baseline methods. Compared with state-of-the-art HDA methods, the efficiency increased by 8.05% in the text-to-image transfer task and by about 2% in the image-to-image transfer task, which suggests that the CCSMN benefits more from domain geometric semantic information similarity. Full article
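The progressive pseudo-labeling step can be illustrated with a toy sketch; the threshold schedule and the class probabilities below are assumptions for illustration, not the paper's configuration. Each round relaxes the confidence threshold slightly and adds newly confident target samples to the pseudo-labeled set:

```python
import numpy as np

def progressive_pseudo_labels(probs, start=0.95, step=0.05, rounds=3):
    """Return {sample index: pseudo-label} for high-confidence predictions,
    relaxing the confidence threshold a little on each round."""
    selected = {}
    for r in range(rounds):
        threshold = start - r * step
        for i, p in enumerate(probs):
            if i not in selected and p.max() >= threshold:
                selected[i] = int(p.argmax())   # keep the most confident class
    return selected

# Toy class-probability predictions for four target-domain samples
probs = np.array([[0.97, 0.03], [0.60, 0.40], [0.08, 0.92], [0.50, 0.50]])
pseudo = progressive_pseudo_labels(probs)
```

Samples that never clear the final threshold (the 0.60/0.40 and 0.50/0.50 rows) are simply left unlabeled rather than risk propagating noisy labels.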
Figure 1
<p>Example of heterogeneous domain adaptation.</p>
Figure 2
<p>CCSMN framework.</p>
Figure 3
<p>The results of parameter sensitivity experiments.</p>
Figure 4
<p>The t-SNE visualization of NN, CCSMN on the <math display="inline"><semantics> <mrow> <mi>W</mi> <mfenced> <mrow> <mi>S</mi> <mi>U</mi> <mi>R</mi> <mi>F</mi> </mrow> </mfenced> <mo>→</mo> <mi>C</mi> <mfenced> <mrow> <mi>D</mi> <mi>e</mi> <mi>C</mi> <mi>A</mi> <mi>F</mi> <mn>6</mn> </mrow> </mfenced> </mrow> </semantics></math> task.</p>
18 pages, 6698 KiB  
Article
Microbial Network Complexity Helps to Reduce the Deep Migration of Chemical Fertilizer Nitrogen Under the Combined Application of Varying Irrigation Amounts and Multiple Nitrogen Sources
by Taotao Chen, Erping Cui, Yanbo Zhang, Ge Gao, Hao You, Yurun Tian, Chao Hu, Yuan Liu, Tao Fan and Xiangyang Fan
Agriculture 2024, 14(12), 2311; https://doi.org/10.3390/agriculture14122311 - 17 Dec 2024
Abstract
The deep migration of soil nitrogen (N) poses a significant risk of N leaching, contributing to non-point-source pollution. This study examines the influence of microbial networks on the deep migration of chemical fertilizer N under varying irrigation management and multiple N fertilizer sources. A soil column experiment with eight treatments was conducted, utilizing <sup>15</sup>N isotope labeling and metagenomic sequencing technology. The findings revealed that reduced irrigation significantly curbs the deep migration of chemical fertilizer N, and straw returning also mitigates this migration under conventional irrigation. Microbial network complexity and stability were markedly higher under reduced irrigation compared to conventional practices. Notably, network node count, average degree, and modularity exhibited significant negative correlations with the deep migration of chemical fertilizer N. The network topology indices, including node count, average clustering coefficient, average degree, modularity, and edge count, were among the more important predictors of the deep migration of chemical fertilizer N. In conclusion, microbial networks play an important role in reducing the deep migration of chemical fertilizer N. Full article
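The topology indices named above (node count, edge count, average degree, average clustering coefficient) can be computed directly from a co-occurrence network. This pure-Python sketch uses a hypothetical five-taxon network, not the study's data:

```python
# Hypothetical co-occurrence edges between taxa A-E
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

n_nodes = len(adj)
n_edges = len(edges)
avg_degree = 2 * n_edges / n_nodes      # each edge contributes two degree counts

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

avg_clustering = sum(clustering(v) for v in adj) / n_nodes
```

Modularity additionally requires a community partition (e.g., from a Louvain-style algorithm) and is omitted here for brevity.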
Figure 1
<p>The proportion of residual <sup>15</sup>N from chemical fertilizer (<b>A</b>) and the <sup>15</sup>N residual amount (<b>B</b>) in different treatments. S: straw returning; M: manure substitution; W1: reduced irrigation; W2: conventional irrigation. No uppercase S or M represents straw non-returning or no manure substitution. Different lowercase letters above the columns indicate significant differences between treatments within the same soil layer (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 2
<p>NO<sub>3</sub><sup>−</sup>-N content in different soil layers in each period. S: straw returning; M: manure substitution; W1: reduced irrigation; W2: conventional irrigation. No uppercase S or M represents straw non-returning or no manure substitution. Different lowercase letters above the columns indicate significant differences between treatments within the same soil layer (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 3
<p>NH<sub>4</sub><sup>+</sup>-N content in different soil layers in each period. S: straw returning; M: manure substitution; W1: reduced irrigation; W2: conventional irrigation. No uppercase S or M represents straw non-returning or no manure substitution. Different lowercase letters above the columns indicate significant differences between treatments within the same soil layer (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 4
<p>TN content in different soil layers in each period. S: straw returning; M: manure substitution; W1: reduced irrigation; W2: conventional irrigation. No uppercase S or M represents straw non-returning or no manure substitution. Different lowercase letters above the columns indicate significant differences between treatments within the same soil layer (<span class="html-italic">p</span> &lt; 0.05).</p>
Figure 5
<p>Visualization of soil microbial networks on the two levels under the three factors of irrigation, straw, and manure at the phylum level. W1: reduced irrigation; W2: conventional irrigation; S0: straw non-returning; S1: straw returning; M0: no manure substitution; M1: manure substitution. The uppercase letter N denotes the number of network nodes, and E signifies the number of network edges. Large modules with Top 4 are illustrated in different colors, and smaller modules are displayed in gray. Colors are not consistent across different networks.</p>
Figure 6
<p>Visualization of soil microbial networks on the two levels under the three factors of irrigation, straw, and manure at the genus level. W1: reduced irrigation; W2: conventional irrigation; S0: straw non-returning; S1: straw returning; M0: no manure substitution; M1: manure substitution. The uppercase letter N denotes the number of network nodes, and E signifies the number of network edges. Large modules with Top 4 are illustrated in different colors, and smaller modules are displayed in gray. Colors are not consistent across different networks.</p>
Figure 7
<p>Visualization of soil microbial networks of each treatment at the phylum level. S: straw returning; M: manure substitution; W1: reduced irrigation; W2: conventional irrigation. No uppercase S or M represents straw non-returning or no manure substitution. The uppercase letter N denotes the number of network nodes, and E signifies the number of network edges. Large modules with Top 4 are illustrated in different colors, and smaller modules are displayed in gray. Colors are not consistent across different networks.</p>
Figure 8
<p>Visualization of soil microbial networks of each treatment at the genus level. S: straw returning; M: manure substitution; W1: reduced irrigation; W2: conventional irrigation. No uppercase S or M represents straw non-returning or no manure substitution. The uppercase letter N denotes the number of network nodes, and E signifies the number of network edges. Large modules with Top 4 are illustrated in different colors, and smaller modules are displayed in gray. Colors are not consistent across different networks.</p>
Figure 9
<p>The visualization of correlation analysis between chemical fertilizer N deep migration and the topological indices of microbial networks (<b>A</b>) and the random forest analysis for evaluating variables’ relative importance for fertilizer N deep migration (<b>B</b>). *, **, and *** represent significant differences at <span class="html-italic">p</span> &lt; 0.05, <span class="html-italic">p</span> &lt; 0.01, and <span class="html-italic">p</span> &lt; 0.001, respectively.</p>
21 pages, 10733 KiB  
Article
CNN-Based Multi-Object Detection and Segmentation in 3D LiDAR Data for Dynamic Industrial Environments
by Danilo Giacomin Schneider  and Marcelo Ricardo Stemmer
Robotics 2024, 13(12), 174; https://doi.org/10.3390/robotics13120174 - 9 Dec 2024
Abstract
Autonomous navigation in dynamic environments presents a significant challenge for mobile robotic systems. This paper proposes a novel approach utilizing Convolutional Neural Networks (CNNs) for multi-object detection in 3D space and 2D segmentation using bird’s eye view (BEV) maps derived from 3D Light Detection and Ranging (LiDAR) data. Our method aims to enable mobile robots to localize movable objects and their occupancy, which is crucial for safe and efficient navigation. To address the scarcity of labeled real-world datasets, a synthetic dataset based on a simulation environment is generated to train and evaluate our model. Additionally, we employ a subset of the NVIDIA r2b dataset for evaluation in the real world. Furthermore, we integrate our CNN-based detection and segmentation model into a Robot Operating System 2 (ROS2) framework, facilitating communication between mobile robots and a centralized node for data aggregation and map creation. Our experimental results demonstrate promising performance, showcasing the potential applicability of our approach in future assembly systems. While further validation with real-world data is warranted, our work contributes to advancing perception systems by proposing a solution for multi-source, multi-object tracking and mapping. Full article
(This article belongs to the Section AI in Robotics)
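A BEV map like the network input described above can be produced by rasterizing LiDAR returns into a top-down grid. This sketch keeps the maximum height per cell; the grid extents, resolution, and single-channel height encoding are illustrative assumptions, not the paper's exact input format:

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 10.0), y_range=(-5.0, 5.0), res=0.5):
    """Rasterize (x, y, z) points into a bird's-eye-view height grid."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((nx, ny))
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / res)
            j = int((y - y_range[0]) / res)
            bev[i, j] = max(bev[i, j], z)   # keep the tallest return per cell
    return bev

pts = [(1.2, 0.3, 0.5), (1.3, 0.4, 1.7), (9.9, 4.9, 0.2), (12.0, 0.0, 1.0)]
bev = points_to_bev(pts)                    # the last point falls outside the grid
```

Real BEV encodings often stack several channels per cell (height, intensity, point density), but the rasterization step is the same.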
Figure 1
<p>Architecture of Keypoint Feature Pyramid Network and Fully Convolutional Network with a ResNet-50 encoder backbone. To ease output visualization, segmentation is colored and 3D detection is represented with 2D-oriented bounding boxes. Source: Authors.</p>
Figure 2
<p>Design of system nodes for data aggregation and map building. Source: authors.</p>
Figure 3
<p>First and second frames predictions. RGB image with 3D bounding boxes visualization ((<b>a</b>,<b>b</b>), respectively), Bird’s Eye View (BEV) input with oriented bounding boxes ((<b>c</b>,<b>e</b>), respectively), and semantic segmentation with oriented bounding boxes ((<b>d</b>,<b>f</b>), respectively). Different colors represent different classes and orientations are signed with red edges. Source: Authors.</p>
Figure 4
<p>Qualitative inference comparison on oriented bounding boxes predictions, ground truth on the left column, RTMDET-R-l inferences on the middle and our network’s inferences on the right column. Different colors represent different classes. Source: Authors.</p>
Figure 5
<p>Qualitative inference comparison on semantic segmentation predictions. Different colors represent different classes. Source: Authors.</p>
Figure 6
<p>First and second frames predictions on r2b dataset. RGB image with 3D bounding boxes visualization ((<b>a</b>,<b>b</b>), respectively), BEV input with oriented bounding boxes ((<b>c</b>,<b>e</b>), respectively), and semantic segmentation with oriented bounding boxes ((<b>d</b>,<b>f</b>), respectively). Yellow color represents the “person” class and orientations are signed with red edges. Source: Authors.</p>
Figure 7
<p>Absolute localization errors for first and second simulated scenarios ((<b>a</b>,<b>b</b>), respectively). Source: Authors.</p>
Figure 8
<p>First and second simulated scenarios ((<b>a</b>,<b>b</b>), respectively). Semantic occupancy map generated by the central node for scene one (<b>c</b>) and two (<b>d</b>). Semantic occupancy map ground truth for scene one (<b>e</b>) and two (<b>f</b>). Different colors represent different classes. Source: Authors.</p>
Figure 9
<p>Synthetic dataset BEV sample and its automatically generated segmentation ground truth ((<b>a</b>,<b>b</b>), respectively). BEV sample of r2b dataset (<b>c</b>), segmentation obtained with the polygon (contour in red) through manually clicked vertices; blind sides were annotated in a non-convex manner. Source: Authors.</p>
17 pages, 4345 KiB  
Article
A Container Escape Detection Method Based on a Dependency Graph
by Kai Chen, Yufei Zhao, Jing Guo, Zhimin Gu, Longxi Han and Keyi Tang
Electronics 2024, 13(23), 4773; https://doi.org/10.3390/electronics13234773 - 3 Dec 2024
Abstract
With the rapid advancement in edge computing, container technology has gained widespread adoption. This is due to its lightweight isolation mechanisms, high portability, and fast deployment capabilities. Despite these advantages, container technology also introduces significant security risks. One of the most critical is container escape. However, current detection research is incomplete. Many methods lack comprehensive detection coverage or fail to fully reconstruct the attack process. To address these gaps, this paper proposes a container escape detection method based on a dependency graph. The method uses various nodes and edges to describe diverse system behaviors. This approach enables the detection of a broader range of attack types. It also effectively captures the contextual relationships between system events, facilitating attack traceability and reconstruction. We design a method to identify container processes on the dependency graph through label generation and propagation. Based on this, container escape detection is implemented using file access control within the graph. Experimental results demonstrate the effectiveness of the proposed method in detecting container escapes. Full article
(This article belongs to the Section Computer Science & Engineering)
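The label generation and propagation idea can be shown on a toy event stream: a seed label on the container runtime propagates to children along process-creation edges, and a file-access rule then flags labeled processes writing host-sensitive files. All process names, events, and the sensitive-path list here are hypothetical, not the paper's detection rules:

```python
# Hypothetical provenance events: (type, subject process, object)
events = [
    ("fork", "containerd-shim", "runc"),
    ("fork", "runc", "app"),
    ("fork", "systemd", "sshd"),
    ("write", "app", "/host/etc/crontab"),
    ("write", "sshd", "/var/log/auth.log"),
]

SENSITIVE = {"/host/etc/crontab"}      # host files a container must not touch
labels = {"containerd-shim"}           # seed: the container runtime process
alerts = []
for etype, subj, obj in events:
    if etype == "fork" and subj in labels:
        labels.add(obj)                # propagate the container label to the child
    elif etype == "write" and subj in labels and obj in SENSITIVE:
        alerts.append((subj, obj))     # labeled process accessed a host file
```

Because the label travels with the dependency-graph edges, the alert can be traced back through the fork chain to reconstruct the attack path.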
Figure 1
<p>The Overall Architecture Diagram.</p>
Figure 2
<p>Container Startup Process.</p>
Figure 3
<p>The Container Attribute Label Generation and Propagation Flowchart.</p>
Figure 4
<p>Container Escape Model.</p>
Figure 5
<p>Container Escape Detection Model.</p>
Figure 6
<p>Container Startup Process.</p>
Figure 7
<p>Provenance Graph for Privileged Container Escape Detection. (<b>a</b>) Full Provenance Graph; (<b>b</b>) Subgraph of Container Events.</p>
Figure 8
<p>Core Steps of Privileged Container Escape in the Provenance Graph.</p>
Figure 9
<p>Provenance Graph for CVE-2019-5736 Escape Detection. (<b>a</b>) Full Provenance Graph; (<b>b</b>) Subgraph of Container Events.</p>
Figure 10
<p>Core Steps of CVE-2019-5736 Escape in the Provenance Graph.</p>
Figure 11
<p>Provenance Graph for CVE-2022-0847 Escape Detection. (<b>a</b>) Full Provenance Graph; (<b>b</b>) Subgraph of Container Events.</p>
Figure 12
<p>Core Steps of CVE-2022-0847 Escape in the Provenance Graph.</p>
14 pages, 11663 KiB  
Article
Integrated SERS-Microfluidic Sensor Based on Nano-Micro Hierarchical Cactus-like Array Substrates for the Early Diagnosis of Prostate Cancer
by Huakun Jia, Weiyang Meng, Rongke Gao, Yeru Wang, Changbiao Zhan, Yiyue Yu, Haojie Cong and Liandong Yu
Biosensors 2024, 14(12), 579; https://doi.org/10.3390/bios14120579 - 28 Nov 2024
Abstract
The detection and analysis of cancer cell exosomes with high sensitivity and precision are pivotal for the early diagnosis and treatment strategies of prostate cancer. To this end, a microfluidic chip, equipped with a cactus-like array substrate (CAS) based on surface-enhanced Raman spectroscopy (SERS), was designed and fabricated for the detection of exosome concentrations in Lymph Node Carcinoma of the Prostate (LNCaP). Double layers of polystyrene (PS) microspheres were self-assembled onto a polyethylene terephthalate (PET) film to form an ordered cactus-like nanoarray for detection and analysis. By combining EpCAM aptamer-labeled SERS nanoprobes and a CD63 aptamer-labeled CAS, a ‘sandwich’ structure was formed and applied to the microfluidic chips, further enhancing the Raman scattering signal of the Raman reporter molecules. The results indicate that the integrated microfluidic sensor exhibits a good linear response within the detection concentration range of 10<sup>5</sup> particles μL<sup>−1</sup> to 1 particle μL<sup>−1</sup>. The detection limit for cancer cell exosomes can reach 1 particle μL<sup>−1</sup>. Therefore, we believe that the CAS-integrated microfluidic sensor offers a superior solution for the early diagnosis and therapeutic intervention of prostate cancer. Full article
(This article belongs to the Special Issue State-of-the-Art Biosensors in China (2nd Edition))
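A linear response over a 1 to 10^5 particles-per-μL range corresponds to fitting the Raman peak intensity against the logarithm of concentration. This sketch fits such a calibration line on hypothetical intensity readings, not the paper's measurements:

```python
import numpy as np

# Hypothetical peak intensities at 1076 cm^-1 vs exosome concentration
conc = np.array([1.0, 1e1, 1e2, 1e3, 1e4, 1e5])       # particles per uL
intensity = np.array([210.0, 390.0, 610.0, 800.0, 1005.0, 1190.0])

# Least-squares line: intensity ~ slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(conc), intensity, 1)
r = np.corrcoef(np.log10(conc), intensity)[0, 1]       # linearity check
```

An unknown sample's concentration can then be recovered as `10 ** ((measured_intensity - intercept) / slope)`.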
Figure 1
<p>(<b>a</b>) Formation steps of the ‘sandwich’ immunocomplexes on CAS. (<b>b</b>) Schematic illustration of the SERS-based microfluidic aptamer chip for exosome detection ((i) The mixing channel of each sample; (ii) Rectangular detection chamber embedded with the CAS substrate).</p>
Figure 2
<p>(<b>a</b>) Schematic of the fabrication process of the cactus-like array substrate (Step I: Preparation of the DCC template via stacking two SCC templates. Step II: ICP etching of the DCC template. Step III: Evaporation of the gold film). (<b>b</b>) SEM images of the CAS fabrication process. (<b>c</b>) UV-vis absorption spectra of the substrates with different conditions. (<b>d</b>) The SEM images of the CAS (i), EDS elemental maps of Au (ii) and the Au/O/C/N elemental maps on the CAS (iii,iv).</p>
Figure 3
<p>(<b>a</b>) SEM images of the DCC template on PET film with different etching times (30, 60, 90, 120, 150, 180 s). (<b>b</b>) The SERS spectra of 4-ATP obtained on the CAS with different etching times (30, 60, 90, 120, 150, 180 s). (<b>c</b>) 20 SERS spectra randomly measured on the CAS and (<b>d</b>) their RSD of the peak intensity @ 1076 cm<sup>−1</sup>.</p>
Figure 4
<p>Local FDTD simulation of the CAS. (<b>a</b>,<b>b</b>) A representative SEM image and schematic of the CAS. (<b>c</b>,<b>d</b>) FDTD simulation of the electric field distribution on the CAS.</p>
Figure 5
<p>(<b>a</b>,<b>b</b>) Feasibility test of exosome detection via sandwich immunocomplexes. (<b>c</b>) Fluorescent image of the entire microfluidic channels. The white arrows indicated the line profile locations. (<b>d</b>) Corresponding fluorescence intensity profiles at four selected locations.</p>
Figure 6
<p>(<b>a</b>) The SERS intensity variations with the increase in exosome concentrations. (<b>b</b>) Corresponding calibration curves for the exosomes. The inset shows the SERS intensity at 1076 cm<sup>−1</sup> versus the logarithm of the exosome concentrations. The error bars represent the standard deviations of five individual substrates.</p>
24 pages, 1736 KiB  
Article
Multi-Label Feature Selection with Feature–Label Subgraph Association and Graph Representation Learning
by Jinghou Ruan, Mingwei Wang, Deqing Liu, Maolin Chen and Xianjun Gao
Entropy 2024, 26(11), 992; https://doi.org/10.3390/e26110992 - 18 Nov 2024
Abstract
In multi-label data, a sample is associated with multiple labels at the same time, and the computational complexity is manifested in the high-dimensional feature space as well as the interdependence and unbalanced distribution of labels, which leads to challenges regarding feature selection. As a result, a multi-label feature selection method based on feature–label subgraph association with graph representation learning (SAGRL) is proposed to represent the complex correlations of features and labels, especially the relationships between features and labels. Specifically, features and labels are mapped to nodes in the graph structure, and the connections between nodes are established to form feature and label sets, respectively, which increase intra-class correlation and decrease inter-class correlation. Further, feature–label subgraphs are constructed by feature and label sets to provide abundant feature combinations. The relationship between each subgraph is adjusted by graph representation learning, the crucial features in different label sets are selected, and the optimal feature subset is obtained by ranking. Experimental studies on 11 datasets show the superior performance of the proposed method with six evaluation metrics over some state-of-the-art multi-label feature selection methods. Full article
(This article belongs to the Section Multidisciplinary Applications)
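The feature–label edges of such a graph need an association weight between each feature node and each label node. A simple choice, used here purely for illustration (not SAGRL's actual association measure), is absolute Pearson correlation on synthetic multi-label data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))                     # 4 features
Y = np.stack([(X[:, 0] + X[:, 1] > 0).astype(float),  # label 0 depends on f0, f1
              (X[:, 2] > 0).astype(float)], axis=1)   # label 1 depends on f2

# Weight each feature-label edge by absolute Pearson correlation
W = np.zeros((4, 2))
for f in range(4):
    for l in range(2):
        W[f, l] = abs(np.corrcoef(X[:, f], Y[:, l])[0, 1])

# Rank features by their strongest label association; f3 is pure noise
ranking = np.argsort(-W.max(axis=1))
```

In the full method these weights would define the bipartite graph of Figure 1, on which subgraph construction and representation learning then operate.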
Figure 1
<p>Bipartite graph.</p>
Figure 2
<p>Feature–label subgraph construction.</p>
Figure 3
<p>Feature–label subgraph association updating.</p>
Figure 4
<p>Feature–label subgraph matching.</p>
Figure 5
<p>Results on Arts dataset with different numbers of features.</p>
Figure 6
<p>Results on Bibtex dataset with different numbers of features.</p>
Figure 7
<p>Results on Corel5k dataset with different numbers of features.</p>
Figure 8
<p>Results on Education dataset with different numbers of features.</p>
Figure 9
<p>Results on Emotions dataset with different numbers of features.</p>
Figure 10
<p>Results on Enron dataset with different numbers of features.</p>
Figure 11
<p>Results on Image dataset with different numbers of features.</p>
Figure 12
<p>Results on Medical dataset with different numbers of features.</p>
Figure 13
<p>Results on Scene dataset with different numbers of features.</p>
Figure 14
<p>Results on Social dataset with different numbers of features.</p>
Figure 15
<p>Results on Yeast dataset with different numbers of features.</p>
Figure 16
<p>Average Hamming Loss with different pairs of <math display="inline"><semantics> <mrow> <mi>k</mi> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>k</mi> <mn>2</mn> </mrow> </semantics></math>.</p>