Search Results (262)

Search Parameters:
Keywords = one-class

21 pages, 4968 KiB  
Article
PE-DOCC: A Novel Periodicity-Enhanced Deep One-Class Classification Framework for Electricity Theft Detection
by Zhijie Wu and Yufeng Wang
Appl. Sci. 2025, 15(4), 2193; https://doi.org/10.3390/app15042193 - 19 Feb 2025
Viewed by 258
Abstract
Electricity theft, emerging as one of the severe cyberattacks in smart grids, causes significant economic losses. Due to the powerful expressive ability of deep neural networks (DNN), supervised and unsupervised DNN-based electricity theft detection (ETD) schemes have experienced widespread deployment. However, existing works have the following weak points: Supervised DNN-based schemes require abundant labeled anomalous samples for training, and even worse, cannot detect unseen theft patterns. To avoid the extensively labor-consuming activity of labeling anomalous samples, unsupervised DNNs-based schemes aim to learn the normality of time-series and infer an anomaly score for each data instance, but they fail to capture periodic features effectively. To address these challenges, this paper proposes a novel periodicity-enhanced deep one-class classification framework (PE-DOCC) based on a periodicity-enhanced transformer encoder, named Periodicformer encoder. Specifically, within the encoder, a novel criss-cross periodic attention is proposed to capture both horizontal and vertical periodic features. The Periodicformer encoder is pre-trained by reconstructing partially masked input sequences, and the learned latent representations are then fed into a one-class classification for anomaly detection. Extensive experiments on real-world datasets demonstrate that our proposed PE-DOCC framework outperforms state-of-the-art unsupervised ETD methods. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
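The last stage of the pipeline above, fitting a one-class classifier on the encoder's latent representations of normal load curves, can be sketched as follows. This is a minimal illustration only: PCA stands in for the pre-trained Periodicformer encoder, and the load curves and tampering pattern are synthetic placeholders, not the paper's data.

```python
# Sketch of "latent representation -> one-class classifier" for electricity theft detection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_load = rng.normal(1.0, 0.1, size=(500, 96))                 # hypothetical daily load curves (96 readings/day)
theft_load = normal_load[:50] * rng.uniform(0.2, 0.6, size=(50, 1))  # crude tampering: scaled-down consumption

encoder = PCA(n_components=16).fit(normal_load)                    # stand-in for the pre-trained encoder
clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(encoder.transform(normal_load))

test = np.vstack([normal_load[:50], theft_load])
scores = clf.decision_function(encoder.transform(test))            # negative score = anomalous
print("flagged as anomalous:", int((scores < 0).sum()), "of", len(scores))
```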
Figures:
Figure 1. Proposed framework integrating unsupervised representation learning with one-class classification.
Figure 2. The overall architecture of the proposed Periodicformer encoder.
Figure 3. Criss-cross periodic attention (left) and the corresponding row and column autocorrelation for a single head h (right).
Figure 4. Autocorrelation of a normal sample and an abnormal sample. (a) A normal sample. (b) An abnormal sample.
Figure 5. Example of daily electricity consumption and its tampered data corresponding to six types of FDI attacks.
Figure 6. Confusion matrices of different methods: (a) OCSVM; (b) autoencoder + OCSVM; (c) autoencoder + iForest; (d) autoencoder + LOF; (e) Periodicformer encoder + OCSVM; (f) Periodicformer encoder + iForest; (g) Periodicformer encoder + LOF.
Figure 7. Comparison of training time with and without GPU acceleration.
Figure 8. ROC curves of the proposed method and other methods and the value of the area under each curve (AUC).
Figure 9. Performance of the proposed method and its variants. (a) F1 scores of PE-DOCC and its variants. (b) AUC of PE-DOCC and its variants. (c) Recall of PE-DOCC and its variants. (d) FPR of PE-DOCC and its variants.
13 pages, 1142 KiB  
Technical Note
Clustering and Vectorizing Acoustic Emission Events of Large Infrastructures’ Normal Operation
by Theocharis Tsenis and Vassilios Kappatos
Infrastructures 2025, 10(2), 38; https://doi.org/10.3390/infrastructures10020038 - 11 Feb 2025
Viewed by 392
Abstract
The detection of acoustic emission events from various failing mechanisms, such as plastic deformations, is a critical element in the monitoring and timely detection of structural failures in infrastructures. This study focuses on the detection of such failures in metal gates at rivers’ lifting dams aiming to increase the reliability of river transport compared to the current situation, thereby, increasing the resilience of transport corridors. During our study, we used lifting dams in both France and Italy where river transport is thriving. A methodology was developed, processing corresponding acoustic emission recordings originating from lifting dams’ metal gates, using advanced denoising—preprocessing, various decompositions, and spectral embeddings associated with various latest nonlinear processing clustering techniques—thus providing a detailed cluster label morphology and profile of water gates’ normal operating area. Latest machine learning outlier detection algorithms, like One-Class Support Vector Machine, Variational Auto-Encoder, and others, were incorporated, producing a vector of confidence on upcoming out-of-the-normal gate operation and failure prediction, achieving detection contrast enhancement on out-of-the-normal operation points up to 400%. Full article
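A minimal sketch of the scoring stage described above: a one-class model fit on features of the gate's normal-operation acoustic emission events returns a score that can be read as a confidence of out-of-the-normal operation. The feature vectors below are synthetic stand-ins; the denoising, decomposition, and embedding steps summarized in the abstract are not reproduced.

```python
# One-class scoring of acoustic emission feature vectors against a normal-operation baseline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(10)
normal_events = rng.normal(0, 1, (500, 27))          # e.g., 27 AE indices per event, normal operation
new_events = rng.normal(2.5, 1, (25, 27))            # events recorded during suspect operation

scaler = StandardScaler().fit(normal_events)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(scaler.transform(normal_events))
confidence = -ocsvm.score_samples(scaler.transform(new_events))   # higher = further from normal operation
print("mean out-of-normal confidence:", round(float(confidence.mean()), 3))
```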
Figures:
Figure 1. Our methodology layout from accepting the AE signals up to feature extraction and representation in a multidimensional space.
Figure 2. Starting from the referenced and inspected multidimensional space, producing the OPV.
Figure 3. PZT sensors used by the Mistras micro-SHM system, from top to bottom with 60 kHz (R6a), 150 kHz (R15a), and 400 kHz (wideband) useful bandwidth.
Figure 4. Placement of sensors (the used AE logger Micro-SHM in the top left corner) at a sample waterlock gate at the Conda di Baricetta waterlock.
Figure 5. LLE dimension reduction: (left) silhouette values and (right) clustering (800 clusters) 2D projection, with the x-axis the 1st feature (dimension) of the new reduced space due to LLE dimensionality reduction and the y-axis the 2nd feature dimension.
Figure 6. Deviation in the outlier profiles in comparison to the original gate's normal operation, with the horizontal line displaying the 27 indices as described in Section 2.2.
23 pages, 555 KiB  
Article
On the Application of a Sparse Data Observers (SDOs) Outlier Detection Algorithm to Mitigate Poisoning Attacks in UltraWideBand (UWB) Line-of-Sight (LOS)/Non-Line-of-Sight (NLOS) Classification
by Gianmarco Baldini
Future Internet 2025, 17(2), 60; https://doi.org/10.3390/fi17020060 - 3 Feb 2025
Viewed by 579
Abstract
The classification of the wireless propagation channel between Line-of-Sight (LOS) or Non-Line-of-Sight (NLOS) is useful in the operation of wireless communication systems. The research community has increasingly investigated the application of machine learning (ML) to LOS/NLOS classification and this paper is part of this trend, but not all the different aspects of ML have been analyzed. In the general ML domain, poisoning and adversarial attacks and the related mitigation techniques are an active area of research. Such attacks aim to hamper the ML classification process by poisoning the data set. Mitigation techniques are designed to counter this threat using different approaches. Poisoning attacks in LOS/NLOS classification have not received significant attention by the wireless communication community and this paper aims to address this gap by proposing the application of a specific mitigation technique based on outlier detection algorithms. The rationale is that poisoned samples can be identified as outliers from legitimate samples. In particular, the study described in this paper proposes a recent outlier detection algorithm, which has low computing complexity: the sparse data observers (SDOs) algorithm. The study proposes a comprehensive analysis of both conventional and novel types of attacks and related mitigation techniques based on outlier detection algorithms for UltraWideBand (UWB) channel classification. The proposed techniques are applied to two data sets: the public eWINE data set with seven different UWB LOS/NLOS different environments and a radar data set with the LOS/NLOS condition. The results show that the SDO algorithm outperforms other outlier detection algorithms for attack detection like the isolation forest (iForest) algorithm and the one-class support vector machine (OCSVM) in most of the scenarios and attacks, and it is quite competitive in the task of increasing the UWB LOS/NLOS classification accuracy through sanitation in comparison to the poisoned model. Full article
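The sanitation idea described above, screening the training set with an outlier detector before fitting the classifier, can be sketched as below. Because the SDO algorithm is not available in scikit-learn, IsolationForest is used purely as a stand-in detector, and the features, labels, and poisoning pattern are synthetic placeholders.

```python
# Outlier-based sanitation of a (possibly poisoned) training set before classification.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (400, 8))                       # hypothetical channel features
y = (X[:, 0] > 0).astype(int)                        # hypothetical LOS/NLOS labels
poison = rng.normal(6, 0.5, (40, 8))                 # injected, shifted poisoned samples
X_tr = np.vstack([X, poison])
y_tr = np.concatenate([y, np.zeros(40, dtype=int)])

keep = IsolationForest(contamination=0.1, random_state=0).fit_predict(X_tr) == 1
clf = RandomForestClassifier(random_state=0).fit(X_tr[keep], y_tr[keep])   # train on sanitized data only
print("samples kept after sanitation:", int(keep.sum()), "of", len(keep))
```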
Figures:
Figure 1. Set of procedures composing the workflow of the proposed approach.
Figure 2. Comparison of the OD algorithms for the detection rate of poisoned samples with scenario 1 (Office 1) and S_P = 16. The y-axis provides the percentage of poisoned samples correctly identified as such by the OD algorithm; the x-axis gives the value of the percentage T_P of poisoned samples over the overall samples.
Figure 3. Impact of the S_P parameter on the detection rate of poisoned samples with the first scenario (Office 1) and the SDO algorithm. The y-axis provides the percentage of poisoned samples correctly identified; the x-axis indicates the value of the percentage T_P of the poisoned samples.
Figure 4. Detection rate of poisoned samples with the SDO algorithm and the random forest classifier for the different scenarios of the eWINE data set; S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply); P_San = 80%.
Figure 5. Accuracy obtained with the different OD algorithms and the random forest classifier for the different attacks for scenario 1 (Office 1); S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply); P_San = 80%.
Figure 6. Improvement of the accuracy (Accuracy_I) with the SDO algorithm and the random forest classifier for the different attacks within scenario 1 (Office 1), with S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply) and different values of P_San.
Figure 7. Improvement of the precision (Precision_I) with the SDO algorithm and the random forest classifier for the different attacks for scenario 1 (Office 1); S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply) and different values of P_San.
Figure 8. Improvement of the recall (Recall_I) with the SDO algorithm and the random forest classifier for the different attacks for scenario 1 (Office 1); S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply) and different values of P_San.
Figure 9. Improvement of the accuracy (Accuracy_I) with the SDO algorithm and the random forest classifier for the different scenarios of the eWINE data set; S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply); P_San = 80%.
Figure 10. Improvement of the precision (Precision_I) with the SDO algorithm and the random forest classifier for the different scenarios of the eWINE data set; S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply); P_San = 80%.
Figure 11. Improvement of the recall (Recall_I) with the SDO algorithm and the random forest classifier for the different scenarios of the eWINE data set; S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply); P_San = 80%.
Figure 12. Comparison of the OD algorithms for the detection rate of poisoned samples with S_P = 16 and the Radar data set. The y-axis provides the percentage of poisoned samples correctly identified as such by the OD algorithm; the x-axis gives the value of the percentage T_P of poisoned samples over the overall samples.
Figure 13. Accuracy obtained with the different OD algorithms and the random forest classifier for the different attacks with S_P = 16 and the Radar data set for the FSP and TFP attacks (for the LF attack, S_P does not apply), with P_San = 80%.
Figure 14. Improvement of the accuracy (Accuracy_I) for the Radar data set with the SDO algorithm and the random forest classifier for the different attacks within scenario 1 (Office 1); S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply) and different values of P_San.
Figure 15. Improvement of the precision (Precision_I) for the Radar data set with the SDO algorithm and the random forest classifier for the different attacks within scenario 1 (Office 1), where S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply) and different values of P_San.
Figure 16. Improvement of the recall (Recall_I) for the Radar data set with the SDO algorithm and the random forest classifier for the different attacks within scenario 1 (Office 1), where S_P = 16 for the FSP and TFP attacks (for the LF attack, S_P does not apply) and different values of P_San.
26 pages, 19209 KiB  
Article
Image Segmentation Framework for Detecting Adversarial Attacks for Autonomous Driving Cars
by Ahmad Fakhr Aldeen Sattout, Ali Chehab, Ammar Mohanna and Razane Tajeddine
Appl. Sci. 2025, 15(3), 1328; https://doi.org/10.3390/app15031328 - 27 Jan 2025
Viewed by 657
Abstract
The widespread deployment of deep neural networks (DNNs) in critical real-time applications has spurred significant research into their security and robustness. A key vulnerability identified is that DNN decisions can be maliciously altered by introducing carefully crafted noise into the input data, leading to erroneous predictions. This is known as an adversarial attack. In this paper, we propose a novel detection framework leveraging segmentation masks and image segmentation techniques to identify adversarial attacks on DNNs, particularly in the context of autonomous driving systems. Our defense technique considers two levels of adversarial detection. The first level mainly detects adversarial inputs with large perturbations using the U-net model and one-class support vector machine (SVM). The second level of defense proposes a dynamic segmentation algorithm based on the k-means algorithm and a verifier model that controls the final prediction of the input image. To evaluate our approach, we comprehensively compare our method to the state-of-the-art feature squeeze method under a white-box attack, using eleven distinct adversarial attacks across three benchmark and heterogeneous data sets. The experimental results demonstrate the efficacy of our framework, achieving overall detection rates exceeding 96% across all adversarial techniques and data sets studied. It is worth mentioning that our method enhances the detection rates of FGSM and BIM attacks, reaching average detection rates of 95.65% as opposed to 62.63% in feature squeezing across the three data sets. Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security: Trends and Challenges)
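A rough sketch of the first line of defense described above: a one-class SVM is fit on features of segmentation masks obtained for clean images and then used to reject suspicious masks. The arrays below are placeholders for U-net mask outputs; the U-net, KAS algorithm, and verifier model from the paper are not reproduced.

```python
# One-class SVM over flattened segmentation masks as a first-line adversarial-input filter.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
clean_masks = (rng.random((300, 32, 32)) > 0.6).astype(float)   # placeholders for masks of clean images
adv_masks = (rng.random((30, 32, 32)) > 0.2).astype(float)      # noisier masks stand in for adversarial inputs

ocsvm = OneClassSVM(nu=0.02, gamma="scale").fit(clean_masks.reshape(300, -1))
pred = ocsvm.predict(adv_masks.reshape(30, -1))                  # -1 = rejected, passed to the second defense line
print("sent to second line of defense:", int((pred == -1).sum()), "of", len(pred))
```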
Figures:
Figure 1. The flow of creating dummy images. The first two columns, x1 and x2, are clean random images from different classes. The third and fourth columns, s1 and s2, are the intermediate transformations of x1 and x2. The last column is the resultant dummy image generated by the DIG algorithm.
Figure 2. The proposed pipeline of the adversarial detection framework. The first line of defense starts with the U-net model and ends with the output prediction of the OCSVM model. The components of the second line of defense are the masking operation with the KAS algorithm and the verifier model. The final decision of the pipeline for the input image is taken either immediately from the first line of defense or by comparing the predictions of the verifier model and the target (main) model.
Figure 3. Bar chart showing the Structural Similarity Index (SSIM) of the test set images for each class of the MNIST, CIFAR-10, and GTSRB-8 data sets before and after applying the KAS algorithm at k = 35%.
Figure 4. SSIM index of the test set images of all classes of the MNIST, CIFAR-10, and GTSRB-8 data sets before and after the k-means procedure at 10 values of k.
Figure 5. The detection rates of our proposed method compared to the best feature squeezer for the FGSM attack on the GTSRB-8 data set with ε values from 0.01 to 0.5, showing the superiority of our method for all ε values.
Figure 6. The flow of image processing steps in the proposed framework. The first line of each data set represents a clean sample of each class. The second line is the corresponding segmentation mask produced by the U-net model. The third and fourth lines are the segmented and inverted versions yielded by the KAS algorithm from the masked images.
Figure 7. Samples of distorted images from the GTSRB-8 data set that are omitted from the train and test sets in our experiments.
Figure 8. A visual comparison between clean images in the first row and their adversarial counterparts in the second row, detected by the OCSVM model's first line of defense in the adversarial detection framework. The third and fourth rows are masks generated by the OCSVM for the clean and adversarial images, respectively.
Figure 9. A visual comparison between clean images in the first row and their adversarial counterparts in the second row, detected by the second line of defense, the verifier model, in our adversarial detection framework. The third and fourth rows represent the segmented versions yielded by the KAS algorithm (Algorithm 1) for the clean and adversarial images, respectively. The last two rows are the inverted versions.
23 pages, 6814 KiB  
Article
Advancing Data Quality Assurance with Machine Learning: A Case Study on Wind Vane Stalling Detection
by Vincent S. de Feiter, Jessica M. I. Strickland and Irene Garcia-Marti
Atmosphere 2025, 16(2), 129; https://doi.org/10.3390/atmos16020129 - 25 Jan 2025
Viewed by 560
Abstract
High-quality observational datasets are essential for climate research and models, but validating and filtering decades of meteorological measurements is an enormous task. Advances in machine learning provide opportunities to expedite and improve quality control while offering insight into non-linear interactions between the meteorological variables. The Cabauw Experimental Site for Atmospheric Research in the Netherlands, known for its 213 m observation mast, has provided in situ observations for over 50 years. Despite high-quality instrumentation, measurement errors or non-representative data are inevitable. We explore machine-learning-assisted quality control, focusing on wind vane stalling at 10 m height. Wind vane stalling is treated as a binary classification problem as we evaluate five supervised methods (Logistic Regression, K-Nearest Neighbour, Random Forest, Gaussian Naive Bayes, Support Vector Machine) and one semi-supervised method (One-Class Support Vector Machine). Our analysis determines that wind vane stalling occurred 4.54% of the time annually over 20 years, often during stably stratified nocturnal conditions. The K-Nearest Neighbour and Random Forest methods performed the best, identifying stalling with approximately 75% accuracy, while others were more affected by data imbalance (more non-stalling than stalling data points). The semi-supervised method, avoiding the effects of the inherent data imbalance, also yielded promising results towards advancing data quality assurance. Full article
(This article belongs to the Special Issue Atmospheric Boundary Layer Observation and Meteorology)
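The semi-supervised set-up described above can be sketched as follows: the One-Class SVM sees only non-stalling observations during training, so the class imbalance never enters the fit. The two features below are synthetic stand-ins for the Cabauw tower measurements.

```python
# One-Class SVM trained on non-stalling data only, then applied to a mixed test set.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
normal = np.column_stack([rng.uniform(2, 12, 1000), rng.uniform(5, 60, 1000)])   # [wind speed, vane variance]
stall = np.column_stack([rng.uniform(0, 0.5, 60), rng.uniform(0, 2, 60)])        # calm conditions, frozen vane

model = OneClassSVM(nu=0.05, gamma="scale").fit(normal)           # training set contains no stalling events
y_pred = model.predict(np.vstack([normal[:60], stall]))           # +1 = normal, -1 = stalling candidate
print("stalling flagged:", int((y_pred[60:] == -1).sum()), "of 60")
```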
Figures:
Figure 1. Overview of the Cabauw Experimental Site for Atmospheric Research, located in Lopik, the Netherlands. (a) Map indicating the location of the Cabauw site, courtesy of Knoop et al. 2021 [14]. (b) Side view of the A-mast (main tower) indicating the levels where measurements are conducted: 10, 20, 40, 80, 140, and 200 m. (c) Top view of sub-sites: Baseline Surface Radiation (BSRN), Profiling Site (PS), Automatic Weather Station (AWS), Energy Balance (EB) terrain, and Remote Sensing. Satellite imagery sourced from Google Earth. (d) Photo of the installed KNMI-manufactured wind vane and cup anemometer combination.
Figure 2. (a) Ten-minute averaged wind speed (U, blue) and wind direction (W_dir, green) in situ measurements at 10 m during a wind vane stalling event (red) on 13–15 June 2012. The dashed horizontal line indicates low wind speeds (<0.5 m s^-1). (b) Synoptic situation above Europe during the wind vane stalling event, showcasing the extended high-pressure system over Central Europe and the approaching low-pressure system above the British Isles. The red/white circular marker indicates the location of the Netherlands, where Cabauw is located (refer to Figure 1a).
Figure 3. The frequency of wind vane stalling at Cabauw. (a) Average percentage (markers) and standard deviation (shaded) of the time stalled at each vertical level from 2001 to 2022. Average relative time the wind vane at 10 m stalled each (b) year, (c) month, and (d) hour. Panels (a,c,d) exclude the outlier years 2011, 2012, and 2022. Mean (μ) and standard deviation (σ) values of the yearly, monthly, and hourly distributions are displayed within the plots.
Figure 4. Average vertical profile of meteorological variables at each height during (black) and six hours before (red) wind vane stalling events at 10 m throughout 2001–2022 (excluding 2011, 2012, 2022): (a) wind speed (U), (b) wind direction (W_dir), (c) lapse rate of potential temperature (Γ_θ), (d) potential temperature (θ), (e) dew point depression (T_a − T_d), and (f) Fog Stability Index (FSI). The grey shaded areas and red dotted lines indicate the standard deviation during and preceding the stalling event, respectively.
Figure 5. Ten highest-ranked features based on the calculated Mutual Information (MI) score. The features evaluated are those listed in Table 1. The MI scores are presented alongside the mean, median, standard deviation, and inter-quartile range (Q3–Q1) of the MI scores of all evaluated features.
Figure 6. (a) Ten-minute averaged wind speed (U) and wind direction (W_dir) at 10 m on 9 July 2015, showcasing a wind vane stalling event (red). (b) Instances of wind vane stalling identified by five supervised multi-class machine-learning methods, shown for data incrementally balanced by n points before and after the wind vane stalling event. The five methods are Logistic Regression (LR), K-Nearest Neighbour (KNN), Gaussian Naive Bayes (GNB), Random Forest (RF), and Support Vector Machine (SVM).
Figure 7. Performance metrics (accuracy, precision, recall, and F1-score) of the supervised multi-class machine-learning methods, incrementally balanced by n points before and after the wind vane stalling event at 10 m. The five methods include Logistic Regression (LR), K-Nearest Neighbour (KNN), Gaussian Naive Bayes (GNB), Random Forest (RF), and Support Vector Machine (SVM), which are evaluated based on their ability to classify (a) non-stalling and (b) stalling events. Additionally, the number of classified test cases (bars) is displayed on the right y-axis (×10^3). The solid markers represent the average result while varying the training and test dataset sizes; the partitioning varies between 10 and 90%, and the range of the results is represented by the shaded area.
Figure 8. Evaluation of the One-Class SVM (OCSVM) classifier, showing the relative classification error for the stalling data points depending on varied (ν, γ) parameter combinations. The shaded area represents the range resulting from different training sizes between 10 and 90%, while the solid line represents the mean error.
27 pages, 9460 KiB  
Article
Data Uncertainty of Flood Susceptibility Using Non-Flood Samples
by Yayi Zhang, Yongqiang Wei, Rui Yao, Peng Sun, Na Zhen and Xue Xia
Remote Sens. 2025, 17(3), 375; https://doi.org/10.3390/rs17030375 - 23 Jan 2025
Viewed by 506
Abstract
Flood susceptibility provides scientific support for flood prevention planning and infrastructure development by identifying and assessing flood-prone areas. The uncertainty posed by non-flood sample datasets remains a key challenge in flood susceptibility mapping. Therefore, this study proposes a novel sampling method for non-flood points. A flood susceptibility model is constructed using a machine learning algorithm to examine the uncertainty in flood susceptibility due to non-flood point selection. The influencing factors of flood susceptibility are analyzed through interpretable models. The main findings are as follows. (1) Compared to non-flood datasets generated by random sampling with the buffer method, the non-flood dataset constructed using the spatial range identified by the frequency ratio model and the sampling method of a one-class support vector machine achieves higher accuracy. This significantly improves the simulation accuracy of the flood susceptibility model, with an accuracy increase of 24% in the ENSEMBLE model. (2) In constructing the flood susceptibility model using the optimal non-flood dataset, the ENSEMBLE learning algorithm demonstrates higher accuracy than other machine learning methods, with an AUC of 0.95. (3) The northern and southeastern regions of the Zijiang River Basin have extremely high flood susceptibility. Elevation and drainage density are identified as key factors causing high flood susceptibility in these areas, whereas the southwestern region exhibits low flood susceptibility due to higher elevation. (4) Elevation, slope, and drainage density are the three most important factors affecting flood susceptibility. Lower values of elevation and slope and higher drainage density correlate with higher flood susceptibility. This study offers a new approach to reducing uncertainty in flood susceptibility and provides technical support for flood prevention and disaster mitigation in the basin. Full article
(This article belongs to the Special Issue Remote Sensing in Hydrometeorology and Natural Hazards)
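A minimal sketch of the non-flood sampling idea described above: a one-class SVM is fit on the flood inventory, and candidate cells that fall far outside the learned flood envelope are retained as non-flood samples. The conditioning factors below are synthetic placeholders, not the study's datasets.

```python
# Selecting non-flood samples as points lying far outside a one-class model of flood points.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
flood = np.column_stack([rng.normal(60, 10, 300), rng.normal(0.8, 0.1, 300)])        # [elevation, drainage density]
candidates = np.column_stack([rng.uniform(40, 400, 2000), rng.uniform(0.1, 1.2, 2000)])

ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(flood)
scores = ocsvm.decision_function(candidates)
non_flood = candidates[scores < np.quantile(scores, 0.25)]   # keep the quarter farthest from the flood envelope
print("non-flood samples drawn:", len(non_flood))
```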
Figures:
Graphical abstract.
Figure 1. Location of the flooded points in the study area.
Figure 2. Spatial variations of the selected factors.
Figure 3. Flow chart of the methodology.
Figure 4. Procedure of stacking for ensemble learning.
Figure 5. The result of the Pearson correlation coefficient calculation.
Figure 6. The selection structure diagram of non-flood sample points (yellow circles represent flood points screened by the buffer method, numbered circles represent flood points screened by the OC-SVM method, and green represents non-flood points).
Figure 7. Non-flood points: (a) ALL Dataset; (b) ALL OC-SVM Dataset; (c) Dataset; (d) OC-SVM Dataset.
Figure 8. A comparison of the ROC curves and AUC values of different FSM models using four datasets: (a) Random Forest (RF); (b) Adaptive Boosting (ADBC); (c) Gradient Boosting (GBDT); (d) stacking (ENSEMBLE).
Figure 9. Accuracy evaluation results of the statistical indicators for the different negative sample datasets used as input to the different FSM models (negative sample datasets: Accuracy0, Precision0, Recall0, F1 Score0: ALL OC-SVM Dataset; Accuracy1, Precision1, Recall1, F1 Score1: OC-SVM Dataset; Accuracy2, Precision2, Recall2, F1 Score2: ALL Dataset; Accuracy3, Precision3, Recall3, F1 Score3: Dataset).
Figure 10. OC-SVM method for extracting non-flood datasets.
Figure 11. FSM maps from different models of the Zijiang River Basin (using the non-flood point dataset obtained from the low and lower susceptibility ranges identified by the frequency ratio model and the OC-SVM algorithm): (a) Random Forest (RF); (b) Adaptive Boosting (ADBC); (c) Gradient Boosting (GBDT); (d) stacking (ENSEMBLE).
Figure 12. Percentage of flood susceptibility levels for different models: Random Forest (RF); Adaptive Boosting (ADBC); Gradient Boosting (GBDT); stacking (ENSEMBLE).
Figure 13. Summary and importance plot of the features derived from SHAP values for different models: (a) Random Forest (RF); (b) Adaptive Boosting (ADBC); (c) Gradient Boosting (GBDT); (d) stacking (ENSEMBLE).
Figure 14. Percentage of cropland and impervious surface in flood susceptibility levels according to ENSEMBLE.
Figure 15. Non-flood points cloud-and-rain plot based on predicted values.
Figure 16. Box plots of non-flood points based on (a) elevation values and (b) drainage density.
Figure 17. FSM map of the Zijiang Basin from 1980 to 2000 and 2001 to 2020.
Figure 18. Flood susceptibility risk level change map of the Zijiang River Basin from 1980 to 2000 and 2001 to 2020.
18 pages, 3792 KiB  
Article
Dynamic Classifier Auditing by Unsupervised Anomaly Detection Methods: An Application in Packaging Industry Predictive Maintenance
by Fernando Mateo, Joan Vila-Francés, Emilio Soria-Olivas, Marcelino Martínez-Sober, Juan Gómez-Sanchis and Antonio José Serrano-López
Appl. Sci. 2025, 15(2), 882; https://doi.org/10.3390/app15020882 - 17 Jan 2025
Viewed by 504
Abstract
Predictive maintenance in manufacturing industry applications is a challenging research field. Packaging machines are widely used in a large number of logistic companies’ warehouses and must be working uninterruptedly. Traditionally, preventive maintenance strategies have been carried out to improve the performance of these machines. However, these kinds of policies do not take into account the information provided by the sensors implemented in the machines. This paper presents an expert system for the automatic estimation of work orders to implement predictive maintenance policies for packaging machines. The central innovation lies in a two-stage process: a classifier generates a binary decision on whether a machine requires maintenance, and an unsupervised anomaly detection module subsequently audits the classifier’s probabilistic output to refine and interpret its predictions. By leveraging the classifier to condense sensor data and applying anomaly detection to its output, the system optimizes the decision reliability. Three anomaly detection methods were evaluated: One-Class Support Vector Machine (OCSVM), Minimum Covariance Determinant (MCD), and a majority (hard) voting ensemble of the two. All anomaly detection methods improved the baseline classifier’s performance, with the majority voting ensemble achieving the highest F1 score. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Industry)
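The auditing stage described above can be sketched as follows, assuming the classifier's probabilistic output is summarized in fixed-length windows: an OCSVM and an MCD-based detector (scikit-learn's EllipticEnvelope) each vote, and a hard vote of the two flags untrustworthy decisions. All data below are synthetic placeholders.

```python
# OCSVM + MCD hard-voting audit of a classifier's probabilistic output.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(5)
probs_hist = rng.beta(2, 8, (500, 10))          # hypothetical historical windows of classifier probabilities
probs_new = rng.beta(8, 2, (20, 10))            # drifted behaviour to be audited

ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(probs_hist)
mcd = EllipticEnvelope(contamination=0.05, random_state=0).fit(probs_hist)
votes = (ocsvm.predict(probs_new) == -1).astype(int) + (mcd.predict(probs_new) == -1).astype(int)
print("windows flagged by the hard-voting ensemble:", int((votes == 2).sum()), "of", len(probs_new))
```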
Figures:
Figure 1. Automatic fixed wrapping machine F-2200 (Aranco, Valencia, Spain). Source: https://www.aranco.com/en/services/sie-wrappers/fixed-wrapping-machine-f-2200 (accessed on 16 December 2024).
Figure 2. Predictive maintenance expert system. The main contribution of this work is the improvement of the existing classification module by means of unsupervised AD techniques (AD module).
Figure 3. Anomaly detection streaming procedure for online auditing of the classifier.
Figure 4. Proposed architecture for the AD module.
Figure 5. Auditing example for one of the machines using OCSVM. The output from the classifier and the AD model output are represented as different lines, and the shaded part of each signal indicates whether the threshold to call for a work order has been exceeded.
Figure 6. Box plots illustrating the differences between the F1 score distributions obtained by the compared methods.
25 pages, 2548 KiB  
Article
Efficient Real-Time Anomaly Detection in IoT Networks Using One-Class Autoencoder and Deep Neural Network
by Aya G. Ayad, Mostafa M. El-Gayar, Noha A. Hikal and Nehal A. Sakr
Electronics 2025, 14(1), 104; https://doi.org/10.3390/electronics14010104 - 30 Dec 2024
Viewed by 1148
Abstract
In the face of growing Internet of Things (IoT) security challenges, traditional Intrusion Detection Systems (IDSs) fall short due to IoT devices’ unique characteristics and constraints. This paper presents an effective, lightweight detection model that strengthens IoT security by addressing the high dimensionality of IoT data. This model merges an asymmetric stacked autoencoder with a Deep Neural Network (DNN), applying one-class learning. It achieves a high detection rate with minimal false positives in a short time. Compared with state-of-the-art approaches based on the BoT-IoT dataset, it shows a higher detection rate of up to 96.27% in 0.27 s. Also, the model achieves an accuracy of 99.99%, precision of 99.21%, and f1 score of 97.69%. These results demonstrate the effectiveness and significance of the proposed model, confirming its potential for reliable deployment in real IoT security problems. Full article
(This article belongs to the Special Issue AI in Cybersecurity, 2nd Edition)
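A small sketch of the threshold selection described above (and shown in Figure 4 of the paper): the decision threshold on the reconstruction error is taken near the intersection of the precision and recall curves. The error values below are synthetic stand-ins for autoencoder reconstruction errors on a labeled validation set.

```python
# Choosing an anomaly threshold where precision and recall curves intersect.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(6)
err = np.concatenate([rng.gamma(2.0, 0.05, 1000), rng.gamma(6.0, 0.08, 100)])  # benign vs. attack reconstruction errors
y = np.concatenate([np.zeros(1000), np.ones(100)])                             # validation labels

prec, rec, thr = precision_recall_curve(y, err)
cross = int(np.argmin(np.abs(prec[:-1] - rec[:-1])))     # index closest to the precision/recall intersection
print("chosen threshold:", round(float(thr[cross]), 4))
```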
Figures:
Figure 1. Framework of the proposed model.
Figure 2. Structure of the autoencoder.
Figure 3. Structure of the asymmetric stacked autoencoder.
Figure 4. Determination of the optimal threshold based on the intersection points between the precision and recall curves.
Figure 5. Example subset of the BoT-IoT dataset.
Figure 6. Testbed setup for the BoT-IoT dataset.
Figure 7. In-depth analysis of class-wise metrics for (a) DNN, (b) Sigmoid, (c) OCSVM, (d) IF.
Figure 8. ROC curve for the proposed model using the 20-dimensional feature space.
Figure 9. Confusion matrix for the proposed model using the 20-dimensional feature space.
26 pages, 2059 KiB  
Article
Continual Semi-Supervised Malware Detection
by Matthew Chin and Roberto Corizzo
Mach. Learn. Knowl. Extr. 2024, 6(4), 2829-2854; https://doi.org/10.3390/make6040135 - 10 Dec 2024
Viewed by 1165
Abstract
Detecting malware has become extremely important with the increasing exposure of computational systems and mobile devices to online services. However, the rapidly evolving nature of malicious software makes this task particularly challenging. Despite the significant number of machine learning works for malware detection proposed in the last few years, limited interest has been devoted to continual learning approaches, which could allow models to showcase effective performance in challenging and dynamic scenarios while being computationally efficient. Moreover, most of the research works proposed thus far adopt a fully supervised setting, which relies on fully labelled data and appears to be impractical in a rapidly evolving malware landscape. In this paper, we address malware detection from a continual semi-supervised one-class learning perspective, which only requires normal/benign data and empowers models with a greater degree of flexibility, allowing them to detect multiple malware types with different morphology. Specifically, we assess the effectiveness of two replay strategies on anomaly detection models and analyze their performance in continual learning scenarios with three popular malware detection datasets (CIC-AndMal2017, CIC-MalMem-2022, and CIC-Evasive-PDFMal2022). Our evaluation shows that replay-based strategies can achieve competitive performance in terms of continual ROC-AUC with respect to the considered baselines and bring new perspectives and insights on this topic. Full article
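The replay-based continual one-class workflow described above can be sketched roughly as follows: each training concept contains benign data only, a bounded buffer keeps a random subset of past concepts, and the one-class model (LOF here, one of the detectors the paper considers) is refit on the buffer plus the current concept. The concepts, budget, and random replay strategy below are illustrative placeholders, and random replay is only one of the two variants mentioned.

```python
# Random experience replay for continual one-class (benign-only) malware detection.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)
concepts = [rng.normal(loc=i, scale=1.0, size=(300, 16)) for i in range(3)]   # benign-only training concepts
buffer, budget = np.empty((0, 16)), 200                                        # bounded replay buffer

for c in concepts:
    train = np.vstack([buffer, c])                                             # replayed data + current concept
    model = LocalOutlierFactor(novelty=True, n_neighbors=20).fit(train)
    replay_idx = rng.choice(len(c), size=min(budget // len(concepts), len(c)), replace=False)
    buffer = np.vstack([buffer, c[replay_idx]])[-budget:]                      # keep the buffer within budget

drifted = rng.normal(10, 1, (1, 16))                                           # a point unlike any seen concept
print("buffer size:", len(buffer), "| score on drifted point:", float(model.decision_function(drifted)[0]))
```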
Figures:
Figure 1. Semi-supervised one-class continual malware detection workflow consisting of model training and evaluation phases. Training concepts C_i contain exclusively normal data, whereas evaluation concepts E_i contain normal and anomalous data points. The experience replay component updates the replay buffer R each time a new training concept C_i is presented, according to the budget B. Two strategies are used: Random and Selective.
Figure 2. Example of model evaluation in continual malware detection: concept-level ROC-AUC as a heatmap (a) and as a line plot showing performance over time (b). CIC-MalMem-2022 dataset; strategy: Cumulative; scenario: A; model: LOF. The results show that learning a new concept without forgetting previous concepts leads to a more comprehensive and robust model, which results in a performance improvement on all other concepts.
Figure 3. Continual ROC-AUC performance (CIC-MalMem-2022 dataset) with different strategies (naive: left; ER, best-performing variant: center; cumulative: right) on single tasks/concepts after learning each task in two scenarios (A, C) with the best-performing one-class model (LOF).
Figure 4. Continual ROC-AUC performance (CIC-Evasive-PDFMal2022 dataset) with different strategies (naive: left; ER, best-performing variant: center; cumulative: right) on single tasks/concepts after learning each task in two scenarios (A, C) with the best-performing one-class model (ABOD).
Figure 5. Continual ROC-AUC performance (CIC-AndMal2017 dataset) with different strategies (naive: left; ER, best-performing variant: center; cumulative: right) on single tasks/concepts after learning each task in two scenarios (A, C) with the best-performing one-class models (ABOD and LOF, respectively).
Figure A1. Visualization of extracted concepts via t-SNE: CIC-MalMem-2022 dataset. Normal class (left) and anomaly class (right).
Figure A2. Visualization of extracted concepts via t-SNE: CIC-Evasive-PDFMal2022 dataset. Normal class (left) and anomaly class (right).
Figure A3. Visualization of extracted concepts via t-SNE: CIC-AndMal-2017 dataset. Normal class (left) and anomaly class (right).
21 pages, 4145 KiB  
Article
UniFlow: Unified Normalizing Flow for Unsupervised Multi-Class Anomaly Detection
by Jianmei Zhong and Yanzhi Song
Information 2024, 15(12), 791; https://doi.org/10.3390/info15120791 - 10 Dec 2024
Viewed by 887
Abstract
Multi-class anomaly detection is more efficient and less resource-consuming in industrial anomaly detection scenes that involve multiple categories or exhibit large intra-class diversity. However, most industrial image anomaly detection methods are developed for one-class anomaly detection, which typically suffer significant performance drops in multi-class scenarios. Research specifically targeting multi-class anomaly detection remains relatively limited. In this work, we propose a powerful unified normalizing flow for multi-class anomaly detection, which we call UniFlow. A multi-cognitive visual adapter (Mona) is employed in our method as the feature adaptation layer to adapt image features for both the multi-class anomaly detection task and the normalizing flow model, facilitating the learning of general knowledge of normal images across multiple categories. We adopt multi-cognitive convolutional networks with high capacity to construct the coupling layers within the normalizing flow model for more effective multi-class distribution modeling. In addition, we employ a multi-scale feature fusion module to aggregate features from various levels, thereby obtaining fused features with enhanced expressive capabilities. UniFlow achieves a class-average image-level AUROC of 99.1% and a class-average pixel-level AUROC of 98.0% on MVTec AD, outperforming the SOTA multi-class anomaly detection methods. Extensive experiments on three benchmark datasets, MVTec AD, VisA, and BTAD, demonstrate the efficacy and superiority of our unified normalizing flow in multi-class anomaly detection. Full article
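The flow-based scoring principle behind the method above can be sketched with a single additive coupling layer: features are mapped toward a standard Gaussian base distribution, and the negative log-likelihood is used as the anomaly score. The tiny translation function below is a placeholder for UniFlow's multi-cognitive coupling networks and Mona adapters, which are not reproduced here; the layer is left untrained purely to illustrate the scoring math.

```python
# Additive coupling layer + Gaussian base distribution: negative log-likelihood as anomaly score.
import numpy as np

rng = np.random.default_rng(8)
W = rng.normal(0, 0.1, (8, 8))                       # placeholder translation network t(x1) = tanh(x1 @ W)

def additive_coupling(x):
    x1, x2 = x[:, :8], x[:, 8:]
    z = np.concatenate([x1, x2 + np.tanh(x1 @ W)], axis=1)
    log_det = 0.0                                    # additive couplings are volume-preserving
    return z, log_det

def anomaly_score(x):
    z, log_det = additive_coupling(x)
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)   # standard Gaussian base log-density
    return -(log_pz + log_det)                       # larger score = more anomalous

normal = rng.normal(0, 1, (5, 16))
odd = rng.normal(4, 1, (5, 16))
print(anomaly_score(normal).mean(), anomaly_score(odd).mean())
```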
Show Figures

Graphical abstract

Figure 1: One-class anomaly detection versus multi-class anomaly detection.
Figure 2: Overview of UniFlow. Mona FA: Mona feature adaptation. Ds: downsample. MC Affine Layer: multi-cognitive affine coupling layer. MC Additive Layer: multi-cognitive additive coupling layer. Two Mona feature adaptation layers are employed to adapt the features from the two stages. The feature maps extracted in the second stage are downsampled to half their original size via average pooling. Notably, Gaussian noise is added to the features only during the training phase.
Figure 3: (a) The architecture of Mona. (b) The architecture of the multi-cognitive convolutional module. (c) Mona-tuning in each SwinBlock.
Figure 4: The architecture of the multi-cognitive additive coupling layer. Multi-cognitive Conv: multi-cognitive convolutional module in Figure 3b. 1×1 Conv: a convolutional layer with a kernel size of 1×1 and a stride of 1, where the output retains the same shape as the input. Global Affine: further scaling and translation applied to the global output. Channel permute: shuffling the order of the channel dimension.
Figure 5: The architecture of the multi-cognitive affine coupling layer. Multi-cognitive Conv: multi-cognitive convolutional module in Figure 3b. 3×3 Conv: a convolutional layer with a kernel size of 3×3 and a stride of 1, where the output has the same spatial dimensions as the input but twice the number of channels. Global Affine: further scaling and translation applied to the global output. Channel permute: shuffling the order of the channel dimension.
Figure 6: Visualization results of anomaly localization for examples from MVTec AD [18] and BTAD [20].
Figure 7: Visualization results of anomaly localization for examples from VisA [19].
Figure 8: The impact of varying feature jittering probability on anomaly detection and localization performance on MVTec AD.
Figure 9: Framework of the convolutional feature adaptation layer.
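For readers unfamiliar with normalizing flows, the sketch below shows a conventional affine coupling layer, the generic building block that UniFlow extends with its multi-cognitive convolutional subnetworks. The layer sizes and names here are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a conventional affine coupling layer; UniFlow replaces the
# plain convolutional subnetwork with multi-cognitive convolutions.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, channels, hidden=256):
        super().__init__()
        # Subnetwork predicting per-channel log-scale and translation for one half.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),  # outputs [log_s, t]
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)              # split along the channel axis
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)               # keep scales bounded for stability
        y2 = x2 * torch.exp(log_s) + t          # affine transform of the second half
        log_det = log_s.flatten(1).sum(-1)      # log|det J| per sample
        return torch.cat([x1, y2], dim=1), log_det

# Example: a feature map with 64 channels keeps its shape but gains a log-det term.
z, log_det = AffineCoupling(64)(torch.randn(2, 64, 16, 16))

Stacking several such layers with channel permutations in between yields an invertible mapping whose exact log-likelihood can be evaluated, which is what makes density-based anomaly scoring with a flow model possible.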
23 pages, 9923 KiB  
Article
Application of Online Anomaly Detection Using One-Class Classification to the Z24 Bridge
by Amro Abdrabo
Sensors 2024, 24(23), 7866; https://doi.org/10.3390/s24237866 - 9 Dec 2024
Viewed by 763
Abstract
The usage of anomaly detection is of critical importance to numerous domains, including structural health monitoring (SHM). In this study, we examine an online setting for damage detection in the Z24 bridge. We evaluate and compare the performance of the elliptic envelope, incremental [...] Read more.
Anomaly detection is of critical importance to numerous domains, including structural health monitoring (SHM). In this study, we examine an online setting for damage detection in the Z24 bridge. We evaluate and compare the performance of the elliptic envelope, incremental one-class support vector classification, local outlier factor, half-space trees, and entropy-guided envelopes. Our findings demonstrate that XGBoost exhibits enhanced performance in identifying a limited set of significant features. Additionally, we present a novel approach to manage drift through the application of entropy measures to structural state instances. The study is the first to assess the applicability of one-class classification for anomaly detection on the short-term structural health data of the Z24 bridge. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Reduction of a feature proportional to a stiffness that undergoes natural degradation. As stiffness is affected by damage, this is also a damage-sensitive feature (DSF).
Figure 2: Wireframe of the Z24 bridge in ANSYS 2023 R1. The Z-axis defines the longitudinal direction, the Y-axis the vertical, and the X-axis the transversal. Sensor R1 is at location B, sensor R2 at A, and sensor R3 at C. Only these sensors are used, as they were the only sensors not moved during the short-term monitoring campaign.
Figure 3: Reduction in the natural frequency of the second mode (near 5 Hz), from the power spectral density along R2T, visible due to damage progression.
Figure 4: A t-SNE visualization of the training set. Labels correspond to the states in Table 1.
Figure 5: PCA applied over different feature subsets. (a) Features from Table 2 applied to all channels (R1V, R2L, R2V, R2T, R3V). (b) Features from the PSD of the channels. (c) Features derived from the transmittance with respect to R1V of the remaining channels.
Figure 6: Supervised feature selection framework. Models M are used for feature selection, and M′ are used for assessing the quality of the found features. Here, M can be KNN, SVC, or XGBoost, where M′ is correspondingly Relief feature selection, logistic regression, and XGBoost.
Figure 7: Four most important features extracted by XGBoost in the third step of Figure 6.
Figure 8: Four most important features extracted by logistic regression in the third step of Figure 6.
Figure 9: Four most important features extracted by Relief in the third step of Figure 6.
Figure 10: PCA of the top four features of ReliefF (Figure 9), XGBoost (Figure 7), and logistic regression (Figure 8). Damaged-state points are in red, healthy-state points in blue.
Figure 11: CV-scores obtained from step 3 of Figure 6 for supervised classification methods. XGBoost achieves the highest CV-score of 0.95 with four features.
Figure 12: Standardization and PCA on data from the first state.
Figure 13: After PCA and standardization have been initialized as in Figure 12, the OCC model is trained using the procedure shown here. For cross-validation, the performance on the omitted healthy scenario is evaluated instead of the unseen damage set.
Figure 14: Example of a half-space tree for two-dimensional data. Red and blue regions correspond to anomalous and non-anomalous regions, respectively.
Figure 15: XGBoost demonstrated superior disentanglement with regard to the PCA components; all PCA components have an approximately one-to-one mapping to the original features. (a) Weight magnitudes of the PCA eigenvectors, used in ψ, on the original features, denoted ϕ. (b) Correlation matrix of the original features.
Figure 16: Projection of the decision surface for (a) the elliptic envelope, (b) one-class SVC, (c) half-space trees, (d) local outlier factor, and (e) the last elliptic envelope formed in entropy-guided envelopes onto two-dimensional PCA space. Negative regions are where the model outputs no anomaly; positive regions are anomalous.
Figure 17: Accuracy categorized by damage type. (a) Accuracy for concrete damage detection. (b) Accuracy for tendon and anchor damage detection. (c) Accuracy for landslide damage detection. (d) Accuracy for the healthy domain (1 − FPR).
Figure 18: Anomaly scores for the different OCC techniques. Color intensity correlates with the anomaly score; false predictions are shown with a cross. The blue region contains the points seen during training that are predicted as healthy/non-anomalous, and red corresponds to the damage evaluation set. For states, refer to Table 1.
Figure 19: Z24 model in ANSYS 2023 R1. The outermost piers are embedded within the ground elevations and hence are not visible. Two of the four features used, listed in Figure 7, are shown in the bottom-right and bottom-left panels. (a) Second mode shape of the Z24 bridge. (b) Fourth mode shape of the Z24 bridge. (c) Transmittance of R2L relative to R1V; the green windows mark the regions from which the relative positions of the peaks are extracted, constituting the first and last features of Figure 7.
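As a rough illustration of the one-class pipeline the paper describes (standardization and PCA fitted on healthy-state data, then an outlier model scoring incoming instances), the scikit-learn sketch below uses synthetic stand-in features rather than the Z24 measurements; the contamination and nu settings are illustrative assumptions.

# Sketch: fit scaler, PCA, and two one-class models on healthy data, then score a stream.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.covariance import EllipticEnvelope
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(500, 4))          # stand-in for healthy damage-sensitive features
X_stream = rng.normal(size=(100, 4)) + 2.0     # stand-in for incoming (shifted/damaged) data

scaler = StandardScaler().fit(X_healthy)
pca = PCA(n_components=2).fit(scaler.transform(X_healthy))
Z_healthy = pca.transform(scaler.transform(X_healthy))

ee = EllipticEnvelope(contamination=0.05).fit(Z_healthy)
ocsvm = OneClassSVM(nu=0.05, kernel="rbf").fit(Z_healthy)

Z_stream = pca.transform(scaler.transform(X_stream))
print("elliptic envelope anomaly rate:", np.mean(ee.predict(Z_stream) == -1))
print("one-class SVM anomaly rate:   ", np.mean(ocsvm.predict(Z_stream) == -1))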
14 pages, 2267 KiB  
Article
Aero-Engine Fault Detection with an LSTM Auto-Encoder Combined with a Self-Attention Mechanism
by Wenyou Du, Jingyi Zhang, Guanglei Meng and Haoran Zhang
Machines 2024, 12(12), 879; https://doi.org/10.3390/machines12120879 - 4 Dec 2024
Viewed by 728
Abstract
The safe operation of aero-engines is crucial for ensuring flight safety, and effective fault detection methods are fundamental to achieving this objective. In this paper, we propose a novel approach that integrates an auto-encoder with long short-term memory (LSTM) networks and a self-attention [...] Read more.
The safe operation of aero-engines is crucial for ensuring flight safety, and effective fault detection methods are fundamental to achieving this objective. In this paper, we propose a novel approach that integrates an auto-encoder with long short-term memory (LSTM) networks and a self-attention mechanism for the anomaly detection of aero-engine time-series data. The dataset utilized in this study was simulated from real data and injected with fault information. A fault detection model is developed utilizing normal data samples for training and faulty data samples for testing. The LSTM auto-encoder processes the time-series data through an encoder–decoder architecture, extracting latent representations and reconstructing the original inputs. Furthermore, the self-attention mechanism captures long-range dependencies and significant features within the sequences, thereby enhancing the detection accuracy of the model. Comparative analyses with the traditional LSTM auto-encoder, one-class support vector machines (OC-SVM), and isolation forests (IF) substantiate the feasibility and effectiveness of the proposed method, highlighting its potential value in engineering applications. Full article
Show Figures

Figure 1: Auto-encoder.
Figure 2: LSTM structure.
Figure 3: Self-attention structure.
Figure 4: SLAE fault detection process.
Figure 5: Fault detection flow chart.
Figure 6: Fault 1 raw data.
Figure 7: Fault 2 raw data.
Figure 8: Fault 3 raw data.
Figure 9: Fault 1 detection. (a) SLAE fault detection; (b) LSTM fault detection; (c) OC-SVM fault detection; (d) IF fault detection.
Figure 10: Fault 2 detection. (a) SLAE fault detection; (b) LSTM fault detection; (c) OC-SVM fault detection; (d) IF fault detection.
Figure 11: Fault 3 detection. (a) SLAE fault detection; (b) LSTM fault detection; (c) OC-SVM fault detection; (d) IF fault detection.
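The sketch below outlines a plain LSTM auto-encoder of the kind the paper builds on: the encoder compresses a sensor window into a latent vector, the decoder reconstructs the window, and the reconstruction error serves as the anomaly score. The self-attention block of the proposed SLAE model is omitted, and the layer sizes and window shape are illustrative assumptions.

# Sketch of an LSTM auto-encoder for reconstructing multivariate sensor windows.
import torch
import torch.nn as nn

class LSTMAutoEncoder(nn.Module):
    def __init__(self, n_features, latent=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, latent, batch_first=True)
        self.out = nn.Linear(latent, n_features)

    def forward(self, x):                              # x: (batch, time, features)
        _, (h, _) = self.encoder(x)                    # final hidden state as the latent code
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1) # repeat the code over the time axis
        dec, _ = self.decoder(z)
        return self.out(dec)                           # reconstruction of the window

# Anomaly score = mean squared reconstruction error per window; a threshold fitted
# on normal training errors (e.g., mean + 3*std) flags faulty windows.
model = LSTMAutoEncoder(n_features=8)
x = torch.randn(16, 50, 8)                             # stand-in batch of sensor windows
score = ((model(x) - x) ** 2).mean(dim=(1, 2))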
25 pages, 4000 KiB  
Article
CASSAD: Chroma-Augmented Semi-Supervised Anomaly Detection for Conveyor Belt Idlers
by Fahad Alharbi, Suhuai Luo, Abdullah Alsaedi, Sipei Zhao and Guang Yang
Sensors 2024, 24(23), 7569; https://doi.org/10.3390/s24237569 - 27 Nov 2024
Viewed by 748
Abstract
Idlers are essential to conveyor systems, as well as supporting and guiding belts to ensure production efficiency. Proper idler maintenance prevents failures, reduces downtime, cuts costs, and improves reliability. Most studies on idler fault detection rely on supervised methods, which depend on large [...] Read more.
Idlers are essential to conveyor systems, supporting and guiding belts to ensure production efficiency. Proper idler maintenance prevents failures, reduces downtime, cuts costs, and improves reliability. Most studies on idler fault detection rely on supervised methods, which depend on large labelled datasets for training. However, acquiring such labelled data is often challenging in industrial environments due to the rarity of faults and the labour-intensive nature of the labelling process. To address this, we propose the chroma-augmented semi-supervised anomaly detection (CASSAD) method, designed to perform effectively with limited labelled data. At the core of CASSAD is the one-class SVM (OC-SVM), a model specifically developed for anomaly detection in cases where labelled anomalies are scarce. We also compare CASSAD’s performance with other common models like the local outlier factor (LOF) and isolation forest (iForest), evaluating each with the area under the curve (AUC) to assess their ability to distinguish between normal and anomalous data. CASSAD introduces chroma features, such as chroma energy normalised statistics (CENS), the constant-Q transform (CQT), and the chroma short-time Fourier transform (STFT), enhanced through filtering to capture rich harmonic information from idler sounds. To reduce feature complexity, we utilize the mean and standard deviation (std) across chroma features. The dataset is further augmented using additive white Gaussian noise (AWGN). Testing on an industrial dataset of idler sounds, CASSAD achieved an AUC of 96% and an accuracy of 91%, surpassing a baseline autoencoder and other traditional models. These results demonstrate the model’s robustness in detecting anomalies with minimal dependence on labelled data, offering a practical solution for industries with limited labelled datasets. Full article
Show Figures

Figure 1: Autoencoder structure.
Figure 2: Overview of the proposed chroma-augmented semi-supervised anomaly detection (CASSAD) model.
Figure 3: Analysis of different chroma features across four stages: original, harmonic, filtered, and smoothed, showing (a) STFT, (b) CQT, and (c) CENS.
Figure 4: Distribution of the mean and standard deviation (STD) of chroma features (CQT, CENS, and STFT) for normal and abnormal signals.
Figure 5: ROC curves with the best AUC for LOF, isolation forest, and one-class SVM models.
Figure 6: Analysis results from training the autoencoder on chroma features. Top row: original and reconstructed spectrograms (left) and the loss distribution for chroma_cens_features using the mean absolute error (MAE) function (right). Bottom row: training and validation loss curves over 30 epochs showing a minimal gap and no significant overfitting or underfitting.
Figure 7: Isolation forest detection results for idler components (mean vs. standard deviation, with/without PCA), highlighting the model's performance across different chroma features and aggregation methods.
Figure 8: LOF detection results for idler components (mean vs. standard deviation, with/without PCA), highlighting the model's performance across different chroma features and aggregation methods.
Figure 9: Visualisation of the best results for LOF + PCA, isolation forest, and the proposed CASSAD model. Top row: confusion matrices showing classification results. Second row: ROC curves, with the proposed CASSAD model reaching the highest AUC of 0.96. Third row: precision-recall curves, where the proposed CASSAD model without PCA shows the best balance. Bottom row: t-SNE plots illustrating data separability, with the proposed CASSAD model achieving the clearest distinction.
Figure 10: Comparison of AUC and accuracy metrics for the proposed model using all chroma features, both with and without noise filtering, highlighting the performance improvements achieved by CASSAD.
Figure 11: Consumption time comparison across models and features.
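The following sketch illustrates the feature side of the approach, assuming librosa for chroma extraction: each clip is summarised by the mean and standard deviation of its STFT, CQT, and CENS chroma matrices and scored with a one-class SVM trained on normal clips only. The audio signal, training set, and nu value are placeholders rather than the paper's data or settings.

# Sketch: chroma mean/std features per clip, then one-class SVM scoring and AUC.
import numpy as np
import librosa
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

def chroma_vector(y, sr):
    y_h = librosa.effects.harmonic(y)                 # keep the harmonic component
    feats = [
        librosa.feature.chroma_stft(y=y_h, sr=sr),
        librosa.feature.chroma_cqt(y=y_h, sr=sr),
        librosa.feature.chroma_cens(y=y_h, sr=sr),
    ]
    # Mean and std over time for each 12-bin chroma matrix -> one compact vector.
    return np.concatenate([np.r_[f.mean(axis=1), f.std(axis=1)] for f in feats])

sr = 22050
t = np.arange(2 * sr) / sr
y = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(t.size)  # stand-in "idler sound"
x = chroma_vector(y, sr)

X_train = np.tile(x, (20, 1)) + 0.01 * np.random.randn(20, x.size)  # toy normal clips
X_test = np.vstack([X_train[:5], X_train[:5] + 0.5])                # toy normal + shifted clips
y_true = np.r_[np.zeros(5), np.ones(5)]                             # 1 marks an anomaly

oc = OneClassSVM(nu=0.1, kernel="rbf").fit(X_train)
scores = -oc.decision_function(X_test)                # higher score = more anomalous
print("AUC:", roc_auc_score(y_true, scores))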
17 pages, 6729 KiB  
Article
Anomaly Detection Method for Harmonic Reducers with Only Healthy Data
by Yuqing Li, Linghui Zhu, Minqiang Xu and Yunzhao Jia
Sensors 2024, 24(23), 7435; https://doi.org/10.3390/s24237435 - 21 Nov 2024
Viewed by 618
Abstract
A harmonic reducer is an important component of industrial robots. In practical applications, it is difficult to obtain enough anomaly data from error cases for the supervised training of models. Whether the information contained in regular features is sensitive to anomaly detection is [...] Read more.
A harmonic reducer is an important component of industrial robots. In practical applications, it is difficult to obtain enough anomaly data from error cases for the supervised training of models, and whether the information contained in regular features is sensitive to anomaly detection is unknown. In this paper, we propose an anomaly detection framework for a harmonic reducer that uses only healthy data. We treat an auto-encoder trained on only healthy features as a feature mapping, in which the difference between the output and the input constitutes a new high-dimensional feature space that retains information relevant only to anomalies. Compared to the original feature space, this space is more sensitive to abnormal data. The mapped features are then fed into the OCSVM to preserve the feature details of the abnormal information. The effectiveness of this method was validated on multiple sets of data collected from harmonic reducers. Three different residual calculations and four different AE models were used, showing that the method outperforms an AE or an OCSVM alone. It is also verified that the method outperforms other typical anomaly detection methods. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1: Unit of LSTM.
Figure 2: Structure of the AE-OCSVM model.
Figure 3: Harmonic reducer test bench.
Figure 4: Faulty parts from Ref. [7]: (a) circular spline gear crack fault; (b) flex-spline gear wear fault; (c) flex-spline gear crack fault; (d) flexible bearing outer race semi-crack fault; (e) flexible bearing inner race wear; (f) cross roller bearing outer race fault; (g) cross roller bearing inner race fault.
Figure 5: Vibration signal after standardization: (a) normal; (b)-(h) F1-F7.
Figure 6: Structure of AE.
Figure 7: Results with different res.
Figure 8: Part of the MSE of different models. (a)-(f) represent six different working conditions. The first row of images shows the MSE of each test sample; the second row shows the probability distribution of the MSE. Blue represents the probability distribution of the training data, red the normal data in the test set, and yellow the abnormal data in the test set. The purple vertical line marks the model's threshold.
Figure 9: Features of different models after PCA. (a) Original features; (b) SpAE-OCSVM; (c) LSTMAE-OCSVM; (d) StAE-OCSVM; (e) VAE-OCSVM.
Figure 10: Results of AE-OCSVM, AE, and OCSVM.
Figure 11: AUC of different methods.
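A minimal sketch of the AE-OCSVM idea described above, assuming a simple dense auto-encoder and an element-wise output-minus-input residual: the auto-encoder is trained on healthy features only, and the residuals are used as the new feature space for a one-class SVM. The dimensions, training loop, and residual definition are illustrative assumptions, not the authors' exact models.

# Sketch: auto-encoder residuals as mapped features, then a one-class SVM on them.
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

class AE(nn.Module):
    def __init__(self, dim, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

dim = 32
ae = AE(dim)
X_healthy = torch.randn(256, dim)                 # stand-in for healthy feature vectors
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                              # short reconstruction-only training
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(X_healthy), X_healthy)
    loss.backward()
    opt.step()

with torch.no_grad():
    residual = (ae(X_healthy) - X_healthy).numpy()  # mapped (residual) feature space
ocsvm = OneClassSVM(nu=0.05).fit(residual)          # fitted on healthy residuals only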
14 pages, 3986 KiB  
Article
Anomaly Detection Utilizing One-Class Classification—A Machine Learning Approach for the Analysis of Plant Fast Fluorescence Kinetics
by Nam Trung Tran
Stresses 2024, 4(4), 773-786; https://doi.org/10.3390/stresses4040051 - 18 Nov 2024
Viewed by 773
Abstract
The analysis of fast fluorescence kinetics, specifically through the JIP test, is a valuable tool for identifying and characterizing plant stress. However, interpreting OJIP data requires a comprehensive understanding of their underlying theory. This study proposes a Machine Learning-based approach using a One-Class [...] Read more.
The analysis of fast fluorescence kinetics, specifically through the JIP test, is a valuable tool for identifying and characterizing plant stress. However, interpreting OJIP data requires a comprehensive understanding of their underlying theory. This study proposes a Machine Learning-based approach using a One-Class Support Vector Machine anomaly detection model to effectively categorize OJIP measurements into “normal”, representing healthy plants, and “anomalies”. This approach was validated using a previously published dataset. A subgroup of the identified “anomalies” was clearly linked to stress-induced reductions in photosynthesis. Furthermore, the percentage of these “anomalies” showed a meaningful correlation with both the progression and severity of stress. The results highlight the still largely unexploited potential of Machine Learning in OJIP analysis. Full article
(This article belongs to the Collection Feature Papers in Plant and Photoautotrophic Stresses)
Show Figures

Figure 1: Identification of 24 anomaly measurements (red) as those with significantly lower FV/FM values compared to the initial measurements. The outlier threshold was set at the latter's mean minus three times the standard deviation.
Figure 2: Fine-tuning of the classification model. Models were trained on the training dataset, with nu values ranging from 0.01 to 0.5, and tested on the fine-tuning dataset. Green: Type I error ("normal" misidentified as "anomalies"). Red: Type II error ("anomalies" misidentified as "normal"). The fractions show the number of misidentifications over the total number of predictions. The nu value of 0.05 (filled triangles) was used for all subsequent predictions.
Figure 3: Comparison of the photosynthetic performance between the "anomaly" (red) and "normal" (blue) groups using the standard JIP test. For a more detailed understanding of the presented OJIP parameters, please see Stirbet and Govindjee, 2011 [11]. The values are normalized, with values from "normal" samples set to 1. The small concentric circles on the right (with the red numbers) show the scales.
Figure 4: Comparison between "anomaly" (A) and "normal" (N) measurements in three field experiments and in the greenhouse experiment. Two commonly used OJIP metrics are used for comparison: FV/FM, the maximum photochemical quantum yield of PS II, and PIABS, the performance index on an energy absorption basis. Stars (*) denote statistically significant differences (p < 0.05) between "anomaly" and "normal" measurements in each experiment.
Figure 5: UMAP visualization of all measurements.
Figure 6: The photosynthetic performance of the "normal" (blue), "anomaly" type 1 (red), and type 2 (green) groups compared using the standard JIP test. For a more detailed understanding of the presented OJIP parameters, please see Stirbet and Govindjee, 2011 [11]. Box plots display the comparison of two commonly used OJIP metrics: FV/FM, the maximum photochemical quantum yield of PS II, and PIABS, the performance index on an energy absorption basis. The values are normalized, with values from "normal" samples set to 1. The letters (a-c) indicate groups that are statistically significantly different from each other; samples with the same letters are not statistically different.
Figure 7: The percentage of detected "anomaly" types 1 and 2 on each measurement day across the four experiments.
Figure 8: Nine features extracted from the OJIP curve for classification: baseline fluorescence intensity (FO); peak fluorescence intensity (FM); fluorescence intensities at five specific time marks (50 µs, 100 µs, 300 µs, 2 ms, and 30 ms: F1, F2, F3, F4, and F5); the time at which the maximum fluorescence value FM was reached (Tfm); and the area above the fluorescence curve between FO and FM (Area).
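As a small illustration of the classification step, the sketch below fits a One-Class SVM on standardized "normal" OJIP feature vectors and sweeps the nu parameter, mirroring the fine-tuning procedure shown in Figure 2; the feature values are random placeholders, not the published dataset.

# Sketch: One-Class SVM on nine OJIP-derived features with a simple nu sweep.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(300, 9))        # stand-ins for FO, FM, F1-F5, Tfm, Area
X_tune = rng.normal(size=(100, 9))          # stand-in fine-tuning set

scaler = StandardScaler().fit(X_normal)
for nu in (0.01, 0.05, 0.1, 0.2):
    model = OneClassSVM(nu=nu, kernel="rbf").fit(scaler.transform(X_normal))
    flagged = np.mean(model.predict(scaler.transform(X_tune)) == -1)
    print(f"nu={nu:.2f}  fraction flagged as anomalies: {flagged:.2f}")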