
Search Results (156)

Search Parameters:
Keywords = NSL-KDD

31 pages, 2149 KiB  
Article
Enhanced Deep Autoencoder-Based Reinforcement Learning Model with Improved Flamingo Search Policy Selection for Attack Classification
by Dharani Kanta Roy and Hemanta Kumar Kalita
J. Cybersecur. Priv. 2025, 5(1), 3; https://doi.org/10.3390/jcp5010003 - 14 Jan 2025
Abstract
Intrusion detection has been a widely surveyed topic for decades, as network attacks continue to grow at a tremendous rate. This has heightened the need for network security now that web-based communication systems are so widespread. The proposed work introduces an intelligent semi-supervised intrusion detection system based on different algorithms to classify network attacks accurately. Initially, pre-processing is accomplished using null-value dropping and standard-scaler normalization. After pre-processing, an enhanced Deep Reinforcement Learning (EDRL) model is employed to extract high-level representations and learn complex patterns from data by means of interaction with the environment. The enhancement of deep reinforcement learning is made by associating a deep autoencoder (AE) and an improved flamingo search algorithm (IFSA) to approximate the Q-function and optimal policy selection. After feature representation, a support vector machine (SVM) classifier, which discriminates the input into normal and attack instances, is employed for classification. The presented model is implemented in Python and evaluated using the UNSW-NB15, CICIDS2017, and NSL-KDD datasets. The overall classification accuracy is 99.6%, 99.93%, and 99.42% on the UNSW-NB15, CICIDS2017, and NSL-KDD datasets, respectively, which is higher than existing detection frameworks. Full article
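A minimal, hypothetical sketch of the final stage of this pipeline — standard scaling, a small autoencoder for feature extraction, and an SVM on the encoded features — using scikit-learn and PyTorch. The reinforcement-learning Q-function approximation and the improved flamingo search policy selection are omitted, and all layer sizes, training lengths, and the synthetic data are illustrative assumptions rather than the paper's configuration:

```python
# Sketch: autoencoder feature extraction feeding an SVM classifier.
# Layer sizes, epochs, and the synthetic data are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41)).astype("float32")   # stand-in for NSL-KDD features
y = rng.integers(0, 2, size=1000)                    # 0 = normal, 1 = attack

X = StandardScaler().fit_transform(X).astype("float32")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

class AE(nn.Module):
    def __init__(self, d_in, d_hid=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_hid))
        self.dec = nn.Sequential(nn.Linear(d_hid, 32), nn.ReLU(), nn.Linear(32, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AE(X.shape[1])
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
xt = torch.from_numpy(X_tr)
for _ in range(50):                        # short unsupervised pre-training loop
    recon, _ = ae(xt)
    loss = nn.functional.mse_loss(recon, xt)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    z_tr = ae.enc(torch.from_numpy(X_tr)).numpy()
    z_te = ae.enc(torch.from_numpy(X_te)).numpy()

clf = SVC(kernel="rbf").fit(z_tr, y_tr)    # discriminates normal vs. attack
print("held-out accuracy:", clf.score(z_te, y_te))
```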
Show Figures
Figure 1: Flow diagram of the proposed attack classification and response scheme.
Figure 2: Architecture of enhanced deep reinforcement learning.
Figure 3: Semi-supervised workflow.
Figure 4: Deep AE model used for approximating the Q-function.
Figure 5: Model of SVM used for attack classification.
Figure 6: Confusion matrix of the proposed work.
Figure 7: Precision comparison of the proposed and existing works.
Figure 8: Recall comparison of the proposed and existing works.
Figure 9: F1-score comparison of the proposed and existing works.
Figure 10: Accuracy comparison of the proposed and existing works.
Figure 11: FPR comparison of the proposed and existing works.
Figure 12: Kappa score comparison of the proposed and existing works.
Figure 13: MCC comparison of the proposed and existing works.
Figure 14: ROC curve analysis of the proposed work.
26 pages, 1535 KiB  
Article
Optimization Scheme of Collaborative Intrusion Detection System Based on Blockchain Technology
by Jiachen Huang, Yuling Chen, Xuewei Wang, Zhi Ouyang and Nisuo Du
Electronics 2025, 14(2), 261; https://doi.org/10.3390/electronics14020261 - 10 Jan 2025
Viewed by 365
Abstract
In light of the escalating complexity of the cyber threat environment, the role of Collaborative Intrusion Detection Systems (CIDSs) in reinforcing contemporary cybersecurity defenses is becoming ever more critical. This paper presents a Blockchain-based Collaborative Intrusion Detection Framework (BCIDF), an innovative methodology aimed at enhancing the efficacy of threat detection and information dissemination. To address the issue of alert collisions during data exchange, an Alternating Random Assignment Selection Mechanism (ARASM) is proposed. This mechanism aims to optimize the selection process of domain leader nodes, thereby partitioning traffic and reducing the size of conflict domains. Unlike conventional CIDS approaches that typically rely on independent node-level detection, our framework incorporates a Weighted Random Forest (WRF) ensemble learning algorithm, enabling collaborative detection among nodes and significantly boosting the system’s overall detection capability. The viability of the BCIDF framework has been rigorously assessed through extensive experimentation utilizing the NSL-KDD dataset. The empirical findings indicate that BCIDF outperforms traditional intrusion detection systems in terms of detection precision, offering a robust and highly effective solution within the realm of cybersecurity. Full article
(This article belongs to the Special Issue Security and Privacy for AI)
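A rough sketch of the collaborative-detection idea under simplifying assumptions: each node trains a local random forest on its own traffic, and node votes are weighted by validation accuracy. The blockchain layer, ARASM leader-node selection, and the paper's exact Weighted Random Forest scheme are not reproduced; the weighting rule and data split below are illustrative:

```python
# Sketch of weighted collaborative detection: each node trains a local
# RandomForest and its vote is weighted by its own validation accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=30, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

n_nodes = 4
node_X = np.array_split(X_pool, n_nodes)   # each node only sees its own traffic slice
node_y = np.array_split(y_pool, n_nodes)

models, weights = [], []
for Xi, yi in zip(node_X, node_y):
    Xi_tr, Xi_val, yi_tr, yi_val = train_test_split(Xi, yi, test_size=0.2, random_state=0)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xi_tr, yi_tr)
    models.append(rf)
    weights.append(rf.score(Xi_val, yi_val))   # heavier vote for better-performing nodes

weights = np.array(weights) / np.sum(weights)
proba = sum(w * m.predict_proba(X_test)[:, 1] for w, m in zip(weights, models))
y_pred = (proba >= 0.5).astype(int)
print("ensemble accuracy:", np.mean(y_pred == y_test))
```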
Show Figures
Figure 1: Categories of CIDSs.
Figure 2: System framework.
Figure 3: Node state transition.
Figure 4: Term display.
Figure 5: Examples for the PIF. (a) In the initial state, S4 and S5 have not been updated to the latest logs. (b) The status of the system changes after the next heartbeat. (c) In the initial state, S2 and S4 have malfunctioned and are not responding to the leader node. (d) In the next heartbeat, S4 still cannot respond to the leader.
Figure 6: Selection of leader nodes.
Figure 7: Influence of the number of trees and maximum tree depth on accuracy: (a, c) I_G as the impurity metric; (b, d) I_E as the impurity metric.
Figure 8: Comparison of distribution time among different schemes.
Figure 9: Comparison of the distribution time by different numbers.
24 pages, 3385 KiB  
Article
An Improved Binary Simulated Annealing Algorithm and TPE-FL-LightGBM for Fast Network Intrusion Detection
by Yafei Luo, Ruihan Chen, Chuantao Li, Derong Yang, Kun Tang and Jing Su
Electronics 2025, 14(2), 231; https://doi.org/10.3390/electronics14020231 - 8 Jan 2025
Viewed by 283
Abstract
With the rapid proliferation of the Internet, network security issues that threaten users have become increasingly severe, despite the widespread benefits of Internet access. Most existing intrusion detection systems (IDS) suffer from suboptimal performance due to data imbalance and feature redundancy, while also facing high computational complexity in areas such as feature selection and optimization. To address these challenges, this study proposes a novel network intrusion detection method based on an improved binary simulated annealing algorithm (IBSA) and TPE-FL-LightGBM. First, by integrating Focal Loss into the loss function of the LightGBM classifier, we introduce cost-sensitive learning, which effectively mitigates the impact of class imbalance on model performance and enhances the model’s ability to learn difficult-to-classify samples. Next, significant improvements are made to the simulated annealing algorithm, including adaptive adjustments of the initial temperature and Metropolis criterion, the incorporation of multi-neighborhood search strategies, and the integration of an S-shaped transfer function. These improvements enable the IBSA method to achieve efficient optimal feature selection with fewer iterations. Finally, the Tree-structured Parzen Estimator (TPE) algorithm is employed to optimize the structure of the FL-LightGBM classifier, further enhancing its performance. Through comprehensive visual analysis, ablation studies, and comparative experiments on the NSL-KDD and UNSW-NB15 datasets, the reliability of the proposed network intrusion detection method is validated. Full article
(This article belongs to the Special Issue Artificial Intelligence in Cyberspace Security)
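A simplified sketch of binary simulated annealing for wrapper feature selection with a sigmoid (S-shaped) transfer function. The adaptive temperature and Metropolis refinements, the Focal-Loss LightGBM classifier, and TPE tuning from the paper are not reproduced; a plain decision tree stands in as the wrapped model, and all settings and data are assumptions:

```python
# Simplified binary simulated annealing (SA) for wrapper feature selection.
# An S-shaped (sigmoid) transfer function shapes the bit-flip probabilities;
# a decision tree stands in for the paper's FL-LightGBM classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=1500, n_features=40, n_informative=8, random_state=1)

def fitness(mask):
    """Cross-validated accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=1)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

mask = rng.integers(0, 2, size=X.shape[1])
cur_fit = fitness(mask)
best_mask, best_fit = mask.copy(), cur_fit
T = 1.0
for step in range(60):
    cand = mask.copy()
    flip_prob = 1.0 / (1.0 + np.exp(-rng.normal(size=cand.size)))  # S-shaped transfer
    cand[rng.random(cand.size) < 0.15 * flip_prob] ^= 1            # sparse neighbourhood move
    f = fitness(cand)
    # Metropolis criterion: always accept improvements, sometimes accept worse moves.
    if f >= cur_fit or rng.random() < np.exp((f - cur_fit) / max(T, 1e-6)):
        mask, cur_fit = cand, f
        if f > best_fit:
            best_mask, best_fit = cand.copy(), f
    T *= 0.95                                                       # geometric cooling
print("selected features:", int(best_mask.sum()), "cv accuracy:", round(best_fit, 3))
```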
Show Figures
Figure 1: Optimization process of feature selection using IBSA: (a) NSL-KDD dataset; (b) UNSW-NB15 dataset.
Figure 2: ROC curves for the baseline and proposed methods on the NSL-KDD dataset: (a) original method; (b) proposed method.
Figure 3: Comparison of confusion matrices for predicting the test set on the NSL-KDD dataset: (a) original method; (b) proposed method.
Figure 4: ROC curves for the baseline and proposed methods on the UNSW-NB15 dataset: (a) original method; (b) proposed method.
Figure 5: Comparison of confusion matrices for the test set on the UNSW-NB15 dataset: (a) original method; (b) proposed method.
21 pages, 533 KiB  
Article
A Systematic Study of Adversarial Attacks Against Network Intrusion Detection Systems
by Sanidhya Sharma and Zesheng Chen
Electronics 2024, 13(24), 5030; https://doi.org/10.3390/electronics13245030 - 21 Dec 2024
Viewed by 769
Abstract
Network Intrusion Detection Systems (NIDSs) are vital for safeguarding Internet of Things (IoT) networks from malicious attacks. Modern NIDSs utilize Machine Learning (ML) techniques to combat evolving threats. This study systematically examined adversarial attacks originating from the image domain against ML-based NIDSs, while incorporating a diverse selection of ML models. Specifically, we evaluated both white-box and black-box attacks on nine commonly used ML-based NIDS models. We analyzed the Projected Gradient Descent (PGD) attack, which uses gradient descent on input features, transfer attacks, the score-based Zeroth-Order Optimization (ZOO) attack, and two decision-based attacks: Boundary and HopSkipJump. Using the NSL-KDD dataset, we assessed the accuracy of the ML models under attack and the success rate of the adversarial attacks. Our findings revealed that the black-box decision-based attacks were highly effective against most of the ML models, achieving an attack success rate exceeding 86% across eight models. Additionally, while the Logistic Regression and Multilayer Perceptron models were highly susceptible to all the attacks studied, the instance-based ML models, such as KNN and Label Spreading, exhibited resistance to these attacks. These insights will contribute to the development of more robust NIDSs against adversarial attacks in IoT environments. Full article
(This article belongs to the Special Issue Advancing Security and Privacy in the Internet of Things)
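A hedged sketch of one of the attacks studied — white-box PGD against a logistic-regression detector — using the closed-form gradient of the logistic loss and projection onto an L∞ ball. The decision-based (Boundary, HopSkipJump) and score-based (ZOO) attacks are not shown, and the data, step size, and perturbation budget are illustrative:

```python
# Sketch of a white-box PGD attack on a logistic-regression detector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X = MinMaxScaler().fit_transform(X)            # features in [0, 1], as after preprocessing
clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def pgd_attack(x, label, eps=0.1, alpha=0.02, steps=20):
    """Maximize the logistic loss within an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - label) * w                     # d(loss)/dx for logistic regression
        x_adv = x_adv + alpha * np.sign(grad)      # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay in the valid feature range
    return x_adv

X_adv = np.array([pgd_attack(x, yi) for x, yi in zip(X[:200], y[:200])])
print(f"accuracy clean={clf.score(X[:200], y[:200]):.3f}  under PGD={clf.score(X_adv, y[:200]):.3f}")
```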
Show Figures
Figure 1: Proportion of benign and malicious packets in the NSL-KDD training and test datasets.
Figure 2: Detection accuracy of the LGR_s model under adversarial attacks with different perturbation budgets ε.
Figure 3: The L2 and L∞ norms of adversarial examples with different perturbation budgets ε against the LGR_s model.
Figure 4: Detection accuracy of the MLP_s model under adversarial attacks with different perturbation budgets ε.
Figure 5: The L2 and L∞ norms of adversarial examples with different perturbation budgets ε against the MLP_s model.
16 pages, 2963 KiB  
Article
An Entropy-Based Clustering Algorithm for Real-Time High-Dimensional IoT Data Streams
by Ibrahim Mutambik
Sensors 2024, 24(22), 7412; https://doi.org/10.3390/s24227412 - 20 Nov 2024
Viewed by 687
Abstract
The rapid growth of data streams, propelled by the proliferation of sensors and Internet of Things (IoT) devices, presents significant challenges for real-time clustering of high-dimensional data. Traditional clustering algorithms struggle with high dimensionality, memory and time constraints, and adapting to dynamically evolving data. Existing dimensionality reduction methods often neglect feature ranking, leading to suboptimal clustering performance. To address these issues, we introduce E-Stream, a novel entropy-based clustering algorithm for high-dimensional data streams. E-Stream performs real-time feature ranking based on entropy within a sliding time window to identify the most informative features, which are then utilized with the DenStream algorithm for efficient clustering. We evaluated E-Stream using the NSL-KDD dataset, comparing it against DenStream, CluStream, and MR-Stream. The evaluation metrics included the average F-Measure, Jaccard Index, Fowlkes–Mallows Index, Purity, and Rand Index. The results show that E-Stream outperformed the baseline algorithms in both clustering accuracy and computational efficiency while effectively reducing dimensionality. E-Stream also demonstrated significantly less memory consumption and fewer computational requirements, highlighting its suitability for real-time processing of high-dimensional data streams. Despite its strengths, E-Stream requires manual parameter adjustment and assumes a consistent number of active features, which may limit its adaptability to diverse datasets. Future work will focus on developing a fully autonomous, parameter-free version of the algorithm, incorporating mechanisms to handle missing features and improving the management of evolving clusters to enhance robustness and adaptability in dynamic IoT environments. Full article
(This article belongs to the Special Issue Advances in Big Data and Internet of Things)
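A small sketch of the entropy-based ranking step under stated assumptions: per-feature Shannon entropy is estimated from a histogram over a sliding window, and the top-k features are kept (treating higher entropy as more informative here is an assumption about the ranking direction); the DenStream clustering stage is not shown, and the window size, bin count, and synthetic stream are illustrative:

```python
# Sketch of entropy-based feature ranking over a sliding window, as a
# stand-in for E-Stream's ranking step.
import numpy as np

def feature_entropy(window, bins=10):
    """Shannon entropy of each feature, estimated from a histogram."""
    ent = np.empty(window.shape[1])
    for j in range(window.shape[1]):
        counts, _ = np.histogram(window[:, j], bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        ent[j] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
stream = rng.normal(size=(10_000, 40))        # stand-in for a high-dimensional stream
window_size, top_k = 500, 10

for start in range(0, stream.shape[0] - window_size + 1, window_size):
    window = stream[start:start + window_size]
    ranking = np.argsort(feature_entropy(window))[::-1]   # most informative first (assumed)
    selected = ranking[:top_k]
    # window[:, selected] would be handed to the micro-cluster maintenance
    # step (e.g., DenStream) in the full pipeline.
print("last window's selected features:", selected.tolist())
```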
Show Figures
Figure 1: Comparative evaluation of E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset: F-Measure performance and dimensionality efficiency.
Figure 2: Performance of E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset in terms of the Jaccard Index.
Figure 3: Fowlkes–Mallows Index comparison for E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset.
Figure 4: Comparative purity analysis for E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset.
Figure 5: Rand Index comparison for E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset.
Figure 6: Memory usage and computational power comparison of E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset.
Figure 7: Breakdown of the evaluation metrics for E-Stream, DenStream, CluStream, and MR-Stream on the NSL-KDD dataset.
27 pages, 573 KiB  
Article
Machine Learning-Based Methodologies for Cyber-Attacks and Network Traffic Monitoring: A Review and Insights
by Filippo Genuario, Giuseppe Santoro, Michele Giliberti, Stefania Bello, Elvira Zazzera and Donato Impedovo
Information 2024, 15(11), 741; https://doi.org/10.3390/info15110741 - 20 Nov 2024
Viewed by 1201
Abstract
The number of connected IoT devices is increasing significantly due to their many benefits, including automation, improved efficiency and quality of life, and reducing waste. However, these devices have several vulnerabilities that have led to the rapid growth in the number of attacks. Therefore, several machine learning-based intrusion detection system (IDS) tools have been developed to detect intrusions and suspicious activity to and from a host (HIDS—Host IDS) or, in general, within the traffic of a network (NIDS—Network IDS). The proposed work performs a comparative analysis and an ablative study among recent machine learning-based NIDSs to develop a benchmark of the different proposed strategies. The proposed work compares both shallow learning algorithms, such as decision trees, random forests, Naïve Bayes, logistic regression, XGBoost, and support vector machines, and deep learning algorithms, such as DNNs, CNNs, and LSTM, whose approach is relatively new in the literature. Also, the ensembles are tested. The algorithms are evaluated on the KDD-99, NSL-KDD, UNSW-NB15, IoT-23, and UNB-CIC IoT 2023 datasets. The results show that the NIDS tools based on deep learning approaches achieve better performance in detecting network anomalies than shallow learning approaches, and ensembles outperform all the other models. Full article
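A sketch of the kind of benchmarking loop such a comparison implies: several scikit-learn classifiers trained and scored on identical splits with the same metrics. The deep models (DNN, CNN, LSTM), the ensembles, and the real datasets are omitted; the synthetic data below is only a placeholder:

```python
# Sketch of a shallow-learning benchmarking loop with shared splits and metrics.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree":       DecisionTreeClassifier(random_state=0),
    "random forest":       RandomForestClassifier(n_estimators=200, random_state=0),
    "naive bayes":         GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "svm":                 SVC(),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:20s} acc={accuracy_score(y_te, y_pred):.3f} "
          f"f1={f1_score(y_te, y_pred):.3f}")
```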
Show Figures
Graphical abstract
Figure 1: (a) LSTM and (b) GRU units.
18 pages, 2702 KiB  
Article
An AI-Driven Model to Enhance Sustainability for the Detection of Cyber Threats in IoT Environments
by Majid H. Alsulami
Sensors 2024, 24(22), 7179; https://doi.org/10.3390/s24227179 - 8 Nov 2024
Viewed by 927
Abstract
In the face of constantly changing cyber threats, a variety of actions, tools, and regulations must be considered to safeguard information assets and guarantee the confidentiality, reliability, and availability of digital resources. The purpose of this research is to create an artificial intelligence (AI)-driven system to enhance sustainability for cyber threat detection in Internet of Things (IoT) environments. This study proposes a modern technique named Artificial Fish Swarm-driven Weight-normalized Adaboost (AF-WAdaBoost) for optimizing accuracy and sustainability in identifying attacks, thus helping to heighten security in IoT environments. CICIDS2017, NSL-KDD, and UNSW-NB15 were used in this study. Min-max normalization is employed to pre-process the obtained raw information. The proposed AF-WAdaBoost model dynamically adjusts classifiers, enhancing accuracy and resilience against evolving threats. Python is used for model implementation. The effectiveness of the suggested AF-WAdaBoost model in identifying different kinds of cyber threats in IoT systems is examined through evaluation metrics like accuracy (98.69%), F-measure (94.86%), and precision (95.72%). The experimental results unequivocally demonstrate that the recommended model performed better than other traditional approaches, showing essential enhancements in accuracy and robustness, particularly in a dynamic environment. Integrating AI-driven detection offers sustainability in cybersecurity, ensuring the confidentiality, reliability, and availability of information assets, and also helps to optimize system accuracy. Full article
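A minimal sketch of the preprocessing-plus-boosting stage — min-max normalization followed by AdaBoost — with the artificial-fish-swarm weight optimization left out; hyperparameters and the synthetic, imbalanced data are illustrative assumptions:

```python
# Sketch of min-max scaling followed by an AdaBoost ensemble; the fish-swarm
# weight optimization is not reproduced and settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, f1_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=3000, n_features=25, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    MinMaxScaler(),                                   # min-max normalization of raw features
    AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0),
)
y_pred = model.fit(X_tr, y_tr).predict(X_te)
print("accuracy :", round(accuracy_score(y_te, y_pred), 4))
print("precision:", round(precision_score(y_te, y_pred), 4))
print("f1       :", round(f1_score(y_te, y_pred), 4))
```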
Show Figures
Figure 1: Flowchart for the proposed method.
Figure 2: Flow chart of the AF.
Figure 3: Flow diagram for AF-WAdaBoost.
Figure 4: Training and validation accuracy.
Figure 5: Comparison of accuracy.
Figure 6: Comparison of precision.
Figure 7: Comparison of F-measure.
Figure 8: Comparison of classifier models.
29 pages, 4937 KiB  
Article
Whale Optimization Algorithm-Enhanced Long Short-Term Memory Classifier with Novel Wrapped Feature Selection for Intrusion Detection
by Haider AL-Husseini, Mohammad Mehdi Hosseini, Ahmad Yousofi and Murtadha A. Alazzawi
J. Sens. Actuator Netw. 2024, 13(6), 73; https://doi.org/10.3390/jsan13060073 - 2 Nov 2024
Viewed by 1329
Abstract
Intrusion detection in network systems is a critical challenge due to the ever-increasing volume and complexity of cyber-attacks. Traditional methods often struggle with high-dimensional data and the need for real-time detection. This paper proposes a comprehensive intrusion detection method utilizing a novel wrapped feature selection approach combined with a long short-term memory classifier optimized with the whale optimization algorithm to address these challenges effectively. The proposed method introduces a novel feature selection technique using a multi-layer perceptron and a hybrid genetic algorithm-particle swarm optimization algorithm to select salient features from the input dataset, significantly reducing dimensionality while retaining critical information. The selected features are then used to train a long short-term memory network, optimized by the whale optimization algorithm to enhance its classification performance. The effectiveness of the proposed method is demonstrated through extensive simulations of intrusion detection tasks. The feature selection approach effectively reduced the feature set from 78 to 68 features, maintaining diversity and relevance. The proposed method achieved a remarkable accuracy of 99.62% in DDoS attack detection and 99.40% in FTP-Patator/SSH-Patator attack detection using the CICIDS-2017 dataset and an anomaly attack detection accuracy of 99.6% using the NSL-KDD dataset. These results highlight the potential of the proposed method in achieving high detection accuracy with reduced computational complexity, making it a viable solution for real-time intrusion detection. Full article
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)
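A compressed sketch of MLP-scored wrapper feature selection driven by a plain genetic algorithm; the PSO hybridization and the WOA-tuned LSTM classifier are omitted, and the population size, mutation rate, and synthetic data are assumptions:

```python
# Sketch of wrapper feature selection: a genetic algorithm searches binary
# feature masks, scored by cross-validated MLP accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1200, n_features=30, n_informative=10, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of an MLP on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(8, X.shape[1]))          # population of binary feature masks
for gen in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:4]]          # keep the 4 best masks
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        cut = int(rng.integers(1, X.shape[1]))           # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(child.size) < 0.05] ^= 1        # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[int(np.argmax([fitness(m) for m in pop]))]
print("selected", int(best.sum()), "of", X.shape[1], "features")
```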
Show Figures
Figure 1: The MLP flowchart [31].
Figure 2: The genetic algorithm's flowchart [32].
Figure 3: The PSO flowchart [35].
Figure 4: The LSTM flowchart [37].
Figure 5: The flowchart of the WOA algorithm [39].
Figure 6: Convergence curve of GA-PSO for feature selection.
Figure 7: Mutual correlation between all pairs of selected features for the CICIDS-2017 dataset.
Figure 8: The convergence curve of the WOA algorithm for optimizing LSTM's hyperparameters.
Figure 9: Evaluating the proposed method using the confusion matrix for the DDoS attack detection in the CICIDS-2017 dataset.
Figure 10: Evaluating the proposed method using the confusion matrix for the FTP-Patator/SSH-Patator detection in the CICIDS-2017 dataset.
Figure 11: Evaluating the proposed method using the confusion matrix for anomaly detection in the NSL-KDD dataset.
Figure 12: Evaluating the proposed method using the ROC curve for the DDoS attack detection in the CICIDS-2017 dataset.
Figure 13: Evaluating the proposed method using the ROC curve for the FTP-Patator/SSH-Patator detection in the CICIDS-2017 dataset.
Figure 14: Evaluating the proposed method using the ROC curve for anomaly detection in the NSL-KDD dataset.
Figure 15: Evaluating the proposed method using the evaluation metrics for the FTP-Patator/SSH-Patator detection.
Figure 16: Box chart of evaluating the proposed method over 10 replications using the evaluation metrics in the NSL-KDD dataset.
18 pages, 5170 KiB  
Article
An Efficient Detection Mechanism of Network Intrusions in IoT Environments Using Autoencoder and Data Partitioning
by Yiran Xiao, Yaokai Feng and Kouichi Sakurai
Computers 2024, 13(10), 269; https://doi.org/10.3390/computers13100269 - 14 Oct 2024
Viewed by 1386
Abstract
In recent years, with the development of the Internet of Things and distributed computing, the “server-edge device” architecture has been widely deployed. This study focuses on leveraging autoencoder technology to address the binary classification problem in network intrusion detection, aiming to develop a lightweight model suitable for edge devices. Traditional intrusion detection models face two main challenges when directly ported to edge devices: inadequate computational resources to support large-scale models and the need to improve the accuracy of simpler models. To tackle these issues, this research utilizes the Extreme Learning Machine for its efficient training speed and compact model size to implement autoencoders. Two improvements over the latest related work are proposed: First, to improve data purity and ultimately enhance detection performance, the data are partitioned into multiple regions based on the prediction results of these autoencoders. Second, autoencoder characteristics are leveraged to further investigate the data within each region. We used the public dataset NSL-KDD to test the behavior of the proposed mechanism. The experimental results show that when dealing with multi-class attacks, the model’s performance was significantly improved, and the accuracy and F1-Score were improved by 3.5% and 2.9%, respectively, maintaining its lightweight nature. Full article
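A small sketch, under stated assumptions, of an Extreme Learning Machine autoencoder (random hidden layer, closed-form output weights) and a reconstruction-error-based partitioning of incoming samples into regions. The thresholds, hidden size, and synthetic data are illustrative, and the paper's multi-autoencoder arrangement and region-wise re-examination are not reproduced:

```python
# Sketch of an ELM autoencoder plus reconstruction-error-based partitioning.
import numpy as np

rng = np.random.default_rng(0)

class ELMAutoencoder:
    """Random hidden layer; output weights solved by least squares."""
    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                      # random feature map
        self.beta, *_ = np.linalg.lstsq(H, X, rcond=None)     # closed-form output weights
        return self

    def reconstruction_error(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.mean((H @ self.beta - X) ** 2, axis=1)

# Train on (assumed) benign traffic only, then partition new samples by error.
X_benign = rng.normal(size=(2000, 41))
X_new = np.vstack([rng.normal(size=(500, 41)), rng.normal(loc=3.0, size=(500, 41))])

ae = ELMAutoencoder().fit(X_benign)
err = ae.reconstruction_error(X_new)
lo, hi = np.quantile(ae.reconstruction_error(X_benign), [0.50, 0.95])

regions = np.digitize(err, [lo, hi])   # 0: clearly normal, 1: quasi region, 2: likely attack
for r, name in enumerate(["normal region", "quasi region", "attack region"]):
    print(name, int(np.sum(regions == r)))
```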
Show Figures
Figure 1: Schematic diagram of the autoencoder.
Figure 2: ELM architecture diagram.
Figure 3: Proposed architecture diagram.
Figure 4: Naming conventions of the autoencoders.
Figure 5: Four regions of data partitioning.
Figure 6: Experimental flow chart.
Figure 7: Precise detection in the quasi-attack region.
Figure 8: Model accuracy and F1-Score comparison.
17 pages, 3304 KiB  
Article
MTC-NET: A Multi-Channel Independent Anomaly Detection Method for Network Traffic
by Xiaoyong Zhao, Chengjin Huang and Lei Wang
Biomimetics 2024, 9(10), 615; https://doi.org/10.3390/biomimetics9100615 - 10 Oct 2024
Viewed by 2294
Abstract
In recent years, deep learning-based approaches, particularly those leveraging the Transformer architecture, have garnered widespread attention for network traffic anomaly detection. However, when dealing with noisy data sets, directly inputting network traffic sequences into Transformer networks often significantly degrades detection performance due to interference and noise across dimensions. In this paper, we propose a novel multi-channel network traffic anomaly detection model, MTC-Net, which reduces computational complexity and enhances the model’s ability to capture long-distance dependencies. This is achieved by decomposing network traffic sequences into multiple unidimensional time sequences and introducing a patch-based strategy that enables each sub-sequence to retain local semantic information. A backbone network combining Transformer and CNN is employed to capture complex patterns, with information from all channels being fused at the final classification header in order to achieve modelling and detection of complex network traffic patterns. The experimental results demonstrate that MTC-Net outperforms existing state-of-the-art methods in several evaluation metrics, including accuracy, precision, recall, and F1 score, on four publicly available data sets: KDD Cup 99, NSL-KDD, UNSW-NB15, and CIC-IDS2017. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
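A short sketch of the per-channel patch strategy only: each univariate channel is cut into patches of length P with stride S (S = P gives non-overlapping patches, as in the figure). The Transformer–CNN branches and the fusion head are not shown, and all sizes are illustrative:

```python
# Sketch of the per-channel patch strategy used before the Transformer-CNN branches.
import numpy as np

def make_patches(series, patch_len, stride):
    """Return an array of shape (n_patches, patch_len) from a 1-D series."""
    n_patches = (len(series) - patch_len) // stride + 1
    idx = np.arange(patch_len)[None, :] + stride * np.arange(n_patches)[:, None]
    return series[idx]

rng = np.random.default_rng(0)
L, n_channels = 128, 6
x = rng.normal(size=(L, n_channels))          # one traffic sample: L steps, M channels

P, S = 16, 16                                  # non-overlapping patches (S = P)
channel_patches = [make_patches(x[:, c], P, S) for c in range(n_channels)]
print("per-channel patch tensor:", channel_patches[0].shape)   # (8, 16)
# Each channel's patches would then go through its own Transformer-CNN branch,
# with the branch outputs fused at the classification header.
```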
Show Figures
Figure 1: The MTC model comprises an input embedding layer, M parallel Transformer–CNN (TC) modules (M equals the number of channels), a fully connected classification header, and an output layer. The TC modules comprise a projection layer with position coding, a Transformer encoder, a convolutional neural network (CNN) layer, and a maximum pooling layer. The figure also illustrates the "Patch Strategy", which divides a one-dimensional sequence of length L into patches of shape P × W based on a stride S. The patches can overlap; the example in the figure shows a case where they do not. When the patches do not overlap, S = P.
Figure 2: Confusion matrices for anomaly detection (binary classification) results: (a) KDD99; (b) NSL-KDD; (c) UNSW-NB15; (d) CIC-IDS2017.
Figure 3: Confusion matrices for multi-classification results: (a) KDD99; (b) NSL-KDD; (c) UNSW-NB15; (d) CIC-IDS2017.
Figure 4: Two comparative models for the ablation experiments: (a) direct input to the Transformer without splitting channels; (b) replacement of the CNN of the transfer component block with a Flatten layer.
45 pages, 3370 KiB  
Article
Adaptive Cybersecurity Neural Networks: An Evolutionary Approach for Enhanced Attack Detection and Classification
by Ahmad K. Al Hwaitat and Hussam N. Fakhouri
Appl. Sci. 2024, 14(19), 9142; https://doi.org/10.3390/app14199142 - 9 Oct 2024
Cited by 1 | Viewed by 1526
Abstract
The increasing sophistication and frequency of cyber threats necessitate the development of advanced techniques for detecting and mitigating attacks. This paper introduces a novel cybersecurity-focused Multi-Layer Perceptron (MLP) trainer that utilizes evolutionary computation methods, specifically tailored to improve the training process of neural networks in the cybersecurity domain. The proposed trainer dynamically optimizes the MLP’s weights and biases, enhancing its accuracy and robustness in defending against various attack vectors. To evaluate its effectiveness, the trainer was tested on five widely recognized security-related datasets: NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. Its performance was compared with several state-of-the-art optimization algorithms, including Cybersecurity Chimp, CPO, ROA, WOA, MFO, WSO, SHIO, ZOA, DOA, and HHO. The results demonstrated that the proposed trainer consistently outperformed the other algorithms, achieving the lowest Mean Square Error (MSE) and highest classification accuracy across all datasets. Notably, the trainer reached a classification rate of 99.5% on the Bot-IoT dataset and 98.8% on the CSE-CIC-IDS2018 dataset, underscoring its effectiveness in detecting and classifying diverse cyber threats. Full article
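A rough sketch of the core idea — training MLP weights with an evolutionary search instead of gradient descent — using a simple (mu + lambda) evolution strategy. The paper's specific evolutionary operators, the CEC2022 benchmark functions, and the security datasets are not reproduced; network size, mutation scale, and data are assumptions:

```python
# Sketch of evolving MLP weights with a (mu + lambda) evolution strategy.
import numpy as np
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

d_in, d_hid, d_out = X.shape[1], 16, 2
n_params = d_in * d_hid + d_hid + d_hid * d_out + d_out

def forward(theta, X):
    """Two-layer MLP forward pass with weights unpacked from a flat vector."""
    i = 0
    W1 = theta[i:i + d_in * d_hid].reshape(d_in, d_hid); i += d_in * d_hid
    b1 = theta[i:i + d_hid]; i += d_hid
    W2 = theta[i:i + d_hid * d_out].reshape(d_hid, d_out); i += d_hid * d_out
    b2 = theta[i:i + d_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(theta):
    return np.mean(np.argmax(forward(theta, X), axis=1) == y)   # training accuracy

mu, lam, sigma = 5, 20, 0.1
pop = rng.normal(scale=0.5, size=(mu, n_params))
for gen in range(100):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + sigma * rng.normal(size=parents.shape)   # Gaussian mutation
    combined = np.vstack([pop, offspring])
    scores = np.array([fitness(t) for t in combined])
    pop = combined[np.argsort(scores)[::-1][:mu]]                  # survivor selection
print("best training accuracy:", round(fitness(pop[0]), 3))
```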
Show Figures
Figure 1: Analysis of convergence curves for CEC2022 benchmark functions (F1–F6).
Figure 2: Convergence curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 3: Search history curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 4: Search history curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 5: Trajectory curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 6: Trajectory curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 7: Average fitness curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 8: Average fitness curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 9: Sensitivity analysis heatmaps over CEC2022 benchmark functions (F1–F6).
Figure 10: Sensitivity analysis heatmaps over CEC2022 benchmark functions (F7–F12).
Figure 11: Box plots of CEC2022 benchmark functions (F1–F6).
Figure 12: Box plots of CEC2022 benchmark functions (F7–F12).
Figure 13: Histogram analysis over CEC2022 (F1–F6).
Figure 14: Histogram analysis over CEC2022 (F7–F12).
14 pages, 3650 KiB  
Article
A Study on Network Anomaly Detection Using Fast Persistent Contrastive Divergence
by Jaeyeong Jeong, Seongmin Park, Joonhyung Lim, Jiwon Kang, Dongil Shin and Dongkyoo Shin
Symmetry 2024, 16(9), 1220; https://doi.org/10.3390/sym16091220 - 17 Sep 2024
Viewed by 836
Abstract
As network technology evolves, cyberattacks are not only increasing in frequency but also becoming more sophisticated. To proactively detect and prevent these cyberattacks, researchers are developing intrusion detection systems (IDSs) leveraging machine learning and deep learning techniques. However, a significant challenge with these advanced models is the increased training time as model complexity grows, and the symmetry between performance and training time must be taken into account. To address this issue, this study proposes a fast-persistent-contrastive-divergence-based deep belief network (FPCD-DBN) that offers both high accuracy and rapid training times. This model combines the efficiency of contrastive divergence with the powerful feature extraction capabilities of deep belief networks. While traditional deep belief networks use a contrastive divergence (CD) algorithm, the FPCD algorithm improves the performance of the model by passing the results of each detection layer to the next layer. In addition, the mix of parameter updates using fast weights and continuous chains makes the model fast and accurate. The performance of the proposed FPCD-DBN model was evaluated on several benchmark datasets, including NSL-KDD, UNSW-NB15, and CIC-IDS-2017. As a result, the proposed method proved to be a viable solution as the model performed well with an accuracy of 89.4% and an F1 score of 89.7%. By achieving superior performance across multiple datasets, the approach shows great potential for enhancing network security and providing a robust defense against evolving cyber threats. Full article
(This article belongs to the Special Issue Information Security in AI)
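A compressed sketch of a fast persistent contrastive divergence (FPCD) update for a single binary RBM layer (biases omitted for brevity); stacking layers into a DBN and the layer-to-layer passing described above are not shown, and the learning rates, fast-weight decay, and toy data are assumptions rather than the paper's settings:

```python
# Sketch of a fast persistent contrastive divergence (FPCD) update for one
# binary RBM layer: persistent chains are advanced with W + W_fast, and the
# fast weights receive a larger, decaying update.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, batch = 40, 64, 32
W = 0.01 * rng.normal(size=(n_vis, n_hid))    # regular (slow) weights
W_fast = np.zeros_like(W)                     # fast weights, decayed toward zero
persistent_v = rng.integers(0, 2, size=(batch, n_vis)).astype(float)  # persistent chains

lr, lr_fast, decay = 0.01, 0.05, 0.95
X = (rng.normal(size=(2048, n_vis)) > 0.5).astype(float)   # stand-in for binarized features

for step in range(200):
    v0 = X[rng.integers(0, len(X), size=batch)]
    h0_prob = sigmoid(v0 @ W)                               # positive phase on the data
    # Negative phase: advance the persistent chains using W + W_fast.
    h_prob = sigmoid(persistent_v @ (W + W_fast))
    h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_prob = sigmoid(h_samp @ (W + W_fast).T)
    persistent_v = (rng.random(v_prob.shape) < v_prob).astype(float)
    h1_prob = sigmoid(persistent_v @ (W + W_fast))
    # Gradient estimate shared by the slow and fast weights.
    grad = (v0.T @ h0_prob - persistent_v.T @ h1_prob) / batch
    W += lr * grad
    W_fast = decay * W_fast + lr_fast * grad

print("mean |W| after training:", round(float(np.abs(W).mean()), 4))
```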
Show Figures
Figure 1: Structure of deep belief network.
Figure 2: Visualization of the NSL-KDD dataset using t-SNE.
Figure 3: Visualization of the UNSW-NB15 dataset using t-SNE.
Figure 4: Visualization of the CIC-IDS-2017 dataset using t-SNE.
Figure 5: AUROC values measured for the NSL-KDD dataset.
Figure 6: AUROC values measured for the UNSW-NB15 dataset.
Figure 7: AUROC values measured for the CIC-IDS-2017 dataset.
16 pages, 2744 KiB  
Article
VGGIncepNet: Enhancing Network Intrusion Detection and Network Security through Non-Image-to-Image Conversion and Deep Learning
by Jialong Chen, Jingjing Xiao and Jiaxin Xu
Electronics 2024, 13(18), 3639; https://doi.org/10.3390/electronics13183639 - 12 Sep 2024
Viewed by 1260
Abstract
This paper presents an innovative model, VGGIncepNet, which integrates non-image-to-image conversion techniques with deep learning modules, specifically VGG16 and Inception, aiming to enhance performance in network intrusion detection and IoT security analysis. By converting non-image data into image data, the model leverages the powerful feature extraction capabilities of convolutional neural networks, thereby improving the multi-class classification of network attacks. We conducted extensive experiments on the NSL-KDD and CICIoT2023 datasets, and the results demonstrate that VGGIncepNet outperforms existing models, including BERT, DistilBERT, XLNet, and T5, across evaluation metrics such as accuracy, precision, recall, and F1-Score. VGGIncepNet exhibits outstanding classification performance, particularly excelling in precision and F1-Score. The experimental results validate VGGIncepNet’s adaptability and robustness in complex network environments, providing an effective solution for the real-time detection of malicious activities in network systems. This study offers new methods and tools for network security and IoT security analysis, with broad application prospects. Full article
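A minimal sketch of the non-image-to-image conversion step only: a tabular feature vector is min-max scaled, padded to a square, and reshaped into a single-channel grayscale image for a CNN. The 8x8 target size is an assumption, and the VGG16/Inception backbone is not shown:

```python
# Sketch of converting a tabular feature vector into a small grayscale image.
import numpy as np

def vector_to_image(x, side=8):
    """Map a 1-D feature vector to a (side, side) grayscale image in [0, 255]."""
    x = np.asarray(x, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)       # min-max scale to [0, 1]
    padded = np.zeros(side * side)
    padded[:min(x.size, side * side)] = x[:side * side]   # zero-pad (or truncate)
    return (padded.reshape(side, side) * 255).astype(np.uint8)

rng = np.random.default_rng(0)
record = rng.normal(size=41)         # one NSL-KDD-style feature vector (41 features)
img = vector_to_image(record)
print(img.shape, img.dtype)           # (8, 8) uint8, ready for a VGG/Inception-style CNN
```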
Show Figures
Figure 1: Transformation from feature vector to feature matrix.
Figure 2: The deep network architecture of the VGGIncepNet model.
Figure 3: The system architecture of the VGGIncepNet model.
Figure 4: Data preprocessing steps.
Figure 5: Confusion matrix of the VGGIncepNet model on the NSL-KDD dataset.
Figure 6: Confusion matrix of the VGGIncepNet model on the CICIoT2023 dataset.
Figure 7: Classification results comparison on the NSL-KDD dataset.
Figure 8: Classification results comparison on the CICIoT2023 dataset.
17 pages, 1107 KiB  
Article
Explainable Deep Learning-Based Feature Selection and Intrusion Detection Method on the Internet of Things
by Xuejiao Chen, Minyao Liu, Zixuan Wang and Yun Wang
Sensors 2024, 24(16), 5223; https://doi.org/10.3390/s24165223 - 12 Aug 2024
Cited by 2 | Viewed by 1193
Abstract
With the rapid advancement of the Internet of Things, network security has garnered increasing attention from researchers. Applying deep learning (DL) has significantly enhanced the performance of Network Intrusion Detection Systems (NIDSs). However, due to its complexity and “black box” problem, deploying DL-based NIDS models in practical scenarios poses several challenges, including model interpretability and being lightweight. Feature selection (FS) in DL models plays a crucial role in minimizing model parameters and decreasing computational overheads while enhancing NIDS performance. Hence, selecting effective features remains a pivotal concern for NIDSs. In light of this, this paper proposes an interpretable feature selection method for encrypted traffic intrusion detection based on SHAP and causality principles. This approach utilizes the results of model interpretation for feature selection to reduce feature count while ensuring model reliability. We evaluate and validate our proposed method on two public network traffic datasets, CICIDS2017 and NSL-KDD, employing both a CNN and a random forest (RF). Experimental results demonstrate superior performance achieved by our proposed method. Full article
(This article belongs to the Special Issue AI-Driven Cybersecurity in IoT-Based Systems)
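A hedged sketch of SHAP-guided feature selection with the shap package: features are ranked by mean absolute SHAP value from a tree model, the top-k are kept, and the model is retrained. The causality-based filtering and the CNN branch are not reproduced, k and the data are illustrative, and the shape handling covers the common return formats of TreeExplainer:

```python
# Sketch of SHAP-guided feature selection: rank features by mean |SHAP value|,
# keep the top-k, retrain, and compare. k and the synthetic data are assumptions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_te)
if isinstance(sv, list):               # some shap versions return one array per class
    sv = sv[1]
sv = np.asarray(sv)
if sv.ndim == 3:                       # others return (samples, features, classes)
    sv = sv[..., 1]
importance = np.abs(sv).mean(axis=0)   # mean |SHAP| per feature
top_k = np.argsort(importance)[::-1][:10]

rf_small = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top_k], y_tr)
print("all features :", round(rf.score(X_te, y_te), 3))
print("top-10 SHAP  :", round(rf_small.score(X_te[:, top_k], y_te), 3))
```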
Show Figures
Figure 1: The structure of our explainable artificial intelligence and feature selection method.
Figure 2: The proposed method structure.
Figure 3: The validation method.
Figure 4: The CNN model structure.
Figure 5: Interpretation results of the CNN model: (a) on CICIDS2017; (b) on NSL-KDD.
Figure 6: Interpretation results of the RF model: (a) on CICIDS2017; (b) on NSL-KDD.
Figure 7: Detection accuracy on CICIDS2017.
Figure 8: Important features of the datasets.
Figure 9: Detection accuracy on NSL-KDD.
Figure 10: Number of model parameters.
Figure 11: Training and inferring times.
Figure 12: Resource consumption.
Figure 13: Trade-off between accuracy and efficiency of SHAP FS.
16 pages, 2754 KiB  
Article
Comparative Analysis of Deep Convolutional Neural Network—Bidirectional Long Short-Term Memory and Machine Learning Methods in Intrusion Detection Systems
by Miracle Udurume, Vladimir Shakhov and Insoo Koo
Appl. Sci. 2024, 14(16), 6967; https://doi.org/10.3390/app14166967 - 8 Aug 2024
Cited by 3 | Viewed by 1746
Abstract
Particularly in Internet of Things (IoT) scenarios, the rapid growth and diversity of network traffic pose a growing challenge to network intrusion detection systems (NIDs). In this work, we perform a comparative analysis of lightweight machine learning models, such as logistic regression (LR) and k-nearest neighbors (KNNs), alongside other machine learning models, such as decision trees (DTs), support vector machines (SVMs), multilayer perceptron (MLP), and random forests (RFs) with deep learning architectures, specifically a convolutional neural network (CNN) coupled with bidirectional long short-term memory (BiLSTM), for intrusion detection. We assess these models’ scalability, performance, and robustness using the NSL-KDD and UNSW-NB15 benchmark datasets. We evaluate important metrics, such as accuracy, precision, recall, F1-score, and false alarm rate, to offer insights into the effectiveness of each model in securing network systems within IoT deployments. Notably, the study emphasizes the utilization of lightweight machine learning models, highlighting their efficiency in achieving high detection accuracy while maintaining lower computational costs. Furthermore, standard deviation metrics have been incorporated into the accuracy evaluations, enhancing the reliability and comprehensiveness of our results. Using the CNN-BiLSTM model, we achieved noteworthy accuracies of 99.89% and 98.95% on the NSL-KDD and UNSW-NB15 datasets, respectively. However, the CNN-BiLSTM model outperforms lightweight traditional machine learning methods by a margin ranging from 1.5% to 3.5%. This study contributes to the ongoing efforts to enhance network security in IoT scenarios by exploring a trade-off between traditional machine learning and deep learning techniques. Full article
(This article belongs to the Special Issue Network Intrusion Detection and Attack Identification)
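A small sketch of a CNN-BiLSTM head of the kind compared here, written in PyTorch; the layer sizes, the way tabular flow features are framed as a one-channel sequence, and the dummy batch are illustrative assumptions rather than the paper's exact architecture:

```python
# Sketch of a CNN-BiLSTM classifier: a 1-D convolution over the feature axis
# followed by a bidirectional LSTM and a linear classification layer.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_features, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),   # local patterns over features
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(input_size=32, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                 # x: (batch, n_features)
        x = x.unsqueeze(1)                # (batch, 1, n_features) as a 1-channel sequence
        x = self.conv(x)                  # (batch, 32, n_features // 2)
        x = x.transpose(1, 2)             # (batch, seq_len, 32) for the LSTM
        out, _ = self.bilstm(x)
        return self.fc(out[:, -1, :])     # last time step -> class logits

model = CNNBiLSTM(n_features=42)
logits = model(torch.randn(8, 42))        # a dummy batch of 8 flow records
print(logits.shape)                       # torch.Size([8, 2])
```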
Show Figures
Figure 1: Overview of the proposed network-based intrusion detection system.
Figure 2: An example of a bidirectional LSTM cell structure.
Figure 3: An example of a CNN architecture.
Figure 4: Results for all binary classification models: (a) binary classification on NSL-KDD; (b) binary classification on UNSW-NB15.
Figure 5: Results for the confusion matrix: (a) confusion matrix on NSL-KDD; (b) confusion matrix on UNSW-NB15. By combining the performance of different classification algorithms on the two datasets, we find that both the machine learning and deep learning models can achieve good results in attack detection accompanied by a low FAR.