
On the Feasibility of Detecting Model Poisoning Attacks in Real-time ML-based ICS

Published: 20 November 2024 · DOI: 10.1145/3689930.3695209

Abstract

Machine learning (ML) is increasingly deployed in industrial control systems (ICS). While ML reduces human effort, it also introduces new challenges for safety assurance, creating a critical need for efficient and reliable methods to ensure user privacy and data integrity. Federated learning (FL) has emerged as a promising solution, allowing multiple clients to collaboratively train a global model without sharing their local data. However, FL systems are vulnerable to model poisoning attacks, which can have significant security impacts. FLDetector has been proposed as a defense against such attacks; it detects and removes malicious clients based on the consistency of their model updates. This study evaluates the feasibility of using FLDetector in ICS settings. Using a customized navigation system that computes heat hazards with FL, we evaluate the attack success rate and power consumption of FLDetector, and we explore the feasibility of deploying it in existing engineering stations (ES).
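
For context on the defense under evaluation: FLDetector (Zhang et al., KDD 2022) scores each client by how consistently its submitted update matches an update predicted from that client's own history, then clusters the scores to separate malicious from benign clients. The sketch below is a simplified illustration of that scoring loop, not the authors' implementation: it approximates the predicted update by the client's previous-round update (the actual method predicts updates with an L-BFGS Hessian estimate) and replaces the paper's gap-statistics test with plain two-means clustering. All names and the example data are hypothetical.

```python
import numpy as np

def suspicion_scores(history):
    """history: list of per-round update matrices, each of shape
    (num_clients, dim), holding every client's submitted model update."""
    num_clients = history[0].shape[0]
    scores = np.zeros(num_clients)
    for prev, curr in zip(history[:-1], history[1:]):
        # Distance between each client's actual update and its "predicted"
        # update (approximated here by its previous-round update; FLDetector
        # itself predicts updates with an L-BFGS Hessian estimate).
        dist = np.linalg.norm(curr - prev, axis=1)
        scores += dist / (dist.sum() + 1e-12)  # normalize per round
    return scores / (len(history) - 1)         # average over the window

def flag_malicious(scores, iters=10):
    """1-D two-means clustering on the suspicion scores; the higher-mean
    cluster is flagged (the paper additionally uses gap statistics to
    decide whether any malicious cluster exists at all)."""
    lo, hi = scores.min(), scores.max()
    for _ in range(iters):
        in_hi = np.abs(scores - hi) < np.abs(scores - lo)
        if in_hi.any() and (~in_hi).any():
            lo, hi = scores[~in_hi].mean(), scores[in_hi].mean()
    return np.flatnonzero(np.abs(scores - hi) < np.abs(scores - lo))

# Hypothetical example: 8 clients over 4 rounds; client 7 submits
# inconsistent (poisoned) updates that drift far more than the others.
rng = np.random.default_rng(0)
rounds = [rng.normal(scale=0.01, size=(8, 16)) for _ in range(4)]
for r in rounds:
    r[7] += rng.normal(scale=0.5, size=16)
print("suspected malicious clients:", flag_malicious(suspicion_scores(rounds)))
```

In a deployment like the one the paper studies, `history` would be collected at the FL server (e.g., the engineering station) over a sliding window of training rounds, and flagged clients would be excluded from aggregation; the window length trades detection latency against the memory and power budget measured in the evaluation.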



Published In

RICSS '24: Proceedings of the 2024 Workshop on Re-design Industrial Control Systems with Security
November 2024 · 102 pages
ISBN: 9798400712265
DOI: 10.1145/3689930
Program Chairs: Ruimin Sun, Mu Zhang

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. fl
    2. ics
    3. model poisoning

    Qualifiers

    • Research-article

    Conference

    CCS '24

