Research Article | Open Access | DOI: 10.1145/3488932.3517395

FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations

Published: 30 May 2022

Abstract

Federated learning (FL) has been shown to be vulnerable to a new class of adversarial attacks, known as model poisoning attacks (MPA), in which one or more malicious clients try to poison the global model by sending carefully crafted local model updates to the central parameter server. Existing defenses, which focus on analyzing model parameters, show limited effectiveness in detecting such carefully crafted poisonous models. In this work, we propose FLARE, a robust model aggregation mechanism for FL that is resilient against state-of-the-art MPAs. Instead of depending solely on model parameters, FLARE leverages the penultimate layer representations (PLRs) of the model to characterize the adversarial influence on each local model update. PLRs demonstrate a better capability to differentiate malicious models from benign ones than model parameter-based solutions. We further propose a trust evaluation method that estimates a trust score for each model update based on pairwise PLR discrepancies among all model updates. Under the assumption that honest clients make up the majority, FLARE assigns trust scores so that updates far from the benign cluster receive low scores. FLARE then aggregates the model updates weighted by their trust scores and finally updates the global model. Extensive experimental results demonstrate the effectiveness of FLARE in defending FL against various MPAs, including semantic backdoor attacks, trojan backdoor attacks, and untargeted attacks, while safeguarding the accuracy of FL.
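To make the PLR idea concrete, the sketch below is an illustrative assumption, not the paper's exact procedure: it shows how a server might capture penultimate-layer representations from each submitted local model on a shared auxiliary batch and compute pairwise PLR discrepancies. The hook-based extraction, the auxiliary batch, and the mean-squared discrepancy are all placeholders; FLARE's actual discrepancy statistic may differ (e.g., a kernel-based measure).

```python
import torch
import torch.nn as nn

def extract_plrs(model: nn.Module, penultimate: nn.Module, aux_batch: torch.Tensor) -> torch.Tensor:
    """Run a shared auxiliary batch through one client's local model and capture
    the penultimate layer's outputs (the PLRs) with a forward hook."""
    captured = []
    handle = penultimate.register_forward_hook(
        lambda module, inputs, output: captured.append(output.detach())
    )
    with torch.no_grad():
        model(aux_batch)
    handle.remove()
    return captured[0].flatten(start_dim=1)  # shape: (batch_size, feature_dim)

def pairwise_plr_discrepancy(plrs: list) -> torch.Tensor:
    """Pairwise discrepancy between clients' PLR matrices.
    A simple mean squared distance is used here as a stand-in statistic."""
    n = len(plrs)
    dist = torch.zeros(n, n)
    for i in range(n):
        for j in range(i + 1, n):
            d = torch.mean((plrs[i] - plrs[j]) ** 2)
            dist[i, j] = dist[j, i] = d
    return dist
```

Because every client's model is evaluated on the same auxiliary batch held by the server, the resulting PLR matrices are directly comparable across clients.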

Supplementary Material

MP4 File (ASIA-CCS22-fp459.mp4)
Presentation video. In this video, we show the vulnerability of federated learning (FL) to model poisoning attacks (MPA), in which one or more malicious clients try to poison the global model by submitting carefully crafted local model updates. We analyze the limited effectiveness of legacy defenses in detecting such carefully crafted poisonous models. In this work, we propose FLARE, a robust model aggregation mechanism for FL. FLARE leverages the penultimate layer representations (PLRs) to characterize the adversarial influence on each local model update. PLRs demonstrate a better capability to differentiate malicious models from benign ones than legacy defenses. Extensive experimental results demonstrate the effectiveness of FLARE in defending FL against various MPAs, including semantic backdoor attacks, trojan backdoor attacks, and untargeted attacks.
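Continuing the illustrative sketch from the abstract (again an assumption about the mechanics, not the authors' exact formulation), the server could turn the pairwise PLR discrepancies into trust scores under the honest-majority assumption and aggregate the local updates weighted by those scores:

```python
import torch

def trust_scores(dist: torch.Tensor, k: int = None) -> torch.Tensor:
    """Give low trust to updates whose PLRs lie far from the majority cluster.
    For each client, sum the distances to its k nearest neighbours (a hypothetical
    choice here) and map them through a softmax over negative distances, so
    outliers far from the benign cluster receive weights near zero."""
    n = dist.shape[0]
    if k is None:
        k = max(1, n // 2)                        # honest-majority assumption
    sorted_dist, _ = torch.sort(dist, dim=1)
    nearest = sorted_dist[:, 1:k + 1].sum(dim=1)  # skip the zero self-distance
    return torch.softmax(-nearest / (nearest.std() + 1e-12), dim=0)

def aggregate(updates: list, scores: torch.Tensor) -> torch.Tensor:
    """Trust-weighted average of flattened local model updates."""
    stacked = torch.stack(updates)                # (num_clients, num_params)
    return (scores.unsqueeze(1) * stacked).sum(dim=0)
```

The trust-weighted average would then replace a plain FedAvg mean before the global model is refreshed.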

      Published In

      ASIA CCS '22: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security
      May 2022, 1291 pages
      ISBN: 9781450391405
      DOI: 10.1145/3488932
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 30 May 2022


      Author Tags

      1. defense
      2. federated learning
      3. model poisoning attack

      Qualifiers

      • Research-article

      Conference

      ASIA CCS '22

      Acceptance Rates

      Overall Acceptance Rate: 418 of 2,322 submissions, 18%
