KDD '23 Conference Proceedings · Research article · Free access
DOI: 10.1145/3580305.3599346

FedDefender: Client-Side Attack-Tolerant Federated Learning

Published: 04 August 2023

Abstract

Federated learning enables learning from decentralized data sources without compromising privacy, making it a crucial technique. It is, however, vulnerable to model poisoning attacks, in which malicious clients interfere with the training process. Previous defense mechanisms have focused on the server side, using careful model aggregation, but this may prove ineffective when the data are not identically distributed or when attackers can access the information of benign clients. In this paper, we propose a new client-side defense mechanism, called FedDefender, that helps benign clients train robust local models and avoid the adverse impact of malicious model updates from attackers, even when a server-side defense cannot identify or remove adversaries. Our method consists of two main components: (1) attack-tolerant local meta update and (2) attack-tolerant global knowledge distillation. These components find noise-resilient model parameters while accurately extracting knowledge from a potentially corrupted global model. Our client-side defense strategy has a flexible structure and can work in conjunction with any existing server-side strategy. Evaluations of real-world scenarios across multiple datasets show that the proposed method enhances the robustness of federated learning against model poisoning attacks.
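The first component described in the abstract, attack-tolerant local meta update, aims at noise-resilient model parameters. The paper's exact procedure is not reproduced here; the following is a minimal NumPy sketch of the general idea, assuming a toy linear model where the local loss is optimized under Gaussian parameter perturbation so that the optimizer is pushed toward flat minima that tolerate corrupted updates (all names and hyperparameters are illustrative):

```python
import numpy as np

def perturbed_loss_grad(w, X, y, sigma, rng, n_samples=8):
    """Average gradient of the squared-error loss under Gaussian
    parameter noise. Minimizing this surrogate favors flat,
    noise-resilient parameters (illustrative, not the paper's method)."""
    grads = np.zeros_like(w)
    for _ in range(n_samples):
        noise = rng.normal(scale=sigma, size=w.shape)
        residual = X @ (w + noise) - y          # loss at perturbed weights
        grads += 2 * X.T @ residual / len(y)    # gradient of mean squared error
    return grads / n_samples

# Toy local dataset generated from known ground-truth weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

# Local training loop: each step descends the noise-averaged gradient.
w = np.zeros(4)
for _ in range(300):
    w -= 0.05 * perturbed_loss_grad(w, X, y, sigma=0.1, rng=rng)
```

Despite the injected parameter noise, the averaged gradient still converges close to the clean optimum, which is the property a noise-resilient update exploits.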

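The second component, attack-tolerant global knowledge distillation, must extract knowledge from a global model that may be corrupted. As a hedged illustration (again not the paper's exact formulation), one simple strategy is to down-weight the global model's soft targets on samples where it disagrees with the local ground-truth label:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def tolerant_distillation_targets(global_logits, labels, n_classes):
    """Blend the (possibly corrupted) global model's predictions with
    one-hot labels. 'trust' is the global model's probability on the
    true class, so confidently wrong predictions are mostly ignored.
    Illustrative sketch; names and weighting are assumptions."""
    probs = softmax(global_logits)
    onehot = np.eye(n_classes)[labels]
    trust = probs[np.arange(len(labels)), labels][:, None]
    return trust * probs + (1 - trust) * onehot

# Sample 0: global model is confident and correct -> keep its soft target.
# Sample 1: global model is confident but wrong -> fall back to the label.
logits = np.array([[5.0, 0.0], [5.0, 0.0]])
targets = tolerant_distillation_targets(logits, np.array([0, 1]), n_classes=2)
```

The resulting targets remain valid probability distributions, so they can be plugged into a standard distillation loss such as cross-entropy against the local model's predictions.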
Supplementary Material

MP4 File (rtfp0652_2min_promo.mp4)
2min Promotion Video





Published In

KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2023
5996 pages
ISBN:9798400701030
DOI:10.1145/3580305
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. client-side defense
  2. federated learning
  3. knowledge distillation
  4. meta learning
  5. model poisoning attack

Qualifiers

  • Research-article

Funding Sources

  • Institute for Basic Science
  • Microsoft Research Asia
  • Potential Individuals Global Training Program
  • IITP grant by the Ministry of Science and ICT in Korea

Conference

KDD '23

Acceptance Rates

Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

Article Metrics

  • Downloads (Last 12 months)653
  • Downloads (Last 6 weeks)49
Reflects downloads up to 26 Dec 2024

Cited By

  • SecDefender: Detecting low-quality models in multidomain federated learning systems. Future Generation Computer Systems 164 (Mar 2025), 107587. DOI: 10.1016/j.future.2024.107587
  • Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks. Applied Sciences 14, 22 (Nov 2024), 10706. DOI: 10.3390/app142210706
  • Using Third-Party Auditor to Help Federated Learning: An Efficient Byzantine-Robust Federated Learning. IEEE Transactions on Sustainable Computing 9, 6 (Nov 2024), 848-861. DOI: 10.1109/TSUSC.2024.3379440
  • Secure Model Aggregation Against Poisoning Attacks for Cross-Silo Federated Learning With Robustness and Fairness. IEEE Transactions on Information Forensics and Security 19 (2024), 6321-6336. DOI: 10.1109/TIFS.2024.3416042
  • Toward Byzantine-Resilient Secure AI: A Federated Learning Communication Framework for 6G Consumer Electronics. IEEE Transactions on Consumer Electronics 70, 3 (Aug 2024), 5719-5728. DOI: 10.1109/TCE.2024.3385015
  • Low dimensional secure federated learning framework against poisoning attacks. Future Generation Computer Systems 158 (Sep 2024), 183-199. DOI: 10.1016/j.future.2024.04.017
  • Emerging trends in federated learning: from model fusion to federated X learning. International Journal of Machine Learning and Cybernetics 15, 9 (Apr 2024), 3769-3790. DOI: 10.1007/s13042-024-02119-1
  • Towards Attack-tolerant Federated Learning via Critical Parameter Analysis. In Proceedings of ICCV 2023, 4976-4985. DOI: 10.1109/ICCV51070.2023.00461
Access Granted

The conference sponsors are committed to making content openly accessible in a timely manner.
This article is provided by ACM and the conference, through the ACM OpenTOC service.