DOI: 10.1145/3678890.3678915
Research Article · Open Access

Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods

Published: 30 September 2024

Abstract

Most existing model poisoning attacks in federated learning (FL) control a set of malicious clients and share a fixed number of malicious gradients with the server in each FL training round, to achieve a desired tradeoff between attack impact and attack budget. In this paper, we show that such a tradeoff is not fundamental: an adaptive attack budget not only improves the impact of an attack $\mathcal{A}$ but also makes it more resilient to defenses. However, adaptively determining the number of malicious clients that share malicious gradients with the central server in each FL training round has been less investigated. This is because most existing model poisoning attacks focus mainly on the FL optimization itself to maximize damage to the global model, and largely ignore the learning dynamics of the underlying deep neural networks used to train FL models. Inspired by recent findings on critical learning periods (CLP), during which small gradient errors have an irrecoverable impact on model accuracy, we advocate CLP-augmented model poisoning attacks, $\mathcal{A}$-CLP. $\mathcal{A}$-CLP merely augments an existing model poisoning attack $\mathcal{A}$ with an adaptive attack budget scheme. Specifically, $\mathcal{A}$-CLP inspects changes in the federated gradient norms to identify CLP and adaptively adjusts the number of malicious clients that share their malicious gradients with the server in each round, improving attack impact over $\mathcal{A}$ by up to 6.85× with a smaller attack budget. This in turn improves the resilience of $\mathcal{A}$ against defenses by up to 2×. Since $\mathcal{A}$-CLP is orthogonal to the attack $\mathcal{A}$, it still crafts its malicious gradients by solving a difficult optimization problem.
To tackle this challenge, and based on our understanding of $\mathcal{A}$-CLP, we further relax the inner attack subroutine $\mathcal{A}$ in $\mathcal{A}$-CLP and design GraSP, a lightweight CLP-augmented similarity-based attack. We show that GraSP is not only more flexible but also achieves improved attack impact compared to the strongest existing model poisoning attacks.
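To make the adaptive attack budget concrete, the following Python sketch illustrates the mechanism the abstract describes: the attacker monitors changes in the federated gradient norm to flag critical learning periods and commits more malicious clients only during those rounds. The function names, window size, threshold, and budget fractions are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch (not the paper's code) of a CLP-aware adaptive attack budget:
# the attacker watches the federated gradient norm each round, flags a critical
# learning period (CLP) while the norm is still changing sharply, and deploys
# more malicious clients only during those rounds. Window size, threshold, and
# budget fractions below are illustrative assumptions.

def in_critical_period(grad_norms, window=5, threshold=0.1):
    """Flag a CLP when the relative change of the federated gradient norm
    over the last `window` rounds exceeds `threshold`."""
    if len(grad_norms) <= window:
        return True  # treat the earliest rounds as critical by default
    recent, past = grad_norms[-1], grad_norms[-1 - window]
    return abs(recent - past) / (abs(past) + 1e-12) > threshold

def attack_budget(grad_norms, n_malicious, base_fraction=0.2):
    """Adaptive budget: use all controlled clients during a CLP,
    otherwise only a small fixed fraction of them."""
    if in_critical_period(grad_norms):
        return n_malicious
    return max(1, int(base_fraction * n_malicious))

# Example: gradient norms early in training (still dropping) vs. after they flatten.
early = [12.0, 11.5, 9.8, 7.1, 5.0, 4.8]
late = [5.0, 4.9, 4.9, 4.9, 4.8, 4.8, 4.8, 4.8]
print(attack_budget(early, n_malicious=10))  # 10 -> full budget inside the CLP
print(attack_budget(late, n_malicious=10))   # 2  -> reduced budget outside the CLP
```

In this sketch most of the attack budget is spent only inside the CLP window and collapses to a small fraction once training stabilizes, which is how a larger attack impact can coexist with a smaller overall budget, as the abstract claims.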


Published In

RAID '24: Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses
September 2024, 719 pages
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery
New York, NY, United States

Author Tags

  1. Critical Learning Periods
  2. Federated Learning
  3. Model Poisoning Attacks

Acceptance Rates

RAID '24 Paper Acceptance Rate: 43 of 173 submissions, 25%
