
Poster: Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning

Published: 01 July 2024

Abstract

Federated learning (FL) enables multiple participants to train a global machine learning model without sharing their private training data. Peer-to-peer (P2P) FL advances existing centralized FL paradigms by eliminating the server that aggregates local models from participants and then updates the global model. However, P2P FL is vulnerable to (i) honest-but-curious participants whose objective is to infer the private training data of other participants, and (ii) Byzantine participants who can transmit arbitrarily manipulated local models to corrupt the learning process. P2P FL schemes that simultaneously guarantee Byzantine resilience and preserve privacy have received little study. In this paper, we develop Brave, a protocol that ensures the Byzantine Resilience And priVacy-prEserving property for P2P FL in the presence of both types of adversaries. We show that Brave preserves privacy by establishing that an honest-but-curious adversary cannot infer other participants' private data by observing their models. We further prove that Brave is Byzantine-resilient: all benign participants converge to an identical model that deviates by a bounded distance from a global model trained without Byzantine adversaries. We evaluate Brave against three state-of-the-art adversaries in a P2P FL setting for image classification on the benchmark datasets CIFAR-10 and MNIST. Our results show that global models learned with Brave in the presence of adversaries achieve classification accuracy comparable to that of global models trained in the absence of any adversary.




      Published In

      ASIA CCS '24: Proceedings of the 19th ACM Asia Conference on Computer and Communications Security
      July 2024
      1987 pages
      ISBN: 9798400704826
      DOI: 10.1145/3634737
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 01 July 2024
      DOI: 10.1145/3634737.3659428


      Qualifiers

      • Poster

      Funding Sources

      • AFOSR
      • NSF

      Conference

      ASIA CCS '24

      Acceptance Rates

      Overall acceptance rate: 418 of 2,322 submissions (18%)
