
TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning

Published: 05 January 2024

Abstract

Federated learning is a computing paradigm that enhances privacy by enabling multiple parties to collaboratively train a machine learning model without revealing personal data. However, current research indicates that traditional federated learning platforms cannot ensure privacy, owing to leaks caused by the exchange of gradients. Achieving privacy-preserving federated learning therefore requires integrating secure aggregation mechanisms. Unfortunately, existing solutions are vulnerable to recently demonstrated inference attacks such as the disaggregation attack. This article proposes TAPFed, an approach for privacy-preserving federated learning in a setting with multiple decentralized aggregators, some of which may be malicious. TAPFed uses a proposed threshold functional encryption scheme and tolerates a bounded number of malicious aggregators while maintaining security and privacy. We provide formal security and privacy analyses of TAPFed and compare it to various baselines through experimental evaluation. Our results show that TAPFed matches state-of-the-art approaches in model quality while reducing transmission overhead by 29%–45% across different model-training scenarios. Most importantly, TAPFed can defend against recently demonstrated inference attacks mounted by curious aggregators, to which the majority of existing approaches are susceptible.
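
To make the threshold idea concrete, here is a minimal Python sketch, assuming a Shamir-secret-sharing construction rather than TAPFed's actual threshold functional encryption scheme: each client splits its quantized update into n shares, one per aggregator; because Shamir shares are additive, each aggregator can sum the shares it holds locally, and any t of the n aggregators can jointly reconstruct only the aggregate, while any coalition of fewer than t learns nothing about individual updates. All names and parameters below (PRIME, T, N, share_value, reconstruct) are illustrative, not from the paper.

    # Minimal sketch of threshold secure aggregation via Shamir secret sharing.
    # NOT TAPFed's threshold functional encryption; an illustrative stand-in.
    import random

    PRIME = 2**61 - 1   # Mersenne prime defining the finite field
    T, N = 3, 5         # threshold t and number of aggregators n

    def share_value(v, t=T, n=N):
        """Shamir-share one field element among n aggregators, threshold t."""
        coeffs = [v % PRIME] + [random.randrange(PRIME) for _ in range(t - 1)]
        # Shares are evaluations of a random degree-(t-1) polynomial at x=1..n.
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n + 1)]

    def reconstruct(points):
        """Lagrange-interpolate the shared polynomial at x = 0 from t points."""
        secret = 0
        for xj, yj in points:
            num, den = 1, 1
            for xm, _ in points:
                if xm != xj:
                    num = num * (-xm) % PRIME
                    den = den * (xj - xm) % PRIME
            secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    # Three clients hold scalar "gradients" (in practice, quantized vectors).
    gradients = [7, 11, 20]
    shares_per_client = [share_value(g) for g in gradients]

    # Each aggregator sums, pointwise, the shares it received from all clients;
    # by additivity of Shamir sharing these are shares of the SUM of gradients.
    agg_shares = [(x, sum(client[i][1] for client in shares_per_client) % PRIME)
                  for i, (x, _) in enumerate(shares_per_client[0])]

    # Any t aggregators can recover the aggregate; no individual gradient is
    # reconstructed, and fewer than t colluding aggregators learn nothing.
    assert reconstruct(agg_shares[:T]) == sum(gradients)   # 38

TAPFed itself replaces this plaintext reconstruction step with its threshold functional encryption scheme, so that, per the abstract, a quorum of aggregators obtains only the aggregated model update from the clients' ciphertexts while tolerating a bounded number of malicious aggregators.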



Published In

IEEE Transactions on Dependable and Secure Computing, Volume 21, Issue 5, Sept.-Oct. 2024, 750 pages

Publisher

IEEE Computer Society Press, Washington, DC, United States

Qualifiers

  • Research-article
