DOI: 10.1109/INFOCOM.2019.8737416

Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning

Published: 29 April 2019

Abstract

Federated learning, a mobile edge computing framework for deep learning, is a recent advance in privacy-preserving machine learning in which the model is trained in a decentralized manner by the clients (i.e., the data curators), preventing the server from directly accessing their private data. This learning mechanism significantly raises the difficulty of server-side attacks. Although state-of-the-art attack techniques that build on generative adversarial networks (GANs) can construct class representatives of the global data distribution across all clients, it remains challenging to attack a specific client distinguishably (i.e., user-level privacy leakage), a stronger privacy threat in which the private data of a particular client is precisely recovered. This paper presents the first attempt to explore user-level privacy leakage in federated learning through an attack mounted by a malicious server. We propose a framework that incorporates a GAN with a multi-task discriminator, which simultaneously discriminates the category, reality, and client identity of input samples. The novel discrimination of client identity enables the generator to recover the private data of a specified user. Unlike existing works that tend to interfere with the training process of federated learning, the proposed method works “invisibly” on the server side. The experimental results demonstrate the effectiveness of the proposed attack and its superiority over the state of the art.
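
To make the described architecture concrete, the sketch below shows one possible shape of such a multi-task discriminator in PyTorch: a shared feature extractor followed by three heads that separately score reality (real vs. generated), category, and client identity. This is a minimal illustrative sketch, not the authors' reference implementation; the layer sizes, the 28x28 grayscale input shape, and the class/client counts are assumptions made only for the example.

```python
# Hypothetical sketch of a multi-task discriminator of the kind described in the
# abstract: a shared trunk with three heads scoring (i) reality, (ii) category,
# and (iii) client identity of an input sample. Shapes and sizes are illustrative.
import torch
import torch.nn as nn


class MultiTaskDiscriminator(nn.Module):
    def __init__(self, num_classes: int = 10, num_clients: int = 10):
        super().__init__()
        # Shared feature extractor (assumes 1x28x28 inputs, e.g. MNIST-like data).
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),   # 28 -> 14
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), # 14 -> 7
            nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat_dim = 128 * 7 * 7
        # Three task-specific heads.
        self.real_head = nn.Linear(feat_dim, 1)              # real vs. generated
        self.class_head = nn.Linear(feat_dim, num_classes)   # category
        self.client_head = nn.Linear(feat_dim, num_clients)  # client identity

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        return self.real_head(h), self.class_head(h), self.client_head(h)


if __name__ == "__main__":
    d = MultiTaskDiscriminator(num_classes=10, num_clients=5)
    batch = torch.randn(8, 1, 28, 28)
    real_logit, class_logits, client_logits = d(batch)
    print(real_logit.shape, class_logits.shape, client_logits.shape)
    # torch.Size([8, 1]) torch.Size([8, 10]) torch.Size([8, 5])
```

In a server-side attack of this style, the client-identity head is what would let a generator be trained to synthesize samples attributed to one targeted client rather than to the pooled data of all clients; the exact training objectives are specified in the paper itself.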

Information

Published In

IEEE INFOCOM 2019 - IEEE Conference on Computer Communications
Apr 2019
2583 pages

Publisher

IEEE Press

Qualifiers

  • Research-article

Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 0
  • Downloads (Last 6 weeks): 0

Reflects downloads up to 14 Dec 2024

Cited By
  • (2025) Concurrent vertical and horizontal federated learning with fuzzy cognitive maps. Future Generation Computer Systems, 162:C. DOI: 10.1016/j.future.2024.107482. Online publication date: 1-Jan-2025.
  • (2024) DPSUR: Accelerating Differentially Private Stochastic Gradient Descent Using Selective Update and Release. Proceedings of the VLDB Endowment, 17:6, 1200-1213. DOI: 10.14778/3648160.3648164. Online publication date: 1-Feb-2024.
  • (2024) Membership Inference Attacks and Defenses in Federated Learning: A Survey. ACM Computing Surveys, 57:4, 1-35. DOI: 10.1145/3704633. Online publication date: 10-Dec-2024.
  • (2024) Pack: Towards Communication-Efficient Homomorphic Encryption in Federated Learning. Proceedings of the 2024 ACM Symposium on Cloud Computing, 470-486. DOI: 10.1145/3698038.3698557. Online publication date: 20-Nov-2024.
  • (2024) Federated Learning Using Multi-Modal Sensors with Heterogeneous Privacy Sensitivity Levels. ACM Transactions on Multimedia Computing, Communications, and Applications, 20:11, 1-27. DOI: 10.1145/3686801. Online publication date: 5-Aug-2024.
  • (2024) When Federated Learning Meets Privacy-Preserving Computation. ACM Computing Surveys, 56:12, 1-36. DOI: 10.1145/3679013. Online publication date: 22-Jul-2024.
  • (2024) A Profit-Maximizing Data Marketplace with Differentially Private Federated Learning under Price Competition. Proceedings of the ACM on Management of Data, 2:4, 1-27. DOI: 10.1145/3677127. Online publication date: 30-Sep-2024.
  • (2024) Confidential Federated Learning for Heterogeneous Platforms against Client-Side Privacy Leakages. Proceedings of the ACM Turing Award Celebration Conference - China 2024, 239-241. DOI: 10.1145/3674399.3674484. Online publication date: 5-Jul-2024.
  • (2024) Fair and Robust Federated Learning via Decentralized and Adaptive Aggregation based on Blockchain. ACM Transactions on Sensor Networks. DOI: 10.1145/3673656. Online publication date: 17-Jun-2024.
  • (2024) Samplable Anonymous Aggregation for Private Federated Data Analysis. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 2859-2873. DOI: 10.1145/3658644.3690224. Online publication date: 2-Dec-2024.