Research article

FlGan: GAN-Based Unbiased Federated Learning Under Non-IID Settings

Published: 01 April 2024

Abstract

Federated Learning (FL) suffers from slow convergence and significant accuracy loss due to local biases caused by non-Independent and Identically Distributed (non-IID) data. A straightforward idea for improving non-IID FL performance is to leverage a Generative Adversarial Network (GAN) to mitigate local biases with synthesized samples. Unfortunately, existing GAN-based solutions have inherent limitations: they do not support non-IID data and can even compromise user privacy. To tackle these issues, we propose a GAN-based unbiased FL scheme, called FlGan, that mitigates local biases using GAN-synthesized samples while preserving user-level privacy in the FL setting. Specifically, FlGan first presents a federated GAN algorithm based on a divide-and-conquer strategy that eliminates model collapse in non-IID settings. To guarantee user-level privacy, FlGan then exploits Fully Homomorphic Encryption (FHE) to design a privacy-preserving GAN augmentation method for unbiased FL. Extensive experiments show that FlGan achieves unbiased FL with 10%-60% accuracy improvement over two state-of-the-art FL baselines (i.e., FedAvg and FedSGD) trained under different non-IID settings. The FHE-based privacy guarantees cost only about 0.53% of the total overhead in FlGan.
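The core idea of GAN-based augmentation against local bias can be illustrated with a minimal sketch: each client pads its skewed local dataset with synthesized samples of under-represented labels before local training, and the server aggregates models FedAvg-style. This is a hypothetical toy, not the paper's method; `synthesize` is a placeholder standing in for a conditional GAN, and FlGan's federated GAN training and FHE layer are not reproduced here.

```python
# Toy sketch of bias mitigation via synthetic augmentation in FL.
# `synthesize` is a stand-in for a conditional GAN generator.
from collections import Counter

def synthesize(label, n):
    """Placeholder generator: fabricate n samples of class `label`."""
    return [(f"synthetic_{label}_{i}", label) for i in range(n)]

def rebalance(local_data, all_labels):
    """Pad a client's dataset with synthetic samples so every label
    appears as often as the most frequent one, reducing local bias."""
    counts = Counter(label for _, label in local_data)
    target = max(counts.values(), default=0)
    augmented = list(local_data)
    for label in all_labels:
        deficit = target - counts.get(label, 0)
        augmented.extend(synthesize(label, deficit))
    return augmented

def fedavg(client_weights):
    """Plain FedAvg: coordinate-wise mean of client model weights."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# A client holding only labels 0 and 1 out of {0, 1, 2} gets its
# missing/minority classes filled in before training.
client = [("x1", 0), ("x2", 0), ("x3", 1)]
balanced = rebalance(client, [0, 1, 2])
```

In FlGan itself, the synthesized samples come from a federated GAN trained with a divide-and-conquer strategy, and the exchanged updates are protected with FHE; this sketch only conveys why rebalancing local label distributions helps non-IID convergence.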


Cited By

  • FRAMU: Attention-Based Machine Unlearning Using Federated Reinforcement Learning, IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 10, pp. 5153-5167, Oct. 2024. DOI: 10.1109/TKDE.2024.3382726
  • A Federated Generative Adversarial Network With SSIM-PSNR-Based Weight Aggregation for Consumer Electronics Waste, IEEE Transactions on Consumer Electronics, vol. 70, no. 3, pp. 6208-6215, Aug. 2024. DOI: 10.1109/TCE.2024.3411785


Published In

IEEE Transactions on Knowledge and Data Engineering, Volume 36, Issue 4, April 2024, 458 pages

Publisher

IEEE Educational Activities Department, United States
