
RDP-GAN: A Rényi-Differential Privacy Based Generative Adversarial Network

Published: 09 January 2023

Abstract

Generative adversarial networks (GANs) have attracted increasing attention owing to their impressive ability to generate realistic samples while offering a degree of privacy protection. Without interacting with training examples directly, the generator learns to estimate the underlying distribution of the original dataset, while the discriminator assesses the quality of the generated samples by comparing them against training examples. When privacy in GANs is considered, existing works focus on perturbing model parameters and analyzing the resulting privacy guarantees; however, the parameters are not what is directly exchanged between the generator and the discriminator during training. In this work, we therefore propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) by carefully adding random Gaussian noise to the value of the loss function exchanged during training. We derive analytical results characterizing the total privacy loss under subsampling and over cumulative iterations, which demonstrates their effectiveness for privacy budget allocation. In addition, to mitigate the negative impact of the injected noise, we enhance the proposed algorithm with an adaptive noise tuning step that adjusts the amount of added noise according to the testing accuracy. Extensive experimental results verify that the proposed algorithm achieves a better privacy level while producing high-quality samples compared with a benchmark DP-GAN scheme based on noise perturbation of training gradients.
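
The mechanism described in the abstract (bound the exchanged loss value, perturb it with Gaussian noise, and shrink the noise when test accuracy stalls) can be illustrated in a few lines. The following is a minimal PyTorch sketch under assumed conventions, not the authors' implementation: `clip_bound`, `sigma`, `decay`, and `patience` are hypothetical parameter names, and the clipping constant plays the role of the sensitivity bound on the exchanged loss.

```python
import torch

class NoisyLossExchange:
    """Minimal sketch of the RDP-GAN idea: perturb the loss value
    exchanged between discriminator and generator with Gaussian noise.
    Parameter names are hypothetical; this is not the authors' code."""

    def __init__(self, clip_bound: float = 1.0, sigma: float = 1.0,
                 decay: float = 0.9, patience: int = 3):
        self.C = clip_bound        # clip loss to [-C, C]; bounds sensitivity by 2C
        self.sigma = sigma         # noise multiplier (noise std = sigma * C)
        self.decay = decay         # multiplicative decay used by adaptive tuning
        self.patience = patience   # evaluations without improvement before decaying
        self.best_acc = 0.0
        self.stall = 0

    def perturb(self, loss: torch.Tensor) -> torch.Tensor:
        """Clip the scalar loss, then add zero-mean Gaussian noise."""
        clipped = torch.clamp(loss, -self.C, self.C)
        noise = torch.randn_like(clipped) * (self.sigma * self.C)
        return clipped + noise  # gradients flow through `clipped` only

    def adapt(self, test_acc: float) -> None:
        """Adaptive noise tuning: lower sigma when test accuracy stalls."""
        if test_acc > self.best_acc:
            self.best_acc, self.stall = test_acc, 0
        else:
            self.stall += 1
            if self.stall >= self.patience:
                self.sigma *= self.decay
                self.stall = 0
```

In a training loop, the discriminator's loss would be passed through `perturb` before the generator consumes it, and `adapt` would be called once per evaluation round, mirroring the adaptive noise tuning step described above.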
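The paper's exact privacy-loss bounds for the subsampled, iterated mechanism are derived in the full text; they build on the standard Rényi DP toolkit, which is worth restating. For a Gaussian mechanism with sensitivity \(\Delta\) and noise standard deviation \(\sigma\), the per-iteration RDP cost at order \(\alpha\), its additive composition over \(T\) iterations, and the usual conversion back to \((\epsilon, \delta)\)-DP are:

```latex
% Per-iteration RDP of the Gaussian mechanism at order \alpha > 1:
\epsilon_{\alpha} \;=\; \frac{\alpha\,\Delta^{2}}{2\sigma^{2}}
% RDP composes additively over T iterations:
\epsilon_{\alpha}^{(T)} \;=\; \sum_{t=1}^{T} \epsilon_{\alpha}^{(t)}
  \;=\; \frac{\alpha\,T\,\Delta^{2}}{2\sigma^{2}}
% Conversion from (\alpha, \epsilon_{\alpha}^{(T)})-RDP to (\epsilon, \delta)-DP:
\epsilon \;=\; \epsilon_{\alpha}^{(T)} + \frac{\log(1/\delta)}{\alpha - 1}
```

Subsampling tightens the per-iteration term further (drawing a batch of size \(m\) from \(n\) records amplifies privacy roughly in proportion to the sampling rate \(m/n\)); the precise amplified bound used by the paper is given in the full text.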

Cited By

  • (2024) "PPDNN-CRP: Privacy-preserving deep neural network processing for credit risk prediction in cloud: A homomorphic encryption-based approach," Journal of Cloud Computing: Advances, Systems and Applications, vol. 13, no. 1, https://doi.org/10.1186/s13677-024-00711-y. Online publication date: 15-Oct-2024.
  • (2024) "Security and Privacy on Generative Data in AIGC: A Survey," ACM Computing Surveys, vol. 57, no. 4, pp. 1–34, https://doi.org/10.1145/3703626. Online publication date: 10-Dec-2024.
  • (2023) "Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization," IEEE Transactions on Dependable and Secure Computing, vol. 21, no. 3, pp. 1196–1209, https://doi.org/10.1109/TDSC.2023.3274119. Online publication date: 8-May-2023.

Published In

IEEE Transactions on Dependable and Secure Computing, Volume 20, Issue 6 (Nov.-Dec. 2023), 869 pages

Publisher

IEEE Computer Society Press, Washington, DC, United States

Qualifiers

  • Research-article
