DOI: 10.1145/3534678.3539376
research-article

Bilateral Dependency Optimization: Defending Against Model-inversion Attacks

Published: 14 August 2022

Abstract

Using only a well-trained classifier, model-inversion (MI) attacks can recover the data used to train that classifier, leading to privacy leakage of the training data. To defend against MI attacks, previous work adopts a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) while training the classifier. However, this minimization conflicts with minimizing the supervised loss, which aims to maximize the dependency between inputs and outputs, causing an explicit trade-off between model robustness against MI attacks and model utility on classification tasks. In this paper, we instead minimize the dependency between the latent representations and the inputs while maximizing the dependency between the latent representations and the outputs, a strategy we name bilateral dependency optimization (BiDO). In particular, we use the dependency constraints as a universally applicable regularizer added to commonly used losses for deep neural networks (e.g., cross-entropy), which can be instantiated with dependency criteria appropriate to different tasks. To verify the efficacy of our strategy, we propose two implementations of BiDO using two different dependency measures: BiDO with constrained covariance (BiDO-COCO) and BiDO with the Hilbert-Schmidt Independence Criterion (BiDO-HSIC). Experiments show that BiDO achieves state-of-the-art defense performance across a variety of datasets, classifiers, and MI attacks while incurring only a minor classification-accuracy drop compared to a well-trained classifier with no defense, lighting up a novel road to defending against MI attacks.
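
To make the regularization strategy concrete, below is a minimal sketch of a BiDO-HSIC-style training loss, assuming PyTorch. On each mini-batch, an HSIC term penalizes the dependency between the inputs and every hidden representation and rewards the dependency between each hidden representation and the labels, on top of the usual cross-entropy. The `return_features=True` model interface, the RBF kernel bandwidths, and the `lambda_x`/`lambda_y` weights are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def rbf_kernel(x, sigma=5.0):
    # Gaussian kernel matrix for a batch of flattened vectors.
    x = x.flatten(1)
    d2 = torch.cdist(x, x).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(K, L):
    # Biased empirical HSIC estimate from two n-by-n kernel matrices.
    n = K.size(0)
    H = torch.eye(n, device=K.device) - 1.0 / n   # centering matrix
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

def bido_hsic_loss(model, x, y, num_classes, lambda_x=0.05, lambda_y=0.5):
    # Cross-entropy plus bilateral dependency regularization:
    # penalize dependence(input, hidden), reward dependence(hidden, label).
    logits, hiddens = model(x, return_features=True)  # assumed model interface
    loss = F.cross_entropy(logits, y)
    Kx = rbf_kernel(x)
    Ky = rbf_kernel(F.one_hot(y, num_classes).float(), sigma=1.0)
    for z in hiddens:                                  # one term per hidden layer
        Kz = rbf_kernel(z)
        loss = loss + lambda_x * hsic(Kx, Kz) - lambda_y * hsic(Kz, Ky)
    return loss
```

In such a setup, larger `lambda_x` pushes the hidden representations to reveal less about the inputs, at the cost of some classification accuracy, which is the trade-off the abstract describes as a minor accuracy drop.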

Supplemental Material

MP4 file: presentation video




Information

Published In

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2022, 5033 pages
ISBN: 9781450393850
DOI: 10.1145/3534678
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 14 August 2022


Author Tags

• deep neural networks
• model-inversion attacks
• privacy leakage
• statistical dependency

Qualifiers

• Research-article

Funding Sources

• RGC Early Career Scheme
• Guangdong Basic and Applied Basic Research Foundation
• NSFC Young Scientists Fund

Conference

KDD '22

Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions, 13%


Bibliometrics & Citations

Article Metrics

• Downloads (last 12 months): 109
• Downloads (last 6 weeks): 11

Reflects downloads up to 31 Dec 2024.

Cited By

• (2024) Novel Privacy Attacks and Defenses Against Neural Networks. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, pp. 5113-5115. DOI: 10.1145/3658644.3690863. Online publication date: 2-Dec-2024.
• (2024) Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model. IEEE Transactions on Information Forensics and Security, 19, pp. 3958-3973. DOI: 10.1109/TIFS.2024.3372815. Online publication date: 2024.
• (2024) Boosting Model Inversion Attacks With Adversarial Examples. IEEE Transactions on Dependable and Secure Computing, 21(3), pp. 1451-1468. DOI: 10.1109/TDSC.2023.3285015. Online publication date: May 2024.
• (2024) Model Inversion Robustness: Can Transfer Learning Help? 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12183-12193. DOI: 10.1109/CVPR52733.2024.01158. Online publication date: 16-Jun-2024.
• (2024) On the Vulnerability of Skip Connections to Model Inversion Attacks. Computer Vision – ECCV 2024, pp. 140-157. DOI: 10.1007/978-3-031-73004-7_9. Online publication date: 1-Nov-2024.
• (2024) Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures. Computer Vision – ECCV 2024, pp. 117-136. DOI: 10.1007/978-3-031-72989-8_7. Online publication date: 26-Oct-2024.
• (2023) Label-Only Model Inversion Attacks via Knowledge Transfer. Proceedings of the 37th International Conference on Neural Information Processing Systems, pp. 68895-68907. DOI: 10.5555/3666122.3669137. Online publication date: 10-Dec-2023.
• (2023) On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation. Proceedings of the 40th International Conference on Machine Learning, pp. 42843-42877. DOI: 10.5555/3618408.3620215. Online publication date: 23-Jul-2023.
• (2023) Diversity-Enhancing Generative Network for Few-Shot Hypothesis Adaptation. Proceedings of the 40th International Conference on Machine Learning, pp. 8260-8275. DOI: 10.5555/3618408.3618738. Online publication date: 23-Jul-2023.
• (2023) Quality-Agnostic Deepfake Detection with Intra-model Collaborative Learning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 22321-22332. DOI: 10.1109/ICCV51070.2023.02045. Online publication date: 1-Oct-2023.
