
Hierarchical Machine Unlearning

  • Conference paper
Learning and Intelligent Optimization (LION 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14286)


Abstract

In recent years, deep neural networks have achieved tremendous success in industry and academia, particularly in visual recognition and natural language processing. While large-scale deep models deliver impressive performance, their massive data requirements pose a serious threat to data privacy. With the growing emphasis on data security, the study of data privacy leakage in machine learning, including machine unlearning, has become increasingly important. Among the many works on machine unlearning, one line of research speeds up retraining by dividing the training data into several disjoint fragments and training a submodel on each. When the influence of a particular data point must be removed from the model, the model owner retrains only the submodel containing that point. Nevertheless, current machine unlearning methods are still not widely used, owing to limited model applicability, usage overhead, and similar concerns. Motivated by this situation, we propose a novel hierarchical learning method, Hierarchical Machine Unlearning (HMU), which exploits a known distribution of unlearning requests. Compared with previous methods, ours is more efficient: using the known distribution, the data can be partitioned and sorted, reducing the overhead of the deletion process. We train the model on the hierarchical data set obtained after partitioning, which further reduces the loss of prediction accuracy incurred by existing methods, and we combine this with incremental learning to speed up training. Finally, the effectiveness and efficiency of the proposed method are verified by multiple experiments.
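The shard-based retraining scheme the abstract describes (train submodels on disjoint data fragments; on a deletion request, retrain only the submodel containing the deleted point) can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the `ShardedUnlearner` class, the round-robin shard assignment, and the per-shard class-centroid "model" are all assumptions made for demonstration.

```python
# Minimal sketch of shard-based machine unlearning: each data point lives in
# exactly one shard, each shard has its own tiny model, and deleting a point
# retrains only that one shard instead of the full dataset.
from collections import Counter, defaultdict

class ShardedUnlearner:
    def __init__(self, data, num_shards):
        # data: list of (feature_vector, label); points go to shards
        # round-robin, keyed by their index so they can be deleted later.
        self.shards = [dict() for _ in range(num_shards)]
        for i, (x, y) in enumerate(data):
            self.shards[i % num_shards][i] = (tuple(x), y)
        self.models = [self._train(s) for s in self.shards]

    def _train(self, shard):
        # "Training" here = per-class centroid over the shard's points.
        sums, counts = defaultdict(lambda: None), Counter()
        for x, y in shard.values():
            counts[y] += 1
            sums[y] = x if sums[y] is None else tuple(a + b for a, b in zip(sums[y], x))
        return {y: tuple(v / counts[y] for v in s) for y, s in sums.items()}

    def unlearn(self, point_id):
        # Remove one point, then retrain only its shard -- the key saving
        # over retraining on the entire dataset.
        shard = self.shards[point_id % len(self.shards)]
        del shard[point_id]
        self.models[point_id % len(self.shards)] = self._train(shard)

    def predict(self, x):
        # Aggregate the shard models by majority vote over nearest centroids.
        votes = Counter()
        for model in self.models:
            if model:
                votes[min(model, key=lambda y: sum((a - b) ** 2
                                                   for a, b in zip(model[y], x)))] += 1
        return votes.most_common(1)[0][0]
```

HMU's refinement over this plain scheme, per the abstract, is to use the known distribution of unlearning requests when partitioning and ordering the data, so that frequently deleted points cluster in shards that are cheap to retrain.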

Supported by the science and technology project of the Big Data Center of State Grid Corporation of China, "Research on Trusted Data Destruction Technology for Intelligent Analysis" (No. SGSJ0000AZJS2100107), and by the National Key Research and Development Program of China under Grant 2020YFB1005900.



Author information


Corresponding author

Correspondence to Kang Liu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhu, H., Xia, Y., Li, Y., Li, W., Liu, K., Gao, X. (2023). Hierarchical Machine Unlearning. In: Sellmann, M., Tierney, K. (eds) Learning and Intelligent Optimization. LION 2023. Lecture Notes in Computer Science, vol 14286. Springer, Cham. https://doi.org/10.1007/978-3-031-44505-7_7

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-44505-7_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44504-0

  • Online ISBN: 978-3-031-44505-7

  • eBook Packages: Computer Science, Computer Science (R0)
