Abstract
In recent years, deep neural networks have achieved remarkable success in industry and academia, particularly in visual recognition and natural language processing. While large-scale deep models deliver impressive performance, their massive data requirements pose a serious threat to data privacy. With the growing emphasis on data security, research on data privacy in machine learning, such as machine unlearning, has become increasingly important. Many machine unlearning methods have been proposed; among them, some speed up retraining by dividing the training data into several disjoint fragments and training a sub-model on each. When the influence of a particular data point must be removed from the model, the model owner retrains only the sub-model trained on the fragment containing that point. Nevertheless, existing machine unlearning methods are still not widely adopted because of limited model applicability, high usage overhead, and similar concerns. Motivated by this, we propose a novel hierarchical learning method, Hierarchical Machine Unlearning (HMU), which exploits a known distribution of unlearning requests. Compared with previous methods, HMU is more efficient: using the known distribution, the data can be partitioned and sorted, reducing the overhead of data deletion. We further propose training the model on the hierarchically partitioned data set, which reduces the loss of prediction accuracy incurred by existing methods, and we combine it with incremental learning to accelerate training. Finally, the effectiveness and efficiency of the proposed method are verified through multiple experiments.
Supported by the science and technology project of the Big Data Center of State Grid Corporation of China, “Research on Trusted Data Destruction Technology for Intelligent Analysis” (No. SGSJ0000AZJS2100107), and the National Key Research and Development Program of China under Grant 2020YFB1005900.
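To make the shard-and-retrain idea described in the abstract concrete, the following minimal Python sketch (our illustration, not the paper's HMU algorithm) sorts data points by an assumed known deletion probability, cuts them into disjoint shards, trains one sub-model per shard, and retrains only the affected sub-model when a point is unlearned. The names ShardedUnlearner, train_submodel, and deletion_prob, as well as the majority-label trainer and majority-vote aggregation, are placeholders introduced here for illustration.

from dataclasses import dataclass, field
from typing import Callable, List, Sequence, Tuple

Point = Tuple[list, int]          # (features, label); placeholder data format
Model = Callable[[list], int]     # a trained sub-model maps features to a label

def train_submodel(shard: Sequence[Point]) -> Model:
    """Stand-in trainer: predicts the majority label of its shard."""
    labels = [y for _, y in shard]
    majority = max(set(labels), key=labels.count) if labels else 0
    return lambda x: majority

@dataclass
class ShardedUnlearner:
    shards: List[List[Point]] = field(default_factory=list)
    models: List[Model] = field(default_factory=list)

    def fit(self, data: List[Point], deletion_prob: Callable[[Point], float],
            shard_sizes: Sequence[int]) -> None:
        # Sort by descending deletion probability, then cut into disjoint shards;
        # smaller shards would typically be reserved for the most deletion-prone data.
        ordered = sorted(data, key=deletion_prob, reverse=True)
        self.shards, start = [], 0
        for size in shard_sizes:
            self.shards.append(ordered[start:start + size])
            start += size
        if start < len(ordered):              # any leftover points form a final shard
            self.shards.append(ordered[start:])
        self.models = [train_submodel(s) for s in self.shards]

    def unlearn(self, point: Point) -> None:
        # Remove the point and retrain only the sub-model whose shard contained it.
        for i, shard in enumerate(self.shards):
            if point in shard:
                shard.remove(point)
                self.models[i] = train_submodel(shard)
                return

    def predict(self, x: list) -> int:
        # Aggregate sub-models by majority vote (one of several possible choices).
        votes = [m(x) for m in self.models]
        return max(set(votes), key=votes.count)

In this sketch, choosing shard sizes such as [100, 200, 700] would place the 100 points most likely to be deleted in the smallest shard, so the most frequent unlearning requests trigger the cheapest retraining; HMU's actual partitioning, training, and incremental-learning procedure are described in the paper itself.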
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zhu, H., Xia, Y., Li, Y., Li, W., Liu, K., Gao, X. (2023). Hierarchical Machine Unlearning. In: Sellmann, M., Tierney, K. (eds) Learning and Intelligent Optimization. LION 2023. Lecture Notes in Computer Science, vol 14286. Springer, Cham. https://doi.org/10.1007/978-3-031-44505-7_7
Print ISBN: 978-3-031-44504-0
Online ISBN: 978-3-031-44505-7