Abstract
Model extraction attacks are a major threat to machine learning as a service (MLaaS). The adversary's objective is to steal the ML model offered by the MLaaS provider through its application programming interfaces (APIs). The attack is attractive because it avoids the substantial cost of training deep neural networks (DNNs) and infringes on the competitive features of the service, so it is important to clarify what attacks are possible against these systems. Existing model extraction attacks face a trade-off between the domain knowledge embedded in the extraction image set and query efficiency. This paper introduces a formula-driven model extraction attack that does not use natural images. Our extraction image sets consist of fractal images, which are generated from mathematical formulas in fractal geometry and effectively represent patterns found in natural objects and scenes around us. We expect the fractal image sets to reduce the cost of acquiring attack images while effectively extracting features from the target DNN model.
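To make the idea of formula-driven image generation concrete, the following is a minimal sketch, not the authors' implementation, of rendering a fractal image with a randomly sampled iterated function system (IFS) via the chaos game. All names and parameter choices (number of affine maps, iteration count, image size) are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch: generate a fractal image from a random iterated function
# system (IFS) using the chaos game. Parameters are illustrative assumptions.
import numpy as np

def random_ifs(n_maps=4, rng=None):
    """Sample n_maps random affine maps (A, b): x -> A @ x + b."""
    rng = np.random.default_rng() if rng is None else rng
    # Coefficients drawn from [-1, 1]; sufficiently contractive maps yield bounded attractors.
    return [(rng.uniform(-1, 1, size=(2, 2)), rng.uniform(-1, 1, size=2))
            for _ in range(n_maps)]

def render_fractal(ifs, size=64, n_points=100_000, rng=None):
    """Iterate the IFS from the origin and rasterize visited points into a size x size image."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        A, b = ifs[rng.integers(len(ifs))]   # pick one affine map at random
        x = A @ x + b
        pts[i] = x
    pts = pts[100:]                          # discard burn-in before the attractor is reached
    span = pts.max(axis=0) - pts.min(axis=0)
    span[span == 0] = 1.0                    # avoid division by zero for degenerate maps
    norm = (pts - pts.min(axis=0)) / span    # normalize coordinates to [0, 1]
    idx = np.clip((norm * (size - 1)).astype(int), 0, size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 255          # mark visited pixels
    return img

if __name__ == "__main__":
    img = render_fractal(random_ifs())
    print(img.shape, img.dtype)              # (64, 64) uint8
```

In an extraction setting, a large pool of such images would be generated offline, sent as queries to the target API, and the returned outputs used as labels to train a substitute model; the sketch above only covers image generation.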
Notes
1. Code is available at https://github.com/y0sh1d4/model_extraction_attack_without_natural_images.
Acknowledgement
This work was supported by JSPS Grant-in-Aid for Early-Career Scientists Grant Number 23K16910.