
Model Extraction Attack Without Natural Images

  • Conference paper
  • Published in: Applied Cryptography and Network Security Workshops (ACNS 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14587)


Abstract

A model extraction attack is one of the threats to machine learning as a service (MLaaS). The adversary's objective is to steal the ML model behind the MLaaS through its application programming interface (API). The adversary is motivated because the attack avoids the substantial cost of training deep neural networks (DNNs) and infringes on the competitive advantage of the service, so it is important to clarify what attacks are possible against these systems. Existing model extraction attacks face a trade-off between the domain knowledge required to assemble the extraction image set and query efficiency. This paper introduces a formula-driven model extraction attack that does not use natural images. Our extraction image sets consist of fractal images, which capture patterns commonly found in natural objects and scenes and are generated from mathematical formulas in fractal geometry. We expect these fractal image sets to reduce the cost of acquiring attack images while effectively extracting features from the target DNN model.
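To make the setting concrete, here is a minimal sketch of the formula-driven idea: query images are generated from random iterated function systems (IFS, the "chaos game" of fractal geometry) and the target model's soft labels are collected to build a training set for a substitute model. This is an illustration only; the IFS parameter ranges, the 32x32 resolution, the query budget, and the query_target() stub are assumptions made for this example rather than the authors' procedure (their implementation is linked in the notes below).

# Illustrative sketch (not the authors' code): fractal query images via a
# random iterated function system, plus a dummy stand-in for the MLaaS API.
import numpy as np

RNG = np.random.default_rng(0)
IMG_SIZE = 32          # assumed query-image resolution
N_TRANSFORMS = 4       # assumed number of affine maps per IFS
N_POINTS = 20_000      # chaos-game iterations per image


def random_ifs(n=N_TRANSFORMS):
    """Sample n random contractive affine maps (A, b)."""
    maps = []
    while len(maps) < n:
        A = RNG.uniform(-1.0, 1.0, size=(2, 2))
        if np.linalg.norm(A, 2) < 1.0:           # spectral norm < 1 => contraction
            b = RNG.uniform(-1.0, 1.0, size=2)
            maps.append((A, b))
    return maps


def render_fractal(maps, size=IMG_SIZE, n_points=N_POINTS):
    """Run the chaos game and rasterise the visited points into a binary image."""
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        A, b = maps[RNG.integers(len(maps))]     # pick one affine map at random
        x = A @ x + b
        pts[i] = x
    pts -= pts.min(axis=0)                       # normalise points into [0, 1]
    pts /= pts.max(axis=0) + 1e-8
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                               bins=size, range=[[0, 1], [0, 1]])
    return (img > 0).astype(np.float32)


def query_target(image):
    """Hypothetical stand-in for the victim's prediction API.

    A real attack would send `image` to the MLaaS endpoint and parse the
    returned class probabilities; this stub returns a random 10-class softmax.
    """
    logits = RNG.normal(size=10)
    return np.exp(logits) / np.exp(logits).sum()


# Build the attacker's (fractal image, soft label) dataset for the substitute model.
queries, labels = [], []
for _ in range(100):                             # assumed query budget for the demo
    img = render_fractal(random_ifs())
    queries.append(img)
    labels.append(query_target(img))
X, Y = np.stack(queries), np.stack(labels)
print(X.shape, Y.shape)                          # (100, 32, 32) (100, 10)

In an actual attack, query_target would call the victim's prediction endpoint, and a substitute DNN would then be trained on (X, Y) to mimic the target's behaviour without ever using natural images.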


Notes

  1. Code is available at https://github.com/y0sh1d4/model_extraction_attack_without_natural_images.


Acknowledgement

This work was supported by JSPS Grant-in-Aid for Early-Career Scientists Grant Number 23K16910.

Author information


Correspondence to Kota Yoshida.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yoshida, K., Fujino, T. (2024). Model Extraction Attack Without Natural Images. In: Andreoni, M. (eds) Applied Cryptography and Network Security Workshops. ACNS 2024. Lecture Notes in Computer Science, vol 14587. Springer, Cham. https://doi.org/10.1007/978-3-031-61489-7_5


  • DOI: https://doi.org/10.1007/978-3-031-61489-7_5

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-61488-0

  • Online ISBN: 978-3-031-61489-7

  • eBook Packages: Computer Science, Computer Science (R0)
