
Random Sparsity Defense Against Adversarial Attack

  • Conference paper
  • First Online:
PRICAI 2021: Trends in Artificial Intelligence (PRICAI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13032)

Included in the following conference series:

  • PRICAI: Pacific Rim International Conference on Artificial Intelligence

Abstract

In recent years, as deep learning has developed and its applications have expanded, deep learning systems have become targets for attackers. Researchers have found that adversarial perturbations are highly effective at attacking deep neural networks. Adversarial examples, crafted by adding tiny, imperceptible perturbations to an image's pixels, can make a classifier output wrong results with high confidence, demonstrating the vulnerability of deep neural networks. In this paper, we propose a method to defend against adversarial attacks by reducing the output distortion the attack causes. The proposed method, called random sparsity defense, is a combination of whitening and random sparsity: it increases the randomness in a sparsity-based defense and weakens the adverse effects of that randomness through whitening. Experimental results on the MNIST dataset show that the proposed random sparsity defense resists attacks well and has a good ability to correct classification results.
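
The abstract outlines the pipeline but gives no implementation details, so the following is a minimal sketch of the general idea under stated assumptions, not the paper's exact algorithm: the input is whitened (here via PCA fitted on clean data), a random subset of its largest whitened coefficients is retained (the random sparsity step), and the result is mapped back to the input space before classification. The basis choice, sparsity level `k`, and `keep_ratio` are illustrative assumptions.

```python
import numpy as np

def fit_pca(X):
    """Fit PCA statistics on clean training data X of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return mean, eigvals, eigvecs

def random_sparsity_defense(x, mean, eigvals, eigvecs,
                            k=64, keep_ratio=0.8, eps=1e-5, rng=None):
    """Whiten x, keep a random subset of its k largest-magnitude coefficients,
    and map the result back to the input space for the downstream classifier."""
    rng = np.random.default_rng() if rng is None else rng
    # Whitening: rotate into the PCA eigenbasis and rescale to unit variance.
    z = (x - mean) @ eigvecs / np.sqrt(eigvals + eps)
    # Random sparsity: among the k largest coefficients, keep only a random
    # subset, so an attacker cannot predict which components survive.
    top = np.argsort(np.abs(z))[-k:]
    kept = rng.choice(top, size=int(keep_ratio * k), replace=False)
    z_sparse = np.zeros_like(z)
    z_sparse[kept] = z[kept]
    # Invert the whitening to return to pixel space.
    return z_sparse * np.sqrt(eigvals + eps) @ eigvecs.T + mean

# Usage on MNIST-shaped inputs: defend each flattened 28x28 image before
# passing it to the classifier (random data here as a stand-in for MNIST).
X_train = np.random.rand(1000, 784)
mean, eigvals, eigvecs = fit_pca(X_train)
x_adv = np.random.rand(784)
x_defended = random_sparsity_defense(x_adv, mean, eigvals, eigvecs)
```

Consistent with the abstract, randomizing the retained support makes the preprocessing harder for a gradient-based attacker to anticipate, while whitening decorrelates the coefficients so that discarding some of them distorts clean inputs less; the motivation is the authors', but the concrete transform shown here is an assumption.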



Acknowledgments

This work is supported by the Shanghai Municipal Natural Science Foundation (Grant Nos. 21ZR1401200 and 18ZR1401200) and the Special Fund for Innovation and Development of Shanghai Industrial Internet (Grant No. XX-GYHL-01-19-2527).

Author information

Corresponding author

Correspondence to Ting Lu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, N., et al. (2021). Random Sparsity Defense Against Adversarial Attack. In: Pham, D.N., Theeramunkong, T., Governatori, G., Liu, F. (eds) PRICAI 2021: Trends in Artificial Intelligence. PRICAI 2021. Lecture Notes in Computer Science, vol 13032. Springer, Cham. https://doi.org/10.1007/978-3-030-89363-7_45

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-89363-7_45

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89362-0

  • Online ISBN: 978-3-030-89363-7

  • eBook Packages: Computer Science, Computer Science (R0)
