Abstract
As deep learning has matured and its applications have expanded, deep neural networks have become targets of attackers. Researchers have found that adversarial perturbations are highly effective at attacking deep neural networks: adversarial examples, crafted by adding tiny, imperceptible perturbations to image pixels, can make a classifier output wrong results with high confidence, demonstrating the vulnerability of deep neural networks. In this paper, we propose a method to defend against adversarial attacks by reducing the output distortion they cause. The proposed method, called random sparsity defense, combines whitening with random sparsity: it increases the randomness of a sparsity-based defense and uses whitening to weaken the adverse side effects of that randomness. Experimental results on the MNIST dataset show that the proposed random sparsity defense resists attacks well and largely restores correct classification results.
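The full method is given in the body of the paper, but the pipeline suggested by the abstract (whiten the input, then randomly sparsify its coefficients) can be illustrated in a few lines. The following is a minimal sketch only, assuming PCA whitening and a magnitude-based random support rule; the helper names (pca_whiten, random_sparsify) and parameters (keep_frac, pool_frac) are illustrative and are not taken from the paper.

```python
import numpy as np

def pca_whiten(X, eps=1e-5):
    """PCA-whiten flattened images: zero mean, (near-)identity covariance."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs / np.sqrt(eigvals + eps)   # whitening transform (per-column scaling)
    return Xc @ W, mean, W

def random_sparsify(z, keep_frac=0.1, pool_frac=0.2, rng=None):
    """Keep a random subset of the largest-magnitude coefficients.

    Instead of deterministically keeping the top-k coefficients (a rule an
    attacker can anticipate), draw the retained support at random from a
    larger pool of high-magnitude candidates.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = z.size
    k = int(keep_frac * n)                  # coefficients actually kept
    pool = int(pool_frac * n)               # candidate pool of large coefficients
    candidates = np.argsort(np.abs(z))[-pool:]
    support = rng.choice(candidates, size=k, replace=False)
    out = np.zeros_like(z)
    out[support] = z[support]               # the sparsified vector would be
    return out                              # mapped back before classification

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 64))          # stand-in for a flattened image batch
    Z, mean, W = pca_whiten(X)
    z_def = random_sparsify(Z[0], rng=rng)  # defended coefficient vector
```

At inference time an input would be whitened, randomly sparsified, mapped back, and then classified. The intuition suggested by the abstract is that randomizing the retained support makes the front end harder for an attacker to anticipate, while whitening decorrelates the coefficients so that dropping a random subset distorts the clean signal less.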
Acknowledgments
This work is supported by the Shanghai Municipal Natural Science Foundation (Grant Nos. 21ZR1401200 and 18ZR1401200) and the Special Fund for Innovation and Development of Shanghai Industrial Internet (Grant No. XX-GYHL-01-19-2527).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Hu, N. et al. (2021). Random Sparsity Defense Against Adversarial Attack. In: Pham, D.N., Theeramunkong, T., Governatori, G., Liu, F. (eds.) PRICAI 2021: Trends in Artificial Intelligence. PRICAI 2021. Lecture Notes in Computer Science, vol. 13032. Springer, Cham. https://doi.org/10.1007/978-3-030-89363-7_45
DOI: https://doi.org/10.1007/978-3-030-89363-7_45
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89362-0
Online ISBN: 978-3-030-89363-7
eBook Packages: Computer Science, Computer Science (R0)