
Benchmarking Robustness Beyond \(l_p\) Norm Adversaries

  • Conference paper
  • First Online:
Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Abstract

Recently, there has been a significant boom in the generation of a variety of malicious examples, ranging from adversarial perturbations to common noises to natural adversaries. These malicious examples are highly effective in fooling almost ‘any’ deep neural network. Therefore, to protect the integrity of deep networks, research efforts have begun on building defenses against each individual category of anomaly. The prime reason for such individual handling of noises is the lack of a single dataset that can be used to benchmark against multiple types of malicious examples and, in turn, help in building a truly ‘universal’ defense algorithm. This research work is an aid towards that goal: it creates a dataset termed “wide angle anomalies” containing 19 different malicious categories. On top of that, an extensive experimental evaluation is performed on the proposed dataset using popular deep neural networks to detect these wide-angle anomalies. The experiments help in identifying possible relationships between different anomalies and how easy or difficult it is to detect an anomaly when it is seen or unseen during training and testing. We assert that the experiments with seen- and unseen-category attack training-testing reveal several surprising and interesting outcomes, including possible connections among adversaries. We believe this can help in building a universal defense algorithm.
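
The abstract's seen/unseen evaluation can be pictured as a cross-category detection matrix. The following is a minimal sketch of that protocol only; `load_images` is a hypothetical loader for the “wide angle anomalies” dataset, `ANOMALY_CATEGORIES` lists just an illustrative subset of the 19 categories, and the small CNN detector is a stand-in rather than the paper's actual model.

```python
# Minimal sketch of the seen/unseen detection protocol described in the abstract.
# `load_images` is a hypothetical loader and ANOMALY_CATEGORIES is an illustrative
# subset of the 19 categories; neither is part of any released API.
import numpy as np
import tensorflow as tf

ANOMALY_CATEGORIES = ["adversarial_perturbation", "common_noise", "natural_adversary"]

def build_detector(input_shape=(224, 224, 3)):
    # Small CNN detector with a 2-way (clean vs. malicious) softmax head.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

def seen_unseen_accuracy(train_cat, test_cat):
    """Train on clean + one anomaly category, test on clean + another (or the same) category."""
    x_tr, y_tr = load_images(categories=["clean", train_cat])   # hypothetical loader
    x_te, y_te = load_images(categories=["clean", test_cat])
    model = build_detector()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=10, batch_size=64, verbose=0)
    return model.evaluate(x_te, y_te, verbose=0)[1]

# Diagonal entries correspond to seen attacks; off-diagonal entries probe
# generalization of a detector to attack categories unseen during training.
scores = np.array([[seen_unseen_accuracy(tr, te) for te in ANOMALY_CATEGORIES]
                   for tr in ANOMALY_CATEGORIES])
```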


Notes

  1. While the results are reported using VGG, similar trends (within ±1–12%, as shown in Table 5) are observed across a wide range of backbone networks, including Xception [11], InceptionV3 [49], DenseNet121 [26], and MobileNet [25]. However, VGG outperforms each of these networks in the majority of cases and is therefore chosen for the detailed study in the paper.
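
For context, the sketch below shows one plausible way to instantiate such backbone-based detectors with the architectures named in this footnote. It is an illustrative sketch only: the choice of VGG16 as the VGG variant, the ImageNet-pretrained weights, the input size, and the optimizer are assumptions, not the paper's exact training configuration.

```python
# Illustrative sketch of a backbone-based anomaly detector using the architectures
# compared in the footnote (VGG, Xception, InceptionV3, DenseNet121, MobileNet).
# Pretrained weights, input size, and optimizer are assumptions.
import tensorflow as tf

BACKBONES = {
    "vgg16": tf.keras.applications.VGG16,
    "xception": tf.keras.applications.Xception,
    "inceptionv3": tf.keras.applications.InceptionV3,
    "densenet121": tf.keras.applications.DenseNet121,
    "mobilenet": tf.keras.applications.MobileNet,
}

def build_backbone_detector(name="vgg16", input_shape=(224, 224, 3)):
    # Pretrained convolutional backbone followed by a 2-way (clean vs. malicious) head.
    base = BACKBONES[name](weights="imagenet", include_top=False, input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(2, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# The same head can be trained on each anomaly category in turn, enabling the
# kind of per-backbone comparison summarized in Table 5.
detector = build_backbone_detector("vgg16")
```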

References

  1. Agarwal, A., Goswami, G., Vatsa, M., Singh, R., Ratha, N.K.: Damad: database, attack, and model agnostic adversarial perturbation detector. IEEE Trans. Neural Netw. Learn. Syst. 33, 1–13 (2021). https://doi.org/10.1109/TNNLS.2021.3051529

  2. Agarwal, A., Ratha, N., Vatsa, M., Singh, R.: Exploring robustness connection between artificial and natural adversarial examples. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 179–186 (2022)

  3. Agarwal, A., Ratha, N.K.: Black-box adversarial entry in finance through credit card fraud detection. In: CIKM Workshops (2021)

  4. Agarwal, A., Ratha, N.K.: On the robustness of stock market regressors. In: ECML-PKDD Workshops (2022)

  5. Agarwal, A., Singh, R., Vatsa, M., Ratha, N.: Image transformation-based defense against adversarial perturbation on deep learning models. IEEE Trans. Depend. Secure Comput. 18(5), 2106–2121 (2021). https://doi.org/10.1109/TDSC.2020.3027183

  6. Agarwal, A., Vatsa, M., Singh, R., Ratha, N.: Cognitive data augmentation for adversarial defense via pixel masking. Pattern Recogn. Lett. 146, 244–251 (2021)

  7. Agarwal, A., Vatsa, M., Singh, R., Ratha, N.: Intelligent and adaptive mixup technique for adversarial robustness. In: 2021 IEEE International Conference on Image Processing (ICIP), pp. 824–828 (2021). https://doi.org/10.1109/ICIP42928.2021.9506180

  8. Agarwal, A., Vatsa, M., Singh, R., Ratha, N.K.: Noise is inside me! generating adversarial perturbations with noise derived from natural filters. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 3354–3363 (2020)

  9. Andriushchenko, M., Flammarion, N.: Understanding and improving fast adversarial training. Adv. Neural Inf. Process. Syst. 33, 16048–16059 (2020)

  10. Chhabra, S., Agarwal, A., Singh, R., Vatsa, M.: Attack agnostic adversarial defense via visual imperceptible bound. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 5302–5309 (2021). https://doi.org/10.1109/ICPR48806.2021.9412663

  11. Chollet, F.: Xception: deep learning with depthwise separable convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258 (2017)

  12. Chun, S., Oh, S.J., Yun, S., Han, D., Choe, J., Yoo, Y.: An empirical evaluation on robustness and uncertainty of regularization methods. arXiv preprint arXiv:2003.03879 (2020)

  13. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE (2009)

  14. Dodge, S., Karam, L.: Quality resilient deep neural networks. arXiv preprint arXiv:1703.08119 (2017)

  15. Esmaeilpour, M., Cardinal, P., Koerich, A.L.: Cyclic defense gan against speech adversarial attacks. IEEE Signal Process. Lett. 28, 1769–1773 (2021)

  16. Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A., Brendel, W.: Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231 (2019)

  17. Geirhos, R., Temme, C.R., Rauber, J., Schütt, H.H., Bethge, M., Wichmann, F.A.: Generalisation in humans and deep neural networks. Adv. Neural Inf. Process. Syst. 31, 1–13 (2018)

  18. Goel, A., Singh, A., Agarwal, A., Vatsa, M., Singh, R.: Smartbox: benchmarking adversarial detection and mitigation algorithms for face recognition. In: 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–7. IEEE (2018)

  19. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  20. Goswami, G., Agarwal, A., Ratha, N., Singh, R., Vatsa, M.: Detecting and mitigating adversarial perturbations for robust face recognition. Int. J. Comput. Vision 127(6), 719–742 (2019)

  21. Hendrycks, D., Dietterich, T.: Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261 (2019)

  22. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., Song, D.: Natural adversarial examples. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15262–15271 (2021)

  23. Hermann, K., Chen, T., Kornblith, S.: The origins and prevalence of texture bias in convolutional neural networks. Adv. Neural Inf. Process. Syst. 33, 19000–19015 (2020)

  24. Hosseini, H., Poovendran, R.: Semantic adversarial examples. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1614–1619 (2018)

  25. Howard, A.G., et al.: Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)

  26. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)

  27. Kamann, C., Rother, C.: Benchmarking the robustness of semantic segmentation models with respect to common corruptions. Int. J. Comput. Vision 129(2), 462–483 (2021)

  28. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: Artificial Intelligence Safety and Security, pp. 99–112. Chapman and Hall/CRC (2018)

  29. Landau, B., Smith, L.B., Jones, S.S.: The importance of shape in early lexical learning. Cogn. Dev. 3(3), 299–321 (1988)

  30. Li, F., Liu, X., Zhang, X., Li, Q., Sun, K., Li, K.: Detecting localized adversarial examples: a generic approach using critical region analysis. In: IEEE INFOCOM 2021-IEEE Conference on Computer Communications, pp. 1–10. IEEE (2021)

  31. Li, X., Li, J., Dai, T., Shi, J., Zhu, J., Hu, X.: Rethinking natural adversarial examples for classification models. arXiv preprint arXiv:2102.11731 (2021)

  32. Ma, X., et al.: Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recogn. 110, 107332 (2021)

  33. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)

  34. Mikołajczyk, A., Grochowski, M.: Data augmentation for improving deep learning in image classification problem. In: 2018 International Interdisciplinary PhD Workshop (IIPhDW), pp. 117–122. IEEE (2018)

  35. Mintun, E., Kirillov, A., Xie, S.: On interaction between augmentations and corruptions in natural corruption robustness. Adv. Neural Inf. Process. Syst. 34, 1–13 (2021)

  36. Modas, A., Rade, R., Ortiz-Jiménez, G., Moosavi-Dezfooli, S.M., Frossard, P.: Prime: a few primitives can boost robustness to common corruptions. arXiv preprint arXiv:2112.13547 (2021)

  37. Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765–1773 (2017)

  38. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582 (2016)

  39. Morrison, K., Gilby, B., Lipchak, C., Mattioli, A., Kovashka, A.: Exploring corruption robustness: inductive biases in vision transformers and mlp-mixers. arXiv preprint arXiv:2106.13122 (2021)

  40. Pedraza, A., Deniz, O., Bueno, G.: Really natural adversarial examples. Int. J. Mach. Learn. Cybern. 13, 1–13 (2021)

  41. Pei, Y., Huang, Y., Zou, Q., Zhang, X., Wang, S.: Effects of image degradation and degradation removal to cnn-based image classification. IEEE Trans. Pattern Anal. Mach. Intell. 43(4), 1239–1253 (2019)

  42. Raghunathan, A., Xie, S.M., Yang, F., Duchi, J.C., Liang, P.: Adversarial training can hurt generalization. arXiv preprint arXiv:1906.06032 (2019)

  43. Saikia, T., Schmid, C., Brox, T.: Improving robustness against common corruptions with frequency biased models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10211–10220 (2021)

  44. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-gan: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605 (2018)

  45. Schneider, S., Rusak, E., Eck, L., Bringmann, O., Brendel, W., Bethge, M.: Improving robustness against common corruptions by covariate shift adaptation. Adv. Neural Inf. Process. Syst. 33, 11539–11551 (2020)

  46. Shafahi, A., et al.: Adversarial training for free! Adv. Neural Inf. Process. Syst. 32 (2019)

  47. Shafahi, A., Najibi, M., Xu, Z., Dickerson, J., Davis, L.S., Goldstein, T.: Universal adversarial training. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5636–5643 (2020)

  48. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  49. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016)

  50. Taheri, H., Pedarsani, R., Thrampoulidis, C.: Asymptotic behavior of adversarial training in binary classification. arXiv preprint arXiv:2010.13275 (2020)

  51. Tramer, F.: Detecting adversarial examples is (nearly) as hard as classifying them. arXiv preprint arXiv:2107.11630 (2021)

  52. Wang, J., et al.: Smsnet: a new deep convolutional neural network model for adversarial example detection. IEEE Trans. Multimedia 24, 230–244 (2021)

  53. Xue, M., Yuan, C., He, C., Wang, J., Liu, W.: Naturalae: natural and robust physical adversarial examples for object detectors. J. Inf. Secur. Appl. 57, 102694 (2021)

  54. Zhang, H., Chen, H., Song, Z., Boning, D., Dhillon, I.S., Hsieh, C.J.: The limitations of adversarial training and the blind-spot attack. arXiv preprint arXiv:1901.04684 (2019)

Author information

Corresponding author

Correspondence to Akshay Agarwal.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Agarwal, A., Ratha, N., Vatsa, M., Singh, R. (2023). Benchmarking Robustness Beyond \(l_p\) Norm Adversaries. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13801. Springer, Cham. https://doi.org/10.1007/978-3-031-25056-9_23

  • DOI: https://doi.org/10.1007/978-3-031-25056-9_23

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25055-2

  • Online ISBN: 978-3-031-25056-9

  • eBook Packages: Computer Science, Computer Science (R0)
