DOI: 10.1145/3475724.3483610
research-article

Frequency Centric Defense Mechanisms against Adversarial Examples

Published: 22 October 2021

Abstract

An adversarial example (AE) aims to fool a convolutional neural network (CNN) by introducing small perturbations into the input image. The proposed work uses the magnitude and phase of the Fourier spectrum and the entropy of the image to defend against AEs. We demonstrate the defense in two ways: by training an adversarial detector and by denoising the adversarial effect. Experiments were conducted on the low-resolution CIFAR-10 and high-resolution ImageNet datasets. The adversarial detector achieves 99% accuracy for FGSM and PGD attacks on the CIFAR-10 dataset. However, detection accuracy falls to 50% for the more sophisticated DeepFool and Carlini & Wagner attacks on ImageNet. We overcome this limitation by using an autoencoder and show that 70% of AEs are correctly classified after denoising.
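The abstract describes two frequency-centric components: handcrafted features built from the Fourier magnitude, the Fourier phase, and the image entropy that feed an adversarial detector, and an autoencoder that denoises the adversarial perturbation. As a rough illustration of the detection idea only (not the authors' exact pipeline), the sketch below extracts such features with NumPy and fits a simple binary detector; the function names, the logistic-regression classifier, and the 256-bin entropy estimate are assumptions made here for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fourier_entropy_features(image):
    # image: 2-D grayscale array with values in [0, 1]
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # centred 2-D Fourier spectrum
    magnitude = np.log1p(np.abs(spectrum)).ravel()   # log-magnitude spectrum
    phase = np.angle(spectrum).ravel()               # phase spectrum

    # Shannon entropy of the pixel-intensity histogram (256 bins, assumed here)
    hist, _ = np.histogram(image, bins=256, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()

    return np.concatenate([magnitude, phase, [entropy]])

def train_detector(clean_images, adversarial_images):
    # clean_images / adversarial_images: arrays of shape (N, H, W); label clean as 0, adversarial as 1
    X = np.stack([fourier_entropy_features(img)
                  for img in np.concatenate([clean_images, adversarial_images])])
    y = np.concatenate([np.zeros(len(clean_images)), np.ones(len(adversarial_images))])
    return LogisticRegression(max_iter=1000).fit(X, y)

The paper's second defense instead passes the input through an autoencoder to suppress the adversarial perturbation before the CNN classifies it; the detector sketch above only covers the detection arm.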

    Published In

    ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia
    October 2021
    73 pages
    ISBN:9781450386722
    DOI:10.1145/3475724

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 October 2021

    Author Tags

    1. adversarial examples
    2. deep neural network
    3. entropy
    4. Fourier transform

    Qualifiers

    • Research-article

    Conference

    MM '21
    Sponsor:
    MM '21: ACM Multimedia Conference
    October 20, 2021
    Virtual Event, China

    Article Metrics

    • Downloads (last 12 months): 10
    • Downloads (last 6 weeks): 1

    Reflects downloads up to 12 Dec 2024

    Cited By

    • Metricizing the Euclidean Space Toward Desired Distance Relations in Point Clouds. IEEE Transactions on Information Forensics and Security, Vol. 19 (2024), 7304-7319. https://doi.org/10.1109/TIFS.2024.3420246
    • Boosting Black-Box Attack to Deep Neural Networks With Conditional Diffusion Models. IEEE Transactions on Information Forensics and Security, Vol. 19 (2024), 5207-5219. https://doi.org/10.1109/TIFS.2024.3390609
    • Defense Against Adversarial Examples Using Beneficial Noise. 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 1842-1848. https://doi.org/10.23919/APSIPAASC55919.2022.9979828
    • On Fooling Facial Recognition Systems using Adversarial Patches. 2022 International Joint Conference on Neural Networks (IJCNN), 1-8. https://doi.org/10.1109/IJCNN55064.2022.9892071
    • Intermediate-Layer Transferable Adversarial Attack With DNN Attention. IEEE Access, Vol. 10 (2022), 95451-95461. https://doi.org/10.1109/ACCESS.2022.3204696
