
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning

Published: 17 October 2021

Abstract

High-level representation-guided pixel denoising and adversarial training are independent solutions for enhancing the robustness of CNNs against adversarial attacks, operating by pre-processing input data and re-training models, respectively. Recently, adversarial training techniques have been widely studied and improved, while pixel denoising-based methods have attracted less attention. However, it remains open whether a more advanced pixel denoising-based method exists and whether combining the two solutions benefits both. To this end, we first comprehensively investigate two kinds of pixel denoising methods for adversarial robustness enhancement (i.e., the existing additive-based method and the unexplored filtering-based method) under image-level and semantic-level loss functions, respectively, showing that pixel-wise filtering obtains much higher image quality (e.g., higher PSNR) as well as higher robustness (e.g., higher accuracy on adversarial examples) than the existing pixel-wise additive-based method. However, we also observe that the robustness of the filtering-based method depends on the perturbation amplitude of the adversarial examples used for training. To address this problem, we propose predictive perturbation-aware pixel-wise filtering, in which dual-perturbation filtering and an uncertainty-aware fusion module are designed to automatically perceive the perturbation amplitude during training and testing. We term the method AdvFilter. Moreover, we combine adversarial pixel denoising with three adversarial training-based methods, suggesting that considering data and models jointly achieves more robust CNNs. Experiments conducted on the NeurIPS-2017 DEV, SVHN, and CIFAR10 datasets demonstrate advantages in enhancing CNNs' robustness, with high generalization to different models and noise levels.
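The two denoising families the abstract contrasts, and the uncertainty-aware fusion it mentions, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's exact formulation: the function names are ours, and inverse-variance weighting is an assumed instantiation of the uncertainty-aware fusion module.

```python
import numpy as np

def additive_denoise(image, predicted_noise):
    # Additive-based denoising: a network predicts the perturbation map
    # directly, and it is subtracted from the input (clipped to [0, 1]).
    return np.clip(image - predicted_noise, 0.0, 1.0)

def pixelwise_filter(image, kernels):
    # Filtering-based denoising: a network predicts a KxK kernel per pixel;
    # each output pixel is a weighted sum over its KxK neighborhood.
    # kernels has shape (H, W, K, K) for an (H, W) grayscale image.
    h, w = image.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k]
            out[y, x] = (patch * kernels[y, x]).sum()
    return out

def uncertainty_fuse(out_a, out_b, log_var_a, log_var_b):
    # Assumed fusion rule: weight the two filtered outputs by the inverse
    # of their predicted per-pixel variance (lower uncertainty -> higher weight).
    w_a = np.exp(-log_var_a)
    w_b = np.exp(-log_var_b)
    return (w_a * out_a + w_b * out_b) / (w_a + w_b)
```

Note that with an identity kernel (all weight on the center tap), `pixelwise_filter` reproduces the input exactly, which illustrates why the filtering formulation can preserve image quality better than subtracting an imperfect noise estimate.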

Supplementary Material

MP4 File (MM21-mfp0029.mp4)
Presentation video of "AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning"



      Published In

      MM '21: Proceedings of the 29th ACM International Conference on Multimedia
      October 2021
      5796 pages
      ISBN:9781450386517
      DOI:10.1145/3474085
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. adversarial defense
      2. adversarial training
      3. image denoising

      Qualifiers

      • Research-article

      Funding Sources

      • National Key Research and Development Program
      • National Natural Science Foundation of China
      • National Research Foundation Singapore

      Conference

MM '21: ACM Multimedia Conference
      October 20 - 24, 2021
      Virtual Event, China

      Acceptance Rates

      Overall Acceptance Rate 2,145 of 8,556 submissions, 25%

      Bibliometrics

      Article Metrics

      • Downloads (Last 12 months)25
      • Downloads (Last 6 weeks)2
      Reflects downloads up to 11 Dec 2024

      Cited By

      • (2024) Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning. Proceedings of the 32nd ACM International Conference on Multimedia, pp. 8024-8033. DOI: 10.1145/3664647.3680708. Online publication date: 28-Oct-2024.
      • (2024) Texture Re-Scalable Universal Adversarial Perturbation. IEEE Transactions on Information Forensics and Security, Vol. 19, pp. 8291-8305. DOI: 10.1109/TIFS.2024.3416030. Online publication date: 2024.
      • (2024) Fast Propagation Is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks. IEEE Transactions on Information Forensics and Security, Vol. 19, pp. 4547-4559. DOI: 10.1109/TIFS.2024.3377004. Online publication date: 2024.
      • (2024) EfficientDeRain+: Learning Uncertainty-Aware Filtering via RainMix Augmentation for High-Efficiency Deraining. International Journal of Computer Vision. DOI: 10.1007/s11263-024-02281-7. Online publication date: 4-Nov-2024.
      • (2024) Boosting Transferability in Vision-Language Attacks via Diversification Along the Intersection Region of Adversarial Trajectory. Computer Vision - ECCV 2024, pp. 442-460. DOI: 10.1007/978-3-031-72998-0_25. Online publication date: 30-Sep-2024.
      • (2023) Model-Contrastive Learning for Backdoor Elimination. Proceedings of the 31st ACM International Conference on Multimedia, pp. 8869-8880. DOI: 10.1145/3581783.3612415. Online publication date: 26-Oct-2023.
      • (2023) ALA: Naturalness-aware Adversarial Lightness Attack. Proceedings of the 31st ACM International Conference on Multimedia, pp. 2418-2426. DOI: 10.1145/3581783.3611914. Online publication date: 26-Oct-2023.
      • (2022) Saliency Map-Based Local White-Box Adversarial Attack Against Deep Neural Networks. Artificial Intelligence, pp. 3-14. DOI: 10.1007/978-3-031-20500-2_1. Online publication date: 27-Aug-2022.
