DOI: 10.1145/3386263.3407601

On Configurable Defense against Adversarial Example Attacks

Published: 07 September 2020

Abstract

Machine learning systems based on deep neural networks (DNNs) have gained mainstream adoption in many applications. Recently, however, DNNs have been shown to be vulnerable to adversarial example attacks that apply slight perturbations to their inputs. Existing defense mechanisms against such attacks try to improve the overall robustness of the system, but they do not differentiate among targeted attacks, even though the impacts of those attacks may vary significantly. To tackle this problem, we propose a novel configurable defense mechanism that can flexibly tune the robustness of the system against different targeted attacks to satisfy application requirements. This is achieved by refining the DNN loss function with an attack-sensitive matrix that represents the impacts of different targeted attacks. Experimental results on the CIFAR-10 dataset demonstrate the efficacy of the proposed solution.
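
A minimal sketch of the idea, assuming PyTorch: the loss below augments standard cross-entropy with a penalty weighted by an attack-sensitivity matrix. The function attack_sensitive_loss, the matrix S, and the trade-off weight lam are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def attack_sensitive_loss(logits, targets, S, lam=1.0):
    # logits:  (batch, num_classes) raw model outputs
    # targets: (batch,) integer ground-truth labels
    # S:       (num_classes, num_classes) nonnegative matrix; S[i, j] is an
    #          assumed measure of how harmful it is for an input of true
    #          class i to be pushed toward target class j
    # lam:     assumed trade-off between plain accuracy and robustness
    ce = F.cross_entropy(logits, targets)       # standard accuracy term
    probs = F.softmax(logits, dim=1)            # predicted class probabilities
    # Expected misclassification cost: weight the probability mass assigned
    # to each wrong class by its sensitivity entry for the true class.
    penalty = (S[targets] * probs).sum(dim=1).mean()
    return ce + lam * penalty

# Usage sketch with CIFAR-10-shaped outputs: make one targeted confusion
# (true class 3 attacked toward class 5) ten times costlier than the rest.
S = torch.ones(10, 10)
S.fill_diagonal_(0)                             # correct predictions cost nothing
S[3, 5] = 10.0
loss = attack_sensitive_loss(torch.randn(8, 10),
                             torch.randint(0, 10, (8,)), S, lam=0.5)

Tuning individual entries of S is what would make such a defense configurable: raising S[i, j] pushes training to suppress the probability assigned to target class j on inputs of class i, hardening exactly the targeted attacks an application cares about most.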

Supplementary Material

MP4 File (3386263.3407601.mp4): presentation video




Published In

GLSVLSI '20: Proceedings of the 2020 Great Lakes Symposium on VLSI
September 2020
597 pages
ISBN:9781450379441
DOI:10.1145/3386263
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial example attack
  2. configurable defense
  3. deep neural networks
  4. machine learning

Qualifiers

  • Research-article

Conference

GLSVLSI '20: Great Lakes Symposium on VLSI 2020
September 7-9, 2020
Virtual Event, China

Acceptance Rates

Overall acceptance rate: 312 of 1,156 submissions (27%)

