Detect Adversarial Examples by Using Feature Autoencoder
Recommendations
Feature autoencoder for detecting adversarial examples
Abstract: Deep neural networks (DNNs) have gained widespread adoption in computer vision. Unfortunately, state-of-the-art DNNs are vulnerable to adversarial example (AE) attacks, where an adversary introduces imperceptible perturbations to a test example ...
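The abstract above describes adversarial examples as imperceptible perturbations added to a test input. As a generic illustration (not the paper's own attack or model), the fast-gradient-sign idea can be sketched on a toy logistic-regression classifier, where the input is nudged in the direction that increases the loss:

```python
# Hedged sketch: FGSM-style perturbation on a toy logistic regression.
# The model, weights, and epsilon here are illustrative assumptions,
# not anything from the papers on this page.
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classifier: logistic regression with fixed random weights.
w = rng.normal(size=16)
b = 0.0

def predict_prob(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def input_gradient(x, y):
    """Gradient of the logistic loss -log p(y|x) with respect to x."""
    p = predict_prob(x)
    return (p - y) * w

def fgsm(x, y, eps=0.1):
    """Step in the sign of the loss gradient, bounded per-coordinate by eps."""
    return x + eps * np.sign(input_gradient(x, y))

x = rng.normal(size=16)
y = 1.0 if predict_prob(x) > 0.5 else 0.0  # treat the model's own label as truth
x_adv = fgsm(x, y, eps=0.3)
# x_adv stays within an L-infinity ball of radius 0.3 around x, yet the
# model's confidence in the assigned label drops.
```

The same gradient-sign step applied to a DNN (with the gradient obtained by backpropagation to the input) yields the classic FGSM attack the recommended papers defend against.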
A lightweight unsupervised adversarial detector based on autoencoder and isolation forest
Abstract: Although deep neural networks (DNNs) have performed well on many perceptual tasks, they are vulnerable to adversarial examples that are generated by adding slight but maliciously crafted perturbations to benign images. Adversarial detection is an ...
Highlights
- We observe that adversarial detection is sensitive to the perturbation level.
- We train a shallow autoencoder to find two key features from adversarial examples.
- We propose a lightweight and unsupervised adversarial detector.
A hybrid adversarial training for deep learning model and denoising network resistant to adversarial examples
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial attacks that generate adversarial examples by adding small perturbations to the clean images. To combat adversarial attacks, the two main defense methods used are denoising and adversarial ...
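The recommended works above share one underlying idea: an autoencoder trained on clean data reconstructs clean inputs well, so an unusually large reconstruction error flags an off-manifold, possibly adversarial, input. A minimal sketch of that reconstruction-error test, with a linear autoencoder (truncated SVD) standing in for a trained network — an assumption for illustration, not any of these papers' actual detectors:

```python
# Hedged sketch: reconstruction-error adversarial detection.
# A linear autoencoder (top-k principal directions) replaces a trained DNN
# autoencoder; data, dimensions, and the 95th-percentile threshold are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# "Clean" data living near a 4-dimensional subspace of a 32-dim feature space.
basis = rng.normal(size=(4, 32))
clean = rng.normal(size=(500, 4)) @ basis + 0.01 * rng.normal(size=(500, 32))

# Fit the linear autoencoder: keep the top-k right singular vectors.
k = 4
_, _, vt = np.linalg.svd(clean, full_matrices=False)
components = vt[:k]  # shared encoder/decoder weights

def reconstruction_error(x):
    """L2 distance between x and its projection onto the learned subspace."""
    recon = (x @ components.T) @ components
    return np.linalg.norm(x - recon, axis=-1)

# Threshold: 95th percentile of clean-data error (a common heuristic).
tau = np.percentile(reconstruction_error(clean), 95)

def is_adversarial(x):
    return reconstruction_error(x) > tau

# A perturbed input leaves the clean subspace and is flagged.
x_clean = rng.normal(size=(1, 4)) @ basis + 0.01 * rng.normal(size=(1, 32))
x_adv = x_clean + rng.normal(scale=0.5, size=(1, 32))
flags = is_adversarial(np.vstack([x_clean, x_adv]))
```

With a DNN autoencoder the projection step is replaced by an encode/decode pass, but the decision rule — threshold the reconstruction error calibrated on clean data — is the same.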
Published In
- Editors: Xingming Sun, Xiaorui Zhang, Zhihua Xia, Elisa Bertino
Publisher
Springer-Verlag
Berlin, Heidelberg
Qualifiers
- Article