Jiang et al., 2020 - Google Patents
Cycle‐Consistent Adversarial GAN: The Integration of Adversarial Attack and Defense
- Document ID
- 5622218281703856117
- Author
- Jiang L
- Qiao K
- Qin R
- Wang L
- Yu W
- Chen J
- Bu H
- Yan B
- Publication year
- 2020
- Publication venue
- Security and Communication Networks
Snippet
In deep learning image classification, adversarial examples, i.e., inputs to which small-magnitude perturbations are intentionally added, can mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them. Different attack and defense strategies …
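As context for the vulnerability the snippet describes, the sketch below crafts an adversarial example with the Fast Gradient Sign Method (FGSM). This is a generic illustration, not the paper's cycle-consistent GAN method; the classifier `model`, inputs `x` normalized to [0, 1], labels `y`, and the budget `epsilon` are all illustrative assumptions.

```python
# Minimal FGSM sketch: a small signed-gradient perturbation that tends to
# flip a DNN's prediction. NOT the paper's cycle-consistent GAN method;
# model, x, y, and epsilon are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module,
                 x: torch.Tensor,
                 y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels
    loss.backward()                        # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()    # step in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range
```

Defenses, including the one integrated in the cited paper, aim to keep predictions stable under exactly this kind of small perturbation.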
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
      - G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
          - G06K9/46—Extraction of features or characteristics of the image
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
          - G06K9/6267—Classification techniques
            - G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
          - G06N3/04—Architectures, e.g. interconnection topology
          - G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
          - G06N3/08—Learning methods
            - G06N3/082—Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
            - G06N3/086—Learning methods using evolutionary programming, e.g. genetic algorithms
        - G06N3/12—Computer systems based on biological models using genetic models
          - G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
      - G06N5/00—Computer systems utilising knowledge based models
        - G06N5/02—Knowledge representation
          - G06N5/022—Knowledge engineering, knowledge acquisition
            - G06N5/025—Extracting rules from data
        - G06N5/04—Inference methods or devices
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
Similar Documents
Publication | Title
---|---
Wang et al. | Practical detection of trojan neural networks: Data-limited and data-free cases
Jia et al. | Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning
Chakraborty et al. | A survey on adversarial attacks and defences
Yuan et al. | Adversarial examples: Attacks and defenses for deep learning
Liu et al. | SocInf: Membership inference attacks on social media health data with machine learning
Li et al. | An approximated gradient sign method using differential evolution for black-box adversarial attack
Li et al. | Adaptive momentum variance for attention-guided sparse adversarial attacks
Peng et al. | Bilateral dependency optimization: Defending against model-inversion attacks
Srinivasan et al. | Robustifying models against adversarial attacks by langevin dynamics
Hou et al. | Similarity-based integrity protection for deep learning systems
Jiang et al. | Cycle‐Consistent Adversarial GAN: The Integration of Adversarial Attack and Defense
Liu et al. | Adversaries or allies? Privacy and deep learning in big data era
Laykaviriyakul et al. | Collaborative Defense-GAN for protecting adversarial attacks on classification system
Mi et al. | Adversarial examples based on object detection tasks: A survey
Gong et al. | b3: Backdoor attacks against black-box machine learning models
Jin et al. | DANAA: Towards transferable attacks with double adversarial neuron attribution
Yan et al. | Towards explainable model extraction attacks
Wang et al. | Attention‐guided black‐box adversarial attacks with large‐scale multiobjective evolutionary optimization
Vyas et al. | Evaluation of adversarial attacks and detection on transfer learning model
Yu et al. | Security and Privacy in Federated Learning
Liu et al. | Gradient-leaks: Enabling black-box membership inference attacks against machine learning models
Xie et al. | Defending local poisoning attacks in multi-party learning via immune system
Zhang et al. | A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks
Li et al. | Online alternate generator against adversarial attacks
Chen et al. | Boundary augment: A data augment method to defend poison attack