

Cycle‐Consistent Adversarial GAN: The Integration of Adversarial Attack and Defense

Jiang et al., 2020

Document ID: 5622218281703856117
Authors: Jiang L, Qiao K, Qin R, Wang L, Yu W, Chen J, Bu H, Yan B
Publication year: 2020
Publication venue: Security and Communication Networks


Snippet

In deep-learning image classification, adversarial examples, i.e., inputs deliberately perturbed with small-magnitude noise, can mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them. Different attack and defense strategies …
Continue reading at onlinelibrary.wiley.com (PDF)
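
The snippet describes the core threat model: small, deliberately crafted perturbations that flip a classifier's prediction. As a concrete illustration, here is a minimal sketch of the classic fast gradient sign method (FGSM, Goodfellow et al., 2015), not the paper's Cycle-Consistent GAN approach; `model`, `x`, `label`, and `eps` are hypothetical placeholders for any differentiable PyTorch classifier, its input batch, and an attack budget.

```python
# Minimal FGSM sketch: perturb the input by eps * sign(dLoss/dx).
# Illustrative only; this is NOT the paper's Cycle-Consistent
# Adversarial GAN method. `model` is any differentiable classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """Return x shifted one signed-gradient step, clipped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()                       # populates x.grad
    x_adv = x + eps * x.grad.sign()       # small, visually subtle step
    return x_adv.clamp(0.0, 1.0).detach()
```

A defense in the spirit of the title would learn a mapping from perturbed inputs back toward clean ones; per the title, the paper integrates the attack and defense directions under a cycle-consistency constraint.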

Classifications

    • G06N3/04 Architectures, e.g. interconnection topology
    • G06N3/082 Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
    • G06K9/6268 Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
    • G06N3/086 Learning methods using evolutionary programming, e.g. genetic algorithms
    • G06K9/46 Extraction of features or characteristics of the image
    • G06N99/005 Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06K9/6217 Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06N5/025 Extracting rules from data
    • G06N3/126 Genetic algorithms, i.e. information processing using digital simulations of the genetic system
    • G06N5/04 Inference methods or devices
    • G06F17/30 Information retrieval; Database structures therefor; File system structures therefor
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity

Similar Documents

Wang et al. Practical detection of trojan neural networks: Data-limited and data-free cases
Jia et al. Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning
Chakraborty et al. A survey on adversarial attacks and defences
Yuan et al. Adversarial examples: Attacks and defenses for deep learning
Liu et al. SocInf: Membership inference attacks on social media health data with machine learning
Li et al. An approximated gradient sign method using differential evolution for black-box adversarial attack
Li et al. Adaptive momentum variance for attention-guided sparse adversarial attacks
Peng et al. Bilateral dependency optimization: Defending against model-inversion attacks
Srinivasan et al. Robustifying models against adversarial attacks by langevin dynamics
Hou et al. Similarity-based integrity protection for deep learning systems
Liu et al. Adversaries or allies? Privacy and deep learning in big data era
Laykaviriyakul et al. Collaborative Defense-GAN for protecting adversarial attacks on classification system
Mi et al. Adversarial examples based on object detection tasks: A survey
Gong et al. b3: Backdoor attacks against black-box machine learning models
Jin et al. DANAA: Towards transferable attacks with double adversarial neuron attribution
Yan et al. Towards explainable model extraction attacks
Wang et al. Attention‐guided black‐box adversarial attacks with large‐scale multiobjective evolutionary optimization
Vyas et al. Evaluation of adversarial attacks and detection on transfer learning model
Yu et al. Security and Privacy in Federated Learning
Liu et al. Gradient-leaks: Enabling black-box membership inference attacks against machine learning models
Xie et al. Defending local poisoning attacks in multi-party learning via immune system
Zhang et al. A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks
Li et al. Online alternate generator against adversarial attacks
Chen et al. Boundary augment: A data augment method to defend poison attack