Explanatory subgraph attacks against Graph Neural Networks
Publisher: Elsevier Science Ltd., United Kingdom
Article type: Research-article