
Explanatory subgraph attacks against Graph Neural Networks

Published: 01 April 2024

Abstract

Graph Neural Networks (GNNs) are often viewed as black boxes due to their lack of transparency, which hinders their application in critical fields. Many explanation methods have been proposed to address the interpretability issue of GNNs. These methods reveal explanatory information about graphs from different perspectives. However, this explanatory information may also expose GNN models to attack.
In this work, we explore this problem from the explanatory subgraph perspective. To this end, we utilize a powerful GNN explanation method, SubgraphX, and deploy it locally to obtain explanatory subgraphs from given graphs. We then propose methods for conducting evasion attacks and backdoor attacks based on this local explainer. In an evasion attack, the attacker obtains the explanatory subgraphs of test graphs from the local explainer and replaces them with an explanatory subgraph of a different label, causing the target model to misclassify the test graphs. In a backdoor attack, the attacker employs the local explainer to select an explanatory trigger and to locate suitable injection positions. We validate the effectiveness of the proposed attacks on state-of-the-art GNN models and several datasets. The results also demonstrate that our backdoor attack is more efficient, adaptable, and concealed than previous backdoor attacks.
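The evasion-attack idea described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: the real work uses SubgraphX as the local explainer, whereas the placeholder `local_explainer` below simply treats the edges among the highest-degree nodes as the "explanatory subgraph", and `evasion_attack` is a hypothetical helper showing the replace-the-explanation step.

```python
# Hedged sketch of the evasion attack: obtain an explanatory subgraph
# from a locally deployed explainer, then swap it for an explanatory
# subgraph taken from a graph of a different label.
# A graph is modeled minimally as a set of undirected edges (u, v).

def local_explainer(graph, k=3):
    """Placeholder for a locally deployed explainer (e.g. SubgraphX).

    As a stand-in, returns the edges among the k highest-degree nodes
    as the 'explanatory subgraph'.
    """
    degree = {}
    for u, v in graph:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    top = set(sorted(degree, key=degree.get, reverse=True)[:k])
    return {(u, v) for (u, v) in graph if u in top and v in top}

def evasion_attack(test_graph, donor_subgraph, k=3):
    """Remove the test graph's explanatory subgraph and inject one
    obtained from a graph of another label."""
    explanation = local_explainer(test_graph, k)
    return (test_graph - explanation) | donor_subgraph

# Toy example: a triangle (the 'explanation') plus a pendant edge.
g = {(0, 1), (1, 2), (0, 2), (2, 3)}
donor = {(10, 11), (11, 12)}  # explanatory subgraph of another label
adv = evasion_attack(g, donor)
```

The perturbed graph `adv` keeps the unexplained pendant edge `(2, 3)`, drops the triangle that the stand-in explainer flagged, and contains the donor subgraph, which is the structural change the attack relies on to flip the target model's prediction.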




Published In

Neural Networks, Volume 172, Issue C
Apr 2024
925 pages

Publisher

Elsevier Science Ltd.

United Kingdom

Author Tags

  1. Graph Neural Networks
  2. Explainability
  3. Adversarial attacks
  4. Backdoor attacks

Qualifiers

  • Research-article

