Graph Adversarial Attack via Rewiring

Published: 14 August 2021

Abstract

Graph Neural Networks (GNNs) have demonstrated a powerful capability for learning representations of graph-structured data, and they have improved performance on many graph-related tasks such as node classification and graph classification. However, recent studies show that GNNs are vulnerable to adversarial attacks: their performance can be severely degraded by deliberately adding carefully crafted, unnoticeable perturbations to the graph. Existing attack methods typically produce perturbations by adding or deleting a few edges, which can be noticeable even when the number of modified edges is small. In this paper, we propose a graph rewiring operation to perform the attack; it affects the graph in a less noticeable way than existing operations such as adding or deleting edges. We then use deep reinforcement learning to learn a strategy for effectively performing the rewiring operations. Experiments on real-world graphs demonstrate the effectiveness of the proposed framework. To understand the framework, we further analyze how its generated perturbations impact the target model and the advantages of the rewiring operation. The implementation of the proposed framework is available at https://github.com/alge24/ReWatt.
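
As a concrete illustration of the operation described above, the following is a minimal sketch of a single rewiring perturbation, assuming an undirected NetworkX graph. The form of the operation (delete an existing edge (v_fir, v_sec), then add an edge (v_fir, v_thi), where v_thi is drawn from a small k-hop neighborhood of v_fir) follows the abstract's high-level description; the function names and the random candidate selection are illustrative assumptions, not the ReWatt implementation, which learns this choice with deep reinforcement learning rather than sampling it at random.

    import random
    import networkx as nx

    def rewire(graph, v_fir, v_sec, v_thi):
        # One rewiring: delete edge (v_fir, v_sec), add edge (v_fir, v_thi).
        g = graph.copy()
        g.remove_edge(v_fir, v_sec)
        g.add_edge(v_fir, v_thi)
        return g

    def random_rewiring(graph, k=2, rng=random):
        # Hypothetical helper: pick an existing edge, then a third node
        # within k hops of the first endpoint that is not already its
        # neighbor (d >= 2 rules out v_fir itself and direct neighbors,
        # including v_sec).
        v_fir, v_sec = rng.choice(list(graph.edges()))
        dist = nx.single_source_shortest_path_length(graph, v_fir, cutoff=k)
        candidates = [v for v, d in dist.items()
                      if d >= 2 and not graph.has_edge(v_fir, v)]
        if not candidates:
            return graph  # no admissible rewiring from this edge
        return rewire(graph, v_fir, v_sec, rng.choice(candidates))

Note that a rewiring preserves the total number of edges and the degree of v_fir, so coarse graph statistics change less than under independent edge additions and deletions; this is the sense in which the perturbation is less noticeable.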

Published In

KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
August 2021
4259 pages
ISBN: 9781450383325
DOI: 10.1145/3447548
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial attack
  2. graph neural networks
  3. rewiring

Qualifiers

  • Research-article

Conference

KDD '21

Acceptance Rates

Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

