
All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs

Published: 22 January 2020

Abstract

Recent studies have demonstrated that machine learning approaches such as deep learning are easily fooled by adversarial attacks. A highly influential study recently examined the impact of adversarial attacks on graph data and demonstrated that graph embedding techniques are also vulnerable. Fake users on social media and fake product reviews are realistic counterparts of the perturbations that the proposed adversarial models produce. Graphs are widely used in a variety of domains, so it is highly important to develop graph analysis techniques that are robust to adversarial attacks. One recent method for generating adversarial attacks on graph data is Nettack, which has been shown to be very successful in deceiving the Graph Convolutional Network (GCN) model and is also transferable to other node classification approaches, e.g., node embeddings. In this paper, we explore the properties of Nettack perturbations in search of effective defenses against them. Our first finding is that Nettack exhibits a very specific behavior in the spectrum of the graph: only high-rank (low-valued) singular components of the graph are affected. Following that insight, we show that a low-rank approximation of the graph, which uses only the top singular components for its reconstruction, can greatly reduce the effects of Nettack and boost the performance of GCN when facing adversarial attacks. Indicatively, on the CiteSeer dataset, our proposed defense mechanism reduces the success rate of Nettack from 98% to 36%. Furthermore, we show that tensor-based node embeddings, which by default project the graph into a low-rank subspace, are robust against Nettack perturbations.
Lastly, we propose LowBlow, a low-rank adversarial attack that degrades the classification performance of both GCN and tensor-based node embeddings, and we show that the low-rank attack is noticeable, while making it unnoticeable results in a high-rank attack.
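
The low-rank defense the abstract describes, reconstructing the graph from only its top singular components so that high-rank perturbations are discarded, can be sketched with a plain truncated SVD. This is a minimal illustration, not the authors' implementation; the function name, the rank value, and the toy adjacency matrix are hypothetical:

```python
import numpy as np

def low_rank_defense(adj, rank=10):
    """Reconstruct an adjacency matrix from its top `rank` singular
    components, discarding the high-rank part of the spectrum where
    Nettack-style perturbations concentrate."""
    # Full SVD of the (dense) adjacency matrix; for large sparse graphs
    # a truncated solver such as scipy.sparse.linalg.svds would be used.
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    # Keep only the top-`rank` singular components for reconstruction.
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Toy example: a small symmetric adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_clean = low_rank_defense(A, rank=2)  # rank-2 reconstruction of A
```

The reconstructed matrix would then be fed to GCN (possibly after thresholding back to binary edges) in place of the attacked adjacency matrix.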


Cited By

  • (2025) A Unified Optimization-Based Framework for Certifiably Robust and Fair Graph Neural Networks. IEEE Transactions on Signal Processing, 73, 83-98. DOI: 10.1109/TSP.2024.3514091
  • (2025) Label Guided Graph Optimized Convolutional Network for Semi-Supervised Learning. IEEE Transactions on Signal and Information Processing over Networks, 11, 71-84. DOI: 10.1109/TSIPN.2025.3525961
  • (2025) Defending adversarial attacks in Graph Neural Networks via tensor enhancement. Pattern Recognition, 158, 110954. DOI: 10.1016/j.patcog.2024.110954
  • (2025) Simplified PCNet with robustness. Neural Networks, 184, 107099. DOI: 10.1016/j.neunet.2024.107099
  • (2025) SPMGAE: Self-purified masked graph autoencoders release robust expression power. Neurocomputing, 611, 128631. DOI: 10.1016/j.neucom.2024.128631
  • (2025) Explainability-based adversarial attack on graphs through edge perturbation. Knowledge-Based Systems, 310, 112895. DOI: 10.1016/j.knosys.2024.112895
  • (2024) Adversarial Attacks in Machine Learning: Key Insights and Defense Approaches. Applied Data Science and Analysis, 2024, 121-147. DOI: 10.58496/ADSA/2024/011
  • (2024) Graph adversarial diffusion convolution. Proceedings of the 41st International Conference on Machine Learning, 30789-30806. DOI: 10.5555/3692070.3693311
  • (2024) Unsupervised deep graph structure and embedding learning. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2342-2350. DOI: 10.24963/ijcai.2024/259
  • (2024) Fight Fire with Fire: Towards Robust Graph Neural Networks on Dynamic Graphs via Actively Defense. Proceedings of the VLDB Endowment, 17(8), 2050-2063. DOI: 10.14778/3659437.3659457


Published In

WSDM '20: Proceedings of the 13th International Conference on Web Search and Data Mining
January 2020, 950 pages
ISBN: 9781450368223
DOI: 10.1145/3336191

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial machine learning
  2. graph convolutional networks
  3. graph mining
  4. graph representation learning
  5. tensors

Qualifiers

  • Research-article

Conference

WSDM '20

Acceptance Rates

Overall Acceptance Rate 498 of 2,863 submissions, 17%

Article Metrics

  • Downloads (Last 12 months)1,093
  • Downloads (Last 6 weeks)123
Reflects downloads up to 18 Jan 2025

