Abstract
Counterfactual examples have emerged as an effective approach to produce simple and understandable post-hoc explanations. In the context of graph classification, previous work has focused on generating counterfactual explanations by manipulating the most elementary units of a graph, i.e., removing an existing edge or adding a non-existing one. In this paper, we claim that such a language of explanation might be too fine-grained, and turn our attention to some of the main characterizing features of real-world complex networks, such as the tendency to close triangles, the existence of recurring motifs, and the organization into dense modules. We thus define a general density-based counterfactual search framework to generate instance-level counterfactual explanations for graph classifiers, which can be instantiated with different notions of dense substructures. In particular, we show two specific instantiations of this general framework: a method that searches for counterfactual graphs by opening or closing triangles, and a method driven by maximal cliques. We also discuss how the general method can be instantiated to exploit any other notion of dense substructures, including, for instance, a given taxonomy of nodes. We evaluate the effectiveness of our approaches on 7 brain network datasets and compare the counterfactual statements generated according to several widely used metrics. Results confirm that adopting a semantically relevant unit of change such as density is essential to define versatile and interpretable counterfactual explanation methods.
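As a rough illustration of the triangle-based instantiation described in the abstract, the Python snippet below is a minimal sketch using networkx: the `classifier.predict` interface, the function names, and the greedy search strategy are our own illustrative assumptions, not the authors' implementation. It perturbs a graph by closing or opening triangles until the black-box prediction flips.

```python
# Minimal sketch of a density-driven counterfactual search, here
# instantiated with triangle edits. `classifier` is assumed to be a
# black-box graph classifier exposing predict(G); all names and the
# greedy strategy are illustrative, not the paper's exact method.
import itertools
import networkx as nx


def triangle_edits(G: nx.Graph):
    """Yield edge edits that close an open triangle (addition)
    or open an existing one (removal)."""
    for u, v in itertools.combinations(G.nodes, 2):
        common = set(G[u]) & set(G[v])  # shared neighbours of u and v
        if common and not G.has_edge(u, v):
            yield ("add", u, v)         # closes len(common) triangles
        elif common and G.has_edge(u, v):
            yield ("remove", u, v)      # opens len(common) triangles


def counterfactual_search(G: nx.Graph, classifier, max_steps: int = 50):
    """Greedily apply triangle edits until the predicted label changes."""
    original = classifier.predict(G)
    H = G.copy()
    for _ in range(max_steps):
        edits = list(triangle_edits(H))
        if not edits:
            return None                 # no triangle edit available
        for op, u, v in edits:
            candidate = H.copy()
            if op == "add":
                candidate.add_edge(u, v)
            else:
                candidate.remove_edge(u, v)
            if classifier.predict(candidate) != original:
                return candidate        # counterfactual found
        # No single edit flips the label: commit one edit and keep searching.
        op, u, v = edits[0]
        if op == "add":
            H.add_edge(u, v)
        else:
            H.remove_edge(u, v)
    return None                         # search budget exhausted
```

A clique-driven instantiation would follow the same loop, swapping `triangle_edits` for candidate edits that remove or insert whole (maximal) cliques.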
Notes
- 1.
Density is usually defined as the number of edges over the number of possible edges. W.l.o.g. we omit the denominator.
- 2.
In our experiments, we constrain the deviation between the number of edges added and removed by bounding the number of nodes that an added clique can have in excess of the removed clique by \(b\), and set \(b = 10\) (see the sketch after these notes).
- 3.
http://fcon_1000.projects.nitrc.org/indi/ACPI/html/acpi_mta_1.html.
- 4.
- 5.
See http://preprocessed-connectomes-project.org/abide/dparsf.html for AUT and BIP, and https://ccraddock.github.io/cluster_roi/atlases.html for ADHD and ADHDM. For OHSU, PEK, and KKI, the data was already preprocessed.
- 6.
- 7.
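To make Notes 1 and 2 concrete, the short snippet below is a sketch in which the function names and the interpretation of the bound \(b\) are our assumptions: it computes the denominator-free density of Note 1 and checks the size constraint of Note 2 when an added clique replaces a removed one.

```python
import networkx as nx


def density(G: nx.Graph, nodes) -> int:
    """Note 1: density as the edge count of the induced subgraph,
    omitting the denominator |nodes| * (|nodes| - 1) / 2."""
    return G.subgraph(nodes).number_of_edges()


def within_size_bound(removed_clique, added_clique, b: int = 10) -> bool:
    """Note 2 (as we read it): the added clique may have at most
    b = 10 more nodes than the clique it replaces."""
    return len(added_clique) - len(removed_clique) <= b
```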
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Abrate, C., Preti, G., Bonchi, F. (2023). Counterfactual Explanations for Graph Classification Through the Lenses of Density. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_18
DOI: https://doi.org/10.1007/978-3-031-44064-9_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44063-2
Online ISBN: 978-3-031-44064-9
eBook Packages: Computer Science, Computer Science (R0)