DOI: 10.1007/978-3-031-70341-6_23
Article

Disentangled Counterfactual Graph Augmentation Framework for Fair Graph Learning with Information Bottleneck

Published: 08 September 2024

Abstract

Graph Neural Networks (GNNs) are susceptible to inheriting, and even amplifying, biases present in datasets, which can lead to discriminatory decision-making. Our empirical observation reveals that an inconsistent distribution of sensitive attributes conditioned on labels contributes significantly to unfairness. To mitigate this problem, we propose rectifying this inconsistency in the original dataset through a counterfactual augmentation strategy. Existing methods usually generate counterfactual samples from an entangled representation space and thus fail to distinguish the different dependencies on sensitive attributes. We therefore propose a novel disentangled counterfactual graph augmentation method based on the Information Bottleneck theory, named Fair Disentangled Graph Information Bottleneck (FDGIB). Specifically, FDGIB embeds graphs into two disentangled representation spaces: sensitive-related and sensitive-independent. By satisfying three conditions, FDGIB theoretically guarantees the disentanglement of the different sensitive dependencies. We then acquire credible counterfactual augmented graphs that restore consistency in the data distribution and yield fair representations. FDGIB serves as a plug-and-play preprocessing framework that can collaborate with any GNN. We validate the effectiveness of our model in promoting fair graph learning through extensive experiments. Our source code is available at https://github.com/Evanlyf/FDGIB.
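To make the idea of two disentangled representation spaces concrete, the following is a minimal sketch, not the authors' released implementation, of how an information-bottleneck-style encoder with a sensitive-related latent and a sensitive-independent latent could be set up. All names (VariationalEncoder, DisentangledIB, the loss weights beta and gamma) and the simple cross-covariance disentanglement penalty are illustrative assumptions; the paper derives its disentanglement guarantee from three conditions rather than this penalty, and the actual code is available at the GitHub link above.

```python
# Hypothetical sketch of a disentangled, IB-style encoder with two latent spaces
# (sensitive-related z_s and sensitive-independent z_i). Not the official FDGIB code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalEncoder(nn.Module):
    """Maps node embeddings to a Gaussian latent via the reparameterisation trick."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to a standard normal prior: the IB compression term.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl


class DisentangledIB(nn.Module):
    """Splits node embeddings into sensitive-related and sensitive-independent latents."""

    def __init__(self, in_dim: int, latent_dim: int, num_classes: int):
        super().__init__()
        self.enc_s = VariationalEncoder(in_dim, latent_dim)    # sensitive-related branch
        self.enc_i = VariationalEncoder(in_dim, latent_dim)    # sensitive-independent branch
        self.sens_head = nn.Linear(latent_dim, 1)               # predicts sensitive attribute from z_s
        self.label_head = nn.Linear(latent_dim, num_classes)    # predicts label from z_i only

    def loss(self, h, s, y, beta=1e-3, gamma=1.0):
        z_s, kl_s = self.enc_s(h)
        z_i, kl_i = self.enc_i(h)
        # (1) z_s should retain the sensitive information.
        l_sens = F.binary_cross_entropy_with_logits(self.sens_head(z_s).squeeze(-1), s.float())
        # (2) z_i should predict the label while staying compressed (IB term).
        l_label = F.cross_entropy(self.label_head(z_i), y)
        # (3) push the two latents apart; a cheap cross-covariance penalty stands in
        #     for the mutual-information-based conditions used in the paper.
        zs_c, zi_c = z_s - z_s.mean(0), z_i - z_i.mean(0)
        l_dis = (zs_c.t() @ zi_c / h.size(0)).pow(2).mean()
        return l_label + l_sens + gamma * l_dis + beta * (kl_s + kl_i)
```

Under this reading, the counterfactual augmentation described above would correspond to decoding from the unchanged sensitive-independent latent paired with a flipped sensitive code, yielding augmented graphs whose sensitive attributes are rebalanced across labels before any downstream GNN is trained.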


Published In

Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part I
Sep 2024
513 pages
ISBN: 978-3-031-70340-9
DOI: 10.1007/978-3-031-70341-6
Editors: Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, Indrė Žliobaitė

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. Graph fairness learning
  2. Information bottleneck
  3. Counterfactual graph
  4. Disentangled representation learning
