Research Article
DOI: 10.1145/3366423.3380087

Learning Model-Agnostic Counterfactual Explanations for Tabular Data

Published: 20 April 2020

Abstract

Counterfactual explanations can be obtained by identifying the smallest change to an input vector that influences a prediction in a positive way from a user's viewpoint; for example, from 'loan rejected' to 'awarded', or from 'high risk of cardiovascular disease' to 'low risk'. Previous approaches did not ensure that the produced counterfactuals are proximate (i.e., not local outliers) and connected to regions of substantial data density (i.e., close to correctly classified observations), two requirements known as counterfactual faithfulness. Our contribution is twofold. First, drawing on ideas from the manifold learning literature, we develop a framework, called C-CHVAE, that generates faithful counterfactuals. Second, we suggest complementing the catalog of counterfactual quality measures with a criterion that quantifies the degree of difficulty of a given counterfactual suggestion. Our real-world experiments suggest that faithful counterfactuals come at the cost of a higher degree of difficulty.
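The abstract describes the approach only at a high level. As a rough illustration of the core idea, searching for counterfactuals in the latent space of an autoencoder so that decoded candidates stay close to the data manifold, the sketch below may be useful. It is a hypothetical simplification, not the authors' actual C-CHVAE algorithm: the function latent_counterfactual_search and the encode, decode, and predict callables are assumed placeholders for a pretrained autoencoder and a black-box classifier.

import numpy as np

def latent_counterfactual_search(x, encode, decode, predict, target_class,
                                 max_radius=3.0, step=0.25, n_samples=200,
                                 rng=None):
    """Hypothetical sketch: perturb the latent code of x with growing radius
    and decode back to input space, so candidates stay near the data manifold."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = np.asarray(encode(x))                     # latent code of the input
    d = z.shape[-1]
    for radius in np.arange(step, max_radius + step, step):
        # sample perturbations roughly uniformly inside a latent ball of this radius
        directions = rng.normal(size=(n_samples, d))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / d)
        candidates = np.asarray(decode(z + radii * directions))
        flipped = np.where(predict(candidates) == target_class)[0]
        if flipped.size:                          # return the candidate closest to x
            dists = np.linalg.norm(candidates[flipped] - x, axis=1)
            return candidates[flipped[np.argmin(dists)]]
    return None                                   # no counterfactual within max_radius

In the paper itself, the encoder and decoder are learned from the data (the C-CHVAE model), which is what keeps generated counterfactuals proximate and connected to high-density regions; the sketch leaves out how such a generative model is trained and how mixed tabular feature types would be handled.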

          Published In

          cover image ACM Conferences
          WWW '20: Proceedings of The Web Conference 2020
          April 2020
          3143 pages
          ISBN:9781450370233
          DOI:10.1145/3366423

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. Counterfactual explanations
2. Interpretability
3. Transparency


Conference

WWW '20: The Web Conference 2020
April 20-24, 2020, Taipei, Taiwan

Acceptance Rates

Overall acceptance rate: 1,899 of 8,196 submissions, 23%
