DOI: 10.5555/3294771.3294834
Article
Free access

Avoiding discrimination through causal reasoning

Published: 04 December 2017

Abstract

Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively.
Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about our model of the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.
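
For concreteness, a minimal sketch of the distinction (the notation here is ours, not necessarily the paper's): an observational criterion is any condition expressible from the joint distribution of the predictor, protected attribute, features, and outcome alone, for example

\text{demographic parity: } \hat{Y} \perp A, \qquad P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=a') \text{ for all } a, a',
\text{equalized odds: } \hat{Y} \perp A \mid Y, \qquad P(\hat{Y}=1 \mid A=a, Y=y) = P(\hat{Y}=1 \mid A=a', Y=y) \text{ for all } a, a', y,

whereas a causal criterion constrains interventional or counterfactual quantities such as P(\hat{Y}=1 \mid do(A=a)), which a causal model of the data generating process determines but the joint distribution alone, in general, does not.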




Published In

NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems
December 2017
7104 pages

Publisher

Curran Associates Inc.

Red Hook, NY, United States


Qualifiers

  • Article


Article Metrics

  • Downloads (Last 12 months): 185
  • Downloads (Last 6 weeks): 7
Reflects downloads up to 11 Dec 2024


Cited By

  • (2024) Counterfactual Explanation at Will, with Zero Privacy Leakage. Proceedings of the ACM on Management of Data 2(3), 1-29. DOI: 10.1145/3654933. Online publication date: 30-May-2024.
  • (2024) Causal Inference with Latent Variables: Recent Advances and Future Prospectives. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 6677-6687. DOI: 10.1145/3637528.3671450. Online publication date: 25-Aug-2024.
  • (2024) "It's the most fair thing to do but it doesn't make any sense": Perceptions of Mathematical Fairness Notions by Hiring Professionals. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-35. DOI: 10.1145/3637360. Online publication date: 26-Apr-2024.
  • (2023) Causal fairness for outcome control. Proceedings of the 37th International Conference on Neural Information Processing Systems, 47575-47597. DOI: 10.5555/3666122.3668183. Online publication date: 10-Dec-2023.
  • (2023) Weak proxies are sufficient and preferable for fairness with missing sensitive attributes. Proceedings of the 40th International Conference on Machine Learning, 43258-43288. DOI: 10.5555/3618408.3620230. Online publication date: 23-Jul-2023.
  • (2023) Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey. ACM Journal on Responsible Computing. DOI: 10.1145/3631326. Online publication date: 1-Nov-2023.
  • (2023) Learning from Discriminatory Training Data. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 752-763. DOI: 10.1145/3600211.3604710. Online publication date: 8-Aug-2023.
  • (2023) How Redundant are Redundant Encodings? Blindness in the Wild and Racial Disparity when Race is Unobserved. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 667-686. DOI: 10.1145/3593013.3594034. Online publication date: 12-Jun-2023.
  • (2023) Simplicity Bias Leads to Amplified Performance Disparities. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 355-369. DOI: 10.1145/3593013.3594003. Online publication date: 12-Jun-2023.
  • (2023) Causality-guided Graph Learning for Session-based Recommendation. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 3083-3093. DOI: 10.1145/3583780.3614803. Online publication date: 21-Oct-2023.
