Research Article · Public Access · DOI: 10.1145/3375627.3375864

A Geometric Solution to Fair Representations

Published: 07 February 2020

Abstract

To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, their prediction quality deteriorates quickly compared to unbiased equivalents, or they are not easily transferable across models (e.g., methods to reduce bias in random forests cannot be extended to neural networks). To address these shortcomings, we propose a geometric method that removes correlations between data and any number of protected variables. Further, we can control the strength of debiasing through an adjustable parameter to address the trade-off between prediction quality and fairness. The resulting features are interpretable and can be used with many popular models, such as linear regression, random forests, and multilayer perceptrons. The resulting predictions are found to be more accurate and fair compared to several state-of-the-art fair AI algorithms across a variety of benchmark datasets. Our work shows that debiasing data is a simple and effective solution toward improving fairness.
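The abstract describes removing correlations between features and protected variables geometrically, with an adjustable parameter trading fairness against prediction quality. The paper's exact formulation is not reproduced on this page, so the sketch below is a minimal illustration of the general idea, assuming linear correlation removal via least-squares projection onto the orthogonal complement of the protected variables; the function name `debias` and the `strength` parameter are illustrative, not the authors' API.

```python
import numpy as np

def debias(X, Z, strength=1.0):
    """Remove the component of each feature in X that is linearly
    predictable from the protected variables Z.

    X: (n, d) feature matrix; Z: (n, k) protected variables.
    strength in [0, 1] scales how much of the correlated component
    is subtracted (1.0 = full orthogonal projection, 0.0 = no change).
    """
    # Center Z so the projection targets correlation, not the mean level.
    Zc = Z - Z.mean(axis=0)
    # Least-squares coefficients regressing each feature column on Z.
    coef, *_ = np.linalg.lstsq(Zc, X, rcond=None)
    # Subtract the Z-predictable component, scaled by `strength`.
    return X - strength * (Zc @ coef)
```

With `strength=1.0` the residual is orthogonal to the centered protected columns, so each debiased feature has (near-)zero linear correlation with each protected variable; intermediate values interpolate between the original and fully debiased features, which is the trade-off knob the abstract refers to. The debiased matrix can then be fed to any downstream model (linear regression, random forest, MLP), which is why the approach is model-agnostic.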



Published In

AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
February 2020, 439 pages
ISBN: 9781450371100
DOI: 10.1145/3375627

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. debiased features
  2. fair ai
  3. fair classification
  4. geometric method
  5. interpretable method
  6. orthogonal space
  7. projection
  8. sensitive information

Qualifiers

  • Research-article

Conference

AIES '20

Acceptance Rates

Overall Acceptance Rate 61 of 162 submissions, 38%

Article Metrics

  • Downloads (last 12 months): 276
  • Downloads (last 6 weeks): 21

Reflects downloads up to 01 Jan 2025

Cited By

  • (2024) FairHash: A Fair and Memory/Time-efficient Hashmap. Proceedings of the ACM on Management of Data 2(3), 1–29. DOI: 10.1145/3654939
  • (2024) On the relation of causality- versus correlation-based feature selection on model fairness. Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, 56–64. DOI: 10.1145/3605098.3636018
  • (2024) Predicting prenatal depression and assessing model bias using machine learning models. Biological Psychiatry Global Open Science, 100376. DOI: 10.1016/j.bpsgos.2024.100376
  • (2023) The Role of Explainable AI in the Research Field of AI Ethics. ACM Transactions on Interactive Intelligent Systems 13(4), 1–39. DOI: 10.1145/3599974
  • (2023) Promoting Ethical Uses in Artificial Intelligence Applied to Education. Augmented Intelligence and Intelligent Tutoring Systems, 604–615. DOI: 10.1007/978-3-031-32883-1_53
  • (2022) Tackling Documentation Debt: A Survey on Algorithmic Fairness Datasets. Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–13. DOI: 10.1145/3551624.3555286
  • (2022) The larger the fairer? Proceedings of the 59th ACM/IEEE Design Automation Conference, 163–168. DOI: 10.1145/3489517.3530427
  • (2022) Algorithmic fairness datasets: the story so far. Data Mining and Knowledge Discovery 36(6), 2074–2152. DOI: 10.1007/s10618-022-00854-z
  • (2021) A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys 54(6), 1–35. DOI: 10.1145/3457607
  • (2021) A Review of Gender Bias Mitigation in Credit Scoring Models. 2021 Ethics and Explainability for Responsible Data Science (EE-RDS), 1–10. DOI: 10.1109/EE-RDS53766.2021.9708589
