
Preference-Informed Fairness

Authors: Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, Gal Yona




File

LIPIcs.ITCS.2020.16.pdf
  • Filesize: 0.56 MB
  • 23 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.ITCS.2020.16

Author Details

Michael P. Kim
  • Stanford University, CA, USA
Aleksandra Korolova
  • University of Southern California, CA, USA
Guy N. Rothblum
  • Weizmann Institute of Science, Rehovot, Israel
Gal Yona
  • Weizmann Institute of Science, Rehovot, Israel

Acknowledgements

This work grew out of conversations during the semester on Societal Concerns in Algorithms and Data Analysis (SCADA) hosted at the Weizmann Institute of Science. We thank Omer Reingold for helpful conversations, which influenced our understanding and the presentation of this work.

Cite As

Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. Preference-Informed Fairness. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020). https://doi.org/10.4230/LIPIcs.ITCS.2020.16

Abstract

In this work, we study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al. [ITCS 2012], which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less-preferred outcomes for the very individuals that IF aims to protect (e.g., a protected minority group). A natural alternative to IF is the classic fair-division notion of envy-freeness (EF): no individual should prefer another individual’s outcome over their own. Although EF allows for solutions in which every individual receives a highly preferred outcome, EF may also be overly restrictive for the decision-maker. For instance, if many individuals agree on the best outcome, then whenever any one of them receives this outcome, all of them must receive it, regardless of each individual’s underlying qualifications for the outcome.
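
For reference, the two baseline notions admit standard formalizations; the notation below is illustrative rather than verbatim from the paper. Write M(i) for the (possibly randomized) outcome assigned to individual i, d for the task-specific similarity metric, D for a distance on outcome distributions, and u_i for individual i's utility over outcomes:

    % Individual fairness (IF), after Dwork et al. [ITCS 2012]:
    \forall\, i, j: \quad D\big(M(i), M(j)\big) \le d(i, j)

    % Envy-freeness (EF), the classic fair-division notion:
    \forall\, i, j: \quad \mathbb{E}_{o \sim M(i)}\left[u_i(o)\right] \ge \mathbb{E}_{o \sim M(j)}\left[u_i(o)\right]
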
We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style constraints, but allows for deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted-advertising setting introduced by Dwork and Ilvento [ITCS 2019].
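
To make the algorithmic result concrete, the sketch below encodes PIIF-style constraints as a convex program, under one plausible reading of the abstract's description (deviations from IF are allowed when the individual weakly prefers their own outcome). All modeling choices are illustrative assumptions, not the authors' implementation: randomized allocations over finitely many outcomes, total variation as the distance on outcome distributions, linear (expected-utility) preferences, and the existential PIIF requirement modeled with auxiliary variables; variable names and the use of cvxpy are hypothetical.

    # Minimal sketch: PIIF-constrained convex optimization (illustrative
    # assumptions throughout; not the authors' implementation).
    import numpy as np
    import cvxpy as cp

    n, m = 4, 3                                 # individuals, outcomes
    rng = np.random.default_rng(0)
    d = rng.uniform(0.1, 1.0, (n, n))           # task-specific similarity metric
    d = (d + d.T) / 2                           # symmetrize the metric,
    np.fill_diagonal(d, 0)                      # zero distance to oneself
    u = rng.uniform(0, 1, (n, m))               # u[i]: individual i's utilities
    w = rng.uniform(0, 1, (n, m))               # decision-maker's linear objective

    x = cp.Variable((n, m), nonneg=True)        # x[i]: i's distribution over outcomes
    pi = {(i, j): cp.Variable(m, nonneg=True)   # auxiliary outcome witnessing PIIF
          for i in range(n) for j in range(n) if i != j}

    constraints = [cp.sum(x, axis=1) == 1]      # each row is a distribution
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p = pi[(i, j)]
            constraints += [
                cp.sum(p) == 1,
                # p is an outcome i could receive under IF relative to j's outcome
                # (total variation = half the L1 distance)...
                0.5 * cp.norm1(p - x[j]) <= d[i, j],
                # ...but i weakly prefers their actual outcome to p.
                u[i] @ x[i] >= u[i] @ p,
            ]

    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(w, x))), constraints)
    problem.solve()
    print("optimal objective:", problem.value)

Note that in this encoding any IF solution remains feasible (take pi[(i, j)] = x[i]) and so does any EF solution (take pi[(i, j)] = x[j]), matching the claim that PIIF relaxes both notions.
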

Subject Classification

ACM Subject Classification
  • Theory of computation → Theory and algorithms for application domains
Keywords
  • algorithmic fairness


References

  1. Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes. 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), 2019. URL: http://arxiv.org/abs/1904.02095.
  2. Julia Angwin, Noam Scheiber, and Ariana Tobin. Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads. ProPublica, Dec 20, 2017. URL: https://www.propublica.org/article/facebook-is-letting-job-advertisers-target-only-men.
  3. Julia Angwin, Ariana Tobin, and Madeleine Varner. Facebook (Still) Letting Housing Advertisers Exclude Users by Race. ProPublica, Nov. 21, 2017. URL: https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin.
  4. Maria-Florina Balcan, Travis Dick, Ritesh Noothigattu, and Ariel D Procaccia. Envy-Free Classification. arXiv preprint, 2018. URL: http://arxiv.org/abs/1809.08700.
  5. Vijay S. Bawa. Optimal rules for ordering uncertain prospects. Journal of Financial Economics, 2(1):95-121, 1975.
  6. Katie Benner, Glenn Thrush, and Mike Isaac. Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says. The New York Times, Mar 28, 2019. URL: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html.
  7. Miranda Bogen. All the Ways Hiring Algorithms Can Introduce Bias. Harvard Business Review, May 6, 2019. URL: https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias.
  8. Urszula Chajewska, Daphne Koller, and Dirk Ormoneit. Learning an agent’s utility function by observing behavior. In ICML, pages 35-42, 2001.
  9. Shuchi Chawla, Christina Ilvento, and Meena Jagadeesan. Individual Fairness in Sponsored Search Auctions. arXiv preprint, 2019. URL: http://arxiv.org/abs/1906.08732.
  10. Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, Oct 10, 2018.
  11. Amit Datta, Michael Carl Tschantz, and Anupam Datta. Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1):92-112, 2015.
  12. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS), pages 214-226. ACM, 2012.
  13. Cynthia Dwork and Christina Ilvento. Fairness Under Composition. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 33:1-33:20, 2019.
  14. Duncan K. Foley. Resource allocation and the public sector. PhD thesis, Yale University, 1967.
  15. Stephen Gillen, Christopher Jung, Michael Kearns, and Aaron Roth. Online Learning with an Unknown Fairness Metric. NeurIPS, 2018.
  16. Josef Hadar and William R. Russell. Rules for ordering uncertain prospects. The American Economic Review, 59(1):25-34, 1969.
  17. Ursula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Multicalibration: Calibration for the (Computationally-Identifiable) Masses. ICML, 2018.
  18. Deborah Hellman. Two concepts of discrimination. Virginia Law Review, 102:895, 2016.
  19. Deborah Hellman. Measuring Algorithmic Fairness. Virginia Law Review, ssrn.3418528, 2019.
  20. Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu. Eliciting and Enforcing Subjective Individual Fairness. arXiv, 2019. URL: http://arxiv.org/abs/1905.10660.
  21. Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. ICML, 2018.
  22. Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi. Average Individual Fairness: Algorithms, Generalization and Experiments. arXiv, 2019. URL: http://arxiv.org/abs/1905.10607.
  23. Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Fairness Through Computationally-Bounded Awareness. NeurIPS, 2018.
  24. Anja Lambrecht and Catherine E. Tucker. Algorithmic bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads. SSRN, ssrn.2852260, 2018.
  25. Laura Murphy. Facebook’s Civil Rights Audit - Progress Report, June 30, 2019. URL: https://fbnewsroomus.files.wordpress.com/2019/06/civilrightaudit_final.pdf.
  26. Cathy O'Neil. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2017.
  27. Guy Rothblum and Gal Yona. Probably Approximately Metric-Fair Learning. In ICML, 2018.
  28. Sheryl Sandberg. Doing More to Protect Against Discrimination in Housing, Employment and Credit Advertising. Facebook Newsroom, Mar 19, 2019. URL: https://newsroom.fb.com/news/2019/03/protecting-against-discrimination-in-ads/.
  29. Ariana Tobin. HUD sues Facebook over housing discrimination and says the company’s algorithms have made the problem worse. ProPublica, Mar 28, 2019. URL: https://www.propublica.org/article/hud-sues-facebook-housing-discrimination-advertising-algorithms.
  30. Ariana Tobin and Jeremy B. Merrill. Facebook Is Letting Job Advertisers Target Only Men. ProPublica, Sept 18, 2018. URL: https://www.propublica.org/article/facebook-is-letting-job-advertisers-target-only-men.
  31. Upturn. Upturn Amicus Brief in Onuoha v. Facebook, Nov 16, 2018. URL: https://www.courtlistener.com/recap/gov.uscourts.cand.304918/gov.uscourts.cand.304918.76.1.pdf.
  32. Hal Varian. Efficiency, equity and envy. Journal of Economic Theory, 9:63-91, 1974.
  33. John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior (Commemorative Edition). Princeton University Press, 2007.
  34. Muhammad Bilal Zafar, Isabel Valera, Manuel Rodriguez, Krishna Gummadi, and Adrian Weller. From parity to preference-based notions of fairness in classification. In Advances in Neural Information Processing Systems, pages 229-239, 2017.
  35. Mark Zuckerberg. The Facts About Facebook. Opinion, The Wall Street Journal, Jan 24, 2019. URL: https://www.wsj.com/articles/the-facts-about-facebook-11548374613.