
Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness

Published: 20 June 2022

Abstract

Most proposed algorithmic fairness techniques require access to demographic data in order to make performance comparisons and standardizations across groups; however, this data is largely unavailable in practice, hindering the widespread adoption of algorithmic fairness. In this paper, we consider calls to collect more demographic data to enable algorithmic fairness and challenge the notion that discrimination can be overcome with smart enough technical methods and sufficient data. We show how these techniques largely ignore broader questions of data governance and systemic oppression when categorizing individuals for the purpose of fairer algorithmic processing. We explore under what conditions demographic data should be collected and used to enable algorithmic fairness methods by characterizing a range of social risks to individuals and communities. For risks to individuals, we consider the unique privacy risks of sensitive attributes, the possible harms of miscategorization and misrepresentation, and the use of sensitive data beyond data subjects’ expectations. Looking more broadly, the risks to entire groups and communities include the expansion of surveillance infrastructure in the name of fairness, the misrepresentation and mischaracterization of what it means to be part of a demographic group, and the ceding of the ability to define what constitutes biased or unfair treatment. We argue that, by confronting these questions before and during the collection of demographic data, algorithmic fairness methods are more likely to actually mitigate harmful treatment disparities without reinforcing systems of oppression. Towards this end, we assess privacy-focused methods of data collection and use, as well as participatory data governance structures, as proposals for more responsibly collecting demographic data.
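
To make the core dependency concrete: group fairness metrics compare model behavior across demographic groups, so they cannot be computed without a demographic label attached to each record. The minimal sketch below is illustrative only (hypothetical data and function names, not code from the paper); it computes per-group selection rates and their largest gap, a common demographic parity check.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions within each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positive predictions, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups (0 means parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: the metric is computable only because each prediction
# is paired with a demographic label.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
labels = ["a", "a", "a", "b", "b", "b", "b", "b"]
print(selection_rates(preds, labels))                # {'a': 0.666..., 'b': 0.4}
print(demographic_parity_difference(preds, labels))  # about 0.27
```

If the demographic labels are unavailable, the disparity simply cannot be measured, which is the practical gap the abstract describes.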


        Information & Contributors

        Information

        Published In

        cover image ACM Other conferences
        FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
        June 2022
        2351 pages
        ISBN:9781450393522
        DOI:10.1145/3531146
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

categorization, demographic data, discrimination, fairness, gender, identity, measurement, race, sensitive data, sexuality
