
"It's the most fair thing to do but it doesn't make any sense": Perceptions of Mathematical Fairness Notions by Hiring Professionals

Published: 26 April 2024

Abstract

We explore the alignment of organizational representatives involved in hiring processes with five different, commonly proposed fairness notions. In a qualitative study with 17 organizational professionals, we investigate, for each notion, their perception of its understandability, fairness, potential to increase diversity, and practical applicability in the context of early candidate selection in hiring. In doing so, we do not explicitly frame our questions as questions of algorithmic fairness, but rather relate them to current human hiring practice. As our findings show, while many notions are well understood, their fairness, potential to increase diversity, and practical applicability are rated differently, illustrating the importance of understanding the application domain and its nuances, and calling for more interdisciplinary and human-centered research into the perception of mathematical fairness notions.

Supplemental Material

ZIP File: Coding of the transcripts



Published In

cover image Proceedings of the ACM on Human-Computer Interaction
Proceedings of the ACM on Human-Computer Interaction  Volume 8, Issue CSCW1
CSCW
April 2024
6294 pages
EISSN:2573-0142
DOI:10.1145/3661497
Issue’s Table of Contents
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. algorithmic fairness
  2. hiring and early candidate selection
  3. operationalization
  4. personnel selection
  5. user studies

Qualifiers

  • Research-article

Article Metrics

  • Total Citations: 0
  • Total Downloads: 170
  • Downloads (last 12 months): 170
  • Downloads (last 6 weeks): 46

Reflects downloads up to 11 Dec 2024.
