
Feminist AI: Can We Expect Our AI Systems to Become Feminist?

  • Research Article
  • Published in Philosophy & Technology

Abstract

The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It has become evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted: in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender bias in AI technologies as a multi-faceted phenomenon, and the linguistic explanation as too narrow. The next step moves from the linguistic aspects to the relational ones, drawing on postphenomenology. One of the analytical tools of this theory is the “I-technology-world” formula, which models our relations with technologies and, through them, with the world. Realizing that AI technologies give rise to new types of relations in which the technology has an “enhanced technological intentionality”, a new formula is suggested: “I-algorithm-dataset.” In the third part of the article, four types of solutions to gender bias in AI are reviewed: ignoring any reference to gender, revealing the considerations that led the algorithm to its decision, designing algorithms that are not biased, and, lastly, involving humans in the process. In order to avoid gender bias, we can recall a basic feminist understanding: visibility matters. Users and developers should be aware of the possibility of gender and racial biases and try to avoid them, bypass them, or eliminate them altogether.


Notes

  1. Previous generations of AI that were based on pre-programmed models were criticized for encoding gender bias into the software; see, for example, Suchman (1994) and Adam (1998).

  2. The effect of gender-biased translation algorithms continues and intensifies as such algorithms keep producing more biased texts, which are in turn fed into the algorithm as new training data (Zou and Schiebinger 2018).

  3. This is the logic of Generative Adversarial Network (GAN) architectures, in which one algorithm provides feedback to the other; a minimal sketch of this feedback loop appears below.
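
As a rough illustration (ours, not drawn from the article; all names and numeric values are assumptions), the following numpy sketch shows the adversarial feedback loop in miniature: a toy generator learns to imitate a target distribution using nothing but the scores a toy discriminator assigns to its outputs.

    # Toy GAN feedback loop (illustrative sketch, not the article's method).
    # A generator maps noise to samples; a discriminator scores samples as
    # real or fake; each side's gradient update is driven by the other's output.
    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # "Real" data the generator must learn to imitate: samples from N(4, 1).
    sample_real = lambda n: rng.normal(4.0, 1.0, size=n)

    a, b = 0.1, 0.0    # generator parameters: fake = a * z + b
    w, c = 0.1, 0.0    # discriminator parameters: D(x) = sigmoid(w * x + c)
    lr = 0.01

    for step in range(5000):
        z = rng.uniform(-1.0, 1.0, size=64)
        fake, real = a * z + b, sample_real(64)

        # Discriminator step: ascend log D(real) + log(1 - D(fake)),
        # i.e., push scores of real samples up and scores of fakes down.
        d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
        w += lr * np.mean((1 - d_real) * real - d_fake * fake)
        c += lr * np.mean((1 - d_real) - d_fake)

        # Generator step: the discriminator's score is the feedback signal;
        # ascend log D(fake) so the fakes become harder to reject.
        d_fake = sigmoid(w * (a * z + b) + c)
        g = (1 - d_fake) * w   # gradient of log D(fake) w.r.t. the fake sample
        a += lr * np.mean(g * z)
        b += lr * np.mean(g)

    # The generator's output mean should have drifted toward the real mean (4.0).
    print(f"generated mean: {b:.2f}")

The structurally relevant point is that the generator's updates are steered entirely by the second algorithm's verdicts rather than by direct human feedback.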

References

  • Adam, A. (1998). Artificial knowing: gender and the thinking machine. London: Routledge.


  • Bath, C. (2009). Searching for methodology: feminist technology design in computer science. In Proceedings of the 5th European Symposium on Gender & ICT, Digital Cultures: Participation - Empowerment - Diversity, March 5–7, 2009, University of Bremen, Bremen.

  • Buolamwini, J., & Gebru, T. (2018). Gender shades: intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.

  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.

  • Courtland, R. (2018). The bias detective. Nature, 558, 357–360.


  • Dastin, J. (2018). Amazon scrapped a secret AI recruitment tool that showed bias against women. Reuters, October 10, 2018.

  • Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings: a tale of opacity, choice, and discrimination. Proceedings on Privacy Enhancing Technologies, 92–112. https://doi.org/10.1515/popets-2015-0007.


  • Dave, P. (2018). Fearful of bias, Google blocks gender-based pronouns from new AI tool. Reuters, November 27, 2018.

  • Deng, M. (2014). One size fits few: Artificial hearts leave many out. LiveScience, September 4, 2014.

  • Feenberg, A. (2002). Transforming technology: a critical theory revisited. New York: Oxford University Press.

  • Feenberg, A. (2017). Technosystem: the social life of reason. Cambridge: Harvard University Press.

  • Fisman, R., & Luca, M. (2016). Fixing discrimination in online marketplaces. Harvard Business Review, 94(12), 88–95.

  • Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal heuristics: the adaptive toolbox. In Simple heuristics that make us smart (pp. 3–34). New York: Oxford University Press.

  • Han, H., Jain, A. K., Shan, S., & Chen, X. (2017). Heterogeneous face attribute estimation: a deep multi-task learning approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(11), 2597–2609.

  • Ihde, D. (1979). Technics and praxis: a philosophy of technology. Dordrecht: Reidel Publishing Company.


  • Ihde, D. (1990). Technology and the Lifeworld: from Garden to Earth. Bloomington: Indiana University Press.


  • Ihde, D. (2012). Experimental phenomenology: Multistabilities (second ed.). Albany: State University of New York Press.


  • Kricheli-Katz, T., & Regev, T. (2016). How many cents on the dollar? Women and men in product markets. Science Advances, 2(2), e1500599.


  • Lambrecht, A., & Tucker, C. E. (2018). Algorithmic Bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads. SSRN. https://doi.org/10.2139/ssrn.2852260.

  • Lomas, N. (2018). IBM launches cloud tool to detect AI bias and explain automated decisions. TechCrunch, September 19, 2018.

  • Marcus, G. (2018). The deepest problem with deep learning. Medium, December 1, 2018. https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695.

  • Michelfelder, D. P., Wellner, G., & Wiltse, H. (2017). Designing differently: toward a methodology for an ethics of feminist technology design. In S. O. Hansson (Ed.), The ethics of technology: methods and approaches (pp. 193–218). London and New York: Rowman and Littlefield.

  • Prey, R. (2018). Nothing personal: algorithmic individuation on music streaming platforms. Media, Culture & Society, 40(7), 1086–1100.


  • Schwartz Cowan, R. (1976). The “industrial revolution” in the home: household technology and social change in the 20th century. Technology and Culture, 17(1), 1–23.

  • Simon, H. A. (1987). Making management decisions: the role of intuition and emotion. The Academy of Management Perspectives, 1(1), 57–64.

  • Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1–20.

  • Suchman, L. (1994). Do categories have politics? The language/action perspective reconsidered. Computer Supported Cooperative Work (CSCW), 2(3), 177–190.


  • Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44–54.


  • Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.


  • Verbeek, P.-P. (2008a). Cyborg intentionality: rethinking the phenomenology of human–technology relations. Phenomenology and the Cognitive Sciences, 7, 387–395.

  • Verbeek, P.-P. (2008b). Morality in design: design ethics and the morality of technological artifacts. In P. E. Vermaas, P. Kroes, A. Light, & S. A. Moore (Eds.), Philosophy and design: from engineering to architecture (pp. 91–103). Dordrecht: Springer.

  • Verbeek, P.-P. (2011). Moralizing technology: understanding and designing the morality of things. Chicago: The University of Chicago Press.


  • Wajcman, J. (2009). Feminist theories of technology. Cambridge Journal of Economics. https://doi.org/10.1093/cje/ben057.

  • Wellner, G. (2016). A postphenomenological inquiry of cell phones: genealogies, meanings, and becoming. Lanham: Lexington Books.

  • Wellner, G. (2018a). Posthuman imagination: from modernity to augmented reality. Journal of Posthuman Studies, 2(1), 45–66.


  • Wellner, G. (2018b). From cellphones to machine learning: a shift in the role of the user in algorithmic writing. In A. Romele & E. Terrone (Eds.), Towards a philosophy of digital media (pp. 205–224). Cham: Palgrave Macmillan.

  • Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in algorithmic and human decision-making: is there a double standard? Philosophy & Technology, 1–23.

  • Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist—it’s time to make it fair. Nature, 559, 324–326.



Author information


Corresponding author

Correspondence to Galit Wellner.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wellner, G., Rothman, T. Feminist AI: Can We Expect Our AI Systems to Become Feminist? Philos. Technol. 33, 191–205 (2020). https://doi.org/10.1007/s13347-019-00352-z

