
Managing Bias in AI

Published: 13 May 2019

Abstract

Recent awareness of the impact of bias in AI algorithms raises the risk companies face when deploying such algorithms, especially because AI algorithms may not be explainable in the way that non-AI algorithms are. Even with careful review of the algorithms and data sets, it may not be possible to remove all unwanted bias, particularly because AI systems learn from historical data, which encodes historical biases. In this paper, we propose a set of processes that companies can use to mitigate and manage three general classes of bias: those related to mapping the business intent into the AI implementation, those that arise from the distribution of the samples used for training, and those that are present in individual input samples. While there may be no simple or complete solution to this issue, best practices can be used to reduce the effects of bias on algorithmic outcomes.
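Of the three classes of bias above, the second, bias arising from the distribution of training samples, lends itself to a mechanical check. As a minimal sketch (not the paper's method), assuming the training records carry a labeled group attribute and that reference population shares are known, one can flag groups whose representation in the training set deviates from the reference by more than a tolerance; the group names and tolerance below are hypothetical:

```python
from collections import Counter

def distribution_skew(train_groups, population_shares, tol=0.05):
    """Flag groups whose share of the training set differs from a
    reference population share by more than `tol` (absolute proportion).
    Returns {group: (observed_share, expected_share)} for flagged groups."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tol:
            flagged[group] = (observed, expected)
    return flagged

# Hypothetical example: a training set that over-represents group "A".
train = ["A"] * 80 + ["B"] * 20
population = {"A": 0.6, "B": 0.4}
print(distribution_skew(train, population))
# {'A': (0.8, 0.6), 'B': (0.2, 0.4)}
```

A check like this only surfaces sampling skew against a chosen reference; it says nothing about bias introduced in intent mapping or present in individual samples, which the paper treats as separate classes.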





Published In

WWW '19: Companion Proceedings of The 2019 World Wide Web Conference
May 2019, 1331 pages
ISBN: 9781450366755
DOI: 10.1145/3308560

In-Cooperation

IW3C2: International World Wide Web Conference Committee

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. artificial intelligence
2. bias
3. production monitoring


Conference

WWW '19: The Web Conference
May 13 - 17, 2019
San Francisco, USA

      Acceptance Rates

      Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

