
The effects of mixing machine learning and human judgment

Published: 24 October 2019

Abstract

Collaboration between humans and machines does not necessarily lead to better outcomes.




      Reviews

      Jonathan K. Millen

      Automated risk assessment systems are often used in situations that require human judgment, partly in the hope of removing human bias. Even when an automated system has been shown to be more accurate than human assessments, a team combining system and human decisions has occasionally proven better for some applications and collaboration modes. The presented experiment involves criminal recidivism assessments using a well-known algorithmic system, COMPAS. Human subjects were recruited according to their interest, rather than their expertise, in criminal justice.

      As expected, COMPAS was more accurate by itself than the humans, given the same data from real court cases. The pertinent question, however, is whether, and in what way, the human results are influenced by being told the COMPAS results prior to making their own assessments. In the first trial, humans were told the COMPAS recidivism risk scores, and their own scores were (on average) different and less accurate. In the second trial, the experimenters investigated an "anchoring" effect by providing COMPAS results that had been deliberately altered upward or downward. The average human scores shifted significantly in the direction of the altered COMPAS scores they were given.

      These results are not world-shaking, and the article is short, at less than seven pages. Yet it was as absorbing as a mystery novel, and it raised all sorts of questions that aroused the hope of further work. For example, the cited success of teaming involves a feedback loop between the humans and the system, which was not tried here. Also, would the experiment have come out differently with expert humans on the team? There is so much more to learn.
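      The anchoring effect the review describes can be sketched with a toy simulation. All numbers and the `human_estimate` blending rule below are hypothetical illustrations, not the study's actual data or model: each participant's estimate is pulled partway toward whatever score is displayed, so deliberately shifting the displayed score shifts the group average in the same direction.

```python
import random

random.seed(0)

# Toy simulation of an anchoring effect: human risk estimates drift
# toward whatever algorithmic score they are shown, whether or not
# that score has been altered. Purely illustrative numbers.

def human_estimate(own_prior, shown_score, anchoring_weight=0.5):
    """Blend a participant's own prior risk estimate with the displayed score."""
    return (1 - anchoring_weight) * own_prior + anchoring_weight * shown_score

# Hypothetical prior risk estimates on a 0-10 scale.
priors = [random.uniform(0, 10) for _ in range(1000)]

# Condition A: participants see an unaltered score (say, 5.0).
# Condition B: the displayed score is deliberately shifted upward (+2).
unaltered = [human_estimate(p, 5.0) for p in priors]
shifted = [human_estimate(p, 7.0) for p in priors]

mean_a = sum(unaltered) / len(unaltered)
mean_b = sum(shifted) / len(shifted)

# The average estimate moves in the direction of the altered anchor.
print(f"mean with unaltered anchor: {mean_a:.2f}")
print(f"mean with shifted anchor:   {mean_b:.2f}")
print(f"shift: {mean_b - mean_a:.2f}")
```

      With this linear blending rule the average shifts by exactly `anchoring_weight` times the alteration, which is the qualitative pattern the second trial reports: estimates move in the direction of the altered scores.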



      Published In

      Communications of the ACM, Volume 62, Issue 11
      November 2019
      136 pages
      ISSN: 0001-0782
      EISSN: 1557-7317
      DOI: 10.1145/3368886

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • Research-article
      • Popular
      • Refereed


      Cited By

      • (2024) A Decision Theoretic Framework for Measuring AI Reliance. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 221-236. DOI: 10.1145/3630106.3658901. Online publication date: 3-Jun-2024.
      • (2024) Framework for human–XAI symbiosis: extended self from the dual-process theory perspective. Journal of Business Analytics 7, 4, 224-255. DOI: 10.1080/2573234X.2024.2396366. Online publication date: 25-Sep-2024.
      • (2024) When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour. DOI: 10.1038/s41562-024-02024-1. Online publication date: 28-Oct-2024.
      • (2024) On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model. AI & SOCIETY. DOI: 10.1007/s00146-024-01886-3. Online publication date: 16-Mar-2024.
      • (2024) Human-AI Teaming: Following the IMOI Framework. Artificial Intelligence in HCI, 387-406. DOI: 10.1007/978-3-031-60611-3_27. Online publication date: 29-Jun-2024.
      • (2023) The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis. Digital Government: Research and Practice 5, 1, 1-35. DOI: 10.1145/3604570. Online publication date: 18-Aug-2023.
      • (2023) Human, Do You Think This Painting is the Work of a Real Artist? International Journal of Human–Computer Interaction 40, 18, 5174-5191. DOI: 10.1080/10447318.2023.2232978. Online publication date: 11-Jul-2023.
      • (2023) Fairness Perceptions of Artificial Intelligence: A Review and Path Forward. International Journal of Human–Computer Interaction 40, 1, 4-23. DOI: 10.1080/10447318.2023.2210890. Online publication date: 26-May-2023.
      • (2022) Just Resource Allocation? How Algorithmic Predictions and Human Notions of Justice Interact. Proceedings of the 23rd ACM Conference on Economics and Computation, 1184-1242. DOI: 10.1145/3490486.3538305. Online publication date: 12-Jul-2022.
      • (2022) Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning. Proceedings of the 27th International Conference on Intelligent User Interfaces, 794-806. DOI: 10.1145/3490099.3511138. Online publication date: 22-Mar-2022.
