
Legal requirements on explainability in machine learning

Published: 01 June 2021

Abstract

Deep learning and other black-box models are becoming increasingly popular. Despite their high performance, they may not be ethically or legally acceptable because of their lack of explainability. This paper presents the growing number of legal requirements on the interpretability and explainability of machine-learning models in the context of private and public decision making. It then explains how those legal requirements can be implemented in machine-learning models and concludes with a call for more interdisciplinary research on explainability.
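
As an illustration of how such a requirement might translate into practice, consider the following minimal sketch (not taken from the paper; the loan-decision data and feature names are hypothetical). It trains an intrinsically interpretable model and reports each feature's additive contribution to one individual's automated decision, the kind of per-decision account that legal motivation and explanation duties call for:

    # Minimal sketch: per-decision explanation from an interpretable model.
    # Data, labels and feature names below are hypothetical illustrations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "debt_ratio", "years_employed"]
    X = np.array([[40.0, 0.30, 5.0],
                  [25.0, 0.60, 1.0],
                  [60.0, 0.20, 10.0],
                  [30.0, 0.55, 2.0]])
    y = np.array([1, 0, 1, 0])  # 1 = application granted, 0 = refused

    model = LogisticRegression().fit(X, y)

    def explain_decision(x):
        # In a linear model the decision score decomposes additively,
        # so each feature's contribution can be reported on its own.
        contributions = model.coef_[0] * x
        ranked = sorted(zip(feature_names, contributions),
                        key=lambda pair: -abs(pair[1]))
        for name, value in ranked:
            print(f"{name}: {value:+.3f}")
        print(f"intercept: {model.intercept_[0]:+.3f}")

    explain_decision(X[1])  # account for one applicant's refusal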





Published In

Artificial Intelligence and Law, Volume 29, Issue 2, June 2021, 171 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 June 2021

Author Tags

  1. Interpretability
  2. Explainability
  3. Machine learning
  4. Law

Qualifiers

  • Research-article


Cited By

  • (2024) Addressing Data Challenges to Drive the Transformation of Smart Cities. ACM Transactions on Intelligent Systems and Technology 15(5), 1-65. https://doi.org/10.1145/3663482. Online publication date: 7-Nov-2024.
  • (2024) An Empirical Study on Compliance with Ranking Transparency in the Software Documentation of EU Online Platforms. Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society, 46-56. https://doi.org/10.1145/3639475.3640112. Online publication date: 14-Apr-2024.
  • (2024) Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering. ACM Computing Surveys 56(7), 1-35. https://doi.org/10.1145/3626234. Online publication date: 9-Apr-2024.
  • (2024) AI is Entering Regulated Territory: Understanding the Supervisors' Perspective for Model Justifiability in Financial Crime Detection. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-21. https://doi.org/10.1145/3613904.3642326. Online publication date: 11-May-2024.
  • (2024) An extension of iStar for Machine Learning requirements by following the PRISE methodology. Computer Standards & Interfaces 88(C). https://doi.org/10.1016/j.csi.2023.103806. Online publication date: 1-Mar-2024.
  • (2024) Explanatory artificial intelligence (YAI): human-centered explanations of explainable AI and complex data. Data Mining and Knowledge Discovery 38(5), 3141-3168. https://doi.org/10.1007/s10618-022-00872-x. Online publication date: 1-Sep-2024.
  • (2024) Bringing order into the realm of Transformer-based language models for artificial intelligence and law. Artificial Intelligence and Law 32(4), 863-1010. https://doi.org/10.1007/s10506-023-09374-7. Online publication date: 1-Dec-2024.
  • (2024) The black box problem revisited. Real and imaginary challenges for automated legal decision making. Artificial Intelligence and Law 32(2), 427-440. https://doi.org/10.1007/s10506-023-09356-9. Online publication date: 1-Jun-2024.
  • (2024) Joining metadata and textual features to advise administrative courts decisions: a cascading classifier approach. Artificial Intelligence and Law 32(1), 201-230. https://doi.org/10.1007/s10506-023-09348-9. Online publication date: 1-Mar-2024.
  • (2023) Stability guarantees for feature attributions with multiplicative smoothing. Proceedings of the 37th International Conference on Neural Information Processing Systems, 62388-62413. https://doi.org/10.5555/3666122.3668846. Online publication date: 10-Dec-2023.
