
Leveraging Integer Linear Programming to Learn Optimal Fair Rule Lists

  • Conference paper
  • First Online:
Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2022)

Abstract

Fairness and interpretability are fundamental requirements for the development of responsible machine learning. However, learning optimal interpretable models under fairness constraints has been identified as a major challenge. In this paper, we investigate and improve on a state-of-the-art exact learning algorithm, called CORELS, which learns rule lists that are certifiably optimal in terms of accuracy and sparsity. Statistical fairness metrics have been integrated incrementally into CORELS in the literature. This paper demonstrates the limitations of such an approach for exploring the search space efficiently before proposing an Integer Linear Programming method, leveraging accuracy, sparsity and fairness jointly for better pruning. Our thorough experiments show clear benefits of our approach regarding the exploration of the search space.
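
For readers unfamiliar with statistical fairness metrics, the sketch below illustrates one of the simplest, statistical parity [12, 26]: the gap in positive-prediction rates between two protected groups. This is an illustrative Python snippet only; the function name and group encoding are assumptions, and it is not the implementation used in the paper.

    # Illustrative only: statistical parity is one of the group fairness metrics
    # surveyed in [26]; fairness constraints of the kind considered in the paper
    # bound such gaps, but this snippet is not the authors' implementation.
    import numpy as np

    def statistical_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-prediction rates between the two
        groups encoded in `group` (0 = majority, 1 = minority)."""
        rate_maj = y_pred[group == 0].mean()
        rate_min = y_pred[group == 1].mean()
        return abs(rate_maj - rate_min)

    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(statistical_parity_gap(y_pred, group))  # 0.5 on this toy example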

Julien Ferry—First author.


Notes

  1. Source code of this enhanced version of the FairCORELS Python package is available at https://github.com/ferryjul/fairCORELSV2. The use of the CPLEX solver is possible but not mandatory, as our released code also embeds an open-source solver whose configuration has been tuned to handle our pruning problem efficiently: Mistral-2.0 [17, 19], in the version used for the MiniZinc Challenge 2020.
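
A minimal usage sketch follows, assuming the scikit-learn-like interface advertised for the faircorels package; the class name, the parameter names and their encodings, and the rl() accessor are assumptions that may differ across versions, so the repository's README remains the authoritative reference.

    # Minimal sketch, not an official example: parameter names and conventions
    # below are assumptions and should be checked against the package's README.
    import numpy as np
    from faircorels import FairCorelsClassifier  # pip install faircorels

    # Toy binarized dataset: two ordinary binary attributes plus a pair of
    # complementary columns encoding membership in the two protected groups.
    rng = np.random.default_rng(0)
    attrs = rng.integers(0, 2, size=(200, 2))
    group = rng.integers(0, 2, size=(200, 1))               # 1 = minority group
    X = np.hstack([attrs, group, 1 - group]).astype(np.uint8)
    noise = (rng.integers(0, 4, size=200) == 0).astype(np.uint8)
    y = attrs[:, 0] ^ noise                                 # label mostly follows attr1
    features = ["attr1", "attr2", "group:minority", "group:majority"]

    clf = FairCorelsClassifier(
        n_iter=100000,    # exploration budget (number of prefix evaluations)
        c=0.001,          # sparsity regularization per rule
        max_card=1,       # antecedents built from single features
        policy="bfs",     # prefix exploration ordering
        fairness=1,       # fairness metric id (1 = statistical parity, assumed)
        epsilon=0.95,     # tolerated unfairness at most 1 - epsilon (assumed)
        min_pos=3,        # column of the minority-group indicator (convention assumed)
        maj_pos=4,        # column of the majority-group indicator (convention assumed)
    )
    clf.fit(X, y, features=features, prediction_name="label")
    print(clf.rl())       # display the learned rule list (assumed accessor)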

References

  1. Aïvodji, U., Ferry, J., Gambs, S., Huguet, M.J., Siala, M.: Learning fair rule lists (2019). arXiv preprint http://arxiv.org/abs/1909.03977

  2. Aïvodji, U., Ferry, J., Gambs, S., Huguet, M.J., Siala, M.: FairCORELS, an open-source library for learning fair rule lists. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 4665–4669. CIKM 2021, Association for Computing Machinery, New York (2021). https://doi.org/10.1145/3459637.3481965

  3. Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., Rudin, C.: Learning certifiably optimal rule lists. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 35–44. KDD 2017, Association for Computing Machinery (2017). https://doi.org/10.1145/3097983.3098047

  4. Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., Rudin, C.: Learning certifiably optimal rule lists for categorical data. J. Mach. Learn. Res. 18(234), 1–78 (2018). http://jmlr.org/papers/v18/17-716.html

  5. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine bias: there’s software used across the country to predict future criminals and it’s biased against blacks. ProPublica, 23 May 2016

  6. Barocas, S., Hardt, M., Narayanan, A.: Fairness and machine learning (2019). http://www.fairmlbook.org

  7. Caton, S., Haas, C.: Fairness in machine learning: a survey (2020). arXiv preprint http://arxiv.org/abs/2010.04053

  8. Chikalov, I., et al.: Logical analysis of data: theory, methodology and applications, pp. 147–192. Springer, Berlin Heidelberg (2013). https://doi.org/10.1007/978-3-642-28667-4_3

  9. Chouldechova, A.: Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5(2), 153–163 (2017). https://doi.org/10.1089/big.2016.0047

  10. Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning (2017). arXiv preprint http://arxiv.org/abs/1702.08608

  11. Dua, D., Graff, C.: UCI machine learning repository (2017). http://archive.ics.uci.edu/ml

  12. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226. ITCS 2012, Association for Computing Machinery, New York (2012). https://doi.org/10.1145/2090236.2090255

  13. Freitas, A.A.: Comprehensible classification models: a position paper. SIGKDD Explor. Newsl. 15(1), 1–10 (2014). https://doi.org/10.1145/2594473.2594475

  14. Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a “right to explanation”. AI Mag. 38(3), 50–57 (2017)

  15. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 1–42 (2018). https://doi.org/10.1145/3236009

  16. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, vol. 29. Curran Associates, Inc. (2016). https://proceedings.neurips.cc/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf

  17. Hebrard, E.: Mistral, a constraint satisfaction library. Proc. Third Int. CSP Solver Competition 3(3), 31–39 (2008)

  18. Hebrard, E., Siala, M.: Explanation-based weighted degree. In: Salvagnin, D., Lombardi, M. (eds.) CPAIOR 2017. LNCS, vol. 10335, pp. 167–175. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59776-8_13

  19. Hebrard, E., Siala, M.: Solver engine (2017). https://www.cril.univ-artois.fr/CompetitionXCSP17/files/Mistral.pdf

  20. Kamiran, F., Calders, T.: Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst. 33(1), 1–33 (2012). https://doi.org/10.1007/s10115-011-0463-8

  21. Lipton, Z.C.: The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery. Queue 16(3), 31–57 (2018). https://doi.org/10.1145/3236386.3241340

  22. Miettinen, K.: Nonlinear Multiobjective Optimization, International Series in Operations Research & Management Science, vol. 12. Springer, Boston (2012). https://doi.org/10.1007/978-1-4615-5563-6

  23. Rivest, R.L.: Learning decision lists. Mach. Learn. 2(3), 229–246 (1987). https://doi.org/10.1007/BF00058680

  24. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x

  25. Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., Zhong, C.: Interpretable machine learning: fundamental principles and 10 grand challenges (2021). arXiv preprint http://arxiv.org/abs/2103.11251

  26. Verma, S., Rubin, J.: Fairness definitions explained. In: Proceedings of the International Workshop on Software Fairness, pp. 1–7. FairWare 2018, Association for Computing Machinery, New York (2018). https://doi.org/10.1145/3194770.3194776

  27. Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1171–1180. WWW 2017, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2017). https://doi.org/10.1145/3038912.3052660

Author information

Corresponding author

Correspondence to Julien Ferry.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Aïvodji, U., Ferry, J., Gambs, S., Huguet, M.J., Siala, M. (2022). Leveraging Integer Linear Programming to Learn Optimal Fair Rule Lists. In: Schaus, P. (eds) Integration of Constraint Programming, Artificial Intelligence, and Operations Research. CPAIOR 2022. Lecture Notes in Computer Science, vol 13292. Springer, Cham. https://doi.org/10.1007/978-3-031-08011-1_9

  • DOI: https://doi.org/10.1007/978-3-031-08011-1_9

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08010-4

  • Online ISBN: 978-3-031-08011-1

  • eBook Packages: Computer Science, Computer Science (R0)
