GitHub - ahousley/alibi: Algorithms for monitoring and explaining machine learning models

ahousley/alibi

Alibi Logo

Alibi is an open-source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.

Goals

  • Provide high quality reference implementations of black-box ML model explanation algorithms
  • Define a consistent API for interpretable ML methods
  • Support multiple use cases (e.g. tabular, text and image data classification, regression)
  • Implement the latest model explanation, concept drift, algorithmic bias detection and other ML model monitoring and interpretation methods

Installation

Alibi can be installed from PyPI:

pip install alibi

Examples

Anchor method applied to the InceptionV3 model trained on ImageNet:

[Images: original image, predicted as Persian cat, alongside the anchor, i.e. the image region sufficient for that prediction]
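To make the anchor idea concrete without the image-segmentation machinery, here is a minimal, illustrative sketch on tabular data: an anchor is a set of features that, when held fixed at the instance's values, keeps the model's prediction stable with high precision under perturbation. All names here are hypothetical stand-ins, not alibi's API; alibi's own implementation uses a more sophisticated beam search.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # Toy black-box classifier (hypothetical): class 1 iff feature 0 > 0.5
    return (X[:, 0] > 0.5).astype(int)

def precision(instance, anchor, X_train, n=500):
    """Estimate P(f(z) == f(x)) over perturbations z that keep the
    anchored features fixed at the instance's values."""
    idx = rng.integers(0, len(X_train), size=n)
    Z = X_train[idx].copy()                        # perturbations drawn from the data
    Z[:, list(anchor)] = instance[list(anchor)]    # fix the anchored features
    return np.mean(predict(Z) == predict(instance[None])[0])

def find_anchor(instance, X_train, threshold=0.95):
    """Greedily add the feature that most increases precision until
    the anchor's estimated precision exceeds the threshold."""
    anchor = set()
    while precision(instance, anchor, X_train) < threshold:
        best = max(set(range(len(instance))) - anchor,
                   key=lambda f: precision(instance, anchor | {f}, X_train))
        anchor.add(best)
    return sorted(anchor)

X_train = rng.random((1000, 3))
x = np.array([0.9, 0.2, 0.2])          # predicted class 1 because x[0] > 0.5
print(find_anchor(x, X_train))          # feature 0 alone anchors the prediction: [0]
```

For the toy classifier the search correctly reports that fixing feature 0 alone is sufficient: perturbing the other two features never changes the prediction.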

Contrastive Explanation method applied to a CNN trained on MNIST:

[Images: original MNIST digit, predicted as 4 (mnist_orig); pertinent negative flipping the prediction to 9 (mnist_pn); pertinent positive preserving the prediction 4 (mnist_pp)]
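A pertinent negative is the smallest perturbation that, when added to the instance, changes the model's prediction. The sketch below finds one by brute-force grid search on a toy two-feature classifier; this is purely illustrative (the classifier and function names are made up), whereas the actual Contrastive Explanation Method solves an elastic-net-regularised optimisation problem.

```python
import numpy as np

def predict(x):
    # Toy black-box classifier (hypothetical): class 1 iff x0 + x1 > 1
    return int(x[0] + x[1] > 1.0)

def pertinent_negative(x, step=0.05, max_radius=2.0):
    """Brute-force search for a small-L1 perturbation delta such that
    predict(x + delta) differs from predict(x)."""
    orig = predict(x)
    grid = np.arange(-max_radius, max_radius + step, step)
    best, best_norm = None, np.inf
    for d0 in grid:
        for d1 in grid:
            delta = np.array([d0, d1])
            norm = np.abs(delta).sum()
            if norm < best_norm and predict(x + delta) != orig:
                best, best_norm = delta, norm
    return best

x = np.array([0.2, 0.3])                # predicted class 0 (0.2 + 0.3 <= 1)
delta = pertinent_negative(x)
print(predict(x + delta))               # the class flips to 1
```

The returned delta pushes the instance just across the decision boundary, which is exactly the "what minimal change would alter the prediction" question the pertinent negative answers.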

Trust scores applied to a softmax classifier trained on MNIST:

[Image: trust scores for a softmax classifier on MNIST (trust_mnist)]
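The core of the trust score (Jiang et al.) is simple: the ratio of the distance from a test instance to the nearest class other than the predicted one, over the distance to the predicted class. Scores above 1 mean the predicted class is the closest one; higher is more trustworthy. The sketch below shows the ratio on raw nearest-neighbour distances; alibi's TrustScore implementation additionally filters low-density training points, which this toy version omits.

```python
import numpy as np

def trust_score(x, pred_class, X_train, y_train):
    """Ratio of the distance to the nearest class OTHER than the
    predicted one over the distance to the predicted class."""
    dists = {}
    for c in np.unique(y_train):
        # distance from x to the nearest training point of class c
        dists[c] = np.min(np.linalg.norm(X_train[y_train == c] - x, axis=1))
    d_other = min(d for c, d in dists.items() if c != pred_class)
    return d_other / dists[pred_class]

# Two well-separated clusters; a point near class 0 is trusted as class 0
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y_train = np.array([0, 0, 1, 1])
x = np.array([0.05, 0.1])
print(trust_score(x, 0, X_train, y_train))  # score >> 1: prediction 0 looks trustworthy
print(trust_score(x, 1, X_train, y_train))  # score << 1: prediction 1 looks suspicious
```

This is why trust scores are useful as a monitoring signal: a classifier whose softmax output is confident but whose trust score is far below 1 is predicting a class it is geometrically far from.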
