PyXAI (Python eXplainable AI) is a Python library (requiring Python 3.6 or later) for computing explanations of various forms for classifiers resulting from machine learning techniques. More precisely, several types of explanations can be computed for the classification of a given instance X:

  • Abductive explanations for X are intended to explain why X has been classified the way it has by the ML model (thus addressing the “Why?” question).
  • Contrastive explanations for X are intended to explain why X has not been classified by the ML model as the user expected (thus addressing the “Why not?” question).

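To make the two notions concrete, here is a minimal, self-contained sketch (not PyXAI code; the toy classifier and brute-force search are purely illustrative). An abductive explanation is a minimal subset of X's feature values that is sufficient to force the prediction; a contrastive explanation is a minimal set of feature changes that flips it:

```python
from itertools import combinations, product

def classify(x):
    # Toy classifier: predicts positive iff (f1 and f2) or f3.
    return (x["f1"] and x["f2"]) or x["f3"]

X = {"f1": 1, "f2": 1, "f3": 0}
features = list(X)

def is_sufficient(subset):
    # A subset is sufficient when every instance agreeing with X
    # on it receives the same prediction as X.
    free = [f for f in features if f not in subset]
    for values in product([0, 1], repeat=len(free)):
        y = dict(X)
        y.update(zip(free, values))
        if classify(y) != classify(X):
            return False
    return True

# Abductive explanation: smallest sufficient subset ("Why?").
abductive = next(
    set(s) for k in range(len(features) + 1)
    for s in combinations(features, k) if is_sufficient(s)
)

def flips(subset):
    # Does flipping exactly these feature values change the prediction?
    y = dict(X)
    for f in subset:
        y[f] = 1 - y[f]
    return classify(y) != classify(X)

# Contrastive explanation: smallest prediction-flipping change ("Why not?").
contrastive = next(
    set(s) for k in range(1, len(features) + 1)
    for s in combinations(features, k) if flips(s)
)

print(abductive)    # {'f1', 'f2'}: these values alone entail the prediction
print(contrastive)  # {'f1'}: changing this value would flip the prediction
```

Brute force is used here only for clarity; on real tree models, dedicated algorithms such as those in PyXAI avoid enumerating all instances.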
Models are the objects resulting from an experimental ML protocol through a chosen cross-validation method (for example, the result of a training phase on a classifier). Importantly, PyXAI completely separates the learning phase from the explaining phase: you produce/load/save models, and you compute explanations for instances from these models. Currently, with PyXAI, you can find explanations for the following supervised learning approaches for classification tasks, regardless of the library used to train them:

  • Decision Tree (DT)
  • Random Forest (RF)
  • Boosted Trees (Gradient boosting) (BT)
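As a rough illustration of what these models look like (this is a hypothetical minimal encoding, not PyXAI's internal representation), a decision tree routes an instance along threshold tests to a leaf class, and a random forest takes a majority vote over several such trees:

```python
# Hypothetical tree encoding: (feature, threshold, left, right), or a bare
# class label at a leaf. Feature names and thresholds are made up.

def predict_tree(tree, x):
    if not isinstance(tree, tuple):
        return tree  # leaf: return the class label
    feature, threshold, left, right = tree
    branch = left if x[feature] < threshold else right
    return predict_tree(branch, x)

def predict_forest(trees, x):
    # Random Forest: majority vote over the individual trees.
    votes = [predict_tree(t, x) for t in trees]
    return max(set(votes), key=votes.count)

dt = ("petal_length", 2.5, 0, ("petal_width", 1.8, 1, 2))
forest = [dt, ("petal_width", 1.0, 0, 1), ("petal_length", 5.0, 1, 2)]

x = {"petal_length": 4.0, "petal_width": 1.3}
print(predict_tree(dt, x))     # class predicted by the single tree
print(predict_forest(forest, x))  # class predicted by the forest
```

Boosted trees differ in that each tree contributes a weighted numeric score that is summed rather than voted on, which is why explaining them calls for different techniques.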

In addition to finding explanations, PyXAI also contains methods that perform operations (production, saving, loading) on models and instances. Currently, these helper methods are available for two ML libraries:

  • Scikit-learn: a machine learning software library
  • XGBoost: an optimized distributed gradient boosting library

Note that it remains entirely possible to find explanations for models coming from other libraries.

Please visit the following links for more information: