
FAQ

What is PyXAI?

PyXAI (Python eXplainable AI) is a Python library (requiring Python 3.6 or later) for computing formal explanations suited to (regression or classification) tree-based ML models (Decision Trees, Random Forests, Boosted Trees, ...). In contrast to many approaches to XAI (SHAP, LIME, ...), PyXAI generates explanations that are post-hoc, local, and correct. Being correct (aka sound or faithful) means that the explanations provided actually reflect the exact behaviour of the model: certain properties of the generated explanations are guaranteed. Explanations can be of several types:

  • Abductive explanations for an instance $X$ are intended to explain why $X$ has been classified the way it has by the ML model (thus addressing the “Why?” question). In the regression case, abductive explanations for $X$ are intended to explain why the regression value of $X$ belongs to a given interval.
  • Contrastive explanations for $X$ are intended to explain why $X$ has not been classified by the ML model as the user expected (thus addressing the “Why not?” question).

PyXAI also includes algorithms for correcting tree-based models when their predictions conflict with pieces of user knowledge. This trickier facet of XAI is seldom offered by existing XAI systems: when some domain knowledge is available and a prediction contradicts it, the model must be corrected. Rectification is a principled approach to such a correction operation.

Which models are supported?

Currently, PyXAI can deal with several libraries and several ML models. You can find below a table that summarizes current compatibility.

Type            Scikit-learn              Xgboost                        LightGBM
Decision Tree   DecisionTreeClassifier
Random Forest   RandomForestClassifier
Boosted Tree                              XGBClassifier, XGBRegressor    LGBMRegressor
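
You can either let PyXAI train one of these models for you (see below) or train one yourself with the corresponding library and hand it over to PyXAI. As a minimal sketch of the second option, reusing the Learning.import_models call shown in the import section below, with a toy scikit-learn dataset:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from pyxai import Learning

# Train a Random Forest directly with scikit-learn
data = load_iris()
rf = RandomForestClassifier(n_estimators=50).fit(data.data, data.target)

# Hand the trained model over to PyXAI
learner, model = Learning.import_models(rf)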

How to clean datasets?

PyXAI offers a tool to simplify this step. It allows you to modify the dataset through:

  • feature deletion
  • feature encoding (ordinal, one-hot, label)
  • selection of the target feature
  • possible conversion of a multi-class classification problem into a binary classification one
  • distinction between numerical and categorical features

Please read this page for more information.
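
The exact API of PyXAI's preprocessing tool is described on that page. As a rough illustration of the kinds of transformations involved (this is not PyXAI's own API; the dataset path and column names below are hypothetical), here is a generic pandas sketch:

import pandas as pd

# Hypothetical dataset and column names, for illustration only
data = pd.read_csv("../dataset/example.csv")

# Feature deletion
data = data.drop(columns=["UselessFeature"])

# One-hot encoding of a categorical feature
data = pd.get_dummies(data, columns=["Color"])

# Selection of the target feature and conversion of a multi-class
# problem into a binary one (class 2 vs. the rest)
y = (data["Label"] == 2).astype(int)
X = data.drop(columns=["Label"])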

How to create models?

PyXAI offers a module that helps you create models with different libraries. For instance, it can perform cross-validation. As an example, the following lines of code create a Random Forest model using the Scikit-learn library.

from pyxai import Learning

learner = Learning.Scikitlearn("../dataset/iris.csv")  # load the dataset with the Scikit-learn learner
model = learner.evaluate(method=Learning.HOLD_OUT, output=Learning.RF)  # train a Random Forest with a hold-out split
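
For illustration, a boosted-tree model can be obtained in the same way from XGBoost, possibly with cross-validation. This is a hedged sketch: Learning.BT (boosted-tree output) and Learning.K_FOLDS (cross-validation) are assumed constant names; check the model-creation page for the exact API.

from pyxai import Learning

# Assumed constants: Learning.K_FOLDS (cross-validation) and Learning.BT (boosted trees)
learner = Learning.Xgboost("../dataset/iris.csv")
models = learner.evaluate(method=Learning.K_FOLDS, output=Learning.BT)  # typically one model per fold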

This page explains how to create models.

How to import models?

PyXAI can import ML models from different libraries. This step is useful if the model has already been learnt and saved on your computer. As an example, the following lines of code import a Scikit-learn model saved with the pickle package (see the scikit-learn documentation).

import pickle
from pyxai import Learning

# Load the Scikit-learn model saved with pickle
with open("example.model", 'rb') as file:
    sk_model = pickle.load(file)

# Hand the model over to PyXAI
learner, model = Learning.import_models(sk_model)

Please, read this page for more details.

How to select instances?

PyXAI simplifies the selection of instances, depending on some characteristics (correct/wrong prediction, predicted class, ...). This example shows how to select some instances. Each selected instance is returned as a tuple (instance, prediction):

from pyxai import Learning

# Create the model
learner = Learning.Scikitlearn("../dataset/iris.csv")
model = learner.evaluate(method=Learning.HOLD_OUT, output=Learning.RF)

# Select instances
instances = learner.get_instances(model, n=3)  # Select 3 instances
instances = learner.get_instances(model, correct=False, n=3)  # Select 3 wrongly classified instances
instances = learner.get_instances(model, indexes=Learning.TEST, n=None, predictions=[0])  # Select all instances of the test set predicted as 0

Please, read this page for more details.

What is a PyXAI explanation?

In PyXAI, explanations are local, i.e., they depend on a given instance. An explanation is a set of conditions on features of the instance (e.g., $x_i < t_i$, $x_i \geq t_i$, $x_j = t_j$), depending on the type of each feature (numerical, categorical, Boolean). PyXAI only computes formal explanations: contrary to model-agnostic approaches (LIME, SHAP), a PyXAI explanation is really an explanation:

  • For an abductive explanation, every instance satisfying the same conditions is classified in the same way.
  • For a contrastive explanation, changing the truth value of all its conditions is sufficient to modify the classification of the instance.

This page explains all principles related to PyXAI explanations.
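
As an illustration, a sufficient reason (an abductive explanation) can be computed and mapped back to the features of the instance. This minimal sketch reuses calls shown elsewhere on this page and assumes the decision-tree output constant Learning.DT:

from pyxai import Learning, Explainer

learner = Learning.Scikitlearn("../dataset/iris.csv")
model = learner.evaluate(method=Learning.HOLD_OUT, output=Learning.DT)  # Learning.DT assumed for a decision tree
instance, prediction = learner.get_instances(model, n=1, correct=True)

explainer = Explainer.initialize(model, instance)
reason = explainer.sufficient_reason(n=1)  # abductive explanation
print("sufficient reason:", explainer.to_features(reason))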

How to explain decision made by ML models?

The core module of PyXAI is the explainer. Given a model and an instance, you can explain the decision made by the ML model. The following example goes through all the steps: creating the model, selecting an instance, and explaining the prediction made on it, using different kinds of explanations.

from pyxai import Learning, Explainer

# Create the model
learner = Learning.Scikitlearn("../dataset/iris.csv")
model = learner.evaluate(method=Learning.HOLD_OUT, output=Learning.RF)

# Select an instance
instance, prediction = learner.get_instances(model, n=1, correct=True)  # Select 1 correctly classified instance

explainer = Explainer.initialize(model, instance) # Initialize the explainer

dr = explainer.direct_reason()
print("direct reason ", explainer.to_features(dr))

mr = explainer.majoritary_reason()
print("Majoritary reason ", explainer.to_features(mr))

mr = explainer.minimal_majoritary_reason(time_limit=10)
if explainer.elapsed_time == Explainer.TIMEOUT:
    print("Approximated minimal majoritary reason ", explainer.to_features(mr))
else: 
    print("minimal majoritary reason ", explainer.to_features(mr))

Many pages are devoted to explanations; please select them in the menu.

How to rectify a model?

In PyXAI, you can rectify a model using an explanation as the premises of a decision rule:

explainer = Explainer.initialize(model, instance=instance)
reason = explainer.sufficient_reason(n=1)              # abductive explanation used as the premises of the rule
model = explainer.rectify(conditions=reason, label=1)  # the rectified model classifies instances covered by the rule as 1

Please, read this page for more information.

How to discard some features from computed explanations?

It may be the case that the explainee does not want some features to occur in explanations, for instance because they are hard to understand or not actionable. You can exclude the feature named Feature 1 as follows:

explainer = Explainer.initialize(model, instance) # initialize the explainer
explainer.set_excluded_features(["Feature 1"])    # exclude this feature from explanations

Other kinds of preferences are available. Please, read this page for more information.
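
Once a feature is excluded, the reasons computed afterwards no longer mention it. For instance, reusing calls already shown on this page:

reason = explainer.sufficient_reason(n=1)
print(explainer.to_features(reason))  # the conditions do not involve "Feature 1"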