
Rectification

Verifying a model requires being able to test whether the predictions it makes are correct, which often calls for the skills of an expert. Whenever a prediction is viewed as incorrect or, more generally, conflicts with the expert knowledge, a more challenging issue is to figure out how the ML model should be modified so that the prediction made afterwards is correct and the predictor complies with the expert knowledge. Rectification is a principled approach to such a correction operation: it is characterized by a set of rationality postulates.

Whenever the user disagrees with a prediction made by the ML model or with an explanation returned by PyXAI, she/he may provide PyXAI with a classification rule, assumed to be reliable enough, that can be used to rectify the model. A classification rule indicates, through its conclusion part (the right-hand side of the rule), how to classify any instance matching its premises part (the left-hand side of the rule). For example, the following classification rule must be obeyed: whenever the annual income of the client is lower than 30, the loan application must be rejected.
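As an illustration, such a rule can be thought of as a pair (premises, conclusion). The encoding below is purely illustrative and is not PyXAI's own encoding (which is based on literals of the binary representation and is described later on this page); it only makes the two parts of a rule explicit:

```python
# Illustrative encoding of the rule "annual income < 30 => reject (label 0)".
premises = [("annual_income", "<", 30)]  # left-hand side: the premises part
conclusion = 0                           # right-hand side: the conclusion part
rule = (premises, conclusion)
```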

Such a classification rule captures some domain knowledge that should be incorporated into the ML model in order to achieve better predictions, while preserving the explanation capabilities of the model (see the [Theories](/pyxai/documentation/explainer/theories/) page for more information). By construction, the rectification of an ML model by a classification rule is an ML model of the same kind that makes the same predictions as the original model, except on the instances for which the rule demands other predictions; on those instances, the resulting model provides the predictions required by the rule. Rectification is thus a conservative approach to the correction of a model: no retraining is performed, and only the predictions that are questioned are modified. Notably, from a computational complexity point of view, rectifying a tree-based model can be achieved in time polynomial in the size of the input (the representation of the model and the classification rule used to correct it), which makes the approach practical enough.
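To make this input/output behavior concrete, here is a minimal, model-agnostic sketch of what the rectified model computes at prediction time. The helper names `matches` and `rectified_predict` are illustrative, not part of the PyXAI API; in PyXAI the tree-based model itself is modified, rather than wrapped as below:

```python
# Illustrative semantics of rectification at prediction time.
# `conditions` is taken here to be a list of predicates over instances
# (a hypothetical encoding used only for this sketch).

def matches(instance, conditions):
    """Return True if the instance satisfies every premise of the rule."""
    return all(condition(instance) for condition in conditions)

def rectified_predict(model, instance, conditions, label):
    """The rectified model returns the rule's label on instances covered by
    the rule, and the original model's prediction on all other instances."""
    if matches(instance, conditions):
        return label
    return model.predict(instance)
```

For the loan example above, `conditions` would contain the single predicate testing whether the annual income is lower than 30, and `label` would be the "reject" class.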

In PyXAI, you can rectify a model using a classification rule given by some conditions (the left-hand side) and a label (the right-hand side):

```python
from pyxai import Explainer

# We consider an instance with label 0 (`model` and `instance` are
# assumed to come from an earlier step of the workflow).
explainer = Explainer.initialize(model, instance=instance)
reason = explainer.sufficient_reason(n=1)
model = explainer.rectify(conditions=reason, label=1)
```
<Explainer Object>.rectify(*, conditions, label):
This method rectifies the model using the classification rule `conditions => label` and simplifies the resulting model. Note that theories can help simplify the model.
conditions List of Integer: the conditions (premises part of the classification rule), given as a binary representation (a list of literals). See the Concepts page for more information about the binary representation.
label Integer: the label representing the conclusion part of the classification rule.
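Beyond reusing a reason computed by the explainer, a rule can be given directly as literals of the binary representation. In the sketch below, the literal values and the target label are hypothetical; the actual mapping from literals to feature conditions depends on the model and can be inspected with `explainer.to_features`:

```python
# Hypothetical rule: the conjunction of the conditions associated with
# literals 1 and -3 of the binary representation implies label 0.
rectified_model = explainer.rectify(conditions=[1, -3], label=0)

# Inspect which feature conditions the literals stand for (illustrative call).
print(explainer.to_features([1, -3]))
```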

For the rectification operation, PyXAI currently supports only DT (Decision Tree) and RF (Random Forest) models dedicated to binary classification. The algorithms for rectifying BT (Boosted Trees) models and for handling multi-class problems are still under development and will be available in future versions of PyXAI.
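Putting the pieces together, here is a minimal end-to-end sketch for a binary classification task, following the library's usual Learning/Explainer workflow. The dataset path is a placeholder, and the choice of rule (a sufficient reason with the opposite label) is only one possible correction:

```python
from pyxai import Learning, Explainer

# Train a decision tree on a binary classification dataset
# ("path/to/binary_dataset.csv" is a placeholder).
learner = Learning.Scikitlearn("path/to/binary_dataset.csv", learner_type=Learning.CLASSIFICATION)
model = learner.evaluate(method=Learning.HOLD_OUT, output=Learning.DT)

# Take one instance and compute a sufficient reason for its prediction.
instance, prediction = learner.get_instances(model, n=1)
explainer = Explainer.initialize(model, instance=instance)
reason = explainer.sufficient_reason(n=1)

# Rectify the model so that every instance covered by the reason
# gets the opposite label (here assuming the prediction was 0).
model = explainer.rectify(conditions=reason, label=1)
```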

