PyXAI includes an Explainer module that provides different methods for explaining the decisions made by ML models.
The Concepts page explains:
- How to initialize this module
- The concepts on which it is based (mainly the binary representation of an instance)
- How to display an explanation
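The binary representation mentioned above can be illustrated with a minimal, library-independent sketch: each condition tested by a tree-based model becomes one boolean, and an explanation is then a subset of these booleans. The feature names, thresholds, and the `binarize` helper below are hypothetical, not PyXAI's internal encoding:

```python
# Sketch: mapping an instance to the binary conditions used by a
# tree-based model. Each condition "feature > threshold" becomes one
# boolean. (Hypothetical example data, not PyXAI's actual encoding.)

def binarize(instance, conditions):
    """Return the truth value of each condition of the form
    feature > threshold for the given instance."""
    return {f"{feat}>{thr}": instance[feat] > thr
            for feat, thr in conditions}

instance = {"age": 35, "income": 1200}
conditions = [("age", 18), ("age", 30), ("income", 2000)]
print(binarize(instance, conditions))
# {'age>18': True, 'age>30': True, 'income>2000': False}
```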
The other pages within this section present several miscellaneous features:
- Theories are representations of pieces of knowledge about the dataset. PyXAI offers the possibility of encoding a theory when calculating explanations, in order to avoid computing impossible explanations.
- Explainer offers the possibility to process user preferences (preferring some explanations to others and excluding some features): Preferences
- How to use a time limit when calculating explanations? Time Limit
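To see why a theory matters, note that binary conditions over the same feature are logically linked: for instance, "age > 30" implies "age > 18", so a candidate explanation asserting the first while negating the second is impossible. The consistency check below is an illustrative sketch of this idea (hypothetical names; PyXAI encodes theories internally when computing explanations):

```python
# Sketch of a theory over binary conditions: "age>30" implies "age>18".
# A candidate assignment with age>30 = True and age>18 = False violates
# the theory and should never appear in an explanation.
# (Illustrative only; not PyXAI's API.)

def consistent(assignment, implications):
    """Check that every implication (a -> b) holds in the assignment."""
    return all(not assignment.get(a, False) or assignment.get(b, True)
               for a, b in implications)

theory = [("age>30", "age>18")]  # age>30 implies age>18
print(consistent({"age>30": True, "age>18": True}, theory))   # True
print(consistent({"age>30": True, "age>18": False}, theory))  # False
```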
Several kinds of explanations can be computed according to the properties they hold (direct, sufficient, majoritary, tree-specific, contrastive):
- How to compute explanations for classification tasks? Explaining Classification
- How to compute explanations for regression tasks? Explaining Regression
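As a rough intuition for the simplest kind listed above, a direct explanation collects the conditions tested along the path followed by the instance in a decision tree. The toy tree, labels, and `direct_reason` helper below are hypothetical and only mirror the idea, not PyXAI's API:

```python
# Sketch of a direct explanation for a toy decision tree: the
# conditions tested along the path the instance follows.
# (Hypothetical tree and labels, not PyXAI's representation.)

tree = ("age", 30,                            # if age > 30 ...
        ("income", 2000, "reject", "accept"), # ... then test income
        "reject")                             # else predict "reject"

def direct_reason(node, instance, path=()):
    if isinstance(node, str):                 # leaf: prediction reached
        return node, list(path)
    feat, thr, left, right = node
    if instance[feat] > thr:
        return direct_reason(left, instance, path + (f"{feat}>{thr}",))
    return direct_reason(right, instance, path + (f"{feat}<={thr}",))

pred, reason = direct_reason(tree, {"age": 35, "income": 1200})
print(pred, reason)  # accept ['age>30', 'income<=2000']
```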
Finally, the Visualization Of Explanations page presents PyXAI's Graphical User Interface (GUI).