The PyXAI library offers the possibility to process user preferences. Different kinds of preferences are handled:
- The user may prefer some explanations to others.
- The user may exclude some features from explanations.
More information about preferences can be found in the paper On Preferred Abductive Explanations for Decision Trees and Random Forests.
For the first kind of preference, the user has to provide a weight for each feature, representing its disutility (or cost). The PyXAI library offers different options:
PreferredReasonMethod.SHAPLEY: Only available with Scikit-learn. It uses Shapley values to discriminate features. See this paper for more information.
PreferredReasonMethod.FEATURE_IMPORTANCE: Only available with Scikit-learn. It uses the feature importances of the model to discriminate features. See this paper for more information.
PreferredReasonMethod.WORD_FREQUENCY: It uses the wordfreq package to discriminate features. The more frequent a word in a feature name is, the more likely it is to be understood by users.
PreferredReasonMethod.WEIGHTS: The user defines the weights to be used.
PreferredReasonMethod.INCLUSION_PREFERRED: The user has to define a partition over all features (a list of lists). The first elements of the partition are preferred to the second ones, which are preferred to the third ones, and so on.
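As an illustration of how a partition can induce per-feature weights, here is a minimal sketch (not PyXAI's internal code): features in earlier blocks of the partition receive a lower weight, i.e. a lower disutility, so they are preferred. The function name `partition_to_weights` is hypothetical.

```python
def partition_to_weights(partition):
    """Map each feature name to a weight equal to the rank of its block.

    `partition` is a list of lists of feature names; features in the
    first block get weight 1, features in the second block weight 2,
    and so on (lower weight = more preferred).
    """
    weights = {}
    for rank, block in enumerate(partition, start=1):
        for feature in block:
            weights[feature] = rank
    return weights

# 'age' and 'income' are preferred over 'zipcode'.
weights = partition_to_weights([["age", "income"], ["zipcode"]])
print(weights)  # {'age': 1, 'income': 1, 'zipcode': 2}
```

Such a weight dictionary matches the shape of input expected by the WEIGHTS option, where the user supplies a weight per feature directly.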
Excluding some features from explanations can be achieved using the function
set_excluded_features. The function
unset_excluded_features restores the initial state (by default, no features are excluded).
|<Explainer Object>.set_excluded_features(self, excluded):
|Sets the features that the user does not want to see in explanations. Feature names (not indices) must be given.
excluded: A list of feature names.
It may happen that excluded features prevent explanations from being computed. In this case, the method that computes the explanation will return
|Unsets the features set with the set_excluded_features function.
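To make the semantics of exclusion concrete, here is a small sketch (not the library's implementation): conceptually, any explanation containing an excluded feature is discarded, and if every candidate is discarded, no explanation can be returned. The helper `filter_explanations` is hypothetical.

```python
def filter_explanations(explanations, excluded):
    """Keep only the explanations that mention no excluded feature.

    Each explanation is modeled as a tuple of feature names.
    """
    excluded = set(excluded)
    return [expl for expl in explanations if not excluded & set(expl)]

candidates = [("age", "income"), ("zipcode", "income"), ("age",)]
print(filter_explanations(candidates, excluded=["zipcode"]))
# [('age', 'income'), ('age',)]
```

When the excluded set covers every candidate, the filtered list is empty, which mirrors the situation described above where the explanation method cannot produce a result.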