Only Boosted Trees (generated using XGBoost or LightGBM) are currently supported for the regression task. The algorithms for deriving explanations for regression achieved by other ML models are still under development and should be available in future versions of PyXAI.
A major difference between a classification task and a regression task achieved by an ML model is that in the latter case the exact value taken by $f(x)$ does not really matter. If this value were $f(x)\pm\epsilon$ for a sufficiently small real number $\epsilon$ instead of $f(x)$, it would not be a big deal. Mathematically speaking, this means that what matters is whether the value of $f(x)$ belongs to some interval $I$. Accordingly, you must provide an interval to the explainer in order to compute explanations. This is done thanks to the function set_interval available in the ExplainerRegressionBT object:
|<ExplainerRegressionBT Object>.set_interval(lower_bound, upper_bound):
|Set the interval used to compute the explanations.
|lower_bound Float: the lower bound of the interval.
|upper_bound Float: the upper bound of the interval.
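To make the role of the interval concrete, here is a minimal sketch of the constraint set_interval imposes. The MockExplainerRegressionBT class below is a hypothetical stand-in written for illustration only, not the real PyXAI class; only the method name and its two parameters come from the signature above.

```python
# Hypothetical stand-in illustrating the call shape and the constraint behind
# set_interval: the predicted value f(x) must belong to the given interval.

class MockExplainerRegressionBT:
    """Simplified mock of an explainer for a regression Boosted Tree model."""

    def __init__(self, prediction):
        self.prediction = prediction  # f(x), the value predicted for the instance
        self.lower_bound = None
        self.upper_bound = None

    def set_interval(self, lower_bound, upper_bound):
        # Reject intervals that do not contain the prediction: no realistic
        # explanation could be computed with respect to such an interval.
        if not (lower_bound <= self.prediction <= upper_bound):
            raise ValueError("f(x) must belong to the interval")
        self.lower_bound, self.upper_bound = lower_bound, upper_bound


explainer = MockExplainerRegressionBT(prediction=23.7)  # hypothetical f(x)
explainer.set_interval(20.0, 25.0)  # an interval containing the prediction
```

In the real library the interval is then used by the explanation-computation methods; the mock only captures the membership requirement.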
- Since the direct reason is unique and exactly explains the predicted value, there is no need to specify any interval to derive it.
- Of course, the interval depends on the predicted value $f(x)$: it must contain $f(x)$ in order to compute realistic explanations.
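One simple way to satisfy the requirement above is to build the interval directly around the prediction, for instance as $f(x)\pm\epsilon$ with a relative tolerance. The helper below is a hypothetical sketch; the function name and the 10% default are arbitrary choices, not part of the PyXAI API.

```python
# Hypothetical helper: build an interval centred on f(x) using a relative
# tolerance, so the prediction belongs to the interval by construction.

def interval_around(prediction, relative_epsilon=0.1):
    """Return (lower_bound, upper_bound) for f(x) +/- epsilon,
    with epsilon a fraction of |f(x)| (10% by default, an arbitrary choice)."""
    delta = abs(prediction) * relative_epsilon
    return prediction - delta, prediction + delta


lower, upper = interval_around(25.0)  # -> (22.5, 27.5)
```

The resulting bounds can then be passed to set_interval; widening the tolerance generally yields shorter explanations, since more leeway is granted on the value of $f(x)$.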