• PhD student:
  • Ryma Boumazouza
  • Funding: Artois, Région HdF
  • Thesis defended on: 8 December 2022, Salle des thèses

This thesis studies the problem of explaining individual predictions of black-box machine learning models, in both the single-label and multi-label classification settings.

First, we introduce an explanation approach that combines SAT solving with numerical measures into a model-agnostic method providing both symbolic and score-based explanations. The idea is to take a single-label classifier together with an instance and produce a propositional formula from which the explanations are generated. We consider both the case where a logical representation of the model itself is available and the case where it must be approximated by a surrogate model. In the latter case, a crucial step is to approximate the model with a simpler one that admits a tractable logical representation, so that explanations can be enumerated efficiently; to remain consistent with the original predictor, the selected surrogate must ensure high fidelity. The trained surrogate is then used to generate symbolic and numerical explanations.

We then consider a SAT-based framework that uses SAT solvers as the problem-solving engine. Given an unsatisfiable formula corresponding to a negative prediction, modern SAT solvers can report the cores responsible for the inconsistency. In this contribution, we provide two complementary types of symbolic explanations of unsatisfiability, sufficient reasons and counterfactuals, centered around Minimal Unsatisfiable Subsets (MUS) and Minimal Correction Subsets (MCS) respectively.

Second, we define measures of the quality of an explanation and of a variable's contribution, in order to assess how relevant explanations are and to focus on those that provide the most insight. Next, we define explanation mechanisms for the outcomes of multi-label classifiers.
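The MUS/MCS reading of unsatisfiability can be illustrated on a toy propositional formula. The sketch below is purely illustrative and not the thesis's implementation: the brute-force satisfiability check and the tiny "model plus instance" encoding are assumptions made for the example. A deletion-based loop yields a MUS (a sufficient-reason-style core), and a basic grow procedure yields one MCS (a counterfactual-style correction set).

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check: try every assignment over n_vars variables.
    A clause is a tuple of non-zero ints; negative means negated."""
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(l) for l in clause) for clause in clauses):
            return True
    return False

def deletion_mus(clauses, n_vars):
    """Deletion-based MUS extraction: drop every clause whose removal
    preserves unsatisfiability; what remains is a minimal unsat subset."""
    mus = list(clauses)
    for c in list(mus):
        rest = [d for d in mus if d != c]
        if not satisfiable(rest, n_vars):
            mus = rest
    return mus

def one_mcs(clauses, n_vars):
    """Grow a maximal satisfiable subset (MSS) clause by clause;
    the complement of the MSS is a minimal correction subset (MCS)."""
    mss = []
    for c in clauses:
        if satisfiable(mss + [c], n_vars):
            mss.append(c)
    return [c for c in clauses if c not in mss]

# Hypothetical encoding: the "model" enforces x1 -> x2, while the
# instance fixes x1 = True and x2 = False -- jointly unsatisfiable,
# mimicking a negative prediction.
model = [(-1, 2)]
instance = [(1,), (-2,)]
formula = model + instance

mus = deletion_mus(formula, 2)  # core explaining the conflict
mcs = one_mcs(formula, 2)       # removing (or flipping) it restores consistency
```

Here the MUS keeps all three clauses (each is needed for the conflict), while the MCS singles out the unit clause fixing x2, i.e. flipping that feature would change the outcome.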
We introduce explanations at different granularity levels, ranging from structural relationships between labels to the selection of features. Finally, we study feature-level importance scores measuring how much a given input feature contributes to a multi-label model's output. This contribution investigates two options: using existing single-label methods as oracles, or deriving feature attributions from symbolic explanations. To evaluate the quality of feature attributions, we extend the sensitivity and data-stability properties to the multi-label setting, and introduce a new property specific to multi-label classification that we call label-explanation correlation.
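One natural way to turn symbolic explanations into feature attributions, sketched below, is to score each feature by how often it appears across the enumerated explanations for a prediction. This frequency-based score, the feature names, and the example explanations are all illustrative assumptions, not the thesis's exact attribution measure.

```python
from collections import Counter

def attribution_from_explanations(explanations, features):
    """Illustrative attribution: a feature's score is the fraction of
    symbolic explanations (e.g. sufficient reasons) it occurs in."""
    counts = Counter(f for expl in explanations for f in set(expl))
    return {f: counts[f] / len(explanations) for f in features}

# Hypothetical sufficient reasons for one prediction, each a set of
# feature names that together entail the outcome.
reasons = [{"age", "income"}, {"age"}, {"age", "debt"}]
scores = attribution_from_explanations(reasons, ["age", "income", "debt"])
```

A feature occurring in every sufficient reason (here "age") gets the maximal score, matching the intuition that it is indispensable to the prediction.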

Jury composition

Reviewers:

  • Marie-Jeanne Lesot, Sorbonne Université
  • Sylvain Lagrue, Université de technologie de Compiègne

Examiner:

  • Christine Solnon, INSA de Lyon

Supervision:

  • Bertrand Mazure, Université d’Artois (Supervisor)
  • Karim Tabia, Université d’Artois (Co-supervisor)
  • Fahima Cheikh-Alili, Université d’Artois (Advisor)