EXPlainable artificial intelligence: a KnowlEdge CompilaTion FoundATION

EXPEKCTATION is the name of a research and teaching chair in AI (ANR-19-CHIA-0005-01), funded for four years by the ANR, the French National Research Agency.

The EXPEKCTATION project is about explainable and robust AI. It aims to devise global, model-agnostic approaches to interpretable and robust machine learning using knowledge compilation: we seek generic pre-processing techniques capable of extracting, from any black-box predictor, a corresponding white-box that can be used to provide various forms of explanations and to address verification queries. In the project, we plan to focus on post hoc interpretability: we consider ML models that are not intrinsically interpretable and analyze them once they have been trained. We also plan to focus on global interpretability (i.e., explaining the entire model behavior, which is not the same as explaining an individual prediction).

Clearly, translating the black-box into a white-box can be computationally demanding. Notably, if the white-box model is an arithmetic circuit, it can be very large. Furthermore, inferring explanations from a white-box model can also be computationally demanding, as many abduction problems are NP-hard. Fortunately, once the black-box model has been trained, there is no need to modify it each time a new input must be predicted. The corresponding white-box / circuit can therefore be pre-processed once, so as to facilitate the generation of explanations independently of the inputs considered. Knowledge compilation (KC) appears to be a very promising approach in this respect.
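The pre-processing idea can be illustrated on a toy scale (this is a sketch, not the project's actual pipeline, and the predictor and query names are invented for the example): an opaque boolean predictor is "compiled" once by tabulating it over a small input space, after which explanation and verification queries are answered from the white-box alone, without further calls to the black-box.

```python
# Toy illustration of black-box -> white-box compilation (hypothetical example,
# not the EXPEKCTATION pipeline): tabulate an opaque predictor once, then
# answer verification queries against the pre-processed white-box.
from itertools import product

def black_box(x):
    # Stand-in for a trained model we can only query; the rule is hidden.
    return (x[0] and x[1]) or (not x[2])

N_FEATURES = 3

# One-time "compilation": enumerate the (small) input space.
white_box = {x: black_box(x) for x in product([False, True], repeat=N_FEATURES)}

def globally_sufficient(feature, value, prediction):
    """Verification query: does fixing `feature` to `value` force `prediction`
    for every completion of the remaining features?"""
    return all(out == prediction
               for x, out in white_box.items() if x[feature] == value)

# Queries answered from the white-box alone, independently of any new input.
print(globally_sufficient(2, False, True))   # x2 = 0 always forces True
print(globally_sufficient(0, True, True))    # x0 = 1 does not
```

Exhaustive tabulation is of course exponential in the number of features; the point of KC is precisely to target compiled representations (e.g., circuits in tractable languages) that avoid this blow-up while still supporting such queries in polynomial time.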

The main purpose of the EXPEKCTATION project is to take advantage of KC techniques, in which we have strong expertise, to address fundamental issues in explainable and robust AI. Two main issues will be considered:

  • Which representation languages admit tractable algorithms for inferring various forms of explanations and for answering verification queries?
  • How can we extract a tractable representation from a black-box predictor?
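As a minimal sketch of what "tractable" means for the first question (the diagram and formula are illustrative assumptions, not project artifacts): once a predictor has been compiled into an ordered binary decision diagram (OBDD), one of the representation languages studied in knowledge compilation, a query such as model counting runs in time linear in the size of the diagram rather than exponential in the number of variables.

```python
# Hedged sketch: model counting on a hand-built OBDD for the illustrative
# formula f(x0, x1, x2) = (x0 AND x1) OR x2, with variable order x0 < x1 < x2.
# Each internal node is (variable index, low child, high child); leaves are
# the booleans False / True.
N_VARS = 3
node_x2 = (2, False, True)        # tests x2
node_x1 = (1, node_x2, True)      # tests x1; high edge jumps straight to True
root    = (0, node_x2, node_x1)   # tests x0

def count_models(node, level=0):
    """Count satisfying assignments below `node`, one pass over the diagram."""
    if isinstance(node, bool):
        # A leaf reached at `level` leaves N_VARS - level variables free.
        return (2 ** (N_VARS - level)) if node else 0
    var, lo, hi = node
    skipped = 2 ** (var - level)  # variables the incoming edge jumps over
    return skipped * (count_models(lo, var + 1) + count_models(hi, var + 1))

print(count_models(root))  # 5 assignments satisfy (x0 AND x1) OR x2
```

The same diagram supports other queries in polynomial time (equivalence checking, conditioning, enumeration of implicants), which is exactly the kind of query support the project asks representation languages to provide.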




Useful links