EXPEKCTATION


About

EXPlainable artificial intelligence: a KnowlEdge CompilaTion FoundATION

EXPEKCTATION is the name of a research and teaching chair in AI (ANR-19-CHIA-0005-01), funded by ANR, the French National Research Agency.

The EXPEKCTATION project concerns the development of approaches to explainable AI for interpretable and robust machine learning, using constraint-based automated reasoning methods, in particular knowledge compilation. We are looking for preprocessing techniques able to associate a black box predictor with a surrogate white box that can be used to provide various forms of explanation and to answer verification queries about the corresponding black box. The goal is to obtain AI systems that the user can trust. We plan to focus on the problem of post-hoc interpretability: we will examine learning models that are not intrinsically interpretable and analyze them once learned. Since the corresponding white box can be preprocessed so as to facilitate the generation of explanations of the predictions, independently of the associated inputs, knowledge compilation appears to be a very promising approach in this respect.
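As an illustration of the kind of query such a white box supports, here is a minimal, self-contained Python sketch. It computes a sufficient reason (an abductive explanation) for a toy decision tree: a subset-minimal set of feature values that, by itself, forces the model's prediction. The tree encoding, the brute-force sufficiency check and the greedy minimization are simplifying assumptions made for this example, not the project's actual algorithms.

    # Illustrative sketch only. A toy decision tree (the "white box") over
    # three Boolean features, encoded as (feature, subtree_if_0, subtree_if_1);
    # leaves carry the predicted class.
    TREE = (0,              # test x1
            (1, 0, 1),      # x1 = 0: test x2
            (2, 0, 1))      # x1 = 1: test x3

    def predict(tree, instance):
        """Walk down the tree to a leaf for a complete Boolean instance."""
        while isinstance(tree, tuple):
            feature, left, right = tree
            tree = right if instance[feature] else left
        return tree

    def is_sufficient(tree, instance, kept):
        """Does fixing the features in `kept` to their values in `instance`
        force the prediction, whatever the other features take? (Brute
        force, fine for a toy model.)"""
        target = predict(tree, instance)
        free = [i for i in range(len(instance)) if i not in kept]
        for bits in range(2 ** len(free)):
            completed = list(instance)
            for k, i in enumerate(free):
                completed[i] = (bits >> k) & 1
            if predict(tree, completed) != target:
                return False
        return True

    def sufficient_reason(tree, instance):
        """Greedy deletion: drop features one at a time while sufficiency
        holds, yielding one subset-minimal abductive explanation."""
        kept = set(range(len(instance)))
        for i in range(len(instance)):
            if is_sufficient(tree, instance, kept - {i}):
                kept.discard(i)
        return kept

    instance = (1, 0, 1)
    print(predict(TREE, instance))            # -> 1
    print(sufficient_reason(TREE, instance))  # -> {0, 2}: x1 = 1 and x3 = 1 suffice

On real models the brute-force check is intractable; this is precisely where compiling the white box into a tractable form pays off, since the compiled representation can then answer many such queries efficiently.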

Among the research questions that will be addressed, we want to determine which learning models and associated white box representation languages admit “efficient” algorithms for deriving explanations and supporting verification queries. We will study the computational complexity of various types of explanations. We also plan to develop and evaluate algorithms for these tasks. Finally, we want to study how to produce explanations that are as intelligible as possible, taking into account criteria intrinsic to the explanations (size, number, structure, etc.) but also criteria extrinsic to them (the context of the explanation task, the end user).
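To see why criteria such as the size and the number of explanations matter, the sketch below (same toy conventions as above, repeated here so that it stands alone, and again an illustrative assumption rather than the project's code) enumerates all subset-minimal sufficient reasons of an instance. Even a three-feature tree can admit several explanations of different sizes, so deciding which one to present to the end user is a genuine question.

    from itertools import combinations, product

    # Toy decision tree for f(x1, x2, x3) = x1 or (x2 and x3), encoded as
    # (feature, subtree_if_0, subtree_if_1); leaves carry the class.
    TREE = (0, (1, 0, (2, 0, 1)), 1)

    def predict(tree, instance):
        while isinstance(tree, tuple):
            feature, left, right = tree
            tree = right if instance[feature] else left
        return tree

    def is_sufficient(tree, instance, kept):
        """Brute-force check that fixing the features in `kept` forces the
        prediction, whatever values the remaining features take."""
        target = predict(tree, instance)
        free = [i for i in range(len(instance)) if i not in kept]
        for bits in product((0, 1), repeat=len(free)):
            completed = list(instance)
            for value, i in zip(bits, free):
                completed[i] = value
            if predict(tree, completed) != target:
                return False
        return True

    def minimal_sufficient_reasons(tree, instance):
        """Enumerate all subset-minimal sufficient reasons, smallest first.
        Sufficiency is monotone, so supersets of a found reason are skipped."""
        found = []
        for size in range(len(instance) + 1):
            for subset in map(set, combinations(range(len(instance)), size)):
                if any(reason <= subset for reason in found):
                    continue
                if is_sufficient(tree, instance, subset):
                    found.append(subset)
        return found

    print(minimal_sufficient_reasons(TREE, (1, 1, 1)))
    # -> [{0}, {1, 2}]: two explanations of different sizes for one prediction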

News

  • Dec 23: Steve Bellart and Louenas Bounia defended their PhD theses.
  • Jul 23: our papers On Contrastive Explanations for Tree-Based Classifiers and Rectifying Binary Classifiers are accepted to ECAI'23.
  • Jun 23: our papers Computing Abductive Explanations for Boosted Regression Trees and On Translations between ML Models for XAI Purposes are accepted to IJCAI'23.
  • Jun 23: our paper Approximating Probabilistic Explanations via Supermodular Minimization is accepted to UAI'23.
  • Apr 23: our work on computing explanations for tree-based classifiers is presented at the Workshop on Machine Learning, Interpretability, and Logic organized by IDEAL (The Institute for Data, Econometrics, Algorithms, and Learning, Chicago).
  • Feb 23: our paper Computing Abductive Explanations for Boosted Trees is accepted to AISTATS'23.
  • Dec 22: our paper On the explanatory power of Boolean decision trees is published in DKE.
  • Nov 22: The first release of our library PyXAI is available.
  • Sept 22: Ismaïl Baaj joins the group.
  • Jul 22: our papers On Preferred Abductive Explanations for Decision Trees and Random Forests, On the Complexity of Enumerating Prime Implicants from Decision-DNNF Circuits and On Quantifying Literals in Boolean Logic and its Applications to Explainable AI (Extended Abstract) are accepted to IJCAI'22.
  • Jul 22: our paper A New Exact Solver for (Weighted) Max#SAT is accepted to SAT'22.
  • Jun 22: Olivier Peltre joins the group.
  • Jan 22: Nicolas Szczepanski joins the group.
  • Dec 21: our papers (in French) Sur le pouvoir explicatif des arbres de décision (on the explanatory power of decision trees) and Les raisons majoritaires : des explications abductives pour les forêts aléatoires (majority reasons: abductive explanations for random forests) are accepted to the conference EGC'22. Both were nominated for the best academic paper award.
  • Nov 21: our paper Trading Complexity for Sparsity in Random Forest Explanations is accepted to AAAI'22.
  • Oct 20: Steve Bellart and Louenas Bounia start their PhDs.
  • Sept 20: Beginning of the project.

Members


Useful links