• PhD Student: Louenas Bounia
• Funding: ANR
• PhD defended on: Dec 22, 2023

A major problem today is that of explainable and robust AI (XAI): the goal is to be able to justify to the (human) user of an AI system the decisions made or the predictions suggested by the system, and also to be able to assess how reliable the system is.

An important research trend in this direction consists in associating with the AI system (seen as a black box) a white (or transparent) box, in the form of a circuit having the same input/output behavior as the black box. The white box can then be used to answer the explanation queries that are asked and to estimate the robustness of the decisions / predictions made.
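As a rough illustration of this black box / white box pairing, the sketch below pairs a toy decision-tree classifier with an equivalent propositional formula (a tiny circuit) and uses the latter to answer a simple explanation query: checking whether a partial assignment of the features is a sufficient reason for a prediction. The classifier, the feature names and the query shown are illustrative assumptions, not material taken from the thesis itself.

```python
from itertools import product

# Toy "black box": a small decision tree over three Boolean features.
# The tree and the feature names (x1, x2, x3) are illustrative assumptions.
def black_box(x1, x2, x3):
    if x1:
        return int(x2)
    return int(x2 and x3)

# "White box": a propositional formula (a tiny circuit) with the same
# input/output behavior as the black box.
def white_box(x1, x2, x3):
    return int((x1 and x2) or ((not x1) and x2 and x3))

# Equivalence check: the two boxes agree on every input.
assert all(black_box(*xs) == white_box(*xs)
           for xs in product([False, True], repeat=3))

# An explanation query answered on the white box: is a partial assignment
# a *sufficient reason* for a prediction, i.e. does fixing these features
# force the same class whatever the values of the remaining features?
def is_sufficient_reason(fixed, prediction):
    free = [v for v in ("x1", "x2", "x3") if v not in fixed]
    for values in product([False, True], repeat=len(free)):
        full = dict(fixed, **dict(zip(free, values)))
        if white_box(full["x1"], full["x2"], full["x3"]) != prediction:
            return False
    return True

# For the instance (x1=True, x2=True, x3=False), classified 1:
print(is_sufficient_reason({"x1": True, "x2": True}, 1))  # True
print(is_sufficient_reason({"x2": True}, 1))              # False
```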

Several questions arise, among them: which type of white box is suited to which type of black box? Which encodings should be used to go from a black box to a white box? What useful information must be preserved so that explanation or verification queries can be handled afterwards? What is the computational cost of answering such queries, depending on the type of circuit chosen and on the representation chosen for that circuit?

Since the white box does not depend on the particular inputs submitted to the black box, it can be preprocessed (compiled) offline in order to facilitate both the generation of explanations of the predictions made by the black box and the assessment of its reliability.
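A minimal sketch of this compile-once / query-many idea, reusing the toy white box above: here an exhaustive truth table plays the role of the compiled form (a stand-in for a tractable representation such as an OBDD or a d-DNNF circuit), and two illustrative queries are then answered by simple lookups. The chosen queries (model counting and a crude single-flip robustness measure) are assumptions made for the example.

```python
from itertools import product

VARS = ("x1", "x2", "x3")

# White box from the previous sketch (illustrative formula).
def white_box(x1, x2, x3):
    return int((x1 and x2) or ((not x1) and x2 and x3))

# "Compilation" step, done once and offline: tabulate the circuit on all
# inputs (a stand-in for compiling it into a tractable form such as an
# OBDD or a d-DNNF circuit).
COMPILED = {xs: white_box(*xs) for xs in product([False, True], repeat=3)}

# Query 1: model counting, i.e. how many inputs are classified positively.
def count_positive():
    return sum(COMPILED.values())

# Query 2: a crude robustness measure for a prediction, counted here as
# the number of single-feature flips that change the predicted class.
def fragility(instance):
    pred = COMPILED[instance]
    flips = 0
    for i in range(len(instance)):
        neighbour = instance[:i] + (not instance[i],) + instance[i + 1:]
        if COMPILED[neighbour] != pred:
            flips += 1
    return flips

print(count_positive())                # 3 positive inputs
print(fragility((True, True, False)))  # 2: flipping x1 or x2 changes the class
```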

The research questions listed above are addressed in the thesis, which focuses on modeling aspects (connections between black box and white box, encodings, and properties of these encodings).