A major issue in AI today is that of explainable and robust AI (XAI): the objective is to justify to a (human) user of an AI system the decisions taken or the predictions suggested by the system, and also to assess how reliable the system is.

An important research trend in this direction consists in associating with the AI system (viewed as a black box) a white (or transparent) box, in the form of a circuit with the same input/output behavior as the black box. The white box can then be used to answer the explanation queries under consideration and to estimate the robustness of the decisions or predictions made.

Several questions arise. In particular: which type of white box for which type of black box? Which encodings can be used to go from the black box to the white box? What information must be preserved by the encoding so that explanation or verification queries can be carried out? What is the cost of these queries, depending on the type of circuit selected and the representation chosen for it?

Since the white box does not depend on the inputs given to the black box, it can be preprocessed (compiled) in order to facilitate the generation of explanations for the predictions made by the black box and the assessment of its reliability.
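To give a flavor of the approach, here is a purely illustrative sketch (not part of the thesis subjects themselves): a toy "black box" classifier over three Boolean features, an equivalent "white box" Boolean circuit, and a brute-force computation of a sufficient-reason explanation. The feature names, functions, and the exhaustive enumeration used here are hypothetical; in practice the circuit would be compiled into a tractable representation so that such queries remain feasible at scale.

    # Illustrative toy example (hypothetical names and model).
    from itertools import combinations, product

    FEATURES = ["fever", "cough", "fatigue"]

    def black_box(x):
        # Stand-in for an opaque model: returns 0 or 1.
        return int((x["fever"] and x["cough"]) or (x["fever"] and x["fatigue"]))

    def white_box(x):
        # Transparent circuit with the same input/output behavior:
        # fever AND (cough OR fatigue)
        return int(x["fever"] and (x["cough"] or x["fatigue"]))

    def equivalent(f, g, features):
        # Check input/output equivalence by exhausting the (small) input space.
        for bits in product([0, 1], repeat=len(features)):
            x = dict(zip(features, bits))
            if f(x) != g(x):
                return False
        return True

    def sufficient_reason(f, features, instance):
        # Smallest subset of the instance's feature values that, on its own,
        # forces the prediction whatever the remaining features are.
        label = f(instance)
        for size in range(len(features) + 1):
            for subset in combinations(features, size):
                fixed = {v: instance[v] for v in subset}
                free = [v for v in features if v not in subset]
                if all(f({**dict(zip(free, bits)), **fixed}) == label
                       for bits in product([0, 1], repeat=len(free))):
                    return fixed
        return instance

    assert equivalent(black_box, white_box, FEATURES)
    patient = {"fever": 1, "cough": 1, "fatigue": 0}
    print(sufficient_reason(white_box, FEATURES, patient))
    # -> {'fever': 1, 'cough': 1}: these two values alone entail the prediction.

On realistic models the input space cannot be enumerated, which is precisely why the choice of circuit representation and of the compilation step matters: some representations make explanation and verification queries tractable, others do not.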

The research questions listed above will be addressed in the two proposed theses. The first will focus on the modeling aspects (black box / white box links, encodings and their properties). The second will focus on the representational aspects (which representations for circuits?) and on the algorithmic aspects of explanation and verification.

These theses will be prepared at the CRIL lab (Lens Computer Science Research Center, UMR CNRS 8188 / Université d'Artois – www.cril.fr), as part of an ANR research and teaching chair named EXPEKCTATION. The doctoral students recruited will participate in the activities of the CNRS international research project MAKC (an international "laboratory without walls" created for 5 years, starting in 2020), carried out by CRIL in collaboration with the University of California, Los Angeles (UCLA). They will benefit from a 3-year fixed-term contract from CNRS and will be able, if they wish, to teach at the Université d'Artois as temporary teaching assistants. For each thesis, the position may start on September 1st, 2020 (the start date may be postponed if necessary).

The skills sought are in data science and artificial intelligence, broadly understood (including computer science and applied mathematics). Supervision will be provided by the group of CRIL researchers involved in the ANR EXPEKCTATION project.

CRIL is a human-scale lab specialized in artificial intelligence, well-resourced and internationally recognized in its areas of expertise. CRIL participates in the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE).

For more details on the Ph.D. thesis subjects, their supervision, or on the CRIL lab, feel free to contact Pierre Marquis.

Interested candidates should apply before June 8th, 2020 at:
- https://emploi.cnrs.fr/Offres/Doctorant/UMR8188-PIEMAR-003/Default.aspx?lang=EN
- https://emploi.cnrs.fr/Offres/Doctorant/UMR8188-PIEMAR-004/Default.aspx?lang=EN