Explainable AI is a field that has emerged with the boom in machine learning and the opacity of the most accurate models (such as deep neural networks). Human users of AI systems need safeguards against taking erroneous predictions as correct, and the ability to explain the predictions made is one way of detecting and rejecting such predictions. This need is critical when AI components are used in sensitive applications, and the explanation requirement is a matter of regulation in Europe (with the GDPR since 2018 and now with the AI Act). For these reasons, we chose to place the theme of "Explainable AI" at the center of CRIL's scientific project for the 2020-2024 contract (extended to 2025 following the COVID pandemic). Indeed, it seemed to us that the unit's expertise in research questions concerning data, knowledge, and constraints could be profitably mobilized to create synergies and give rise to original research in explainable AI.