• Funding: Artois, Région HdF
• Start year: 2023

Recently, learning from natural language explanations has received considerable attention in the AI community. The idea of learning from explanations is especially appealing in few-shot learning, i.e., learning with training data so limited that relying on label co-occurrence statistics is not feasible, and in explainable AI, i.e., explaining the decision-making processes of black-box deep learning models. Natural language explanations can provide knowledge about a given task and guide models to perform reasoning when supervision is limited. While large language models (LLMs) such as GPT-3, ChatGPT, OPT, or BLOOM have demonstrated a remarkable capability to capture commonsense knowledge, their ability to perform intricate reasoning with explanations remains limited. This thesis aims to develop methods for generating explanations from language models that effectively support high-level reasoning in a few-shot learning setting. We will also investigate how LLMs can be leveraged to generate explanations for the predictions of black-box deep learning models.
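
To make the idea concrete, the sketch below is purely illustrative (the task, demonstrations, and prompt format are assumptions, not the thesis's actual setup): it shows how few-shot prompting can embed a natural language explanation in each in-context demonstration, so that the model is nudged to explain before predicting.

```python
# Illustrative sketch of explanation-based few-shot prompting.
# The NLI task, the demonstrations, and the prompt layout are made up
# for illustration; any LLM client could consume the resulting string.

# Hypothetical demonstrations: each pairs a label with an explanation.
DEMONSTRATIONS = [
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "A musician is performing.",
        "label": "entailment",
        "explanation": "Playing a guitar on stage is a form of musical performance.",
    },
    {
        "premise": "A dog sleeps on the sofa.",
        "hypothesis": "The dog is chasing a ball.",
        "label": "contradiction",
        "explanation": "A sleeping dog cannot be chasing a ball at the same time.",
    },
]


def build_prompt(premise: str, hypothesis: str) -> str:
    """Assemble a few-shot prompt where each demonstration carries an explanation."""
    parts = []
    for demo in DEMONSTRATIONS:
        parts.append(
            f"Premise: {demo['premise']}\n"
            f"Hypothesis: {demo['hypothesis']}\n"
            f"Explanation: {demo['explanation']}\n"
            f"Label: {demo['label']}\n"
        )
    # The query ends at "Explanation:", prompting the model to produce
    # its reasoning first and only then a label.
    parts.append(f"Premise: {premise}\nHypothesis: {hypothesis}\nExplanation:")
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_prompt("A child holds an umbrella in the rain.",
                       "The weather is dry."))
```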