• Funding: ANRT, Happiso
• Start year: 2026

Recommendation systems and conversational assistants aim to engage users in natural-language dialogue, providing recommendations while ensuring a high acceptance rate. To achieve this, it is essential to implement mechanisms that encourage users to take ownership of the recommendations and to understand the options presented. On the one hand, persuasive elements must be integrated to influence user behavior and decisions, thereby strengthening acceptance of the suggestions. On the other hand, decisions must be accompanied by clear and compelling explanations, carefully crafted to facilitate understanding and increase trust in the system.

With the emergence of large language models (LLMs) such as GPT-4, LLaMA 3, and Mistral, the performance of virtual assistants and recommendation systems has improved significantly. LLMs stand out for their exceptional ability to generate highly persuasive content, fostering a better understanding of recommendations and leading to high user acceptance rates.

Nevertheless, despite these promising advances, current LLM-based systems have notable limitations. In particular, their persuasion mechanisms can mislead users because of the hallucinations inherent in these models, and because they fail to take into account the user's cognitive state and environmental context. This limitation is partly explained by the absence of a true theory of mind, that is, the ability to model the beliefs, intentions, desires, and emotions of others. Without this capability, LLMs struggle to adapt their explanations or recommendations in a personalized and context-sensitive way. This limits their ability to engage in effective interactions and to provide truly relevant justifications, harming both the quality of the user experience and the reliability of the system.
This is particularly problematic in sensitive business applications, especially in the legal field, where the margin for error must be virtually nonexistent: a misinterpretation or an inappropriate recommendation can have serious consequences. This thesis therefore aims to address these limitations by developing a persuasive and responsible virtual assistant specifically designed for managing bailiffs' case files.