  • Funding: ANR, Artois
  • PhD defended on: Nov 21, 2025

The management of contradictory information presents significant challenges in knowledge representation and reasoning. Inconsistency can arise from merging information retrieved from different sources or from the presence of rules with exceptions. Reasoning from inconsistent knowledge bases is not trivial, because contradictions license arbitrary deductions: in propositional logic, any conclusion can be deduced from a contradiction (the principle of explosion). It is therefore essential to use appropriate inconsistency-handling mechanisms to enable reasoning in the presence of inconsistent or conflicting information.
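As a minimal illustration of the principle of explosion (not part of the thesis itself), the following sketch checks classical entailment by truth-table enumeration. The `entails` helper and the variable names are illustrative. Since no truth assignment satisfies both p and ¬p, entailment from these premises holds vacuously for any query whatsoever:

```python
from itertools import product

def entails(premises, query, variables):
    """Classical entailment by truth-table enumeration:
    the query must hold in every model of the premises."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not query(model):
            return False
    return True

# Contradictory premises: p and not-p. No assignment satisfies both,
# so entailment holds vacuously for ANY query, including q and not-q.
premises = [lambda m: m["p"], lambda m: not m["p"]]
print(entails(premises, lambda m: m["q"], ["p", "q"]))      # True
print(entails(premises, lambda m: not m["q"], ["p", "q"]))  # also True
```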

The problem of contradictory information has been studied in several contexts. One of them is that of partially ordered information, where some pieces of information may have incomparable degrees of reliability. As a result, one cannot always decide which piece of information to select in the presence of inconsistency, which makes reasoning a challenging task. Only a few tractable methods have been proposed to deal with this specific situation. Furthermore, dealing with incomparabilities may cause the loss of some consequences, so that some query answers are missed. The aim of this thesis is to investigate and extend tractable inconsistency-tolerant semantics. More specifically, we plan to propose new methods to efficiently handle inconsistency in the case of partially ordered information.
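To make the loss of consequences concrete, here is a brute-force propositional sketch, not the thesis's actual semantics and deliberately not tractable: it computes the maximal consistent subsets (repairs) of a small conflicting knowledge base, then keeps only what holds in every repair, in the spirit of intersection-based (IAR-style) semantics. The literal encoding and helper names are illustrative:

```python
from itertools import combinations

# A toy knowledge base of propositional literals (variable, truth value).
# It is inconsistent: it asserts both p and not-p, say from two
# incomparable sources, so neither literal can be preferred.
kb = [("p", True), ("p", False), ("q", True)]

def consistent(subset):
    """A set of literals is consistent iff no variable gets both values."""
    return all((v, True) not in subset or (v, False) not in subset
               for v, _ in subset)

def repairs(kb):
    """Maximal (w.r.t. set inclusion) consistent subsets of kb."""
    cands = [set(s) for r in range(len(kb), 0, -1)
             for s in combinations(kb, r) if consistent(set(s))]
    return [s for s in cands if not any(s < t for t in cands)]

# Cautious semantics: keep only the literals present in EVERY repair.
safe = set.intersection(*repairs(kb))
print(repairs(kb))  # two repairs: {p, q} and {not-p, q}
print(safe)         # {('q', True)} -- both p-literals are lost
```

The exponential enumeration above is exactly what tractable inconsistency-tolerant semantics aim to avoid.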

We also plan to investigate the application of such methods to handle conflicts in access control models. Access control models are an important means of protecting personal data. In this setting, inconsistency may arise from the simultaneous use of permission and prohibition rules. Many models have been proposed to handle this type of conflict. However, they either add a large set of constraints to avoid the conflicts, or require human assistance to overcome them. Moreover, these models lack the capacity to explain the decisions they take. The aim of applying inconsistency-tolerant semantics in this case is first to ensure the continuous and safe addition and modification of rules in a model while dealing with inconsistency, and then to equip these models with explainability mechanisms. Finally, we propose to extend these security policy management solutions to incorporate the notion of explainability.
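The following sketch, again purely illustrative, shows the kind of permission/prohibition conflict at stake and how a decision can carry a rule-level explanation. The deny-overrides strategy used here is one standard combining strategy (as in XACML), not the semantics proposed in the thesis; the rule names, subjects, and resources are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    effect: str      # "permit" or "deny"
    subject: str
    action: str
    resource: str
    rule_id: str

# Toy policy with a permission/prohibition conflict on the same request.
policy = [
    Rule("permit", "nurse", "read", "medical_record", "R1"),
    Rule("deny",   "nurse", "read", "medical_record", "R2"),
]

def decide(policy, subject, action, resource):
    """Evaluate a request; on conflict, apply deny-overrides and
    return the supporting rules as an explanation of the decision."""
    applicable = [r for r in policy
                  if (r.subject, r.action, r.resource)
                  == (subject, action, resource)]
    if not applicable:
        return "not_applicable", []
    effect = "deny" if any(r.effect == "deny" for r in applicable) else "permit"
    explanation = [r.rule_id for r in applicable if r.effect == effect]
    return effect, explanation

print(decide(policy, "nurse", "read", "medical_record"))  # ('deny', ['R2'])
```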

Jury committee

  • Sihem BELABBES – University of Paris 8 – Supervisor
  • Salem BENFERHAT – Artois University – Supervisor
  • Karim TABIA – Artois University – Supervisor
  • Clara BERTOLISSI – INSA Centre-Val de Loire – Rapporteur
  • Anthony HUNTER – University College London – Rapporteur
  • Meghyn BIENVENU – CNRS, Bordeaux University – Examiner
  • Sébastien DESTERCKE – CNRS, Université de technologie de Compiègne – Examiner
  • Laura GIORDANO – University of Eastern Piedmont – Examiner