CRIL
Centre de Recherche en Informatique de Lens
Research in artificial intelligence and its applications

Learning Interpretable Circuits

    • Thesis co-supervisors:
    • Frédéric Koriche
    • Jean-Marie Lagniez
    • Stefan Mengel
    • Funding: CRIL

    Overview

    In recent years, there has been a growing interest in the design of statistical learning algorithms for interpretable models, which are not only accurate but also understandable by human users. A related and desirable property is explainability, which refers to the computational ability of predictive models to explain their predictions in intelligible terms.

    In this setting, decision trees are of paramount importance, as they can be easily read by recursively breaking a choice into sub-choices until a decision is reached. Although the problem of learning decision trees is NP-hard, several optimization approaches have been proposed for identifying a decision tree that minimizes some regularized risk objective (e.g. [1,2,3]).
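    To make this optimization view concrete, the sketch below exhaustively enumerates all depth-bounded Boolean decision trees and returns one minimizing the empirical misclassification risk. This brute-force search is only an illustration of the objective, not any of the SAT, dynamic-programming, or differentiable methods of [1,2,3]; every function name in it is an assumption of this sketch.

    ```python
    def predict(tree, x):
        """A tree is either a leaf label (0/1) or a triple (feature, left, right)."""
        while isinstance(tree, tuple):
            feat, lo, hi = tree
            tree = hi if x[feat] else lo
        return tree

    def risk(tree, X, y):
        """Empirical misclassification count of a tree on a labeled sample."""
        return sum(predict(tree, x) != yi for x, yi in zip(X, y))

    def enumerate_trees(depth, n_features):
        """Yield every decision tree of depth at most `depth` (exponentially many)."""
        yield 0
        yield 1
        if depth == 0:
            return
        for feat in range(n_features):
            for lo in enumerate_trees(depth - 1, n_features):
                for hi in enumerate_trees(depth - 1, n_features):
                    yield (feat, lo, hi)

    def optimal_tree(X, y, depth, n_features):
        """Return a depth-bounded tree minimizing the empirical risk."""
        return min(enumerate_trees(depth, n_features),
                   key=lambda t: risk(t, X, y))

    # XOR needs depth 2; a depth-2 tree fitting it exactly exists and is found.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 1, 1, 0]
    best = optimal_tree(X, y, depth=2, n_features=2)
    ```

    The optimization approaches cited above replace this exponential enumeration with SAT encodings, dynamic programming, or continuous relaxations, but the minimized objective is the same regularized risk.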

    Beyond decision trees, various classes of (Boolean or arithmetic) circuits also enjoy interpretability and explainability properties. These models can efficiently handle a wide variety of explanation queries by exploiting the decomposability of their logical AND gates and the determinism of their OR gates [4]. Well-known classes of interpretable circuits include ordered decision diagrams, affine decision trees, and decomposable negation normal form representations.
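    As a minimal illustration of why these two properties matter (a sketch, not code from [4]), model counting becomes a single linear-time pass over the circuit: decomposability makes the counts of an AND gate's children multiply, and determinism makes the counts of an OR gate's children add. The tuple encoding of gates below, and the assumption that the circuit is also smooth, are conventions of this sketch.

    ```python
    def count_models(node):
        """Model count of a smooth, decomposable, deterministic circuit.

        Decomposable AND: children range over disjoint variables -> product.
        Deterministic OR: children are mutually exclusive -> plain sum.
        """
        kind = node[0]
        if kind == "lit":                 # ("lit", var, sign): one model over {var}
            return 1
        if kind == "and":                 # decomposability justifies the product
            result = 1
            for child in node[1]:
                result *= count_models(child)
            return result
        if kind == "or":                  # determinism justifies the sum
            return sum(count_models(child) for child in node[1])
        raise ValueError(f"unknown gate: {kind}")

    # XNOR(x1, x2) = (x1 AND x2) OR (NOT x1 AND NOT x2):
    # both ANDs are decomposable and the OR is deterministic.
    xnor = ("or", [
        ("and", [("lit", 1, True), ("lit", 2, True)]),
        ("and", [("lit", 1, False), ("lit", 2, False)]),
    ])
    ```

    On circuits lacking these structural properties, the same query is #P-hard, which is exactly why the compiled classes listed above are the interpretable ones.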

    Although the problem of learning circuits has been studied in the literature (e.g. [5,6]), very little is known about learning interpretable circuits. The focus of this thesis proposal is to address this issue by exploring several research directions. One is to characterize the learnability of various subclasses of interpretable circuits. Another is to exploit constraint-based or MILP solvers for extracting optimal, or quasi-optimal, interpretable circuits. Finally, since decision circuits can easily be extended to semiring networks, a last topic is to extend results on binary classification to other learning tasks, such as regression or density estimation.
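    To make the semiring perspective concrete, the hedged sketch below evaluates one decomposable, deterministic circuit under two semirings: the counting semiring (sum, product over the naturals) recovers the model count, while the probability semiring with independent leaf marginals yields a likelihood, i.e. a basic density-estimation query. The circuit encoding and the weight-function interface are assumptions of this sketch, not part of the proposal.

    ```python
    def evaluate(node, weight, plus, times):
        """Evaluate a decomposable, deterministic circuit over a commutative
        semiring (plus, times), with literal values given by `weight`."""
        kind = node[0]
        if kind == "lit":                 # ("lit", var, sign)
            return weight(node[1], node[2])
        vals = [evaluate(child, weight, plus, times) for child in node[1]]
        combine = times if kind == "and" else plus
        result = vals[0]
        for v in vals[1:]:
            result = combine(result, v)
        return result

    # XNOR(x1, x2), with decomposable ANDs and a deterministic OR.
    xnor = ("or", [
        ("and", [("lit", 1, True), ("lit", 2, True)]),
        ("and", [("lit", 1, False), ("lit", 2, False)]),
    ])

    # Counting semiring: every literal weighs 1, so we get the model count.
    n_models = evaluate(xnor, lambda v, s: 1,
                        lambda a, b: a + b, lambda a, b: a * b)

    # Probability semiring: independent marginals P(x1=1)=0.6, P(x2=1)=0.7,
    # so the circuit computes P(x1 = x2) = 0.6*0.7 + 0.4*0.3.
    p = {1: 0.6, 2: 0.7}
    likelihood = evaluate(xnor, lambda v, s: p[v] if s else 1 - p[v],
                          lambda a, b: a + b, lambda a, b: a * b)
    ```

    Swapping the pair (plus, times) is the whole change between tasks, which is what makes the extension from classification to regression or density estimation natural for these circuits.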

    References

    [1] Nina Narodytska, Alexey Ignatiev, Filipe Pereira, João Marques-Silva: Learning Optimal Decision Trees with SAT. IJCAI 2018: 1362-1368.

    [2] Emir Demirovic, Anna Lukina, Emmanuel Hebrard, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Peter J. Stuckey: MurTree: Optimal Decision Trees via Dynamic Programming and Search. J. Mach. Learn. Res. 23: 26:1-26:47 (2022).

    [3] Valentina Zantedeschi, Matt J. Kusner, Vlad Niculae: Learning Binary Decision Trees by Argmin Differentiation. ICML 2021: 12298-12309.

    [4] Gilles Audemard, Frédéric Koriche, Pierre Marquis: On Tractable XAI Queries based on Compiled Representations. KR 2020: 838-849.

    [5] Nathan Linial, Yishay Mansour, Noam Nisan: Constant Depth Circuits, Fourier Transform, and Learnability. FOCS 1989: 574-579.

    [6] Eran Malach, Shai Shalev-Shwartz: Learning Boolean Circuits with Neural Networks. CoRR abs/1910.11923 (2019).

    Profile

    The candidate should hold a Master's degree (or equivalent) in computer science. As the thesis proposal lies at the intersection of explainable artificial intelligence, combinatorial optimization, and machine learning, the candidate should have a strong background in at least some of these topics.
