  • Funding: Artois, Université de Tunis
  • Start year: 2025

Joint work with the Université de Tunis

Graphs are pervasive in various domains such as social networks, chemistry, and recommender systems due to their ability to model complex relationships. The emergence of Graph Neural Networks (GNNs) has enabled significant progress in learning graph representations for tasks like node classification, link prediction, and community detection. However, two major challenges remain: graph dynamics and model explainability.

Real-world graphs are dynamic, continuously evolving through the addition or removal of nodes and edges. This dynamic nature complicates updating GNN models while preserving accuracy and scalability, especially for large heterogeneous graphs with frequent updates. Current approaches struggle to handle these aspects efficiently in real time, making incremental and scalable learning essential.

The second challenge lies in explainability. Like many deep learning models, GNNs often operate as “black boxes,” which limits their adoption in sensitive domains such as healthcare or finance, where transparency and trust are crucial. Existing explanation techniques (e.g., GNNExplainer, PGExplainer, GraphLIME) provide insights, but they are often complex, difficult to interpret, and mainly focused on structural aspects, overlooking the heterogeneity of nodes and edges.
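To make the structural flavor of such explanations concrete, here is a minimal perturbation-based sketch (an illustration only, not one of the methods cited above): each edge is scored by how much deleting it changes a target node's embedding under a single mean-aggregation GCN layer. Identity features and weights are used purely for readability; all names are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One mean-aggregation GCN layer: each node averages its
    neighbors' features (self-loop included), projects by W, ReLU."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return np.maximum((D_inv * (A_hat @ X)) @ W, 0.0)

def edge_importance(A, X, W, target):
    """Score each edge by how much removing it perturbs the target
    node's embedding (L2 distance): a crude structural explanation."""
    base = gcn_layer(A, X, W)[target]
    scores = {}
    for i, j in zip(*np.nonzero(np.triu(A))):      # undirected edges
        A_pert = A.copy()
        A_pert[i, j] = A_pert[j, i] = 0.0          # delete edge (i, j)
        diff = gcn_layer(A_pert, X, W)[target] - base
        scores[(int(i), int(j))] = float(np.linalg.norm(diff))
    return scores

# Toy path graph 0-1-2-3; explain node 1's embedding.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                                      # one-hot node features
W = np.eye(4)
scores = edge_importance(A, X, W, target=1)
# Edge (2, 3) lies outside node 1's one-hop neighborhood,
# so with a single layer its score is exactly 0.
```

Such scores single out which edges matter for a prediction, but, as noted above, they say nothing about node or edge types, which is precisely the heterogeneity this thesis targets.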

This thesis aims to develop an incremental and explainable learning framework for heterogeneous and evolving graphs, structured around three main objectives:

  • Incremental Learning on Dynamic Graphs: Design models that can efficiently capture temporal and structural changes in real time by combining GNNs with recurrent neural networks and distributed techniques for scalability.

  • Explainability of GNNs: Propose methods to deliver clear and intuitive explanations by integrating graph mining techniques and logic-based solvers, ensuring that explanations remain meaningful for heterogeneous and dynamic graphs.

  • Validation on GIS Data: Exploit the framework of the CHIST-ERA ATLAS project (GeoAI-based AugmenTation of muLti-source urbAn GIS), which provides favorable conditions for developing new machine learning models and graph neural networks (GNNs), further investigating explainability, and validating the proposed approaches on data from GIS systems.
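The first objective's combination of GNNs with recurrent networks can be sketched minimally as follows: at each snapshot of the evolving graph, a mean-neighborhood aggregation summarizes each node's current surroundings, and a GRU-style cell (here without biases) carries node state across time. This is a toy numpy sketch under those assumptions, not a proposed architecture; all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_aggregate(A, X):
    """Mean-aggregate neighbor features (self-loops included)."""
    A_hat = A + np.eye(A.shape[0])
    return (A_hat @ X) / A_hat.sum(axis=1, keepdims=True)

class GraphGRU:
    """Minimal GNN+RNN hybrid: per snapshot, graph aggregation
    produces the GRU input, and the GRU hidden state carries each
    node's representation across time steps."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        shape = (in_dim + hid_dim, hid_dim)
        self.Wz = rng.normal(scale=0.1, size=shape)   # update gate
        self.Wr = rng.normal(scale=0.1, size=shape)   # reset gate
        self.Wh = rng.normal(scale=0.1, size=shape)   # candidate state

    def step(self, A, X, H):
        """One incremental update from a new snapshot (A, X)."""
        M = gcn_aggregate(A, X)                       # spatial step
        cat = np.concatenate([M, H], axis=1)
        z = sigmoid(cat @ self.Wz)
        r = sigmoid(cat @ self.Wr)
        h_tilde = np.tanh(np.concatenate([M, r * H], axis=1) @ self.Wh)
        return (1 - z) * H + z * h_tilde              # temporal step

# Evolving toy graph: an edge between nodes 2 and 3 appears at t=1.
n, d, h = 4, 3, 5
snapshots = [
    np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 0], [0, 0, 0, 0]], dtype=float),
    np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float),
]
X = np.random.default_rng(1).normal(size=(n, d))      # static features
model = GraphGRU(d, h)
H = np.zeros((n, h))
for A in snapshots:                                   # process the stream
    H = model.step(A, X, H)                           # one update per snapshot
print(H.shape)                                        # (4, 5)
```

The point of the design is that each snapshot triggers only a local update of the hidden state rather than retraining from scratch, which is the behavior incremental learning must preserve at scale.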