Arrow Research

Author name cluster

Marc Plantevit

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

NeurIPS 2025 · Conference Paper

On Logic-based Self-Explainable Graph Neural Networks

  • Alessio Ragno
  • Marc Plantevit
  • Céline Robardet

Graphs are complex, non-Euclidean structures that require specialized models, such as Graph Neural Networks (GNNs), Graph Transformers, or kernel-based approaches, to effectively capture their relational patterns. This inherent complexity makes explaining GNN decisions particularly challenging. Most existing explainable AI (XAI) methods for GNNs focus on identifying influential nodes or extracting subgraphs that highlight relevant motifs. However, these approaches often fall short of clarifying how such elements contribute to the final prediction. To overcome this limitation, logic-based explanations aim to derive explicit logical rules that reflect the model's decision-making process. Current logic-based methods are limited to post-hoc analyses and are predominantly applied to graph classification, leaving a significant gap in intrinsically explainable GNN architectures. In this paper, we explore the potential of integrating logical reasoning directly into graph learning. We introduce LogiX-GIN, a novel, self-explainable GNN architecture that incorporates logic layers to produce interpretable logical rules as part of the learning process. Unlike post-hoc methods, LogiX-GIN provides faithful, transparent, and inherently interpretable explanations aligned with the model's internal computations. We evaluate LogiX-GIN across several graph-based tasks and show that it achieves competitive predictive performance while delivering clear, logic-based insights into its decision-making process.
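
The abstract gives no implementation details, but the general idea (a GIN-style message-passing backbone whose pooled, near-binary activations feed a rule-readable head) can be sketched roughly as follows. The dense-adjacency GIN layer, the ConceptLogicHead name, and the positivity constraint are all illustrative assumptions, not the paper's actual LogiX-GIN architecture.

```python
import torch
import torch.nn as nn

class DenseGINLayer(nn.Module):
    """Minimal GIN-style layer on a dense adjacency matrix:
    h' = MLP((1 + eps) * h + sum of neighbor features)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, adj, h):
        return self.mlp((1 + self.eps) * h + adj @ h)

class ConceptLogicHead(nn.Module):
    """Hypothetical logic head: non-negative weights over near-binary
    concept activations, so each output unit can be read as a weighted
    combination of concepts (the paper's actual layer may differ)."""
    def __init__(self, n_concepts, n_classes):
        super().__init__()
        self.raw_w = nn.Parameter(torch.randn(n_classes, n_concepts))

    def forward(self, concepts):
        w = torch.relu(self.raw_w)  # positivity constraint
        return concepts @ w.t()

# Toy usage: one graph with 4 nodes and 3 input features.
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
x = torch.rand(4, 3)
gin = DenseGINLayer(3, 8)
head = ConceptLogicHead(8, 2)
concepts = torch.sigmoid(gin(adj, x)).mean(dim=0, keepdim=True)  # graph pooling
print(head(concepts))  # class scores traceable to concept activations
```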

ECAI 2024 · Conference Paper

A Simple Yet Effective Interpretable Bayesian Personalized Ranking for Cognitive Diagnosis

  • Arthur Batel
  • Idir Benouaret
  • Joan Fruitet
  • Marc Plantevit
  • Céline Robardet

In the field of education, the automatic assessment of student profiles has become a crucial objective, driven by the rapid expansion of online tutoring systems and computerized adaptive testing. These technologies aim to democratize education and enhance student assessment by providing detailed insights into student profiles, which are essential for accurately predicting the outcomes of exercises, such as solving various types of mathematical equations. We aim to develop a model capable of predicting responses to a large set of questions within the Multi-Target Prediction framework while ensuring that this model is explainable, allowing us to quantify student performance in specific knowledge areas. Existing cognitive diagnosis algorithms often struggle to meet the dual requirement of accurately predicting exercise outcomes and maintaining interpretability. To address this challenge, we propose an alternative to the complexity of current advanced machine learning models. Instead, we introduce a direct yet highly effective Bayesian Personalized Ranking algorithm, called CD-BPR, which incorporates interpretability as a core learning objective. Extensive experiments demonstrate that CD-BPR not only outperforms existing cognitive diagnosis methods in predicting exercise outcomes but also provides superior interpretability of the estimated student profiles, thus fulfilling both key requirements.
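
For context, Bayesian Personalized Ranking (Rendle et al.) optimizes a pairwise objective: an item with a positive observation should score higher than one without. Below is a minimal sketch of that standard objective, adapted so the latent dimensions read as per-skill proficiencies; the embedding layout, the softplus constraint, and all names are assumptions for illustration, not CD-BPR's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterpretableBPR(nn.Module):
    """Sketch: students and exercises share a latent space whose
    dimensions are meant to align with knowledge areas, so a student
    vector can be read as a per-skill proficiency profile.
    (Illustrative only; CD-BPR's exact parameterization may differ.)"""
    def __init__(self, n_students, n_exercises, n_skills):
        super().__init__()
        self.students = nn.Embedding(n_students, n_skills)
        self.exercises = nn.Embedding(n_exercises, n_skills)

    def score(self, s, e):
        # Non-negative profiles keep the per-skill reading interpretable.
        return (F.softplus(self.students(s)) * self.exercises(e)).sum(-1)

    def bpr_loss(self, s, pos, neg):
        # Standard BPR: an exercise the student answered correctly (pos)
        # should score higher than one answered incorrectly (neg).
        return -F.logsigmoid(self.score(s, pos) - self.score(s, neg)).mean()

# Toy usage with random (student, correct exercise, incorrect exercise) triples.
model = InterpretableBPR(n_students=100, n_exercises=50, n_skills=8)
s = torch.randint(0, 100, (32,))
pos = torch.randint(0, 50, (32,))
neg = torch.randint(0, 50, (32,))
loss = model.bpr_loss(s, pos, neg)
loss.backward()
```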

ECAI 2024 · Conference Paper

Transparent Explainable Logic Layers

  • Alessio Ragno
  • Marc Plantevit
  • Céline Robardet
  • Roberto Capobianco

Explainable AI seeks to unveil the intricacies of black box models through post-hoc strategies or self-interpretable models. In this paper, we tackle the problem of building layers that are intrinsically explainable through logic rules. In particular, we address current state-of-the-art methods' lack of fidelity and expressivity by introducing a transparent explainable logic layer (TELL). We propose to constrain a feed-forward layer with positive weights, which, combined with particular activation functions, offers the possibility of a direct translation into logic rules. Additionally, this approach overcomes a limitation of previous models, their applicability to binary data only, by proposing a new way to automatically threshold real values and incorporate the obtained predicates into logic rules. We show that, compared to the state of the art, TELL achieves similar classification performance and, at the same time, provides higher explanatory power, measured by the agreement between the model's outputs and the activation of the logic explanations. In addition, TELL offers a broader spectrum of applications thanks to the possibility of its use on real-valued data.
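
The mechanism the abstract describes (a positive-weight layer whose activations can be read off as logic rules over binary predicates) can be sketched as follows. The exact activation function, the rule-decoding heuristic, and the extract_rules helper are assumptions for illustration, not the published TELL layer.

```python
import torch
import torch.nn as nn

class PositiveLogicLayer(nn.Module):
    """Sketch of the idea in the abstract: a linear layer constrained
    to positive weights whose inputs are (approximately) binary
    predicates, so each output unit can be decoded into a rule over
    those predicates. Details are assumptions, not the paper's layer."""
    def __init__(self, n_predicates, n_outputs):
        super().__init__()
        self.raw_w = nn.Parameter(torch.randn(n_outputs, n_predicates))
        self.bias = nn.Parameter(torch.zeros(n_outputs))

    def forward(self, predicates):
        w = torch.relu(self.raw_w)  # enforce weights >= 0
        return torch.sigmoid(predicates @ w.t() + self.bias)

    def extract_rules(self, names, weight_threshold=0.5):
        # Read off, per output unit, which predicates carry non-trivial
        # weight: a crude surrogate for the learned logic rule.
        w = torch.relu(self.raw_w).detach()
        return [[names[j] for j in range(w.shape[1])
                 if w[i, j] > weight_threshold]
                for i in range(w.shape[0])]

layer = PositiveLogicLayer(n_predicates=4, n_outputs=2)
x = (torch.rand(8, 4) > 0.5).float()  # binarized predicate inputs
print(layer(x).shape)                 # torch.Size([8, 2])
print(layer.extract_rules(["p0", "p1", "p2", "p3"]))
```

For real-valued inputs, the abstract's thresholding step could be mimicked by a learned cutoff per feature, e.g. a soft predicate sigmoid(k * (feature - threshold)), before this layer; that too is an assumption.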

IJCAI 2022 · Conference Paper

What Does My GNN Really Capture? On Exploring Internal GNN Representations

  • Luca Veyrin-Forrer
  • Ataollah Kamal
  • Stefan Duffner
  • Marc Plantevit
  • Céline Robardet

Graph Neural Networks (GNNs) are very efficient at classifying graphs, but their internal functioning is opaque, which limits their field of application. Existing methods to explain GNNs focus on disclosing the relationships between input graphs and model decisions. In this article, we propose a method that goes further and isolates the internal features, hidden in the network layers, that are automatically identified by the GNN and used in the decision process. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than SOTA methods and thus to bring confidence to the decision process.
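
One generic way to get at such internal features is to capture per-layer activations with forward hooks and inspect which hidden units fire on which nodes. The sketch below illustrates only that capture step, on an assumed toy architecture and propagation rule; the paper's actual mining of internal representations is more involved.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """Toy stand-in for a GNN: linear layers over a simple
    neighbor-plus-self propagation on a dense adjacency matrix."""
    def __init__(self, dims=(3, 8, 8)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1))

    def forward(self, adj, h):
        for layer in self.layers:
            h = torch.relu(layer(adj @ h + h))
        return h

model = TinyGNN()
captured = {}
for name, module in model.layers.named_children():
    # Record each layer's output during the forward pass.
    module.register_forward_hook(
        lambda m, inp, out, name=name: captured.__setitem__(name, out.detach()))

adj = torch.eye(5) + torch.rand(5, 5).round()
h = model(adj, torch.rand(5, 3))
# Which hidden units fire on which nodes: raw material for inspecting
# what the network's layers have learned.
active = {name: (act > 0).float() for name, act in captured.items()}
print({name: a.sum().item() for name, a in active.items()})
```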