Highlights Conference 2025 Conference Abstract
The Expressive Power of Temporal GNNs Through the Lens of Logic
- Marco Sälzer
NeurIPS Conference 2025 Conference Paper
In recent years, the expressive power of various neural architectures---including graph neural networks (GNNs), transformers, and recurrent neural networks---has been characterised using tools from logic and formal language theory. As the capabilities of basic architectures are becoming well understood, increasing attention is turning to models that combine multiple architectural paradigms. Among them, particularly important and challenging to analyse are temporal extensions of GNNs, which integrate both spatial (graph-structure) and temporal (evolution-over-time) dimensions. In this paper, we initiate the study of the logical characterisation of temporal GNNs by connecting them to two-dimensional product logics. We show that the expressive power of temporal GNNs depends on how the graph and temporal components are combined. In particular, temporal GNNs that apply static GNNs recursively over time can capture all properties definable in the product logic of (past) propositional temporal logic PTL and the modal logic K. In contrast, architectures such as graph-and-time TGNNs and global TGNNs can express only restricted fragments of this logic, in which the interaction between temporal and spatial operators is syntactically constrained. These are the first results on the logical expressiveness of temporal GNNs.
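The recursive architecture mentioned in the abstract, a static GNN applied over a sequence of graph snapshots with node states fed back in time, can be sketched as follows. This is a toy model only: the choice of sum aggregation with ReLU, and all function and parameter names, are assumptions, not the paper's exact definition.

```python
import numpy as np

def static_gnn_layer(adj, h, w_self, w_neigh):
    """One message-passing round: combine each node's state with the
    sum of its neighbours' states, followed by a ReLU."""
    return np.maximum(0.0, h @ w_self + adj @ h @ w_neigh)

def recursive_temporal_gnn(snapshots, feats, w_self, w_neigh, w_rec):
    """Apply a static GNN recursively over time: the node states
    computed at snapshot t-1 are fed back in as part of the input
    at snapshot t."""
    n, d = feats[0].shape
    h = np.zeros((n, d))                      # initial recurrent state
    for adj, x in zip(snapshots, feats):
        h = static_gnn_layer(adj, x + h @ w_rec, w_self, w_neigh)
    return h                                  # final node embeddings
```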
ICLR Conference 2025 Conference Paper
We analyse the complexity of the satisfiability problem (trSAT), or equivalently the feasibility problem, for transformer encoders (TE), which arises naturally in formal verification and interpretation, collectively referred to as formal reasoning. We find that trSAT is undecidable when TE are considered as they are commonly studied in the expressiveness community. Furthermore, we identify practical scenarios in which trSAT is decidable and establish corresponding complexity bounds. Beyond trivial cases, we find that quantized TE, those restricted to fixed-width arithmetic, yield decidability of trSAT due to their limited attention capabilities. However, the problem remains difficult: we establish scenarios where trSAT is NEXPTIME-hard and others where it is solvable in NEXPTIME for quantized TE. To complement our complexity results, we place our findings and their implications in the broader context of formal reasoning.
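A toy illustration of how fixed-width arithmetic confines a transformer encoder to finitely many distinct values: every intermediate result is rounded onto a finite grid. The quantization scheme, the parameters `bits` and `scale`, and the choice of re-quantization points are assumptions for illustration, not the paper's model.

```python
import numpy as np

def quantize(x, bits=8, scale=4.0):
    """Round values onto a fixed-width grid: signed `bits`-bit
    integers scaled so the grid covers roughly [-scale, scale]."""
    step = scale / 2 ** (bits - 1)
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x / step), lo, hi) * step

def quantized_attention(q, k, v, bits=8):
    """One self-attention head in which every intermediate value is
    re-quantized; consequently only finitely many distinct attention
    patterns can ever arise."""
    scores = quantize(q @ k.T / np.sqrt(q.shape[-1]), bits)
    weights = np.exp(scores)
    weights = quantize(weights / weights.sum(axis=-1, keepdims=True), bits)
    return quantize(weights @ v, bits)
```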
IJCAI Conference 2025 Conference Paper
In this paper, we investigate the verification of quantized Graph Neural Networks (GNNs), where some fixed-width arithmetic is used to represent numbers. We introduce the linear-constrained validity problem (LVP) for verifying properties of GNNs and provide an efficient translation from LVP instances into a logical language. We show that LVP is in PSPACE for any reasonable activation function, and we provide a proof system. We also prove PSPACE-hardness, indicating that while reasoning about quantized GNNs is feasible, it remains computationally challenging in general.
IJCAI Conference 2024 Conference Paper
We propose a modal logic in which counting modalities appear in linear inequalities. We show that each formula can be transformed into an equivalent graph neural network (GNN). We also show that a broad class of GNNs can be transformed efficiently into a formula, thus significantly improving upon the literature on the logical expressiveness of GNNs. We also show that the satisfiability problem is PSPACE-complete. These results bring together the promise of using standard logical methods for reasoning about GNNs and their properties, particularly in applications such as GNN querying and equivalence checking; we prove that such natural problems can be solved in polynomial space.
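The formula-to-GNN correspondence can be illustrated for a single counting-modality formula, here hypothetically written p and (#q >= 2), "the node satisfies p and has at least two neighbours satisfying q". The one-round message-passing evaluation below is illustrative, not the paper's construction.

```python
import numpy as np

def gnn_count_layer(adj, sat_q):
    """One message-passing round computing, for each node, the number
    of neighbours satisfying q -- exactly the counting modality #q."""
    return adj @ sat_q

def eval_formula(adj, sat_p, sat_q):
    """Evaluate the counting-modality formula  p and (#q >= 2),
    a linear inequality over neighbour counts, at every node."""
    counts = gnn_count_layer(adj, sat_q)
    return np.logical_and(sat_p.astype(bool), counts >= 2)
```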
Highlights Conference 2023 Conference Abstract
Deep learning methods, especially their most prominent representative, (deep) neural networks, have proved very successful in a broad range of applications, delivering impressive results. Unsurprisingly, deep learning methods are also used in safety-critical applications like early disease detection or driving assistants, which naturally leads to the need for safety certificates. Currently very popular are so-called Graph Neural Networks (GNNs), a neural network model for computing functions over graphs. While there is ongoing work on (formal) verification of GNNs, there is so far no work giving reliable decidability or complexity bounds for the corresponding problems. Thus, there are currently no fundamental results that frame and guide the development of GNN verification algorithms. We present our contributions to this topic, establishing first (un-)decidability results regarding the verification problem for common safety properties of so-called Message Passing Neural Networks (MPNNs), one of the most popular GNN variants. Additionally, we discuss strong conjectures leading to lower and upper complexity bounds for the same problems, and we assess the implications of our results for the development of corresponding sound and complete verification algorithms. Contributed talk given by Marco Sälzer
ICLR Conference 2023 Conference Paper
Output reachability and adversarial robustness are among the most relevant safety properties of neural networks. We show that in the context of Message Passing Neural Networks (MPNNs), a common Graph Neural Network (GNN) model, formal verification is impossible. In particular, we show that output reachability of graph-classifier MPNNs working over graphs of unbounded size, non-trivial degree, and sufficiently expressive node labels cannot be verified formally: there is no algorithm that, given an MPNN, correctly answers (with yes or no) whether there exists some valid input to the MPNN such that the corresponding output satisfies a given specification. However, we also show that output reachability and adversarial robustness of node-classifier MPNNs can be verified formally when a limit on the degree of input graphs is given a priori. We discuss the implications of these results with the aim of obtaining a complete picture of the possibility, in principle, of formally verifying GNNs, depending on the expressiveness of the involved GNN models and input-output specifications.
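The decidable case can be illustrated by a naive brute-force procedure, shown here under additional simplifying assumptions beyond the paper's setting: graph size is also bounded and node features range over a finite set, so the input space is finite. The one-round MPNN and all names are illustrative, not the paper's construction.

```python
import itertools
import numpy as np

def mpnn_node_outputs(adj, x, w):
    """One-round MPNN node classifier: ReLU of each node's own
    feature plus the sum over its neighbours."""
    return np.maximum(0.0, (x + adj @ x) @ w)

def output_reachable(spec, w, max_nodes=3, max_degree=2, feat_vals=(0.0, 1.0)):
    """Decide output reachability by exhaustive search over all graphs
    up to `max_nodes` nodes with degree at most `max_degree` and
    features from a finite set: a (naive) decision procedure."""
    for n in range(1, max_nodes + 1):
        edges = list(itertools.combinations(range(n), 2))
        for chosen in itertools.chain.from_iterable(
                itertools.combinations(edges, k) for k in range(len(edges) + 1)):
            adj = np.zeros((n, n))
            for i, j in chosen:
                adj[i, j] = adj[j, i] = 1.0
            if adj.sum(axis=0).max() > max_degree:
                continue                      # degree bound violated
            for feats in itertools.product(feat_vals, repeat=n):
                x = np.array(feats).reshape(n, 1)
                if any(spec(y.item()) for y in mpnn_node_outputs(adj, x, w)):
                    return True               # witness input found
    return False
```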
TIME Conference 2023 Conference Paper
We present a first notion of a time-aware robustness property for Temporal Graph Neural Networks (TGNNs), a recently popular framework for computing functions over continuous- or discrete-time graphs, motivated by recent work on time-aware attacks on TGNNs used for link-prediction tasks. Furthermore, we discuss promising verification approaches for the presented and similar safety properties, and possible next steps in this direction of research.
MFCS Conference 2021 Conference Paper
The modal μ-calculus can only express bisimulation-invariant properties. It is a simple consequence of Kleene’s Fixpoint Theorem that, on structures with finite bisimulation quotients, the fixpoint iteration of any formula converges after finitely many steps. We show that the converse does not hold: we construct a word with an infinite bisimulation quotient that is locally regular, such that the iteration for any fixpoint formula of the modal μ-calculus on it converges after finitely many steps. This entails decidability of μ-calculus model checking over this word. We also show that the reason for the discrepancy between infinite bisimulation quotients and transfinite fixpoint convergence lies in the fact that the μ-calculus can only express regular properties.
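On a finite structure, the finite convergence of fixpoint iteration mentioned above can be observed directly. A minimal sketch for the reachability formula mu X. (p or <>X) on a small Kripke structure; note the paper concerns infinite words, which this toy example does not capture.

```python
def diamond(trans, s):
    """<>S: the states with at least one successor in S."""
    return {u for u, vs in trans.items() if vs & s}

def mu_iteration(trans, p):
    """Kleene iteration for the least fixpoint  mu X. p or <>X:
    start from the empty set and apply the formula's one-step
    function until the approximants stabilise. On a finite
    structure this computes reachability of a p-state in
    finitely many steps."""
    x, steps = set(), 0
    while True:
        nxt = p | diamond(trans, x)
        steps += 1
        if nxt == x:
            return x, steps
        x = nxt
```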
GandALF Workshop 2020 Workshop Paper
Local fixpoint iteration is a technique that restricts fixpoint iteration in function spaces to needed arguments only. It has been well studied for first-order functions in abstract interpretation and also in model checking. Here we consider the problem for least and greatest fixpoints of arbitrary type order. We define an abstract algebra of simply typed higher-order functions with fixpoints that can express fixpoint-evaluation problems as they routinely occur in various applications, including program verification. We present an algorithm that realises local fixpoint iteration for such higher-order fixpoints, prove its correctness, and study its optimisation potential in the context of several applications.
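The idea of restricting iteration to needed arguments can be sketched for a first-order example: the fixpoint reach(v) = {v} ∪ ⋃ of reach(u) over successors u, evaluated only at the queried node and the arguments its evaluation demands. This demand-driven table is an illustration of the general idea, not the paper's higher-order algorithm.

```python
def reachable(succ, v0):
    """Local fixpoint iteration for reach(v) = {v} ∪ ⋃ reach(u) over
    u in succ[v], evaluated only at v0 and at the arguments its
    evaluation transitively requests, not at every node."""
    table = {}                              # demanded arguments only

    def query(v):
        table.setdefault(v, frozenset())    # initialise only on demand
        return table[v]

    query(v0)
    changed = True
    while changed:
        changed = False
        for v in list(table):               # snapshot: demands may grow
            new = frozenset({v}).union(*(query(u) for u in succ.get(v, ())))
            if new != table[v]:
                table[v] = new
                changed = True
    return table[v0], set(table)            # result and demanded args
```

Nodes unreachable from the queried argument never enter the table, which is exactly the saving that local iteration offers over global iteration.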
Highlights Conference 2020 Conference Abstract
We investigate a word structure with an infinite bisimulation quotient that nevertheless guarantees finite convergence of all fixpoints definable in the modal mu-calculus. To underline that this result is caused by the logic and not by the specific structure, we show that an extension of the modal mu-calculus, called higher-order fixpoint logic, exhibits non-finite convergence over the same word structure.