Arrow Research · Search

Author name cluster

Mikaël Monet

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

Highlights 2022 · Conference Abstract

(Weighted) Counting of Matchings in Graph Families of Unbounded Treewidth

  • Mikaël Monet

For a fixed graph family 𝒢, we are interested in the following computational problem, denoted PrMatch(𝒢):

  • The input is an undirected graph G=(V, E) of 𝒢, together with (rational) probability values p(e) for each e ∈ E.
  • The output is the probability that G contains a simple path of length 2, when every edge e of G has a probability of existence p(e), independently from the others.

Equivalently, the output is one minus the probability to obtain a *matching* of G, i.e., a subset of edges such that no two edges share an incident vertex. Note that we can set the edge probabilities to be 0, so we may assume that the family 𝒢 is closed under subgraphs. It is known that if 𝒢 has bounded treewidth, then PrMatch(𝒢) can be solved in polynomial time. The result we want to present is that, essentially, bounding the treewidth is the only way to make this problem tractable. More precisely, we show that, for any graph family 𝒢 that does not have bounded treewidth, and where high-treewidth graphs can be efficiently constructed, the problem PrMatch(𝒢) is intractable (specifically, #P-hard under RP-reductions). During the presentation, we will give the proof of a simpler version of this result. We will show a reduction from the #P-hard problem of counting matchings in 3-regular planar graphs, which works by constructing polynomially many instances of the problem PrMatch(𝒢) and obtaining a system of linear equations. We will show how, if the system is invertible, we can recover the number of matchings of the original graph via multiple oracle calls to the problem. We will also discuss at a high level the intuitive idea and difficulties of the general proof, which uses the grid minor extraction theorem. This result extends an earlier result by Amarilli, Bourhis, and Senellart at PODS'16, where the same statement is shown for a more complicated property in first-order logic; and it relates to the known intractability of computing tractable lineages over such graph families.
This is ongoing work that is not yet published.
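To make the problem definition concrete, here is a brute-force reference sketch of PrMatch on a single input graph: it enumerates all edge subsets, so it runs in exponential time and is not the polynomial-time algorithm for bounded treewidth mentioned in the abstract. Function names are illustrative, not from the paper.

```python
from itertools import combinations

def is_matching(edges):
    """True iff no two edges in the list share an incident vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def pr_match(edges, p):
    """Probability that the random subgraph (each edge e kept
    independently with probability p[e]) is NOT a matching, i.e.
    contains a simple path of length 2.  Exponential brute force."""
    total = 0.0
    n = len(edges)
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            kept = set(idx)
            prob = 1.0
            for i, e in enumerate(edges):
                prob *= p[e] if i in kept else 1 - p[e]
            if not is_matching([edges[i] for i in kept]):
                total += prob
    return total

# Triangle with all edge probabilities 1/2: of the 8 edge subsets,
# the non-matchings are the three 2-edge subsets and the full
# triangle, so the answer is 4/8 = 0.5.
tri = [(0, 1), (1, 2), (0, 2)]
print(pr_match(tri, {e: 0.5 for e in tri}))  # 0.5
```

The hardness result says that no algorithm can do fundamentally better than this on any efficiently constructible family of unbounded treewidth.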

MFCS 2022 · Conference Paper

Weighted Counting of Matchings in Unbounded-Treewidth Graph Families

  • Antoine Amarilli
  • Mikaël Monet

We consider a weighted counting problem on matchings, denoted PrMatching(𝒢), on an arbitrary fixed graph family 𝒢. The input consists of a graph G ∈ 𝒢 and of rational probabilities of existence on every edge of G, assuming independence. The output is the probability of obtaining a matching of G in the resulting distribution, i.e., a set of edges that are pairwise disjoint. It is known that, if 𝒢 has bounded treewidth, then PrMatching(𝒢) can be solved in polynomial time. In this paper we show that, under some assumptions, bounded treewidth in fact characterizes the tractable graph families for this problem. More precisely, we show intractability for all graph families 𝒢 satisfying the following treewidth-constructibility requirement: given an integer k in unary, we can construct in polynomial time a graph G ∈ 𝒢 with treewidth at least k. Our hardness result is then the following: for any treewidth-constructible graph family 𝒢, the problem PrMatching(𝒢) is intractable. This generalizes known hardness results for weighted matching counting under some restrictions that do not bound treewidth, e.g., being planar, 3-regular, or bipartite; it also answers a question left open in [Amarilli et al., 2016]. We also obtain a similar lower bound for the weighted counting of edge covers.
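The edge-cover variant mentioned in the last sentence can be stated the same way: an edge cover is a set of edges touching every vertex, and the weighted counting problem asks for the probability that the random subgraph is an edge cover. A brute-force sketch (exponential time, illustrative names only):

```python
from itertools import combinations

def is_edge_cover(vertices, edges):
    """True iff every vertex is incident to at least one kept edge."""
    covered = set()
    for u, v in edges:
        covered.update((u, v))
    return covered == set(vertices)

def pr_edge_cover(vertices, edges, p):
    """Probability that the random subgraph (edge e kept independently
    with probability p[e]) is an edge cover of the graph."""
    total = 0.0
    n = len(edges)
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            kept_idx = set(idx)
            prob = 1.0
            for i, e in enumerate(edges):
                prob *= p[e] if i in kept_idx else 1 - p[e]
            if is_edge_cover(vertices, [edges[i] for i in kept_idx]):
                total += prob
    return total

# Triangle, all probabilities 1/2: the edge covers are the three
# 2-edge subsets and the full triangle, so the answer is 4/8 = 0.5.
tri = [(0, 1), (1, 2), (0, 2)]
print(pr_edge_cover([0, 1, 2], tri, {e: 0.5 for e in tri}))  # 0.5
```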

AAAI 2021 · Conference Paper

The Tractability of SHAP-Score-Based Explanations for Classification over Deterministic and Decomposable Boolean Circuits

  • Marcelo Arenas
  • Pablo Barceló
  • Leopoldo Bertossi
  • Mikaël Monet

Scores based on Shapley values are widely used for providing explanations to classification results over machine learning models. A prime example of this is the influential SHAP-score, a version of the Shapley value that can help explain the result of a learned model on a specific entity by assigning a score to every feature. While in general computing Shapley values is a computationally intractable problem, it has recently been claimed that the SHAP-score can be computed in polynomial time over the class of decision trees. In this paper, we provide a proof of a stronger result over Boolean models: the SHAP-score can be computed in polynomial time over deterministic and decomposable Boolean circuits. Such circuits, also known as tractable Boolean circuits, generalize a wide range of Boolean circuit and binary decision diagram classes, including binary decision trees, Ordered Binary Decision Diagrams (OBDDs) and Free Binary Decision Diagrams (FBDDs). We also establish the computational limits of the notion of SHAP-score by observing that, under a mild condition, computing it over a class of Boolean models is always polynomially as hard as the model counting problem for that class. This implies that both determinism and decomposability are essential properties for the circuits that we consider, as removing one or the other renders the problem of computing the SHAP-score intractable (namely, #P-hard).
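For intuition, here is the SHAP-score written out by brute force for a Boolean model, assuming the uniform product distribution on the features (the paper's polynomial-time algorithm for deterministic and decomposable circuits avoids this exponential enumeration; function names are illustrative):

```python
from itertools import combinations, product
from math import factorial

def expected(model, n, fixed):
    """E[model(x)] over uniform Boolean x in {0,1}^n, with the
    features in `fixed` (a dict index -> bit) clamped."""
    free = [i for i in range(n) if i not in fixed]
    total = 0
    for bits in product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        total += model([x[i] for i in range(n)])
    return total / 2 ** len(free)

def shap_score(model, n, e, i):
    """Brute-force SHAP-score of feature i for entity e: the Shapley
    value of i in the game S |-> E[model | features in S fixed to e]."""
    others = [j for j in range(n) if j != i]
    score = 0.0
    for r in range(len(others) + 1):
        weight = factorial(r) * factorial(n - r - 1) / factorial(n)
        for S in combinations(others, r):
            fixed = {j: e[j] for j in S}
            with_i = dict(fixed)
            with_i[i] = e[i]
            score += weight * (expected(model, n, with_i)
                               - expected(model, n, fixed))
    return score

# Tiny example: model = x0 AND x1, entity (1, 1).  By symmetry both
# features get the same score, and the scores sum to
# model(e) - E[model] = 1 - 1/4 = 3/4, so each is 0.375.
m = lambda x: x[0] & x[1]
print(shap_score(m, 2, (1, 1), 0))  # 0.375
```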

NeurIPS 2020 · Conference Paper

Model Interpretability through the lens of Computational Complexity

  • Pablo Barceló
  • Mikaël Monet
  • Jorge Pérez
  • Bernardo Subercaseaux

In spite of several claims stating that some models are more interpretable than others -- e.g., "linear models are more interpretable than deep neural networks" -- we still lack a principled notion of interpretability that allows us to formally compare among different classes of models. We take a step towards such a theory by studying whether folklore interpretability claims have a correlate in terms of computational complexity theory. We focus on post-hoc explainability queries that, intuitively, attempt to answer why individual inputs are classified in a certain way by a given model. In a nutshell, we say that a class C1 of models is more interpretable than another class C2, if the computational complexity of answering post-hoc queries for models in C2 is higher than for C1. We prove that this notion provides a good theoretical counterpart to current beliefs on the interpretability of models; in particular, we show that under our definition and assuming standard complexity-theoretical assumptions (such as P ≠ NP), both linear and tree-based models are strictly more interpretable than neural networks. Our complexity analysis, however, does not provide a clear-cut difference between linear and tree-based models, as we obtain different results depending on the particular post-hoc explanations considered. Finally, by applying a finer complexity analysis based on parameterized complexity, we are able to prove a theoretical result suggesting that shallow neural networks are more interpretable than deeper ones.
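One example of a post-hoc explainability query of the kind studied here is the minimum sufficient reason: the smallest set of features of an input whose values alone force the model's output. A brute-force sketch over Boolean models (exponential time; the point of the paper is precisely how hard this query is for each model class, and the helper names are illustrative):

```python
from itertools import combinations, product

def is_sufficient(model, x, S):
    """True iff fixing the features in S to their values in x forces
    the model's output, whatever the remaining features are."""
    n = len(x)
    free = [i for i in range(n) if i not in S]
    target = model(x)
    for bits in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if model(y) != target:
            return False
    return True

def minimum_sufficient_reason(model, x):
    """Smallest feature subset that is a sufficient reason for input x,
    found by brute-force enumeration in order of increasing size."""
    n = len(x)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if is_sufficient(model, x, set(S)):
                return set(S)

# Majority of three bits on input (1, 1, 0): the two 1-bits already
# force the output to 1, so {0, 1} is a minimum sufficient reason.
maj = lambda x: int(sum(x) >= 2)
print(minimum_sufficient_reason(maj, [1, 1, 0]))  # {0, 1}
```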

ICLR 2020 · Conference Paper

The Logical Expressiveness of Graph Neural Networks

  • Pablo Barceló
  • Egor V. Kostylev
  • Mikaël Monet
  • Jorge Pérez 0001
  • Juan L. Reutter
  • Juan Pablo Silva

The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first-order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow updating the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
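The AC vs. ACR distinction is easy to see in code. Below is one common instantiation of the two layer types (sum aggregation, linear combination, ReLU) in NumPy; the paper's definitions allow other aggregation and combination functions, and all weight names here are illustrative:

```python
import numpy as np

def ac_gnn_layer(X, A, W_self, W_neigh, b):
    """One aggregate-combine (AC-GNN) layer: each node's new feature
    depends only on its own feature and the sum over its neighbors."""
    return np.maximum(0, X @ W_self + (A @ X) @ W_neigh + b)  # ReLU

def acr_gnn_layer(X, A, W_self, W_neigh, W_read, b):
    """ACR-GNN layer: same as above, plus a global readout (the sum of
    all node features) broadcast to every node."""
    readout = X.sum(axis=0, keepdims=True)  # global attribute vector
    return np.maximum(0, X @ W_self + (A @ X) @ W_neigh
                         + readout @ W_read + b)

# Toy run: path graph 0-1-2 with 2-dimensional node features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3, 2)  # simple initial features
rng = np.random.default_rng(0)
W_self, W_neigh, W_read = (rng.standard_normal((2, 2)) for _ in range(3))
b = np.zeros(2)
print(ac_gnn_layer(X, A, W_self, W_neigh, b).shape)            # (3, 2)
print(acr_gnn_layer(X, A, W_self, W_neigh, W_read, b).shape)   # (3, 2)
```

The readout term is what lets a node's update see global information, which is exactly the extra power needed to capture all of FOC2.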