Arrow Research search

Author name cluster

Samuel Kolb

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers
2 author rows

Possible papers (10)

AIJ Journal 2023 Journal Article

Learning MAX-SAT from contextual examples for combinatorial optimisation

  • Mohit Kumar
  • Samuel Kolb
  • Stefano Teso
  • Luc De Raedt

Combinatorial optimisation problems are ubiquitous in artificial intelligence. Designing the underlying models, however, requires substantial expertise, which is a limiting factor in practice. The models typically consist of hard and soft constraints, or combine hard constraints with an objective function. We introduce a novel setting for learning combinatorial optimisation problems from contextual examples. These positive and negative examples show – in a particular context – whether the solutions are good enough or not. We develop our framework using the MAX-SAT formalism as it is a simple yet powerful setting having these features. We study the learnability of MAX-SAT models. Our theoretical results show that high-quality MAX-SAT models can be learned from contextual examples in the realisable and agnostic settings, as long as the data satisfies an intuitive “representativeness” condition. We also contribute two implementations based on our theoretical results: one leverages ideas from syntax-guided synthesis while the other makes use of stochastic local search techniques. The two implementations are evaluated by recovering synthetic and benchmark models from contextual examples. The experimental results support our theoretical analysis, showing that MAX-SAT models can be learned from contextual examples. Among the two implementations, the stochastic local search learner scales much better than the syntax-guided implementation while providing comparable or better models.
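
To make the setting concrete, the sketch below shows one plausible way to check a contextual example against a candidate MAX-SAT model: hard clauses must all hold, and the example is labelled positive if the given assignment attains the best soft-clause value achievable within its context. The clause encoding, the brute-force enumeration, and the reading of "good enough" as context-optimal are illustrative assumptions for this sketch, not the paper's implementation.

```python
from itertools import product

# A clause is a collection of literals; positive int i means variable i, negative int means its negation.
def satisfies(assignment, clause):
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def value(model, assignment):
    """Return None if a hard clause is violated, else the total weight of satisfied soft clauses."""
    hard, soft = model  # soft is a list of (weight, clause) pairs
    if not all(satisfies(assignment, c) for c in hard):
        return None
    return sum(w for w, c in soft if satisfies(assignment, c))

def best_value_in_context(model, variables, context):
    """Brute-force the best achievable value among assignments consistent with the context clauses."""
    best = None
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(satisfies(a, c) for c in context) and (v := value(model, a)) is not None:
            best = v if best is None else max(best, v)
    return best

def is_good_enough(model, variables, context, assignment):
    """A contextual example is treated as positive iff the assignment is feasible and optimal within its context."""
    v = value(model, assignment)
    return v is not None and v == best_value_in_context(model, variables, context)
```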

AAAI Conference 2021 System Paper

Democratizing Constraint Satisfaction Problems through Machine Learning

  • Mohit Kumar
  • Samuel Kolb
  • Clement Gautrais
  • Luc De Raedt

Constraint satisfaction problems (CSPs) are used widely, especially in the field of operations research, to model various real-world problems such as scheduling or planning. However, modelling a problem as a CSP is not trivial: it is labour-intensive and requires both modelling and domain expertise. The emerging field of constraint learning addresses this problem by automatically learning constraints from a given dataset. While there are several interesting approaches to constraint learning, they are hard to access for non-expert users. Furthermore, different approaches rely on different underlying formalisms and require different setups before they can be used. This demo paper combines these lines of research and brings them to non-expert users in the form of an interactive Excel plugin. To do this, we translate the different formalisms for specifying CSPs into a common language, which allows multiple constraint learners to coexist and makes the plugin more powerful than the individual constraint learners. Moreover, we integrate learning CSPs from data with solving them, making the plugin self-sufficient. For the developers of constraint learners, we provide an API that can be used to integrate their work with the plugin by implementing a handful of functions.
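
The abstract does not spell out the "handful of functions" a learner has to expose, so the interface below is only a hypothetical sketch of what such a plugin API could look like; the names (`ConstraintLearner`, `learn`, `to_common_language`) are invented for illustration.

```python
from abc import ABC, abstractmethod
from typing import Any, List, Sequence

class ConstraintLearner(ABC):
    """Hypothetical adapter a constraint-learning back end would implement to plug into the Excel add-in."""

    @abstractmethod
    def name(self) -> str:
        """Human-readable name shown in the plugin's learner menu."""

    @abstractmethod
    def learn(self, table: Sequence[Sequence[Any]], header: List[str]) -> Any:
        """Learn constraints from a spreadsheet range, returned in the learner's own formalism."""

    @abstractmethod
    def to_common_language(self, constraints: Any) -> List[str]:
        """Translate the learned constraints into the plugin's shared CSP language."""
```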

IJCAI Conference 2021 Conference Paper

Hybrid Probabilistic Inference with Logical and Algebraic Constraints: a Survey

  • Paolo Morettin
  • Pedro Zuidberg Dos Martires
  • Samuel Kolb
  • Andrea Passerini

Real-world decision-making problems often involve both discrete and continuous variables and require a combination of probabilistic and deterministic knowledge. Stimulated by recent advances in automated reasoning technology, hybrid (discrete + continuous) probabilistic reasoning with constraints has emerged as a lively and fast-growing research field. In this paper we provide a survey of existing techniques for hybrid probabilistic inference with logical and algebraic constraints. We leverage weighted model integration as a unifying formalism and discuss the different paradigms that have been used, as well as the expressivity-efficiency trade-offs that have been investigated. We conclude the survey with a comparative overview of existing implementations and a critical discussion of open challenges and promising research directions.
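
As a quick reminder of the unifying formalism, weighted model integration is usually written along the following lines (notation varies across the surveyed works): given an SMT formula φ over Boolean variables A and real variables x, and a weight function w,

```latex
\mathrm{WMI}(\varphi, w \mid \mathbf{x}, \mathbf{A})
  \;=\; \sum_{\boldsymbol{\mu} \in \mathbb{B}^{|\mathbf{A}|}}
        \int_{\{\mathbf{x} \,:\, (\mathbf{x}, \boldsymbol{\mu}) \models \varphi\}}
        w(\mathbf{x}, \boldsymbol{\mu}) \, d\mathbf{x}
```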

AAAI Conference 2020 Conference Paper

Learning MAX-SAT from Contextual Examples for Combinatorial Optimisation

  • Mohit Kumar
  • Samuel Kolb
  • Stefano Teso
  • Luc De Raedt

Combinatorial optimisation problems are ubiquitous in artificial intelligence. Designing the underlying models, however, requires substantial expertise, which is a limiting factor in practice. The models typically consist of hard and soft constraints, or combine hard constraints with a preference function. We introduce a novel setting for learning combinatorial optimisation problems from contextual examples. These positive and negative examples show – in a particular context – whether the solutions are good enough or not. We develop our framework using the MAX-SAT formalism. We provide learnability results within the realisable and agnostic settings, as well as HASSLE, an implementation based on syntax-guided synthesis, and we showcase its promise in recovering synthetic and benchmark instances from examples.
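
The syntax-guided flavour can be caricatured as enumerating candidate models from a small grammar of clauses and keeping those consistent with the labelled contextual examples. The sketch below only illustrates this enumeration idea, not HASSLE itself; the model layout and the `consistent_with_examples` callback are illustrative assumptions.

```python
from itertools import combinations

def candidate_clauses(variables, max_len=2):
    """Enumerate all non-tautological clauses up to max_len literals (the 'syntax' being searched)."""
    literals = list(variables) + [-v for v in variables]
    for k in range(1, max_len + 1):
        for clause in combinations(literals, k):
            if not any(-l in clause for l in clause):  # skip clauses containing a literal and its negation
                yield frozenset(clause)

def enumerate_models(variables, n_hard, n_soft, consistent_with_examples):
    """Yield (hard, soft) candidates that agree with all contextual examples."""
    clauses = list(candidate_clauses(variables))
    for hard in combinations(clauses, n_hard):
        for soft in combinations([c for c in clauses if c not in hard], n_soft):
            model = (list(hard), [(1.0, c) for c in soft])  # unit weights, for simplicity
            if consistent_with_examples(model):
                yield model
```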

AAAI Conference 2020 Conference Paper

Learning Weighted Model Integration Distributions

  • Paolo Morettin
  • Samuel Kolb
  • Stefano Teso
  • Andrea Passerini

Weighted model integration (WMI) is a framework for probabilistic inference over distributions with discrete and continuous variables and structured supports. Despite the growing popularity of WMI, existing density estimators ignore the problem of learning a structured support, and thus fail to handle infeasible configurations and piecewise-linear relations between continuous variables. We propose LARIAT, a novel method to tackle this challenging problem. As a first step, our approach induces an SMT(LRA) formula representing the support of the structured distribution. Next, it combines the latter with a density learned using a state-of-the-art estimation method. The overall model automatically accounts for the discontinuous nature of the underlying structured distribution. Our experimental results with synthetic and real-world data highlight the promise of the approach.
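
The two-step recipe (induce a support, then restrict a learned density to it) can be illustrated with off-the-shelf pieces; the bounding-box support and the kernel density estimate below are stand-ins chosen for brevity, not LARIAT's SMT(LRA) induction or its density estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_structured_density(samples, n_mc=100_000, seed=0):
    """Step 1: a crude support (bounding box); Step 2: a KDE restricted to it and renormalised."""
    rng = np.random.default_rng(seed)
    lo, hi = samples.min(axis=0), samples.max(axis=0)        # stand-in for learning an SMT(LRA) support
    kde = gaussian_kde(samples.T)                             # stand-in for a state-of-the-art estimator
    mc = rng.uniform(lo, hi, size=(n_mc, samples.shape[1]))   # Monte Carlo mass of the KDE inside the box
    mass = kde(mc.T).mean() * np.prod(hi - lo)

    def density(x):
        x = np.atleast_2d(x)
        inside = np.all((x >= lo) & (x <= hi), axis=1)
        return np.where(inside, kde(x.T) / mass, 0.0)         # zero outside the support, renormalised inside
    return density
```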

UAI Conference 2020 Conference Paper

Ordering Variables for Weighted Model Integration

  • Vincent Derkinderen
  • Evert Heylen
  • Pedro Zuidberg Dos Martires
  • Samuel Kolb
  • Luc De Raedt

State-of-the-art probabilistic inference algorithms, such as variable elimination and search-based approaches, rely heavily on the order in which variables are marginalized. Finding the optimal ordering is an NP-complete problem. This computational hardness has led to heuristics to find adequate variable orderings. However, these heuristics have mostly been targeting discrete random variables. We show how variable ordering heuristics from the discrete domain can be ported to the discrete-continuous domain. We equip the state-of-the-art F-XSDD(BR) solver for discrete-continuous problems with such heuristics. Additionally, we propose a novel heuristic called bottom-up min-fill (BU-MiF), yielding a solver capable of determining good variable orderings without having to rely on the user to provide such an ordering. We empirically demonstrate its performance on a set of benchmark problems.
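
For readers unfamiliar with the discrete-domain heuristics being ported, the classic min-fill rule is sketched below: repeatedly eliminate the variable whose elimination adds the fewest fill-in edges to the interaction graph. This is the textbook heuristic, not the bottom-up BU-MiF variant proposed in the paper; ties are broken by dictionary order here.

```python
def min_fill_order(neighbors):
    """neighbors: dict mapping each variable to the set of variables it shares a factor with."""
    graph = {v: set(ns) for v, ns in neighbors.items()}
    order = []
    while graph:
        def fill_in(v):  # edges that eliminating v would add between its not-yet-connected neighbours
            ns = list(graph[v])
            return sum(1 for i in range(len(ns)) for j in range(i + 1, len(ns))
                       if ns[j] not in graph[ns[i]])
        v = min(graph, key=fill_in)
        for a in graph[v]:            # connect v's neighbours pairwise (the fill-in edges)
            for b in graph[v]:
                if a != b:
                    graph[a].add(b)
        for a in graph[v]:            # then remove v from the graph
            graph[a].discard(v)
        del graph[v]
        order.append(v)
    return order
```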

UAI Conference 2019 Conference Paper

How to Exploit Structure while Solving Weighted Model Integration Problems

  • Samuel Kolb
  • Pedro Zuidberg Dos Martires
  • Luc De Raedt

Weighted model counting (WMC) is a state-of-the-art technique for probabilistic inference in discrete domains. WMC has recently been extended towards weighted model integration (WMI) in order to handle discrete and continuous distributions alike. While a number of WMI solvers have been introduced, their relationships, strengths and weaknesses are not yet well understood. WMI solving consists of two sub-problems: 1) finding convex polytopes; and 2) integrating over them efficiently. We formalize the first step as λ-SMT and discuss what strategies solvers apply to solve both the λ-SMT and the integration problem. This formalization allows us to compare state-of-the-art solvers and their behaviour across different types of WMI problems. Moreover, we identify factorizability of WMI problems as a key property that emerges in the context of probabilistic programming. Problems that can be factorized can be solved more efficiently. However, current solvers exploiting this property restrict themselves to WMI problems with univariate conditions and fully factorizable weight functions. We introduce a new algorithm, F-XSDD, that lifts these restrictions and can exploit factorizability in WMI problems with multivariate conditions and partially factorizable weight functions. Through an empirical evaluation, we show the effectiveness of our approach.
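
The factorizability the abstract refers to is easiest to see on a fully factorizable weight over independent variables, where the multivariate integral collapses into a product of univariate ones:

```latex
\int_{0}^{1}\!\!\int_{0}^{1} w_1(x)\, w_2(y)\; dx\, dy
  \;=\; \left(\int_{0}^{1} w_1(x)\, dx\right)\left(\int_{0}^{1} w_2(y)\, dy\right)
```

Per the abstract, F-XSDD's contribution is detecting and exploiting such decompositions even when conditions couple several variables and the weight is only partially factorizable, which the plain product rule above does not cover.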

IJCAI Conference 2019 Conference Paper

The pywmi Framework and Toolbox for Probabilistic Inference using Weighted Model Integration

  • Samuel Kolb
  • Paolo Morettin
  • Pedro Zuidberg Dos Martires
  • Francesco Sommavilla
  • Andrea Passerini
  • Roberto Sebastiani
  • Luc De Raedt

Weighted Model Integration (WMI) is a popular technique for probabilistic inference that extends Weighted Model Counting (WMC) – the standard technique for inference in discrete domains – to domains with both discrete and continuous variables. However, existing WMI solvers each have different interfaces and use different formats for representing WMI problems. Therefore, we introduce pywmi (http://pywmi.org), an open source framework and toolbox for probabilistic inference using WMI, to address these shortcomings. Crucially, pywmi fixes a common internal format for WMI problems and introduces a common interface for WMI solvers. To assist users in modeling WMI problems, pywmi introduces modeling languages based on SMT-LIB v2 or MiniZinc, and parsers for both. To assist users in comparing WMI solvers, pywmi includes implementations of several state-of-the-art solvers, a fast approximate WMI solver, and a command-line interface to solve WMI problems. Finally, to assist developers in implementing new solvers, pywmi provides Python implementations of commonly used subroutines.
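
The "common interface for WMI solvers" suggests a shape like the sketch below. The class and method names here are invented to illustrate the idea and are not pywmi's actual API (see http://pywmi.org for that).

```python
from abc import ABC, abstractmethod

class WMISolver(ABC):
    """Hypothetical common interface: every back end answers the same queries on the same problem format."""

    def __init__(self, domain, support, weight):
        self.domain = domain      # variable names, types and bounds
        self.support = support    # SMT formula describing the feasible region
        self.weight = weight      # weight function over the domain's variables

    @abstractmethod
    def compute_volume(self) -> float:
        """Integrate the weight over all models of the support."""

    @abstractmethod
    def compute_probability(self, query) -> float:
        """Weighted volume of (support and query), divided by the weighted volume of the support."""
```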

IJCAI Conference 2018 Conference Paper

Efficient Symbolic Integration for Probabilistic Inference

  • Samuel Kolb
  • Martin Mladenov
  • Scott Sanner
  • Vaishak Belle
  • Kristian Kersting

Weighted model integration (WMI) extends weighted model counting (WMC) to the integration of functions over mixed discrete-continuous probability spaces. It has shown tremendous promise for solving inference problems in graphical models and probabilistic programs. Yet, state-of-the-art tools for WMI are generally limited either by the range of amenable theories, or in terms of performance. To address both limitations, we propose the use of extended algebraic decision diagrams (XADDs) as a compilation language for WMI. Aside from tackling typical WMI problems, XADDs also enable partial WMI yielding parametrized solutions. To overcome the main roadblock of XADDs -- the computational cost of integration -- we formulate a novel and powerful exact symbolic dynamic programming (SDP) algorithm that seamlessly handles Boolean, integer-valued and real variables, and is able to effectively cache partial computations, unlike its predecessor. Our empirical results demonstrate that these contributions can lead to a significant computational reduction over existing probabilistic inference algorithms.
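
The core operation an XADD organizes (integrating case-wise, piecewise-polynomial expressions symbolically) can be tried in miniature with SymPy; this only illustrates the operation, not the XADD data structure or the SDP algorithm itself.

```python
from sympy import Piecewise, Rational, integrate, symbols

x = symbols('x')
# A tiny case function, like a single decision split on a linear condition.
f = Piecewise((x, x < Rational(1, 2)), (1 - x, True))
print(integrate(f, (x, 0, 1)))  # 1/4: each case is integrated over the region where its condition holds
```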

IJCAI Conference 2018 Conference Paper

Learning SMT(LRA) Constraints using SMT Solvers

  • Samuel Kolb
  • Stefano Teso
  • Andrea Passerini
  • Luc De Raedt

We introduce the problem of learning SMT(LRA) constraints from data. SMT(LRA) extends propositional logic with (in)equalities between numerical variables. Many relevant formal verification problems can be cast as SMT(LRA) instances and SMT(LRA) has supported recent developments in optimization and counting for hybrid Boolean and numerical domains. We introduce SMT(LRA) learning, the task of learning SMT(LRA) formulas from examples of feasible and infeasible instances, and we contribute INCAL, an exact non-greedy algorithm for this setting. Our approach encodes the learning task itself as an SMT(LRA) satisfiability problem that can be solved directly by SMT solvers. INCAL is an incremental algorithm that achieves exact learning by looking only at a small subset of the data, leading to significant speed-ups. We empirically evaluate our approach on both synthetic instances and benchmark problems taken from the SMT-LIB benchmarks repository.
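
The "learning as satisfiability" idea can be tried on a toy instance with an off-the-shelf SMT solver: below, Z3 searches for a single half-space a·x + b·y ≤ c that accepts the feasible points and rejects the infeasible ones. This is only a one-inequality caricature of the encoding, not INCAL itself; the data points are made up for illustration.

```python
from z3 import Real, RealVal, Solver, sat

feasible   = [(0.1, 0.2), (0.3, 0.1)]   # examples the learned constraint must accept
infeasible = [(0.9, 0.9), (0.7, 0.8)]   # examples it must reject

a, b, c = Real('a'), Real('b'), Real('c')
s = Solver()
for px, py in feasible:
    s.add(a * RealVal(px) + b * RealVal(py) <= c)   # positive examples satisfy the inequality
for px, py in infeasible:
    s.add(a * RealVal(px) + b * RealVal(py) > c)    # negative examples violate it

if s.check() == sat:
    print(s.model())   # one SMT(LRA) inequality consistent with all labelled examples
```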