Arrow Research search

Author name cluster

Dimos Tsouros

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

4 papers
2 author rows

Possible papers (4)

JAIR Journal · 2025 · Journal Article

Combining Constraint Programming and Machine Learning: From Current Progress to Future Opportunities

  • Quentin Cappart
  • Tias Guns
  • Michele Lombardi
  • Gilles Pesant
  • Dimos Tsouros

The integration of constraint programming (CP) with machine learning (ML) has emerged as a promising direction for tackling complex decision-making and combinatorial optimization problems. While CP offers expressive modeling capabilities and formal guarantees, ML provides adaptive methods for learning from data and generalizing across instances. This survey presents a comprehensive overview of recent advances in combining CP and ML. We first show how ML has been used to improve the CP toolbox, both in modeling and in the efficiency of solving. Then, we examine how CP can support ML, particularly in providing structure, guarantees, and symbolic reasoning capabilities. Finally, we identify key open challenges inherent to such hybrid approaches and outline promising directions for future research. This survey provides a first conceptual and structured review of recent advancements in this emerging field, aiming to serve as a resource for practitioners and researchers in both the CP and ML communities. To keep up with ongoing progress, a curated list of references is hosted on an accompanying repository (https://github.com/corail-research/CPML-paper-list) and is open to community contributions.

ECAI Conference · 2025 · Conference Paper

CP-Bench: Evaluating Large Language Models for Constraint Modelling

  • Kostis Michailidis
  • Dimos Tsouros
  • Tias Guns

Constraint Programming (CP) is widely used to solve combinatorial problems, but its core process, namely constraint modelling, requires significant expertise and is considered to be a bottleneck for wider adoption. Aiming to alleviate this bottleneck, recent studies have explored using Large Language Models (LLMs) to transform combinatorial problem descriptions into executable constraint models. However, the existing evaluation datasets for constraint modelling are often limited to small, homogeneous, or domain-specific instances, which do not capture the diversity of real-world scenarios. This work addresses this gap by introducing CP-Bench, a novel benchmark that includes a diverse set of well-known combinatorial problems sourced from the CP community, structured explicitly for evaluating LLM-driven CP modelling. With this dataset, and given the variety of constraint modelling frameworks, we compare and evaluate the modelling capabilities of LLMs for three distinct constraint modelling systems, which vary in abstraction level and underlying syntax. Notably, the results show higher performance when modelling with a high-level Python-based framework. Additionally, we systematically evaluate the use of prompt-based and inference-time compute methods across different LLMs, which further increase accuracy, reaching up to 70% on this highly challenging benchmark.
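The evaluation loop the abstract describes — an LLM turns a natural-language problem description into executable model code, which is then run and its solution checked — can be sketched as follows. This is an illustrative sketch only: the stubbed `llm_generate_model` stands in for a real LLM call, and none of the names correspond to CP-Bench's actual API.

```python
# Hypothetical sketch of an LLM-driven constraint-modelling evaluation loop,
# in the spirit of CP-Bench. The "LLM" below is a hard-coded stub.

def llm_generate_model(description: str) -> str:
    # Stand-in for a real LLM call; returns Python code as a string.
    # Here the stub "models" the toy problem given further down.
    return (
        "solution = [x for x in range(10, 100)\n"
        "            if x % 7 == 0 and x % 10 == x // 10 + 1][0]"
    )

def evaluate(description: str, checker) -> bool:
    """Execute the generated model code and verify the solution it produces."""
    code = llm_generate_model(description)
    scope: dict = {}
    try:
        exec(code, scope)   # run the generated constraint model
    except Exception:
        return False        # a non-executable model counts as a failure
    return checker(scope.get("solution"))

desc = ("Find a two-digit multiple of 7 whose units digit is "
        "one more than its tens digit.")
ok = evaluate(desc, lambda s: s is not None
              and s % 7 == 0 and s % 10 == s // 10 + 1)
```

Scoring many such (description, checker) pairs gives the per-framework accuracy numbers the benchmark reports; the framework choice would change only what code the LLM is prompted to emit.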

AAAI Conference · 2025 · Conference Paper

Generalizing Constraint Models in Constraint Acquisition

  • Dimos Tsouros
  • Senne Berden
  • Steven Prestwich
  • Tias Guns

Constraint Acquisition (CA) aims to widen the use of constraint programming by assisting users in the modeling process. However, most CA methods suffer from a significant drawback: they learn a single set of individual constraints for a specific problem instance, but cannot generalize these constraints to the parameterized constraint specifications of the problem. In this paper, we address this limitation by proposing GenCon, a novel approach to learn parameterized constraint models capable of modeling varying instances of the same problem. To achieve this generalization, we make use of statistical learning techniques at the level of individual constraints. Specifically, we propose to train a classifier to predict, for any possible constraint and parameterization, whether the constraint belongs to the problem. We then show how, for some classes of classifiers, we can extract decision rules to construct interpretable constraint specifications. This enables the generation of ground constraints for any parameter instantiation. Additionally, we present a generate-and-test approach that can be used with any classifier, to generate the ground constraints on the fly. Our empirical results demonstrate that our approach achieves high accuracy and is robust to noise in the input instances.
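The core idea — a classifier over per-constraint features decides membership, and a generate-and-test pass grounds the model for any parameterization — can be illustrated with a toy example. All names here are illustrative, not GenCon's actual interface; the "trained" classifier is replaced by a hand-written decision rule of the kind the paper extracts.

```python
# Minimal generate-and-test sketch in the spirit of GenCon (illustrative names).
# Variables are cells of an n x n grid; candidates are binary != constraints.

def features(i, j, n):
    # Per-constraint features: do cells i and j share a row / a column?
    return (i // n == j // n, i % n == j % n)

def classifier(feat):
    # Toy stand-in for a trained classifier: the extracted decision rule
    # says a != constraint belongs iff the cells share a row or a column.
    same_row, same_col = feat
    return same_row or same_col

def ground_constraints(n):
    """Generate-and-test: keep every candidate pair the classifier accepts."""
    candidates = [(i, j) for i in range(n * n) for j in range(i + 1, n * n)]
    return [(i, j) for (i, j) in candidates
            if classifier(features(i, j, n))]

cons3 = ground_constraints(3)  # grounding for a 3x3 instance
cons4 = ground_constraints(4)  # same specification, larger parameterization
```

The same rule grounds both instance sizes, which is exactly the generalization a single learned set of individual constraints cannot provide.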

NeurIPS Conference · 2025 · Conference Paper

Solver-Free Decision-Focused Learning for Linear Optimization Problems

  • Senne Berden
  • Ali Mahmutoğulları
  • Dimos Tsouros
  • Tias Guns

Mathematical optimization is a fundamental tool for decision-making in a wide range of applications. However, in many real-world scenarios, the parameters of the optimization problem are not known a priori and must be predicted from contextual features. This gives rise to predict-then-optimize problems, where a machine learning model predicts problem parameters that are then used to make decisions via optimization. A growing body of work on decision-focused learning (DFL) addresses this setting by training models specifically to produce predictions that maximize downstream decision quality, rather than accuracy. While effective, DFL is computationally expensive, because it requires solving the optimization problem with the predicted parameters at each loss evaluation. In this work, we address this computational bottleneck for linear optimization problems, a common class of problems in both DFL literature and real-world applications. We propose a solver-free training method that exploits the geometric structure of linear optimization to enable efficient training with minimal degradation in solution quality. Our method is based on the insight that a solution is optimal if and only if it achieves an objective value that is at least as good as that of its adjacent vertices on the feasible polytope. Building on this, our method compares the estimated quality of the ground-truth optimal solution with that of its precomputed adjacent vertices, and uses this as loss function. Experiments demonstrate that our method significantly reduces computational cost while maintaining high decision quality.
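The adjacent-vertex insight translates into a simple loss. The sketch below is an illustration of the idea, not the authors' exact formulation: for a minimization LP, the ground-truth optimal vertex should score at least as well under the predicted cost vector as each of its precomputed adjacent vertices, and any violation is penalized with a hinge term — no solver call needed inside the training loop.

```python
# Illustrative solver-free loss for a minimization LP: penalize the
# predicted cost c_hat whenever an adjacent vertex of the true optimum
# v_star would look strictly better than v_star under c_hat.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solver_free_loss(c_hat, v_star, adjacent):
    """Hinge penalty over the precomputed adjacent vertices of v_star."""
    obj_star = dot(c_hat, v_star)
    return sum(max(0.0, obj_star - dot(c_hat, v)) for v in adjacent)

# Unit-square feasible region: v_star = (0, 0) is optimal for true cost
# (1, 1); its adjacent vertices on the polytope are (1, 0) and (0, 1).
v_star = (0.0, 0.0)
adjacent = [(1.0, 0.0), (0.0, 1.0)]

good = solver_free_loss((1.0, 1.0), v_star, adjacent)   # consistent prediction
bad = solver_free_loss((-1.0, 0.5), v_star, adjacent)   # (1, 0) beats v_star
```

A prediction consistent with v_star's optimality incurs zero loss, so gradients only flow when the predicted costs would change the decision — which is the decision-focused behaviour the paper targets at a fraction of the usual cost.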