Arrow Research search

Author name cluster

Senne Berden

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
2 author rows

Possible papers (8)

JAIR Journal 2026 Journal Article

Score Function Gradient Estimation to Widen the Applicability of Decision-Focused Learning

  • Mattia Silvestri
  • Senne Berden
  • Gaetano Signorelli
  • Ali İrfan Mahmutoğulları
  • Jayanta Mandi
  • Brandon Amos
  • Tias Guns
  • Michele Lombardi

Background: Real-world optimization problems often contain parameters that are unknown at solving time. For example, in delivery problems, these parameters may be travel times or customer demands. A common strategy in such scenarios is to first predict the parameter values from contextual features using a machine learning model, and then solve the resulting optimization problem. To train the machine learning model, two paradigms can be distinguished. In prediction-focused learning, the model is trained to maximize predictive accuracy. However, this can lead to suboptimal decision-making, because it does not account for how prediction errors affect the quality of the downstream decisions. To address this, decision-focused learning (DFL) minimizes a task loss that captures how the predictions affect decision quality.

Objectives: One challenge in DFL is that the task loss has zero-valued gradients when the optimization problem is combinatorial, which hinders gradient-based training. For this reason, state-of-the-art DFL methods use surrogate losses and problem smoothing. However, these methods make specific assumptions about the problem structure (e.g., linear or convex problems with unknown parameters occurring only in the objective function). The goal of our work is to overcome these limitations and extend the applicability of DFL.

Method: We propose an alternative DFL approach that makes only minimal assumptions by combining stochastic smoothing with score function gradient estimation. This makes the approach broadly applicable, including to problems with nonlinear objectives, uncertainty in the constraints, and two-stage stochastic optimization problems.

Results: Our experiments show that our method matches or outperforms specialized methods for the problems they are designed for, while also extending to settings where no existing method is applicable. In addition, our method always outperforms models trained with prediction-focused learning.

Conclusions: In this work we demonstrate that by combining stochastic smoothing and score function gradient estimation to estimate the gradients of a smoothed loss, we can train a machine learning model in a DFL fashion without assuming any structural property of the optimization problem. This approach extends the applicability of DFL to a wider range of optimization problems, including those with uncertainty in the constraints. At the same time, it achieves performance that is competitive with or superior to existing DFL methods when they are applicable.
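
To make the central technique concrete, here is a minimal, illustrative sketch of score function (REINFORCE-style) gradient estimation under Gaussian smoothing. It is not the authors' implementation; the function names and the choice of a Gaussian smoothing distribution are assumptions. The point is that the task loss can be a black box (e.g. one that calls a combinatorial solver), since only samples of its value are needed.

```python
import numpy as np

def score_function_grad(pred, task_loss, n_samples=1000, sigma=0.1, rng=None):
    """Estimate the gradient, w.r.t. `pred`, of E[task_loss(pred + sigma * eps)]
    with eps ~ N(0, I), using the score function (REINFORCE) estimator.

    `task_loss` is treated as a black box: it is only ever *evaluated*,
    never differentiated, so it may internally call a combinatorial solver.
    """
    rng = np.random.default_rng(rng)
    eps = [rng.standard_normal(pred.shape) for _ in range(n_samples)]
    losses = [task_loss(pred + sigma * e) for e in eps]
    baseline = np.mean(losses)  # control variate for variance reduction
    grad = np.zeros_like(pred, dtype=float)
    for loss, e in zip(losses, eps):
        # The score of N(pred, sigma^2 I) at pred + sigma*e is e / sigma.
        grad += (loss - baseline) * e / sigma
    return grad / n_samples
```

On a smooth test function such as f(z) = ||z||², the estimate converges to the true gradient 2·pred as the sample count grows; in DFL the same estimator is applied to a task loss whose exact gradient is zero almost everywhere.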

NeurIPS Conference 2025 Conference Paper

Feasibility-Aware Decision-Focused Learning for Predicting Parameters in the Constraints

  • Jayanta Mandi
  • Marianne Defresne
  • Senne Berden
  • Tias Guns

When some parameters of a constrained optimization problem (COP) are uncertain, this gives rise to a predict-then-optimize (PtO) problem, comprising two stages: the prediction of the unknown parameters from contextual information and the subsequent optimization using those predicted parameters. Decision-focused learning (DFL) implements the first stage by training a machine learning (ML) model to optimize the quality of the decisions made using the predicted parameters. When the predicted parameters occur in the constraints, they can lead to infeasible solutions. Therefore, it is important to simultaneously manage both feasibility and decision quality. We develop a DFL framework for predicting constraint parameters in a generic COP. While prior works typically assume that the underlying optimization problem is a linear program (LP) or integer LP (ILP), our approach makes no such assumption. We derive two novel loss functions based on maximum likelihood estimation (MLE): the first penalizes infeasibility (by penalizing predicted parameters that lead to infeasible solutions), while the second penalizes suboptimal decisions (by penalizing predicted parameters that make the true optimal solution infeasible). We introduce a single tunable parameter to form a weighted average of the two losses, allowing decision-makers to balance suboptimality and feasibility. We experimentally demonstrate that adjusting this parameter gives decision-makers control over this trade-off. Moreover, across several COP instances, we show that tuning it allows a decision-maker to prioritize either suboptimality or feasibility, outperforming existing baselines on either objective.
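
The actual losses in the paper are derived from maximum likelihood estimation; purely as a hypothetical illustration of the two error modes being balanced, the sketch below uses simple hinge penalties for constraints of the form A x <= b with a predicted right-hand side. All names and the hinge form are assumptions, not the paper's formulation.

```python
import numpy as np

def feasibility_aware_loss(A, b_true, b_pred, x_pred_sol, x_true_opt, alpha=0.5):
    """Weighted combination of two hinge penalties for constraints A x <= b.

    infeasibility: the solution obtained under the predicted b violates
                   the true constraints.
    suboptimality: the true optimal solution is cut off by the predicted
                   constraints.
    `alpha` is the single tunable trade-off parameter.
    """
    infeasibility = np.maximum(A @ x_pred_sol - b_true, 0.0).sum()
    suboptimality = np.maximum(A @ x_true_opt - b_pred, 0.0).sum()
    return alpha * infeasibility + (1.0 - alpha) * suboptimality
```

Setting alpha near 1 prioritizes feasibility of the deployed decisions; alpha near 0 prioritizes not excluding the true optimum.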

AAAI Conference 2025 Conference Paper

Generalizing Constraint Models in Constraint Acquisition

  • Dimos Tsouros
  • Senne Berden
  • Steven Prestwich
  • Tias Guns

Constraint Acquisition (CA) aims to widen the use of constraint programming by assisting users in the modeling process. However, most CA methods suffer from a significant drawback: they learn a single set of individual constraints for a specific problem instance, but cannot generalize these constraints to the parameterized constraint specifications of the problem. In this paper, we address this limitation by proposing GenCon, a novel approach to learn parameterized constraint models capable of modeling varying instances of the same problem. To achieve this generalization, we make use of statistical learning techniques at the level of individual constraints. Specifically, we propose to train a classifier to predict, for any possible constraint and parameterization, whether the constraint belongs to the problem. We then show how, for some classes of classifiers, we can extract decision rules to construct interpretable constraint specifications. This enables the generation of ground constraints for any parameter instantiation. Additionally, we present a generate-and-test approach that can be used with any classifier, to generate the ground constraints on the fly. Our empirical results demonstrate that our approach achieves high accuracy and is robust to noise in the input instances.

ECAI Conference 2025 Conference Paper

Minimizing Surrogate Losses for Decision-Focused Learning Using Differentiable Optimization

  • Jayanta Mandi
  • Ali İrfan Mahmutoğulları
  • Senne Berden
  • Tias Guns

Decision-focused learning (DFL) trains a machine learning (ML) model to predict parameters of an optimization problem, to directly minimize decision regret, i.e., maximize decision quality. Gradient-based DFL requires computing the derivative of the solution to the optimization problem with respect to the predicted parameters. However, for many optimization problems, such as linear programs (LPs), the gradient of the regret with respect to the predicted parameters is zero almost everywhere. Existing gradient-based DFL approaches for LPs try to circumvent this issue in one of two ways: (a) smoothing the LP into a differentiable optimization problem by adding a quadratic regularizer and then minimizing the regret directly or (b) minimizing surrogate losses that have informative (sub)gradients. In this paper, we show that the former approach still results in zero gradients, because even after smoothing the regret remains constant across large regions of the parameter space. To address this, we propose minimizing surrogate losses, even when a differentiable optimization layer is used and regret can be minimized directly. Our experiments demonstrate that minimizing surrogate losses allows differentiable optimization layers to achieve regret comparable to or better than surrogate-loss-based DFL methods. Further, we demonstrate that this also holds for DYS-Net, a recently proposed differentiable optimization technique for LPs that computes approximate solutions and gradients through operations that can be performed using feedforward neural network layers. Because DYS-Net executes the forward and the backward pass very efficiently, minimizing surrogate losses with DYS-Net attains regret on par with the state of the art while reducing training time by a significant margin.
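
For background, one widely used surrogate loss for LPs with predicted cost vectors is SPO+ (Elmachtoub and Grigas); the abstract does not state which surrogates this paper uses, so this is context rather than the paper's method. The sketch below evaluates SPO+ for a minimization LP by enumerating a small, precomputed vertex list, which is practical only for toy problems.

```python
import numpy as np

def spo_plus(c_pred, c_true, vertices):
    """SPO+ surrogate loss for min c^T x over a polytope, evaluated by
    brute force over an explicit vertex list (illustration only).

    SPO+(c_pred; c_true) = max_x [(c_true - 2 c_pred)^T x]
                           + 2 c_pred^T x*(c_true) - c_true^T x*(c_true)

    It upper-bounds the decision regret and is zero when c_pred = c_true.
    """
    x_true = min(vertices, key=lambda v: float(c_true @ v))  # true optimum
    z_true = float(c_true @ x_true)
    worst = max(float((c_true - 2.0 * c_pred) @ v) for v in vertices)
    return worst + 2.0 * float(c_pred @ x_true) - z_true
```

Unlike the raw regret, SPO+ has informative subgradients in the predicted costs, which is what makes it usable for gradient-based training.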

NeurIPS Conference 2025 Conference Paper

Solver-Free Decision-Focused Learning for Linear Optimization Problems

  • Senne Berden
  • Ali Mahmutoğulları
  • Dimos Tsouros
  • Tias Guns

Mathematical optimization is a fundamental tool for decision-making in a wide range of applications. However, in many real-world scenarios, the parameters of the optimization problem are not known a priori and must be predicted from contextual features. This gives rise to predict-then-optimize problems, where a machine learning model predicts problem parameters that are then used to make decisions via optimization. A growing body of work on decision-focused learning (DFL) addresses this setting by training models specifically to produce predictions that maximize downstream decision quality, rather than accuracy. While effective, DFL is computationally expensive, because it requires solving the optimization problem with the predicted parameters at each loss evaluation. In this work, we address this computational bottleneck for linear optimization problems, a common class of problems in both DFL literature and real-world applications. We propose a solver-free training method that exploits the geometric structure of linear optimization to enable efficient training with minimal degradation in solution quality. Our method is based on the insight that a solution is optimal if and only if it achieves an objective value that is at least as good as that of its adjacent vertices on the feasible polytope. Building on this, our method compares the estimated quality of the ground-truth optimal solution with that of its precomputed adjacent vertices, and uses this as loss function. Experiments demonstrate that our method significantly reduces computational cost while maintaining high decision quality.
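
One plausible reading of that insight (a sketch, not necessarily the paper's exact loss function): under the predicted costs of a minimization LP, penalize the ground-truth optimal vertex whenever it scores worse than one of its precomputed adjacent vertices. No solver is called during training; only dot products are needed.

```python
import numpy as np

def solver_free_loss(c_pred, x_opt, adjacent_vertices):
    """Hinge-style loss for min c^T x: the ground-truth optimal vertex
    `x_opt` should achieve an objective at least as good as each of its
    precomputed adjacent vertices under the predicted costs `c_pred`.
    """
    obj_opt = float(c_pred @ x_opt)
    return sum(max(0.0, obj_opt - float(c_pred @ v)) for v in adjacent_vertices)
```

On the unit square with x_opt = (0, 0), the loss is zero under the true costs [1, 1] and positive under cost predictions for which a neighbor would beat x_opt.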

JAIR Journal 2024 Journal Article

Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities

  • Jayanta Mandi
  • James Kotary
  • Senne Berden
  • Maxime Mulamba
  • Victor Bucarey
  • Tias Guns
  • Ferdinando Fioretto

Decision-focused learning (DFL) is an emerging paradigm that integrates machine learning (ML) and constrained optimization to enhance decision quality by training ML models in an end-to-end system. This approach shows significant potential to revolutionize combinatorial decision-making in real-world applications that operate under uncertainty, where estimating unknown parameters within decision models is a major challenge. This paper presents a comprehensive review of DFL, providing an in-depth analysis of both gradient-based and gradient-free techniques used to combine ML and constrained optimization. It evaluates the strengths and limitations of these techniques and includes an extensive empirical evaluation of eleven methods across seven problems. The survey also offers insights into recent advancements and future research directions in DFL.

AAAI Conference 2024 Conference Paper

Learning to Learn in Interactive Constraint Acquisition

  • Dimosthenis Tsouros
  • Senne Berden
  • Tias Guns

Constraint Programming (CP) has been successfully used to model and solve complex combinatorial problems. However, modeling is often not trivial and requires expertise, which is a bottleneck to wider adoption. In Constraint Acquisition (CA), the goal is to assist the user by automatically learning the model. In (inter)active CA, this is done by interactively posting queries to the user, e.g., asking whether a given partial solution satisfies their (unspecified) constraints or not. While interactive CA methods learn the constraints, the learning is related to symbolic concept learning, as the goal is to learn an exact representation. However, a large number of queries is required to learn the model, which is a major limitation. In this paper, we aim to alleviate this limitation by tightening the connection between CA and Machine Learning (ML): for the first time in interactive CA, we exploit statistical ML methods. We propose to use probabilistic classification models to guide interactive CA queries to the most promising parts. We discuss how to train classifiers to predict whether a candidate expression from the bias is a constraint of the problem or not, using both relation-based and scope-based features. We then show how the predictions can be used in all layers of interactive CA: query generation, scope finding, and lowest-level constraint finding. We experimentally evaluate our proposed methods using different classifiers and show that our methods greatly outperform the state of the art, decreasing the number of queries needed to converge by up to 72%.
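
As a toy illustration of the idea (the features and classifier below are hypothetical stand-ins, not the paper's): score each candidate expression in the bias with a probabilistic classifier trained on simple relation- and scope-based features.

```python
import numpy as np

def candidate_features(candidate, known_constraints):
    """Toy features for a candidate (relation, scope) pair, where scope is a
    tuple of variable indices: a bias term, the largest scope overlap with an
    already-learned constraint, and how often the relation already occurs."""
    rel, scope = candidate
    overlap = max((len(set(scope) & set(s)) for _, s in known_constraints),
                  default=0)
    same_relation = sum(1 for r, _ in known_constraints if r == rel)
    return np.array([1.0, float(overlap), float(same_relation)])

def train_logistic(X, y, lr=0.5, steps=5000):
    """Plain-numpy logistic regression, standing in for any probabilistic
    classifier; X holds one feature row per labeled candidate."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def constraint_probability(w, x):
    """Estimated probability that the candidate is a true constraint."""
    return float(1.0 / (1.0 + np.exp(-w @ x)))
```

Candidates with high predicted probability can then be targeted first during query generation, which is the mechanism by which such predictions reduce the number of queries.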

AAAI Conference 2023 System Paper

Sudoku Assistant – an AI-Powered App to Help Solve Pen-and-Paper Sudokus

  • Tias Guns
  • Emilio Gamba
  • Maxime Mulamba
  • Ignace Bleukx
  • Senne Berden
  • Milan Pesa

The Sudoku Assistant app is an AI assistant that uses a combination of machine learning and constraint programming techniques, to interpret and explain a pen-and-paper Sudoku scanned with a smartphone. Although the demo is about Sudoku, the underlying techniques are equally applicable to other constraint solving problems like timetabling, scheduling, and vehicle routing.