Arrow Research search

Author name cluster

David Allouche

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers (7)

AAAI Conference 2026 Conference Paper

Assignment Problems in Cost Function Networks

  • Guidio Sewa
  • David Allouche
  • Simon de Givry
  • George Katsirelos
  • Pierre Montalbano
  • Thomas Schiex

To efficiently solve exact discrete optimization problems, branch and bound algorithms require tight bounds. In constraint programming, for optimization, soft arc consistencies typically derive much stronger bounds than those offered by domain or bound consistencies applied to a cost variable. The reason is that soft local consistencies exchange marginal cost information between variables whereas domain consistencies rely only on shrinking domains, which is less informative. However, CP solvers equipped with soft arc consistencies have so far offered limited support for efficient processing of global constraints. In this work, we show how we can efficiently enforce soft local consistency over the AllDifferent constraint, relying on algorithms for the Linear Assignment Problem (LAP). We implement this propagator in toulbar2, the state-of-the-art weighted CP solver exploiting soft local consistencies for bounding. We show that, equipped with this new propagator, toulbar2 outperforms state-of-the-art domain consistency-based CP as well as integer programming solvers for the Quadratic Assignment Problem and shows better performance for miniCOP instances of the 2024 XCSP competition with AllDifferent constraints.
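As a toy illustration of the bounding idea behind such a propagator: when a set of variables must take pairwise distinct values, the minimum-cost assignment over their unary costs is a valid lower bound. The sketch below (made-up data; a real propagator like toulbar2's would use a dedicated LAP algorithm such as shortest augmenting path, not brute force) enumerates permutations to find that bound:

```python
# Toy Linear Assignment Problem (LAP) lower bound by brute force.
# cost[i][j] is the hypothetical unary cost of giving variable i the value j.
from itertools import permutations

def lap_lower_bound(cost):
    """Minimum cost over all ways to assign n variables to n distinct values.

    Because every solution satisfying AllDifferent corresponds to some
    permutation, this optimum is a valid lower bound for the propagator.
    """
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
print(lap_lower_bound(cost))  # prints 5 (variables take values 1, 0, 2)
```

Brute force is exponential in the number of variables; dedicated LAP algorithms achieve this bound in polynomial time, which is what makes the propagator practical.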

UAI Conference 2017 Conference Paper

Iterative Decomposition Guided Variable Neighborhood Search for Graphical Model Energy Minimization

  • Abdelkader Ouali
  • David Allouche
  • Simon de Givry
  • Samir Loudni
  • Yahia Lebbah
  • Lakhdar Loukil

Graphical models are used in a wide range of areas such as image analysis, speech recognition, bioinformatics, and ecology. Graphical models factorize a global probability distribution/energy function as the product/sum of local functions. A major inference task, known as MAP in Markov Random Fields and MPE in Bayesian Networks, is to find a global assignment of all the variables with maximum a posteriori probability/minimum energy. A usual distinction on MAP solving methods is complete/incomplete, i.e., the ability to prove optimality or not. Most complete methods rely on tree search, while incomplete methods rely on local search. Among them, we study Variable Neighborhood Search (VNS) for graphical models. In this paper, we propose an iterative approach above VNS which uses (partial) tree search inside its local neighborhood exploration. The resulting hybrid method offers a better compromise between completeness and anytime behavior than existing tree search methods while still being competitive for proving optimality. We further propose a parallel version of our method, improving its anytime behavior on difficult instances coming from a large graphical model benchmark. Last, we experiment on the challenging minimum energy problem found in Computational Protein Design, showing the practical benefit of our parallel version. Solver at www.inra.fr/mia/T/toulbar2 v1.0. We focus on models with discrete variables like Markov Random Fields and Bayesian Networks. Our goal is to find a global assignment of all the variables with maximum a posteriori probability. This optimization task defines an NP-complete problem (Shimony, 1994). Solving methods can be categorized in two groups: exact and local search methods. Exact methods rely on tree search, variable elimination, linear programming, or a combination of them (Marinescu and Dechter, 2009; Otten and Dechter, 2012; Allouche et al., 2015).
Graph-cut and message-passing algorithms like loopy belief propagation and variational approaches (Kolmogorov, 2006; Wainwright and Jordan, 2008; Sontag et al., 2008; 2012; Wang and Koller, 2013) are exact only in some particular cases (e.g., Potts model or tree structure). Local search methods are stochastic algorithms like Gibbs sampling, Guided Local Search (Park, 2002; Hutter et al., 2005), and Stochastic Greedy Search (Mengshoel et al., 2011). Some of them have theoretical asymptotic proof of convergence, i.e., the optimal solution is guaranteed to be found if infinite time is available. In practice, they may exhibit a better anytime behavior than exact methods on large and difficult problems (Marinescu et al., 2003; Hutter et al., 2005; Mengshoel et al., 2011), i.e., they produce better solutions in less time.
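The VNS control loop discussed above can be caricatured in a few lines: pick k variables, re-optimize them with the rest fixed, reset k to 1 on improvement, and enlarge the neighborhood otherwise. The sketch below is an illustrative toy only (made-up model and names): the paper's method embeds bounded *tree search* inside each neighborhood, which the exhaustive re-optimization here merely imitates.

```python
# Toy VNS-style local search for energy minimization on a tiny pairwise model.
import random
from itertools import product

# Hypothetical model: 3 Boolean variables, unary costs plus two pairwise
# "equality preferred" cost functions (lower energy is better).
unary = [{0: 1.0, 1: 0.0}, {0: 0.5, 1: 0.5}, {0: 0.0, 1: 1.0}]
pairwise = {(0, 1): lambda a, b: 0.0 if a == b else 1.0,
            (1, 2): lambda a, b: 0.0 if a == b else 1.0}

def energy(x):
    e = sum(unary[i][v] for i, v in enumerate(x))
    return e + sum(f(x[i], x[j]) for (i, j), f in pairwise.items())

def vns(x, kmax=3, seed=0):
    rng = random.Random(seed)
    x, k = list(x), 1
    while k <= kmax:
        vars_k = rng.sample(range(len(x)), k)      # neighborhood: k variables
        best = list(x)
        for combo in product([0, 1], repeat=k):    # re-optimize them exhaustively
            y = list(x)
            for i, v in zip(vars_k, combo):
                y[i] = v
            if energy(y) < energy(best):
                best = y
        if energy(best) < energy(x):
            x, k = best, 1                         # improvement: shrink neighborhood
        else:
            k += 1                                 # stuck: grow neighborhood
    return x

sol = vns([0, 1, 0])
```

Since kmax here covers all three variables, the final neighborhood is a full enumeration and the toy always reaches a global minimum; on real instances kmax is far smaller than n, which is why the paper's bounded tree search inside each neighborhood matters.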

AAAI Conference 2012 Conference Paper

Filtering Decomposable Global Cost Functions

  • David Allouche
  • Christian Bessiere
  • Patrice Boizumault
  • Simon de Givry
  • Patricia Gutierrez
  • Samir Loudni
  • Jean-Philippe Métivier
  • Thomas Schiex

As (Lee and Leung 2012) have shown, weighted constraint satisfaction problems can benefit from the introduction of global cost functions, leading to a new Cost Function Programming paradigm. In this paper, we explore the possibility of decomposing global cost functions in such a way that enforcing soft local consistencies on the decomposition achieves the same level of consistency on the original global cost function. We give conditions under which directional and virtual arc consistency offer such guarantees. We conclude by experiments on decomposable cost functions showing that decompositions may be very useful to easily integrate efficient global cost functions in solvers.
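To make the decomposition idea concrete, a classical example is the "decomposition" measure of soft AllDifferent: its global cost, the number of colliding pairs of variables, splits exactly into a sum of binary soft-difference cost functions, so soft local consistencies can be enforced on the binary pieces. The sketch below (hypothetical function names) simply checks that equality on a sample assignment:

```python
# Soft AllDifferent (decomposition measure) vs. its binary decomposition.
from itertools import combinations

def soft_alldiff_dec(x):
    """Global cost: number of unordered pairs of variables with equal values."""
    return sum(1 for i, j in combinations(range(len(x)), 2) if x[i] == x[j])

def soft_diff(a, b):
    """Binary piece of the decomposition: cost 1 on a collision, else 0."""
    return 1 if a == b else 0

x = [2, 3, 2, 2]
global_cost = soft_alldiff_dec(x)
decomposed = sum(soft_diff(x[i], x[j])
                 for i, j in combinations(range(len(x)), 2))
assert global_cost == decomposed == 3   # pairs (0,2), (0,3), (2,3) collide
```

The paper's contribution is about when enforcing soft arc consistencies on such binary pieces yields the same consistency level as on the original global function; this identity on costs is only the starting point.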

IJCAI Conference 2009 Conference Paper

  • Marti Sanchez
  • David Allouche
  • Simon de Givry
  • Thomas Schiex

Optimization in graphical models is an important problem which has been studied in many AI frameworks such as weighted CSP, maximum satisfiability or probabilistic networks. By identifying conditionally independent subproblems, which are solved independently and whose optimum is cached, recent Branch and Bound algorithms offer better asymptotic time complexity. But the locality of bounds induced by decomposition often hampers the practical effects of this result because subproblems are often uselessly solved to optimality. Following the Russian Doll Search (RDS) algorithm, a possible approach to overcome this weakness is to (inductively) solve a relaxation of each subproblem to strengthen bounds. The algorithm obtained generalizes both RDS and tree-decomposition-based algorithms such as BTD or AND-OR Branch and Bound. We study its efficiency on different problems, closing a very hard frequency assignment instance that had been open for more than 10 years.
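The Russian Doll idea of solving nested subproblems and reusing their cached optima as lower bounds can be sketched on a flat variable ordering as follows. The problem data and names below are illustrative assumptions; the paper's algorithm generalizes this scheme to tree decompositions rather than suffixes of a fixed ordering.

```python
# Toy Russian Doll Search (RDS) on a small weighted CSP.
n = 4
domain = [0, 1]
unary = [[1, 3], [1, 0], [0, 1], [2, 0]]   # unary[i][v]: made-up costs
binary = {(0, 1): 1, (2, 3): 1}            # pay w if the two variables are equal

INF = float("inf")

def solve(start, rds):
    """Branch and bound over variables start..n-1.

    rds[i] caches the optimum of the subproblem on variables i..n-1
    (rds[n] = 0); since it ignores costs linking to the prefix and all
    costs are nonnegative, acc + rds[i] is a valid lower bound."""
    best = INF
    def bb(i, partial, acc):
        nonlocal best
        if i == n:
            best = min(best, acc)
            return
        if acc + rds[i] >= best:           # RDS lower bound: prune
            return
        for v in domain:
            extra = unary[i][v]
            extra += sum(w for (a, b), w in binary.items()
                         if b == i and a >= start and partial[a] == v)
            bb(i + 1, {**partial, i: v}, acc + extra)
    bb(start, {}, 0)
    return best

def rds_optimum():
    rds = [0] * (n + 1)
    for start in range(n - 1, -1, -1):     # nested subproblems, innermost first
        rds[start] = solve(start, rds)
    return rds[0]
```

Each call to solve() is a plain branch and bound; the whole point of RDS is that the optima cached for the inner "dolls" make the pruning test far tighter than any purely local bound.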