UAI 2017 Conference Paper
- Abdelkader Ouali
- David Allouche
- Simon de Givry
- Samir Loudni
- Yahia Lebbah
- Lakhdar Loukil
Graphical models are applied in a wide range of areas such as image analysis, speech recognition, bioinformatics, and ecology. They factorize a global probability distribution/energy function as the product/sum of local functions. A major inference task, known as MAP in Markov Random Fields and MPE in Bayesian Networks, is to find a global assignment of all the variables with maximum a posteriori probability/minimum energy. A usual distinction among MAP solving methods is complete/incomplete, i.e., the ability to prove optimality or not. Most complete methods rely on tree search, while incomplete methods rely on local search. Among the latter, we study Variable Neighborhood Search (VNS) for graphical models. In this paper, we propose an iterative approach above VNS which uses (partial) tree search inside its local neighborhood exploration. The resulting hybrid method offers a better compromise between completeness and anytime behavior than existing tree search methods, while still being competitive for proving optimality. We further propose a parallel version of our method, improving its anytime behavior on difficult instances coming from a large graphical model benchmark. Last, we experiment on the challenging minimum energy problem found in Computational Protein Design, showing the practical benefit of our parallel version. The solver is available at www.inra.fr/mia/T/toulbar2 (v1.0).

We focus on models with discrete variables, like Markov Random Fields and Bayesian Networks. Our goal is to find a global assignment of all the variables with maximum a posteriori probability. This optimization task defines an NP-complete problem (Shimony, 1994). Solving methods can be categorized in two groups: exact and local search methods. Exact methods rely on tree search, variable elimination, linear programming, or a combination of them (Marinescu and Dechter, 2009; Otten and Dechter, 2012; Allouche et al., 2015).
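To make the factorization and the MAP/MPE task concrete, the following is a minimal toy sketch (not the paper's solver, and all variable domains, energy terms, and function names are invented for illustration): a small chain-structured Markov Random Field whose global energy is the sum of local unary and pairwise functions, solved by exhaustive enumeration.

```python
# Toy illustration (hypothetical example, not from the paper): a pairwise
# Markov Random Field on a chain x0 - x1 - x2 of binary variables, whose
# global energy factorizes as a sum of local functions. MAP/MPE inference
# means finding the assignment of minimum energy (maximum probability).
import itertools

DOMAIN = [0, 1]  # binary domain for each of the three variables

def unary(i, v):
    # Illustrative unary bias: variable 1 prefers value 1, the others prefer 0.
    preferred = 1 if i == 1 else 0
    return 0.5 if v != preferred else 0.0

def pairwise(u, v):
    # Potts-like pairwise term: penalize neighboring variables that disagree.
    return 1.0 if u != v else 0.0

def energy(x):
    # Global energy = sum of local unary and pairwise energy functions.
    e = sum(unary(i, x[i]) for i in range(3))
    e += pairwise(x[0], x[1]) + pairwise(x[1], x[2])
    return e

def map_assignment():
    # Brute-force MAP/MPE: enumerate all 2^3 assignments, keep the minimum.
    return min(itertools.product(DOMAIN, repeat=3), key=energy)

print(map_assignment())  # the minimum-energy assignment
```

Exhaustive enumeration is exponential in the number of variables, which is precisely why the complete (tree search) and incomplete (local search) methods discussed here are needed at scale.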
Graph-cut and message-passing algorithms like loopy belief propagation and variational approaches (Kolmogorov, 2006; Wainwright and Jordan, 2008; Sontag et al., 2008; 2012; Wang and Koller, 2013) are exact only in some particular cases (e.g., Potts models or tree structures). Local search methods are stochastic algorithms like Gibbs sampling, Guided Local Search (Park, 2002; Hutter et al., 2005), and Stochastic Greedy Search (Mengshoel et al., 2011). Some of them have a theoretical asymptotic proof of convergence, i.e., the optimal solution is guaranteed to be found if infinite time is available. In practice, they may exhibit a better anytime behavior than exact methods on large and difficult problems (Marinescu et al., 2003; Hutter et al., 2005; Mengshoel et al., 2011), i.e., they produce better solutions in less time.