
AAAI 2024

A Reinforcement-Learning-Based Multiple-Column Selection Strategy for Column Generation

Conference Paper · AAAI Technical Track on Constraint Satisfaction and Optimization · Artificial Intelligence

Abstract

Column generation (CG) is one of the most successful approaches for solving large-scale linear programming (LP) problems. Given an LP with a prohibitively large number of variables (i.e., columns), the idea of CG is to explicitly consider only a subset of columns and iteratively add potential columns to improve the objective value. While adding the column with the most negative reduced cost can guarantee the convergence of CG, it has been shown that adding multiple columns per iteration rather than a single column can lead to faster convergence. However, it remains a challenge to design a multiple-column selection strategy to select the most promising columns from a large number of candidate columns. In this paper, we propose a novel reinforcement-learning-based (RL) multiple-column selection strategy. To the best of our knowledge, it is the first RL-based multiple-column selection strategy for CG. The effectiveness of our approach is evaluated on two sets of problems: the cutting stock problem and the graph coloring problem. Compared to several widely used single-column and multiple-column selection strategies, our RL-based multiple-column selection strategy leads to faster convergence and achieves remarkable reductions in the number of CG iterations and runtime.
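The selection rules contrasted in the abstract can be sketched in a few lines. The snippet below is an illustrative sketch only: the candidate representation and `reduced_cost` field are assumptions, and the greedy top-k rule stands in for the baseline multiple-column strategies; the paper's contribution is an RL policy that replaces such a greedy rule.

```python
def single_column_select(candidates):
    """Classic rule: add only the column with the most negative reduced
    cost (guarantees convergence, but may need many CG iterations)."""
    best = min(candidates, key=lambda c: c["reduced_cost"])
    return [best] if best["reduced_cost"] < 0 else []

def multiple_column_select(candidates, k=3):
    """Baseline multi-column rule: add up to k columns with negative
    reduced cost, preferring the most negative ones. The paper replaces
    this greedy choice with a learned RL policy."""
    negative = [c for c in candidates if c["reduced_cost"] < 0]
    negative.sort(key=lambda c: c["reduced_cost"])
    return negative[:k]

# Toy candidate pool; names and values are made up for illustration.
candidates = [
    {"name": "c1", "reduced_cost": -4.0},
    {"name": "c2", "reduced_cost": -1.5},
    {"name": "c3", "reduced_cost": 0.7},   # non-improving: never added
    {"name": "c4", "reduced_cost": -2.2},
]
```

In a full CG loop, the selected columns would be appended to the restricted master LP, the LP re-solved, and the candidate pool regenerated by the pricing subproblem until no negative-reduced-cost column remains.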

Authors

Keywords

  • CSO: Constraint Optimization
  • CSO: Constraint Programming
  • CSO: Mixed Discrete/Continuous Optimization
  • CSO: Solvers and Tools
  • PRS: Mixed Discrete/Continuous Planning
  • SO: Combinatorial Optimization

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28718
Paper id
14028058464094078