IJCAI 2017

High Dimensional Bayesian Optimization using Dropout

Conference Paper | Machine Learning | Artificial Intelligence

Abstract

Scaling Bayesian optimization to high dimensions is a challenging task, as the global optimization of a high-dimensional acquisition function can be expensive and often infeasible. Existing methods depend either on a limited set of “active” variables or on an additive form of the objective function. We propose a new method for high-dimensional Bayesian optimization that uses a dropout strategy to optimize only a subset of variables at each iteration. We derive theoretical bounds on the regret and show how they inform the design of our algorithm. We demonstrate the efficacy of our algorithms on two benchmark functions and on two real-world applications: training cascade classifiers and optimizing alloy composition.
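The core idea can be illustrated with a short sketch: at each iteration, fit the surrogate and optimize the acquisition function over a random subset of coordinates only, then fill in the remaining coordinates before evaluating the objective. The code below is not the authors' implementation; the lower-confidence-bound acquisition, the random-candidate acquisition search, and the "copy the incumbent" fill-in rule are illustrative assumptions, with scikit-learn supplying the Gaussian-process surrogate.

```python
# A minimal sketch of dropout-style Bayesian optimization, assuming an
# LCB acquisition and a "copy the incumbent" fill-in rule for the
# left-out coordinates; neither detail is taken from the paper itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def dropout_bo(f, dim, n_iter=50, subset_size=5, n_candidates=2000, seed=0):
    """Minimize f over [0, 1]^dim, modelling and optimizing only a
    random subset of coordinates at each iteration."""
    rng = np.random.default_rng(seed)
    X = rng.random((5, dim))                       # small initial design
    y = np.array([f(x) for x in X])

    for _ in range(n_iter):
        incumbent = X[np.argmin(y)]                # best point so far
        active = rng.choice(dim, size=subset_size, replace=False)

        # Fit the GP surrogate on the active coordinates only, so the
        # acquisition is optimized in subset_size dimensions, not dim.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X[:, active], y)

        # Lower confidence bound, minimized over random candidates.
        cand = rng.random((n_candidates, subset_size))
        mu, sd = gp.predict(cand, return_std=True)
        x_active = cand[np.argmin(mu - 2.0 * sd)]

        # Fill-in step: left-out coordinates are copied from the incumbent.
        x_new = incumbent.copy()
        x_new[active] = x_active
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))

    return X[np.argmin(y)], y.min()

if __name__ == "__main__":
    # Toy usage: a 20-dimensional sphere function with its optimum at 0.5.
    sphere = lambda x: float(np.sum((x - 0.5) ** 2))
    x_best, y_best = dropout_bo(sphere, dim=20)
    print(f"best value found: {y_best:.4f}")
```

Because only subset_size coordinates enter the surrogate, the acquisition search scales with the size of the subset rather than the full dimensionality, which is the source of the tractability the abstract describes.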

Authors

Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Alistair Shilton, Svetha Venkatesh

Keywords

  • Constraints and Satisfiability: Constraint Optimisation
  • Machine Learning: Active Learning
  • Machine Learning: Machine Learning

Context

Venue
International Joint Conference on Artificial Intelligence
Archive span
1969-2025
Indexed papers
14525
Paper id
1004437021900975831