
NeurIPS 2016

The Parallel Knowledge Gradient Method for Batch Bayesian Optimization

Conference Paper · Artificial Intelligence · Machine Learning

Abstract

In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g., when evaluating the performance of several different neural network architectures in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm: the parallel knowledge gradient method. By construction, this method provides the one-step Bayes-optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy.
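To make the idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the loop the abstract describes: a Gaussian-process surrogate is fit to noisy observations, the parallel knowledge gradient of a candidate batch is estimated by Monte Carlo over "fantasy" outcomes sampled jointly at the batch points, and the highest-scoring batch is evaluated in parallel. The paper optimizes the batch with an efficient gradient-based strategy; this sketch merely scores randomly drawn candidate batches over a discretized domain, and all names (qkg_estimate, X_grid, the toy objective) are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def qkg_estimate(gp, X, y, batch, X_grid, n_fantasy=16, seed=0):
    """Monte Carlo estimate of the parallel knowledge gradient of a batch,
    for minimization:

        q-KG(batch) = min_x mu_n(x) - E[ min_x mu_{n+q}(x) ],

    where the expectation is over fantasy observations sampled at the q
    batch points from the current GP posterior."""
    best_now = gp.predict(X_grid).min()
    # Jointly sample correlated fantasy outcomes at the q batch points.
    fantasies = gp.sample_y(batch, n_samples=n_fantasy, random_state=seed)
    new_minima = []
    for j in range(n_fantasy):
        # Condition on one fantasy by refitting with hyperparameters frozen
        # (optimizer=None keeps the kernel fixed at the fitted values).
        gp_f = GaussianProcessRegressor(kernel=gp.kernel_, alpha=gp.alpha,
                                        optimizer=None, normalize_y=True)
        gp_f.fit(np.vstack([X, batch]),
                 np.concatenate([y, fantasies[:, j]]))
        new_minima.append(gp_f.predict(X_grid).min())
    return best_now - np.mean(new_minima)

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * x[:, 0] ** 2  # toy objective
noise = 0.1                                             # evaluation noise

X = rng.uniform(-2, 2, size=(5, 1))                     # initial design
y = f(X) + noise * rng.standard_normal(len(X))
X_grid = np.linspace(-2, 2, 200).reshape(-1, 1)         # discretized domain

for it in range(5):                                     # 5 rounds of batches
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise**2,
                                  normalize_y=True)
    gp.fit(X, y)
    # Score 30 random candidate batches of size q = 4; a fixed seed per
    # round gives common random numbers across candidates (lower variance).
    candidates = [rng.uniform(-2, 2, size=(4, 1)) for _ in range(30)]
    scores = [qkg_estimate(gp, X, y, b, X_grid, seed=it) for b in candidates]
    batch = candidates[int(np.argmax(scores))]
    y_batch = f(batch) + noise * rng.standard_normal(len(batch))  # parallel evals
    X, y = np.vstack([X, batch]), np.concatenate([y, y_batch])
    print(f"round {it}: best observed value = {y.min():.3f}")
```

Note the design choice the abstract highlights: unlike acquisition functions that value only the sampled points themselves, the knowledge gradient rewards a batch for how much it improves the posterior minimum anywhere in the domain, which is what makes it robust to noisy evaluations.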

Authors

Jian Wu, Peter I. Frazier

Keywords

No keywords are indexed for this paper.

Context

Venue
Annual Conference on Neural Information Processing Systems
Archive span
1987-2025
Indexed papers
30776
Paper id
963396653138677038