
ICLR 2020

BayesOpt Adversarial Attack

Conference Paper · Poster Presentation · Artificial Intelligence · Machine Learning

Abstract

Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input. Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries. They are therefore unsuitable for real-world systems where the maximum query number is limited due to cost. We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise both the adversarial perturbation and the degree of search-space dimension reduction. We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.
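The query-efficiency argument above rests on Bayesian optimisation: a Gaussian-process surrogate is fitted to the queries made so far, and an acquisition function picks the next perturbation to try. The sketch below illustrates that loop on a toy stand-in objective, not the paper's actual setup: `attack_loss` is a hypothetical quadratic proxy for the victim model's margin, and a lower-confidence-bound acquisition stands in for whatever acquisition the authors use; the GP is a minimal NumPy implementation with a fixed RBF kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2  # toy search-space dimension (the paper tunes this reduction adaptively)

def attack_loss(delta):
    """Hypothetical black-box objective: a quadratic proxy for the victim
    model's margin on the perturbed input; lower is better."""
    return float(np.sum((delta - 0.3) ** 2) - 0.05)

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel between row-vectors of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * np.clip(d2, 0.0, None) / ls**2)

def gp_posterior(X, y, Xs, jitter=1e-5):
    # Exact GP regression posterior mean and std at candidate points Xs.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

# A few random initial queries, then a BO loop: fit the surrogate,
# minimise a lower confidence bound over random candidates, query.
X = rng.uniform(0, 1, size=(5, DIM))
y = np.array([attack_loss(x) for x in X])

for _ in range(20):
    cand = rng.uniform(0, 1, size=(512, DIM))
    mu, sigma = gp_posterior(X, y, cand)
    pick = cand[np.argmin(mu - 2.0 * sigma)]  # explore/exploit trade-off
    X = np.vstack([X, pick])
    y = np.append(y, attack_loss(pick))

best = y.min()
print(f"queries used: {len(y)}, best loss: {best:.4f}")
```

Each loop iteration costs exactly one query to the black box, which is the point of the method: the surrogate, not the victim model, absorbs the cost of exploring candidate perturbations.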

Authors

Keywords

  • Black-box Adversarial Attack
  • Bayesian Optimisation
  • Gaussian Process

Context

Venue
International Conference on Learning Representations
Archive span
2013-2025
Indexed papers
10294
Paper id
658070579656070645