
IJCAI 2025

Accelerating Adversarial Training on Under-Utilized GPU

Conference Paper Computer Vision Artificial Intelligence

Abstract

Deep neural networks are vulnerable to adversarial attacks, and adversarial training has been proposed to defend against such attacks by adaptively generating attacks, i.e., adversarial examples, during training. However, adversarial training is significantly slower than standard training because of the search for the worst-case attack on each minibatch. To speed up adversarial training, existing work has generated attacks on only a subset of each minibatch or reduced the number of steps in the attack search. We propose a novel adversarial training acceleration method, called AttackRider, which exploits under-utilized GPU hardware to reduce the number of calls to attack generation without increasing the time of each call. We characterize the extent of GPU under-utilization for a given GPU and model size, and hence the potential for speedup, and present the application scenarios where this opportunity exists. Results on various machine learning tasks and datasets show that AttackRider can speed up state-of-the-art adversarial training algorithms with comparable robust accuracy. The source code of AttackRider is available at https://github.com/zxzhan/AttackRider.
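For readers unfamiliar with the attack-search step that dominates the cost of adversarial training, the sketch below shows a generic multi-step L-infinity PGD attack on a simple logistic model in NumPy. This is not the paper's implementation: the model, the `pgd_attack` helper, and all hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions; the sketch only shows why attack generation is expensive (several gradient computations per minibatch) and why stacking more examples into one attack call on an under-utilized GPU, which is our reading of the abstract's idea, can cut the number of calls without lengthening each one.

```python
import numpy as np

def logistic_loss(w, x, y):
    # Per-example logistic loss; labels y are in {-1, +1}.
    return np.log1p(np.exp(-y * (x @ w)))

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=10):
    """Illustrative L-inf PGD: repeatedly ascend the input gradient of the
    loss, projecting back into the eps-ball around the clean inputs.
    Each of the `steps` iterations costs one gradient pass, which is why
    attack generation dominates adversarial-training time."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = -y * (x_adv @ w)
        # d(loss)/d(x) = -y * sigmoid(-y * w.x) * w, broadcast over the batch.
        grad = (-y * (1.0 / (1.0 + np.exp(-margin))))[:, None] * w[None, :]
        x_adv = x_adv + alpha * np.sign(grad)           # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)        # project into eps-ball
    return x_adv

# Hypothetical "fused" usage: when the GPU is under-utilized, several
# minibatches can be stacked into one larger attack call, so k minibatches
# need 1 call instead of k (the per-call time stays roughly flat on an
# under-utilized device; on CPU/NumPy this is only a structural illustration).
rng = np.random.default_rng(0)
w = rng.normal(size=5)
batches = [rng.normal(size=(8, 5)) for _ in range(4)]
stacked = np.vstack(batches)                            # 4 minibatches, 1 call
labels = np.where(stacked @ w > 0, 1.0, -1.0)
x_adv = pgd_attack(w, stacked, labels)
```

The attack succeeds in the sketch in the sense that the loss on `x_adv` exceeds the clean loss while every perturbation stays within the `eps` ball enforced by `np.clip`.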

Keywords

  • Machine Learning: ML: Adversarial machine learning
  • Machine Learning: ML: Applications
  • Machine Learning: ML: Robustness

Context

Venue
International Joint Conference on Artificial Intelligence
Archive span
1969-2025
Indexed papers
14525
Paper id
1138811129679544127