
NeurIPS 2024

Adversarially Robust Multi-task Representation Learning

Conference Paper · Main Conference Track · Artificial Intelligence · Machine Learning

Abstract

We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments. Additionally, we provide novel rates for the single-task setting.
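The setting described above can be illustrated with a minimal NumPy sketch: several synthetic source tasks share a ground-truth representation, each task attaches a linear head to a learned shared representation, training uses worst-case ℓ∞-bounded perturbations, and transfer to a data-scarce target task fits only a new head on the frozen representation. All dimensions, the squared loss, the closed-form attack, and the hyperparameters below are illustrative assumptions, not the paper's construction or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks, n = 10, 3, 5, 200    # input dim, representation dim, source tasks, samples/task
eps, lr, epochs = 0.05, 0.05, 300   # attack radius, step size, training passes (illustrative)

# Synthetic source tasks sharing a ground-truth representation B_true.
B_true = rng.standard_normal((d, k)) / np.sqrt(d)
tasks = []
for _ in range(n_tasks):
    w = rng.standard_normal(k)
    X = rng.standard_normal((n, d))
    y = X @ B_true @ w + 0.01 * rng.standard_normal(n)
    tasks.append((X, y))

def adv_examples(X, y, B, w, eps):
    """Worst-case l_inf perturbation for squared loss on a linear predictor:
    delta_i = eps * sign(residual_i) * sign(B w), elementwise (closed form)."""
    resid = X @ B @ w - y
    return X + eps * np.sign(np.outer(resid, B @ w))

def robust_mse(X, y, B, w, eps):
    Xa = adv_examples(X, y, B, w, eps)
    return np.mean((Xa @ B @ w - y) ** 2)

# Jointly train the shared representation B and per-task heads on adversarial examples.
B = 0.1 * rng.standard_normal((d, k))
heads = [np.zeros(k) for _ in range(n_tasks)]
loss_before = np.mean([robust_mse(X, y, B, w, eps) for (X, y), w in zip(tasks, heads)])
for _ in range(epochs):
    for t, (X, y) in enumerate(tasks):
        Xa = adv_examples(X, y, B, heads[t], eps)
        resid = Xa @ B @ heads[t] - y
        B -= lr * (Xa.T @ np.outer(resid, heads[t])) / n       # gradient step on B
        heads[t] -= lr * (Xa @ B).T @ resid / n                # gradient step on head t
loss_after = np.mean([robust_mse(X, y, B, w, eps) for (X, y), w in zip(tasks, heads)])

# Transfer: on a data-scarce target task, freeze B and fit only a linear head.
w_tgt_true = rng.standard_normal(k)
X_tgt = rng.standard_normal((20, d))
y_tgt = X_tgt @ B_true @ w_tgt_true + 0.01 * rng.standard_normal(20)
w_tgt, *_ = np.linalg.lstsq(X_tgt @ B, y_tgt, rcond=None)
```

The target head solves a k-dimensional rather than d-dimensional least-squares problem, which is the sense in which a representation learned on diverse source tasks helps when target data is scarce.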

Keywords

No keywords are indexed for this paper.

Context

Venue
Annual Conference on Neural Information Processing Systems
Archive span
1987-2025
Indexed papers
30776
Paper id
52371227013395929