
AAAI 2021

Adaptive Gradient Methods for Constrained Convex Optimization and Variational Inequalities

Conference Paper · AAAI Technical Track on Machine Learning I · Artificial Intelligence

Abstract

We provide new adaptive first-order methods for constrained convex optimization. Our main algorithms AdaACSA and AdaAGD+ are accelerated methods, which are universal in the sense that they achieve nearly-optimal convergence rates for both smooth and non-smooth functions, even when they only have access to stochastic gradients. In addition, they do not require any prior knowledge on how the objective function is parametrized, since they automatically adjust their per-coordinate learning rate. These can be seen as truly accelerated AdaGrad methods for constrained optimization. We complement them with a simpler algorithm AdaGrad+ which enjoys the same features, and achieves the standard non-accelerated convergence rate. We also present a set of new results involving adaptive methods for unconstrained optimization and monotone operators.
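The paper's algorithms are not spelled out on this page. As a rough illustration of the kind of method the abstract describes, here is a minimal Python sketch of a projected diagonal-AdaGrad update with per-coordinate step sizes. The names (`projected_adagrad`, `grad`, `project`) are placeholders of mine, and the plain Euclidean projection is a simplification: the paper's AdaGrad+ handles the constraint through the norm induced by the adaptive diagonal preconditioner, and AdaACSA/AdaAGD+ add acceleration on top.

```python
import numpy as np

def projected_adagrad(grad, project, x0, T, eta=1.0, eps=1e-8):
    """Simplified sketch: diagonal AdaGrad with projection onto a convex set.

    grad(x)    -> a (possibly stochastic) gradient at x
    project(x) -> Euclidean projection onto the feasible set
    """
    x = x0.copy()
    g_sq = np.zeros_like(x)   # running sum of squared per-coordinate gradients
    avg = np.zeros_like(x)    # running average of iterates
    for t in range(1, T + 1):
        g = grad(x)
        g_sq += g * g
        # Per-coordinate step sizes adapt to the observed gradients, so no
        # smoothness constant or coordinate scaling must be known in advance.
        x = project(x - eta * g / (np.sqrt(g_sq) + eps))
        avg += (x - avg) / t
    return avg
```

For a simple box constraint, `project` can be as plain as `lambda x: np.clip(x, lo, hi)`; the averaged iterate is returned because convergence guarantees for such methods are typically stated for iterate averages.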

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980-2026
Indexed papers: 28718
Paper id: 68486968568527720