
ICLR 2025

LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics

Conference Paper · Accept (Poster) · Artificial Intelligence · Machine Learning

Abstract

We introduce LDAdam, a memory-efficient optimizer for training large models that performs adaptive optimization steps within lower-dimensional subspaces while consistently exploring the full parameter space during training. This strategy keeps the optimizer's memory footprint to a fraction of the model size. LDAdam relies on a new projection-aware update rule for the optimizer states that allows for transitioning between subspaces, i.e., for estimating the statistics of the projected gradients. To mitigate errors due to low-rank projection, LDAdam integrates a new generalized error-feedback mechanism that explicitly accounts for both gradient and optimizer-state compression. We prove the convergence of LDAdam under standard assumptions and provide empirical evidence that LDAdam allows for efficient fine-tuning and pre-training of language models.
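The abstract describes the mechanism only in prose, so here is a minimal NumPy sketch of the general idea: project the error-corrected gradient onto a low-rank basis, rotate the stored moment estimates into the new subspace, run Adam-style moment updates on the low-dimensional statistics, and keep the projection residual in an error-feedback buffer. This is an illustrative approximation under assumed details (rank choice via a per-step SVD, the moment-rotation heuristic, omitted bias correction), not the paper's exact algorithm.

```python
import numpy as np

def ldadam_like_step(W, grad, state, rank=4, lr=1e-3,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    # Error feedback: re-inject what earlier projections discarded.
    g = grad + state["error"]

    # Pick a fresh low-rank basis from the current (corrected) gradient.
    U, _, _ = np.linalg.svd(g, full_matrices=False)
    P = U[:, :rank]                         # orthonormal basis, shape (d, r)

    # Projection-aware transition: carry old moments into the new basis.
    # (Illustrative heuristic; the paper derives its own transition rule.)
    if state["P"] is not None:
        R = P.T @ state["P"]                # basis-change matrix, shape (r, r)
        state["m"] = R @ state["m"]
        state["v"] = (R ** 2) @ state["v"]  # crude second-moment transport
    state["P"] = P

    g_low = P.T @ g                         # projected gradient, shape (r, n)
    state["error"] = g - P @ g_low          # residual kept for the next step

    # Adam-style moments maintained on r-dimensional statistics only.
    state["m"] = beta1 * state["m"] + (1 - beta1) * g_low
    state["v"] = beta2 * state["v"] + (1 - beta2) * g_low ** 2
    step_low = state["m"] / (np.sqrt(state["v"]) + eps)

    # Map the low-rank update back to the full parameter space.
    return W - lr * (P @ step_low)

# Toy usage on a random 64x32 weight matrix with rank-4 statistics.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
state = {"m": np.zeros((4, 32)), "v": np.zeros((4, 32)),
         "error": np.zeros_like(W), "P": None}
for _ in range(10):
    grad = rng.normal(size=W.shape)         # stand-in for a real gradient
    W = ldadam_like_step(W, grad, state)
```

Note how the memory claim shows up directly in the sketch: the moment buffers are (rank × n) rather than (d × n), so the optimizer state shrinks roughly by a factor of d/rank, while the error buffer preserves information about the directions outside the current subspace.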

Authors

Keywords

  • adaptive optimization
  • memory efficiency
  • low-rank learning
  • low-rank compression
  • convergence rates

Context

Venue
International Conference on Learning Representations
Archive span
2013-2025
Indexed papers
10294
Paper id
665891039702915629