
AAAI 2021

Learning a Gradient-free Riemannian Optimizer on Tangent Spaces

Conference Paper · AAAI Technical Track on Machine Learning I · Artificial Intelligence

Abstract

A principal way of addressing constrained optimization problems is to model them as problems on Riemannian manifolds. Recently, Riemannian meta-optimization has emerged as a promising way to solve constrained optimization problems by learning optimizers on Riemannian manifolds in a data-driven fashion, making it possible to design task-specific constrained optimizers. A closer look at Riemannian meta-optimization reveals that learning optimizers on Riemannian manifolds requires differentiating through the nonlinear Riemannian optimization, which is complex and computationally expensive. In this paper, we propose a simple yet efficient Riemannian meta-optimization method that learns to optimize on tangent spaces of manifolds. To this end, we present a gradient-free optimizer on tangent spaces, which takes the parameters of the model along with the training data as inputs and directly generates the updated parameters. As a result, the constrained optimization is transferred from Riemannian manifolds to tangent spaces, where complex Riemannian operations (e.g., retraction operations) are removed from the optimizer, and learning the optimizer does not require differentiating through the Riemannian optimization. We empirically show that our method enables efficient learning of the optimizer while enjoying a good optimization trajectory in a data-driven manner.
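The abstract compresses the key mechanism into a few sentences, so a minimal sketch may help. The example below is a hypothetical illustration on the unit sphere, not the paper's actual architecture: the stand-in "learned" linear map `W`, the batch shape, and all function names are assumptions. It contrasts a conventional Riemannian SGD step, which meta-learning must differentiate through, with a gradient-free update produced directly on the tangent space.

```python
import numpy as np

# Toy manifold: the unit sphere S^2 in R^3 (constraint ||x|| = 1).
# Tangent space at x: vectors v with <x, v> = 0.

def project_to_tangent(x, v):
    """Project an ambient vector v onto the tangent space at x."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Map a tangent vector back onto the sphere (projection retraction)."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_sgd_step(x, euclidean_grad, lr=0.1):
    """Conventional step: project the gradient, then retract.
    Meta-learning such an optimizer means backpropagating through
    both nonlinear operations, which is what the paper avoids."""
    rgrad = project_to_tangent(x, euclidean_grad)
    return retract(x, -lr * rgrad)

# Gradient-free tangent-space optimizer in the spirit of the abstract:
# a learned map takes (parameters, data) as inputs and outputs a tangent
# vector directly; no gradients and no retractions live inside it.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(3, 6))  # stand-in for learned weights

def learned_tangent_step(x, batch):
    features = np.concatenate([x, batch])    # model parameters + training data
    v = project_to_tangent(x, W @ features)  # update proposed on the tangent space
    return retract(x, v)  # one fixed map back, outside the learned optimizer

x = np.array([1.0, 0.0, 0.0])
x_new = learned_tangent_step(x, rng.normal(size=3))
print(np.linalg.norm(x_new))  # ~1.0: the manifold constraint is preserved
```

Under these assumptions, the learned map itself contains no Riemannian operations, so training it does not require differentiating through the nonlinear Riemannian update, which is the efficiency gain the abstract describes.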

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980-2026
Indexed papers: 28718
Paper id: 200029587330029021