
NeurIPS 2025

Dynamic Regret Reduces to Kernelized Static Regret

Conference Paper · Main Conference Track · Artificial Intelligence · Machine Learning

Abstract

We study dynamic regret in online convex optimization, where the objective is to achieve low cumulative loss relative to an arbitrary benchmark sequence. By observing that competing with an arbitrary sequence of comparators $u_{1}, \ldots, u_{T}$ in $\mathcal{W}\subseteq\mathbb{R}^{d}$ can be reframed as competing with a *fixed* comparator *function* $u: [1, T]\to \mathcal{W}$, we cast dynamic regret minimization as a *static regret* problem in a *function space*. By carefully constructing a suitable function space in the form of a Reproducing Kernel Hilbert Space (RKHS), our reduction enables us to recover the optimal $R_{T}(u_{1}, \ldots, u_{T}) = \mathcal{O}\big(\sqrt{\sum_{t}\|u_{t}-u_{t-1}\|\,T}\big)$ dynamic regret guarantee in the setting of linear losses, and yields new scale-free and directionally-adaptive dynamic regret guarantees. Moreover, unlike prior dynamic-to-static reductions---which are valid only for linear losses---our reduction holds for *any* sequence of losses, allowing us to recover $\mathcal{O}\big(\|u\|^2_{\mathcal{H}}+d_{\mathrm{eff}}(\lambda)\ln T\big)$ bounds when the losses have meaningful curvature, where $d_{\mathrm{eff}}(\lambda)$ is a measure of the complexity of the RKHS. Despite working in an infinite-dimensional space, the resulting reduction leads to algorithms that are computable in practice, thanks to the reproducing property of RKHSs.
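The computability claim in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simple RBF kernel on the time axis and runs *unprojected* kernelized online gradient descent on linear losses, where the RKHS iterate $f_t = -\eta\sum_{j<t} g_j\, k(j,\cdot)$ is never stored explicitly; by the reproducing property, the play at round $t$ is just the kernel sum $w_t = f_t(t)$. The kernel, bandwidth, and step size are illustrative choices.

```python
import numpy as np

def rbf_kernel(s, t, bandwidth=10.0):
    # Scalar kernel on the time axis; the comparator "function"
    # u: [1, T] -> R^d lives in the induced vector-valued RKHS.
    return np.exp(-((s - t) ** 2) / (2 * bandwidth ** 2))

def kernelized_ogd(grads_fn, T, d, eta=0.1):
    """Unprojected kernelized OGD on linear losses <g_t, w> (a sketch,
    not the paper's method). The RKHS iterate f_t is represented only
    through its dual coefficients g_1, ..., g_{t-1}: the play at round
    t is w_t = f_t(t) = -eta * sum_{j<t} k(j, t) * g_j."""
    past_grads = []  # dual coefficients observed so far
    plays = []
    for t in range(1, T + 1):
        # Evaluate f_t at the current time index via kernel sums.
        w_t = np.zeros(d)
        for j, g_j in enumerate(past_grads, start=1):
            w_t -= eta * rbf_kernel(j, t) * g_j
        plays.append(w_t)
        past_grads.append(grads_fn(t, w_t))  # observe gradient g_t
    return plays

# Toy run: gradients flip sign halfway, mimicking a drifting comparator.
T, d = 50, 2
grads = lambda t, w: (np.ones(d) if t <= T // 2 else -np.ones(d))
plays = kernelized_ogd(grads, T, d)
```

Because the kernel decays with the gap |s - t|, old gradients stop influencing current plays, which is what lets a single static-regret algorithm in the RKHS track a moving comparator sequence.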


Context

Venue
Annual Conference on Neural Information Processing Systems