
EWRL 2025

Sparse Optimistic Information Directed Sampling

Workshop Paper · EWRL 2025 Poster
Artificial Intelligence · Machine Learning · Reinforcement Learning

Abstract

Many high-dimensional online decision-making problems can be modeled as stochastic sparse linear bandits. Most existing algorithms are designed to achieve optimal worst-case regret in either the data-rich regime, where polynomial dependence on the ambient dimension is unavoidable, or the data-poor regime, where dimension-independence is possible at the cost of worse dependence on the number of rounds. In contrast, the Bayesian approach of Information Directed Sampling (IDS) achieves the best of both worlds: a Bayesian regret bound that has the optimal rate in both regimes simultaneously. In this work, we explore the use of Sparse Optimistic Information Directed Sampling (SOIDS) to achieve the best of both worlds in the worst-case setting, without Bayesian assumptions. Through a novel analysis that enables the use of a time-dependent learning rate, we show that SOIDS can be tuned without prior knowledge to optimally balance information and regret. Our results extend the theoretical guarantees of IDS, providing the first algorithm that simultaneously achieves optimal worst-case regret in both the data-rich and data-poor regimes. We empirically demonstrate the strong performance of SOIDS.
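The balance between information and regret that the abstract describes is the core of the IDS selection rule. As a rough illustration only (not the paper's SOIDS algorithm, which uses a randomized rule with a time-dependent learning rate), a deterministic variant picks the action minimizing the information ratio: squared estimated regret divided by estimated information gain. The per-action estimates below are hypothetical numbers, not from the paper.

```python
import numpy as np

def ids_action(regret_est, info_gain, eps=1e-12):
    """Deterministic information-ratio sketch: pick the action that
    minimizes (estimated regret)^2 / (estimated information gain).
    The paper's SOIDS instead optimizes over action distributions
    with a time-dependent learning rate; this toy version omits that."""
    ratio = regret_est ** 2 / (info_gain + eps)
    return int(np.argmin(ratio))

# Hypothetical per-action estimates for a 3-armed problem.
delta = np.array([0.5, 0.1, 0.3])   # estimated instantaneous regret
gain = np.array([0.2, 0.05, 0.4])   # estimated information gain
chosen = ids_action(delta, gain)    # arm 1: ratio 0.01/0.05 = 0.2 is smallest
```

An action with high estimated regret can still be selected if it is sufficiently informative, which is how IDS-style methods trade exploration against exploitation.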

Authors

Keywords

  • Bayesian methods
  • Information Directed Sampling
  • regret minimization
  • sparse linear models

Context

Venue
European Workshop on Reinforcement Learning
Archive span
2008-2025
Indexed papers
649
Paper id
563941801677376005