
ICLR 2021

Optimism in Reinforcement Learning with Generalized Linear Function Approximation

Conference Paper · Poster Presentations · Artificial Intelligence · Machine Learning

Abstract

We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation. We analyze the algorithm under a new expressivity assumption that we call "optimistic closure," which is strictly weaker than assumptions from prior analyses for the linear setting. With optimistic closure, we prove that our algorithm enjoys a regret bound of $\widetilde{O}\left(H\sqrt{d^3 T}\right)$ where $H$ is the horizon, $d$ is the dimensionality of the state-action features, and $T$ is the number of episodes. This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions.
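As a rough illustration of the setting (using standard notation rather than the paper's own construction), generalized linear function approximation models the action-value function through a known link function $f$ applied to a linear score in the feature map $\phi(s,a) \in \mathbb{R}^d$, and optimism is typically injected through an elliptic-norm bonus in the spirit of optimistic least-squares value iteration:

$$\widehat{Q}_h(s,a) \;=\; \min\Big\{\, f\big(\langle \phi(s,a), \widehat{\theta}_h \rangle\big) \;+\; \beta\,\big\|\phi(s,a)\big\|_{\Lambda_h^{-1}},\; H \,\Big\}, \qquad \Lambda_h \;=\; \lambda I \;+\; \sum_{\tau} \phi(s_h^{\tau}, a_h^{\tau})\,\phi(s_h^{\tau}, a_h^{\tau})^{\top},$$

where $\widehat{\theta}_h$ is an estimated parameter, $\beta$ and $\lambda$ are confidence and regularization parameters, and the sum runs over previously observed transitions at step $h$. This is a sketch of the general template only; the exact estimator, bonus, and the role of the optimistic closure assumption are specified in the paper.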

Authors

Keywords

  • reinforcement learning
  • optimism
  • exploration
  • function approximation
  • theory
  • regret analysis
  • provable sample efficiency

Context

Venue
International Conference on Learning Representations
Archive span
2013-2025
Indexed papers
10294
Paper id
528682338867378172