
NeurIPS 2025

Offline Actor-Critic for Average Reward MDPs

Conference Paper · Main Conference Track · Artificial Intelligence · Machine Learning

Abstract

We study offline policy optimization for infinite-horizon average-reward Markov decision processes (MDPs) with large or infinite state spaces. Specifically, we propose a pessimistic actor-critic algorithm that uses a computationally efficient linear function class for value function estimation. At the core of our method is a critic that computes a pessimistic estimate of the average reward under the current policy, together with the corresponding policy gradient, by solving a single fixed-point Bellman equation rather than a sequence of regression problems as in finite-horizon settings. This procedure reduces to a second-order cone program, which is computationally tractable. Our theoretical analysis rests on a weak partial data coverage assumption, which requires only that the offline data aligns well with the expected feature vector of a comparator policy. Under this condition, we show that our algorithm achieves the optimal sample complexity of O(ε⁻²) for learning a near-optimal policy, up to model misspecification errors.
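The abstract's core construction can be made concrete. In the average-reward setting, the critic's fixed-point Bellman equation reads ρ^π + Q^π(s, a) = r(s, a) + E_{s'∼P(·|s,a), a'∼π}[Q^π(s', a')]; with a linear critic Q_w(s, a) = φ(s, a)ᵀw, constraining the empirical feature-weighted Bellman residual to a small ball and minimizing ρ over all models the data cannot rule out yields a second-order cone program, as the abstract states. Below is a minimal sketch of this idea in Python using cvxpy; the function name, the particular ℓ2-ball constraint, and the parameters beta and w_bound are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

def pessimistic_avg_reward(phi, phi_next_pi, rewards, beta=0.1, w_bound=10.0):
    """Pessimistic critic sketch (assumed form, not the paper's algorithm):
    the smallest average reward rho consistent with the empirical
    average-reward Bellman fixed point on the offline data.

    phi         : (n, d) features phi(s_i, a_i) of logged state-action pairs
    phi_next_pi : (n, d) expected next features E_{a' ~ pi}[phi(s'_i, a')]
    rewards     : (n,)  logged rewards r_i
    beta        : radius of the Bellman-error confidence ball (assumed)
    w_bound     : norm bound on the linear critic weights (assumed)
    """
    n, d = phi.shape
    w = cp.Variable(d)    # linear critic: Q_w(s, a) = phi(s, a)^T w
    rho = cp.Variable()   # average-reward estimate

    # Empirical Bellman residual of the average-reward fixed point:
    #   delta_i = r_i - rho + phi(s'_i, pi)^T w - phi(s_i, a_i)^T w
    delta = rewards - rho + (phi_next_pi - phi) @ w

    constraints = [
        # Feature-weighted Bellman error confined to an l2 ball:
        # a second-order cone constraint, so the whole problem is an SOCP.
        cp.norm(phi.T @ delta / n, 2) <= beta,
        cp.norm(w, 2) <= w_bound,
    ]
    # Pessimism: minimize rho over all (rho, w) the data cannot rule out.
    cp.Problem(cp.Minimize(rho), constraints).solve()
    return rho.value, w.value
```

Minimizing ρ subject to the confidence constraint is what makes the estimate pessimistic: every (ρ, w) pair the data cannot distinguish remains feasible, and the critic reports the least favorable one, matching the abstract's description of a fixed-point computation in place of successive regressions.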

Authors

No authors are indexed for this paper.

Keywords

No keywords are indexed for this paper.

Context

Venue: Annual Conference on Neural Information Processing Systems
Archive span: 1987–2025
Indexed papers: 30,776
Paper id: 211618223459021981