
EWRL 2018

Neural Value Function Approximation in Continuous State Reinforcement Learning Problems

Workshop Paper Accepted Paper Artificial Intelligence · Machine Learning · Reinforcement Learning

Abstract

Recent developments in Deep Reinforcement Learning (DRL) have demonstrated the superior performance of neural networks in solving challenging problems with large or continuous state spaces. In this work, we focus on the problem of minimising the expected one-step Temporal Difference (TD) error with a neural function approximator over a continuous state space, from a smooth optimisation perspective. An approximate Newton's algorithm is proposed. The effectiveness of the algorithm is demonstrated on both finite and continuous state space benchmarks. We show that, in order to benefit from the second-order approximate Newton's algorithm, the gradient of the TD target needs to be considered during training.
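The abstract's final point, that the gradient of the TD target must be considered, can be illustrated with a minimal sketch. The code below is not the paper's algorithm (which uses a neural approximator and an approximate Newton step); it is a simplified linear example, with hypothetical names, contrasting the full gradient of the squared one-step TD error (which differentiates through the target r + γV(s')) against the common semi-gradient update that treats the target as a constant.

```python
import numpy as np

# V(s) = w . phi(s), a linear value approximator (simplification for illustration).

def td_error(w, phi_s, phi_s2, r, gamma):
    # One-step TD error: delta = r + gamma * V(s') - V(s)
    return r + gamma * np.dot(w, phi_s2) - np.dot(w, phi_s)

def full_gradient(w, phi_s, phi_s2, r, gamma):
    # Gradient of 0.5 * delta^2 differentiating through the TD target as well:
    # grad = delta * (gamma * phi(s') - phi(s))
    delta = td_error(w, phi_s, phi_s2, r, gamma)
    return delta * (gamma * phi_s2 - phi_s)

def semi_gradient(w, phi_s, phi_s2, r, gamma):
    # Standard TD(0) semi-gradient: the target is treated as a constant,
    # so only -delta * phi(s) appears.
    delta = td_error(w, phi_s, phi_s2, r, gamma)
    return -delta * phi_s

# Toy transition with two one-hot features.
w = np.zeros(2)
phi_s = np.array([1.0, 0.0])
phi_s2 = np.array([0.0, 1.0])
r, gamma, lr = 1.0, 0.9, 0.1

for _ in range(200):
    w -= lr * full_gradient(w, phi_s, phi_s2, r, gamma)

print(td_error(w, phi_s, phi_s2, r, gamma))  # residual TD error shrinks toward 0
```

With the full gradient, each step scales the TD error by (1 − lr·(1 + γ²)), so it contracts geometrically; a second-order method such as the paper's approximate Newton scheme relies on this full-gradient view of the objective, which is why the target's gradient cannot be dropped.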

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
European Workshop on Reinforcement Learning
Archive span
2008-2025
Indexed papers
649
Paper id
434477696081779267