
ICAART 2009

State Aggregation for Reinforcement Learning using Neuroevolution

Conference Paper · Artificial Intelligence · Multi-Agent Systems

Abstract

In this paper, we present a new machine learning algorithm, RL-SANE, which uses a combination of neuroevolution (NE) and traditional reinforcement learning (RL) techniques to improve learning performance. RL-SANE is an innovative combination of the neuroevolutionary algorithm NEAT (Stanley, 2004) and the RL algorithm Sarsa(λ) (Sutton and Barto, 1998). It uses the special ability of NEAT to generate and train customized neural networks that provide a means for reducing the size of the state space through state aggregation. Reducing the size of the state space through aggregation enables Sarsa(λ) to be applied to much more difficult problems than standard tabular-based approaches. Previous similar work in this area, such as that of Whiteson and Stone (Whiteson and Stone, 2006) and Stanley and Miikkulainen (Stanley and Miikkulainen, 2001), has shown positive and promising results. This paper gives a brief overview of neuroevolutionary methods, introduces the RL-SANE algorithm, presents a comparative analysis of RL-SANE against other neuroevolutionary algorithms, and concludes with a discussion of enhancements that need to be made to RL-SANE.
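To make the core idea concrete, here is a minimal sketch of how an evolved network can aggregate a continuous state into a small discrete set that a tabular Sarsa(λ) learner can handle. All names, the bucket count, and the single-unit "network" are assumptions for illustration, not the authors' implementation (RL-SANE evolves full NEAT topologies, not a fixed linear unit):

```python
import math
import random

# Hypothetical illustration of the state-aggregation idea behind RL-SANE:
# a network (here a stand-in single sigmoid unit; in RL-SANE, a NEAT-evolved
# network) maps a raw continuous state to ONE aggregate state index, and
# Sarsa(lambda) learns a small tabular value function over those indices.

N_BUCKETS = 10          # number of aggregate states the network can emit
N_ACTIONS = 2
ALPHA, GAMMA, LAMB = 0.1, 0.99, 0.9

def aggregate_state(raw_state, weights):
    """Stand-in for the evolved network: a linear unit whose sigmoid
    output is discretised into one of N_BUCKETS aggregate states."""
    z = sum(w * x for w, x in zip(weights, raw_state))
    y = 1.0 / (1.0 + math.exp(-z))            # sigmoid output in (0, 1)
    return min(int(y * N_BUCKETS), N_BUCKETS - 1)

# Tabular Sarsa(lambda) over the aggregated state space.
Q = [[0.0] * N_ACTIONS for _ in range(N_BUCKETS)]
E = [[0.0] * N_ACTIONS for _ in range(N_BUCKETS)]   # eligibility traces

def sarsa_lambda_step(s, a, reward, s2, a2):
    """One Sarsa(lambda) backup for the transition s,a -> s2,a2."""
    delta = reward + GAMMA * Q[s2][a2] - Q[s][a]
    E[s][a] += 1.0                             # accumulating trace
    for i in range(N_BUCKETS):
        for j in range(N_ACTIONS):
            Q[i][j] += ALPHA * delta * E[i][j]
            E[i][j] *= GAMMA * LAMB

# Toy usage: random weights stand in for an evolved genome.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]
s = aggregate_state([0.2, -0.5, 1.0], weights)
s2 = aggregate_state([0.3, -0.4, 0.9], weights)
sarsa_lambda_step(s, 0, 1.0, s2, 1)
```

The key point the sketch shows is the division of labour: the evolutionary search only has to find a mapping from raw states to useful aggregate states, while value learning over those aggregates is left to ordinary temporal-difference updates.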

Keywords

  • Reinforcement learning
  • NeuroEvolution
  • Evolutionary algorithms
  • State aggregation

Context

Venue
International Conference on Agents and Artificial Intelligence
Archive span
2009-2025