
AAMAS 2011

Metric Learning for Reinforcement Learning Agents

Conference Paper
Session A6 - Robotics and Learning
Autonomous Agents and Multiagent Systems

Abstract

A key component of any reinforcement learning algorithm is the underlying representation used by the agent. While reinforcement learning (RL) agents have typically relied on hand-coded state representations, there has been growing interest in learning this representation. While the inputs to an agent are typically fixed (i.e., state variables correspond to sensors on a robot), it is desirable to automatically determine the optimal relative scaling of those inputs, as well as to diminish the impact of irrelevant features. This work introduces HOLLER, a novel distance metric learning algorithm, and combines it with an existing instance-based RL algorithm to achieve precisely these goals. The algorithms' success is demonstrated via empirical measurements on a set of six tasks within the mountain car domain.
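The abstract describes learning a distance metric that rescales state variables and suppresses irrelevant ones for an instance-based RL agent. HOLLER's details are not given here, so the following is only a minimal sketch of the general idea, assuming a diagonal (Mahalanobis-style) metric whose per-feature weights scale each input dimension; the names `weighted_distance` and `knn_value` are illustrative, not from the paper.

```python
import numpy as np

def weighted_distance(x, y, w):
    """Diagonal-metric distance: a feature with weight ~0 stops
    influencing neighbor selection entirely (illustrative sketch)."""
    x, y, w = map(np.asarray, (x, y, w))
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))

def knn_value(query, states, values, w, k=3):
    """Instance-based value estimate: average the values of the k
    nearest stored states under the learned metric."""
    d = np.array([weighted_distance(query, s, w) for s in states])
    nearest = np.argsort(d)[:k]
    return float(np.mean(values[nearest]))

# Toy mountain-car-like states: (position, velocity) plus one
# irrelevant distractor feature (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(50, 3))
values = -np.abs(states[:, 0])  # toy value signal tied to position only

# A learned metric would drive the distractor's weight toward zero:
w = np.array([1.0, 1.0, 0.0])
v = knn_value([0.0, 0.0, 0.9], states, values, w, k=5)
```

With the third weight at zero, the distractor cannot corrupt neighbor lookups, which is the "diminish the impact of irrelevant features" goal the abstract states; the actual HOLLER algorithm for learning such weights is described in the paper itself.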

Authors

Keywords

  • Reinforcement Learning
  • Distance Metric Learning
  • Autonomous Feature Selection
  • Learning State Representations

Context

Venue
International Conference on Autonomous Agents and Multiagent Systems
Archive span
2002-2025
Indexed papers
7403
Paper id
831835705802525604