AAMAS 2010
Learning multi-agent state space representations
Abstract
This paper describes an algorithm, called CQ-learning, which learns to adapt the state representation for multi-agent systems in order to coordinate with other agents. We propose a multi-level approach which builds a progressively more advanced representation of the learning problem. The idea is that agents start with a minimal single-agent state space representation, which is expanded only when necessary. In cases where agents detect conflicts, they automatically expand their state to explicitly take into account the other agents. These conflict situations are then analyzed in an attempt to find an abstract representation which generalises over the problem states. Our system allows agents to learn effective policies, while avoiding the exponential state space growth typical in multi-agent environments. Furthermore, the method we introduce to generalise over conflict states allows knowledge to be transferred to unseen and possibly more complex situations. Our research departs from previous efforts in this area of multi-agent learning because our agents combine state space generalisation with an agent-centric point of view. The algorithms that we introduce can be used in robotic systems to automatically reduce the sensor information to what is essential to solve the problem at hand. This is a must when dealing with multiple agents, since learning in such environments is a cumbersome task due to the massive amount of information, much of which may be irrelevant. In our experiments we demonstrate a simulation of such environments using various gridworlds.
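The core mechanism the abstract describes — learning over a single-agent state space and expanding only those states where a conflict with another agent is detected — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the class name, the reward-deviation test used for conflict detection, and all thresholds are assumptions.

```python
import random
from collections import defaultdict

class CQAgent:
    """Sketch of the CQ-learning idea: a single-agent Q-table whose
    states are promoted to joint (augmented) states only where a
    conflict is detected. Details here are illustrative assumptions."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, threshold=0.5):
        self.actions = actions
        self.alpha = alpha
        self.gamma = gamma
        self.threshold = threshold      # reward deviation that flags a conflict
        self.q = defaultdict(float)     # key -> Q-value
        self.expected = {}              # single-agent reward baseline per (state, action)
        self.expanded = set()           # states promoted to the joint view

    def key(self, state, other_state, action):
        # Use the augmented (joint) state only where a conflict was found;
        # everywhere else the agent keeps its minimal single-agent view.
        if state in self.expanded:
            return ((state, other_state), action)
        return (state, action)

    def act(self, state, other_state, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.q[self.key(state, other_state, a)])

    def update(self, state, other_state, action, reward, next_state, next_other):
        # Conflict detection (assumed rule): the observed reward deviates
        # strongly from the single-agent baseline, suggesting interference.
        base = self.expected.setdefault((state, action), reward)
        if abs(reward - base) > self.threshold:
            self.expanded.add(state)    # expand only this state
        k = self.key(state, other_state, action)
        best_next = max(self.q[self.key(next_state, next_other, a)]
                        for a in self.actions)
        self.q[k] += self.alpha * (reward + self.gamma * best_next - self.q[k])
```

The point of the sketch is the selective growth: the Q-table stays linear in the single-agent state space except at the handful of states where coordination actually matters, which is how the exponential joint-state blow-up is avoided.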
Authors
Keywords
Context
- Venue
- International Conference on Autonomous Agents and Multiagent Systems
- Archive span
- 2002-2025
- Indexed papers
- 7403
- Paper id
- 329958517566111119