
AAMAS 2010

Evolving Policy Geometry for Scalable Multiagent Learning

Conference Paper · Session 15 - Learning II · Autonomous Agents and Multiagent Systems

Abstract

A major challenge for traditional approaches to multiagent learning is to train teams that easily scale to include additional agents. The problem is that such approaches typically encode each agent's policy separately. Such separation means that computational complexity explodes as the number of agents in the team increases, and also leads to the problem of reinvention: skills that should be shared among agents must be rediscovered separately for each agent. To address this problem, this paper presents an alternative evolutionary approach to multiagent learning called multiagent HyperNEAT that encodes the team as a pattern of related policies rather than as a set of individual agents. To capture this pattern, a policy geometry is introduced to describe the relationship between each agent's policy and its canonical geometric position within the team. Because the policy geometry can encode variations of a shared skill across all of the policies it represents, the problem of reinvention is avoided. Furthermore, because the policy geometry of a particular team can be sampled at any resolution, it acts as a heuristic for generating policies for teams of any size, producing a powerful new capability for multiagent learning. In this paper, multiagent HyperNEAT is tested in predator-prey and room-clearing domains. In both domains the results are effective teams that can be successfully scaled to larger team sizes without any further training.
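The core idea of sampling a policy geometry at different resolutions can be illustrated with a minimal sketch. The `policy_geometry` function below is a hypothetical stand-in for the CPPN-generated pattern the paper describes; the real system evolves this mapping and produces full neural network policies, not the toy weight vectors used here.

```python
import math

def policy_geometry(pos):
    """Toy policy geometry: maps an agent's normalized position
    in the team layout (0.0 to 1.0) to a small policy-parameter
    vector. (Illustrative stand-in for an evolved CPPN pattern.)"""
    return [math.sin(math.pi * pos), 1.0 - abs(2.0 * pos - 1.0)]

def sample_team(n):
    """Sample the same geometry at n evenly spaced agent positions,
    yielding one policy per agent. Because the pattern, not each
    agent, is encoded, no retraining is needed to change n."""
    if n == 1:
        return [policy_geometry(0.5)]
    return [policy_geometry(i / (n - 1)) for i in range(n)]

small_team = sample_team(3)  # three agents
large_team = sample_team(5)  # scaled-up team from the same geometry
```

Note that the agents at the extremes of the layout (`pos = 0.0` and `pos = 1.0`) receive identical policies in both team sizes; only the interior of the pattern is sampled more densely, which is the intuition behind scaling a team without further training.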

Keywords

  • CPPNs
  • HyperNEAT
  • Multiagent learning
  • NEAT
  • Neural Networks
  • Evolutionary Computation

Context

Venue
International Conference on Autonomous Agents and Multiagent Systems
Archive span
2002-2025
Indexed papers
7403
Paper id
943628258147722440