RLDM 2019
Multi Type Mean Field Reinforcement Learning
Abstract
Mean field theory has been integrated with multiagent reinforcement learning to enable multiagent algorithms to scale to a large number of interacting agents. In this paper, we extend mean field multiagent algorithms to multiple types. Types relax a core assumption of mean field games: that all agents in the environment play similar strategies and share the same goal. We consider two new testbeds for many-agent reinforcement learning, based on the standard MAgent framework for many-agent environments. We study two kinds of mean field games. In the first, agents belong to predefined types that are known a priori. In the second, the type of each agent is unknown and must therefore be learned from observations. We introduce new algorithms for each scenario and demonstrate performance superior to state-of-the-art algorithms that assume all agents belong to a single type, as well as to other baseline algorithms in the MAgent framework.
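The core idea of the multi-type extension can be illustrated with a small sketch (all names here are hypothetical, not the paper's actual implementation): instead of summarizing a neighborhood with a single mean action over all agents, one mean action is kept per predefined type, so each type's behavior is modeled separately.

```python
import numpy as np

def type_mean_actions(actions, types, n_actions, n_types):
    """Illustrative sketch of per-type mean actions.

    Standard mean field RL averages the one-hot actions of all
    neighbors into one mean action; the multi-type variant keeps a
    separate mean action for each type, relaxing the assumption that
    all agents play similar strategies.
    """
    means = np.zeros((n_types, n_actions))
    for t in range(n_types):
        mask = types == t
        if mask.any():
            # Average the one-hot encodings of this type's actions.
            one_hot = np.eye(n_actions)[actions[mask]]
            means[t] = one_hot.mean(axis=0)
    return means

# Toy neighborhood: 5 agents, 3 possible actions, 2 predefined types.
actions = np.array([0, 2, 1, 0, 2])
types = np.array([0, 0, 1, 1, 1])
mean_per_type = type_mean_actions(actions, types, n_actions=3, n_types=2)
```

The resulting `(n_types, n_actions)` matrix would then condition each agent's Q-function, in place of the single mean-action vector used when all agents are assumed to be of one type.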
Context
- Venue
- Multidisciplinary Conference on Reinforcement Learning and Decision Making