
AAMAS 2023

Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning

Conference Paper Poster Session I Autonomous Agents and Multiagent Systems

Abstract

We present a novel multi-agent RL approach, Selective Multi-Agent Prioritized Experience Relay, in which agents share with other agents a limited number of transitions they observe during training. The intuition behind this is that even a small number of relevant experiences from other agents could help each agent learn. Unlike many other multi-agent RL algorithms, this approach allows for largely decentralized training, requiring only a limited communication channel between agents. We show that our approach outperforms baseline no-sharing decentralized training and state-of-the-art multi-agent RL algorithms. Further, sharing only a small number of highly relevant experiences outperforms sharing all experiences between agents, and the performance uplift from selective experience sharing is robust across a range of hyperparameters and DQN variants. A reference implementation is available at https://github.com/mgerstgrasser/super.
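The core mechanism described in the abstract can be illustrated with a minimal sketch: each agent keeps all of its own transitions, but relays only its highest-priority ones (e.g. by absolute TD-error) to the other agents' replay buffers. The class and function names, and the quantile-based selection threshold, are illustrative assumptions here, not the paper's actual API.

```python
from collections import deque

class Agent:
    """Toy agent holding only a bounded replay buffer (illustrative)."""
    def __init__(self, buffer_size=10000):
        self.replay_buffer = deque(maxlen=buffer_size)

    def add(self, transition, priority):
        self.replay_buffer.append((transition, priority))

def share_top_transitions(agents, new_batches, quantile=0.9):
    """Each agent stores its own batch in full, but shares only the
    transitions at or above the given priority quantile with the others.
    new_batches[i] is a list of (transition, priority) pairs from agent i."""
    for i, batch in enumerate(new_batches):
        # Every agent always keeps its own experience.
        for t, p in batch:
            agents[i].add(t, p)
        if not batch:
            continue
        # Select only the most "relevant" transitions by priority.
        priorities = sorted(p for _, p in batch)
        threshold = priorities[int(quantile * (len(priorities) - 1))]
        shared = [(t, p) for t, p in batch if p >= threshold]
        # Relay the selected transitions over the limited channel.
        for j, other in enumerate(agents):
            if j != i:
                for t, p in shared:
                    other.add(t, p)
```

With two agents and a batch of ten transitions for the first agent, a 0.9 quantile relays only the top two transitions to the second agent, mirroring the abstract's claim that sharing a small, highly relevant subset suffices.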

Authors

Keywords

  • Reinforcement Learning
  • Multi-Agent Reinforcement Learning
  • Cooperative AI

Context

Venue
International Conference on Autonomous Agents and Multiagent Systems
Archive span
2002-2025
Indexed papers
7403
Paper id
505942434337431784