Mava: A New Framework for Distributed Multi-Agent Reinforcement Learning

A. Pretorius 1 | K. Tessera 1 | A.P. Smit 2 | C. Formanek 3 | S.J. Grimbly 3 | K. Eloff 2 | S. Danisa 3 | L. Francis 1 | J. Shock 3 | H. Kamper 2 | W. Brink 2 | H. Engelbrecht 2 | A. Laterre 1 | K. Beguir 1

1 InstaDeep | 2 Stellenbosch University | 3 University of Cape Town

ABSTRACT

Breakthrough advances in reinforcement learning (RL) research have led to a surge in the development and application of RL. To support the field and its rapid growth, several frameworks have emerged that aim to help the community more easily build effective and scalable agents. However, very few of these frameworks exclusively support multi-agent RL (MARL), an increasingly active field in its own right, concerned with decentralised decision-making problems. In this work, we attempt to fill this gap by presenting Mava: a research framework specifically designed for building scalable MARL systems. Mava provides useful components, abstractions, utilities and tools for MARL, and allows for simple scaling to multi-process system training and execution, while providing a high level of flexibility and composability. Mava is built on top of DeepMind’s Acme (Hoffman et al., 2020), and therefore integrates with, and greatly benefits from, the wide range of single-agent RL components already available in Acme. Several MARL baseline systems have already been implemented in Mava. These implementations serve as examples showcasing Mava’s reusable features, such as interchangeable system architectures, communication and mixing modules. Furthermore, they allow existing MARL algorithms to be easily reproduced and extended. We provide experimental results for these implementations on a wide range of multi-agent environments and highlight the benefits of distributed system training. Mava’s source code is available at https://github.com/instadeepai/Mava.