Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX

Clément Bonnet | Daniel Luo | Donal Byrne | Shikha Surana | Vincent Coyette | Paul Duckworth | Laurence I. Midgley 1 | Tristan Kalloniatis | Sasha Abramowitz | Cemlyn N. Waters | Andries P. Smit | Nathan Grinsztajn | Ulrich A. Mbou Sob | Omayma Mahjoub | Elshadai Tegegn | Mohamed A. Mimouni | Raphael Boige | Ruan de Kock | Daniel Furelos-Blanco 2 | Victor Le | Arnu Pretorius | Alexandre Laterre

1 University of Cambridge | 2 Imperial College London


ABSTRACT

Open-source reinforcement learning (RL) environments have played a crucial role in driving progress in the development of AI algorithms. In modern RL research, there is a need for simulated environments that are performant, scalable, and modular, so that they can be applied to a wider range of potential real-world problems. Therefore, we present Jumanji, a suite of diverse RL environments specifically designed to be fast, flexible, and scalable. Jumanji provides environments focusing on combinatorial problems frequently encountered in industry, as well as challenging general decision-making tasks. By leveraging the efficiency of JAX and hardware accelerators like GPUs and TPUs, Jumanji enables rapid iteration of research ideas and large-scale experimentation, ultimately empowering more capable agents. Unlike existing RL environment suites, Jumanji is highly customizable, allowing users to tailor the initial state distribution and problem complexity to their needs. Furthermore, we provide actor-critic baselines for each environment, accompanied by preliminary findings on scaling and generalization scenarios. Jumanji aims to set a new standard for speed, adaptability, and scalability of RL environments.
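The speed and scalability claims above rest on Jumanji's environments being written as pure functions of an explicit state, which is what lets JAX jit-compile and vmap them across hardware accelerators. The following is a minimal, self-contained sketch of that stateless reset/step pattern; the state type and transition logic here are illustrative toy examples, not Jumanji's actual API, and plain Python stands in for JAX-transformable code.

```python
from typing import NamedTuple, Tuple

# Hypothetical sketch of the functional environment pattern: reset and
# step are pure functions over an immutable, explicit state. In Jumanji
# itself, such functions operate on JAX arrays so they can be
# jit-compiled and vmapped; here we use plain Python for illustration.

class CounterState(NamedTuple):
    total: int   # running sum of actions taken
    steps: int   # number of steps elapsed in the episode

def reset() -> CounterState:
    # A new episode starts from an explicit initial state
    # rather than from hidden mutable state inside an object.
    return CounterState(total=0, steps=0)

def step(state: CounterState, action: int) -> Tuple[CounterState, int, bool]:
    # Pure transition: returns a *new* state plus reward and done flag,
    # never mutating the input state.
    new_state = CounterState(total=state.total + action, steps=state.steps + 1)
    done = new_state.steps >= 3  # toy 3-step episode
    return new_state, action, done

# Roll out one episode with the pure functions.
state = reset()
rewards = 0
done = False
while not done:
    state, reward, done = step(state, action=2)
    rewards += reward
print(rewards)  # -> 6 (three steps of reward 2)
```

Because `step` depends only on its arguments, many copies of the environment can be advanced in parallel by mapping it over a batch of states, which is the mechanism behind Jumanji's large-scale experimentation on GPUs and TPUs.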