How much can change in a year? Revisiting Evaluation in Multi-Agent Reinforcement Learning

Omayma Mahjoub | Ruan de Kock | Siddarth Singh | Wiem Khlifi | Abidine Vall 1 | Kale-ab Tessera 2 | Arnu Pretorius

1 National School of Engineering of Tunis | 2 University of Edinburgh


ABSTRACT

Establishing sound experimental standards and rigour is important in any growing field of research. Deep Multi-Agent Reinforcement Learning (MARL) is one such nascent field. Although exciting progress has been made, MARL has recently come under scrutiny for replicability issues and a lack of standardised evaluation methodology, specifically in the cooperative setting. Although protocols have been proposed to help alleviate these issues, it remains important to actively monitor the health of the field. In this work, we extend the database of evaluation methodology previously published by Gorsane et al. (2022), containing meta-data on MARL publications from top-rated conferences, and compare the findings extracted from this updated database to the trends identified in their work. Our analysis shows that many of the worrying trends in performance reporting remain: uncertainty quantification is often omitted, relevant evaluation details go unreported, and the range of algorithm classes under development is narrowing. Promisingly, we do observe a trend towards evaluation on more difficult scenarios in SMAC-v1, which, if it continues into SMAC-v2 (Ellis et al., 2022), should encourage novel algorithmic development. Our data indicate that the MARL community needs to approach replicability more proactively in order to maintain trust in the field as it moves towards exciting new frontiers.