InstaDeep Research team continues success at ICLR with record four publications, hosts exclusive preview screening of new AI documentary

InstaDeep’s research team is off to a great start in 2023 with a new record of four confirmed papers: two on the main track, including a spotlight, and two workshop papers, including an oral presentation. ICLR, the globally respected International Conference on Learning Representations, is being held at the Kigali Convention Centre in Rwanda this May. The conference is renowned for featuring cutting-edge research in areas such as deep learning, reinforcement learning, and natural language processing, bringing together researchers, academics, and practitioners from around the world to showcase their work, network with peers, and stay up to date with the latest trends and innovations in the field. InstaDeep is delighted to be sending several members of its research team to Kigali to present their work.

2023 also marks an important turning point for the AI research world, as it is the first year that ICLR – or any major machine learning conference – has been held in Africa. Not only is this a milestone for the continent, it is especially sweet for Tunisian-founded InstaDeep, which has long championed the vast reservoir of AI talent across every African nation, alongside an equally longstanding commitment to supporting African-led and African-authored research, helping make the field more inclusive and reflective of all global perspectives and voices in AI. Several other important African AI initiatives are also co-located with ICLR 2023: IndabaX Rwanda, Kaggle Workshops and a new “Tiny Papers” track – all aimed at empowering the flourishing but under-represented local AI community.

InstaDeep Co-Founder and CEO Karim Beguir commented, “I’m incredibly proud of our team – our work continues to set the standard in AI research and this quartet of acceptances across four very different areas illustrates the breadth of expertise we have in-house at InstaDeep. ICLR is a key event in the AI research world and I’m thrilled that it is finally being held in Africa, not to mention that we’re sending many of our amazing authors in person too!”


++++ Join InstaDeep for an exclusive AI film preview ++++

The team will also be hosting an exceptional social event during ICLR. On Tuesday evening, conference attendees can join the InstaDeep research team at a very special and exclusive preview of a new documentary film, “Cape to Carthage”, an official selection at this year’s AI International Film Festival in Utah. The screening is being held in the beautiful grounds of the Kigali Convention Centre. Numbers are limited, and interested delegates must apply for a place by completing this form.


InstaDeep is presenting four papers at ICLR this year:

Quality-Diversity as a competitive alternative to standard RL methods 

The spotlight paper, entitled “Neuroevolution is a Competitive Alternative to Reinforcement Learning for Skill Discovery”, demonstrates that Quality-Diversity (QD) methods are competitive with standard RL methods. The authors undertook an extensive empirical comparison of eight state-of-the-art methods on the basis of (i) metrics directly evaluating the skills’ diversity, (ii) the skills’ performance on adaptation tasks, and (iii) the skills’ performance when used as primitives for hierarchical planning. They found that QD methods provided equal, and sometimes improved, performance whilst being less sensitive to hyperparameters and more scalable. They also concluded that, because no single method was found to provide near-optimal performance across all environments, there is rich scope for further research, supported by the proposed future directions and the provision of optimised open-source implementations. The work was co-authored by InstaDeep Research Scientists together with Antoine Cully and his AIRL team at Imperial College London, continuing the two institutions’ successful partnership since their joint work on Quality-Diversity showcased at GECCO last year.
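As a loose illustration of the first family of metrics, the hypothetical sketch below measures skill diversity as the mean pairwise distance between behaviour descriptors; the function name and the descriptor choice are illustrative only, and the actual metrics used in the paper may differ.

```python
import numpy as np

def skill_diversity(descriptors: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between skill behaviour descriptors.

    `descriptors` has shape (n_skills, descriptor_dim). A higher value
    indicates a more diverse set of discovered skills.
    """
    n = len(descriptors)
    # Pairwise differences via broadcasting: shape (n, n, dim)
    diffs = descriptors[:, None, :] - descriptors[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Average over the n * (n - 1) ordered off-diagonal pairs
    return dists.sum() / (n * (n - 1))

# Example: four skills described by their final (x, y) position
skills = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(skill_diversity(skills))
```

Adaptation and hierarchical-planning metrics, by contrast, require rolling out the skills in downstream tasks, which is where much of the empirical effort in the paper lies.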

Source code: https://github.com/instadeepai/qd-skill-discovery-benchmark 

Oral presentation:

Poster presentation:


A novel, more flexible algorithm 

The second accepted paper, “Evolving Populations of Diverse RL Agents with MAP-Elites”, introduces a novel algorithm that is more flexible and achieves better performance than the current state of the art in the QD field. The work addresses some of the inherent problems of existing MAP-Elites-based approaches and proposes a flexible, population-based RL framework instead. This framework allows the use of any RL algorithm within a population update and alleviates the identified limitations by evolving populations of agents (whose definition includes hyperparameters and all learnable parameters) instead of just policies. This is supported by numerical experiments on a number of robotics control problems taken from the QD-RL literature, alongside an open-source JAX-based implementation of the algorithm (available in QDax).
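For readers new to the underlying machinery, the toy sketch below shows the classic MAP-Elites loop on a scalar genotype: an archive of cells indexed by a discretised behaviour descriptor, each keeping the highest-fitness solution seen so far. This is deliberately simplified and is not the paper’s algorithm, which evolves whole populations of RL agents rather than scalars.

```python
import random

N_CELLS = 10          # resolution of the 1-D behaviour grid
N_ITERATIONS = 1000

def evaluate(x: float):
    """Toy task: fitness peaks at x = 0.5; the descriptor is x itself."""
    fitness = -(x - 0.5) ** 2
    descriptor = min(int(x * N_CELLS), N_CELLS - 1)
    return fitness, descriptor

random.seed(0)
archive = {}  # descriptor cell -> (fitness, solution)

for _ in range(N_ITERATIONS):
    if archive:
        # Select a random elite and mutate it
        _, parent = random.choice(list(archive.values()))
        child = min(max(parent + random.gauss(0, 0.1), 0.0), 1.0)
    else:
        child = random.random()
    fitness, cell = evaluate(child)
    # Keep the child if the cell is empty or it beats the current elite
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, child)

print(f"filled {len(archive)}/{N_CELLS} cells")
```

The paper’s contribution can be thought of as replacing the scalar genotype above with a full agent definition (policy parameters plus hyperparameters) and replacing random mutation with updates from an arbitrary RL algorithm.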

Poster presentation:


Enabling Deep Learning based protein redesign via predictions of changes in protein stability upon multiple mutations

Third in the list is a BioAI-focused piece of work produced entirely in-house. In “Predicting Protein Stability Changes Under Multiple Amino Acid Substitutions Using Equivariant Graph Neural Networks”, the team of three InstaDeep authors discuss how accurate prediction of changes in protein stability under multiple amino acid substitutions is essential for realising true in-silico protein redesign. They propose improvements to state-of-the-art deep learning (DL) protein stability prediction models, enabling first-of-a-kind predictions for variable numbers of amino acid substitutions on structural representations by decoupling the atomic and residue scales of protein representations. This was achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic-environment (AE) embedding and residue-level scoring tasks. Their AE embedder was used to featurise a residue-level graph, which was then trained to score mutant stability (ΔΔG). To achieve effective training of this predictive EGNN, the trio leveraged the unprecedented scale of a new high-throughput protein stability experimental dataset, Mega-scale, before demonstrating the immediately promising results of this procedure, outlining the current shortcomings, and noting potential future strategies.
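For context on the quantity being predicted, the sketch below spells out ΔΔG in code under one common convention (sign conventions vary across the literature, and the function names here are illustrative, not taken from the paper):

```python
def ddg(dg_wildtype: float, dg_mutant: float) -> float:
    """Stability change upon mutation, in kcal/mol.

    Under the convention ΔΔG = ΔG(mutant) - ΔG(wild type), where a more
    negative ΔG of folding means a more stable protein, a negative ΔΔG
    indicates a stabilising set of substitutions.
    """
    return dg_mutant - dg_wildtype

def is_stabilising(ddg_value: float) -> bool:
    """True when the substitutions make the protein more stable."""
    return ddg_value < 0.0

# Example: a double substitution lowers the folding free energy
change = ddg(dg_wildtype=-5.0, dg_mutant=-6.0)   # -1.0 kcal/mol
print(is_stabilising(change))                    # prints True
```

The hard part, of course, is predicting ΔΔG from structure alone for arbitrary combinations of substitutions, which is what the EGNN pipeline described above is trained to do.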

Workshop:


Selective Reincarnation – Reusing Prior Computation in MARL 

The final paper, accepted as an oral presentation at the Reincarnating RL workshop, was another collaboration – this time between InstaDeep and the University of Cape Town. “Reduce, Reuse, Recycle: Selective Reincarnation In Multi-Agent Reinforcement Learning” looks at the concept of ‘reincarnation’ in reinforcement learning, which has been proposed as a formalisation of reusing prior computation from past experiments when training an agent in an environment. The joint team presented an overview of reincarnation in the multi-agent (MA) context, before considering the case where only some agents are reincarnated while the others are trained from scratch – selective reincarnation. In the fully-cooperative MA setting with heterogeneous agents, they demonstrated that selective reincarnation can lead to higher returns than training fully from scratch, and faster convergence than training with full reincarnation. The research then showed that the choice of which agents to reincarnate in a heterogeneous system is vitally important to the outcome of training, illustrating how a poor choice can lead to considerably worse results than the alternatives. In summary, the authors argue that a rich field of work exists here, and express their hope that this work catalyses further energy in bringing the topic of reincarnation to the multi-agent realm.
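To make the setup concrete, a minimal hypothetical sketch: selective reincarnation amounts to warm-starting a chosen subset of agents from a past experiment’s parameters while initialising the rest from scratch. All names and structures below are illustrative and not taken from the paper’s code.

```python
import random

def init_agents(agent_ids, saved_params, reincarnate):
    """Build an initial parameter dict per agent.

    agent_ids:    all agents in the (heterogeneous) team
    saved_params: agent -> parameters saved from a past experiment
    reincarnate:  the subset of agents to warm-start
    """
    params = {}
    for agent in agent_ids:
        if agent in reincarnate:
            # Reuse prior computation for this agent
            params[agent] = dict(saved_params[agent])
        else:
            # Train from scratch: fresh random initialisation
            params[agent] = {"w": random.gauss(0, 0.01)}
    return params

agents = ["walker", "reacher", "hopper"]
saved = {a: {"w": 1.0} for a in agents}

# Selectively reincarnate only two of the three agents
params = init_agents(agents, saved, reincarnate={"walker", "hopper"})
print(sorted(a for a in params if params[a] == {"w": 1.0}))  # ['hopper', 'walker']
```

The paper’s central finding is that, in a heterogeneous team, *which* subset is passed as `reincarnate` materially changes the training outcome, for better or worse.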

Project page: https://instadeepai.github.io/selective-reincarnation-marl/

Workshop:


About ICLR

ICLR is well known to InstaDeep, which has published multiple papers at the conference over the past few years. ICLR has latterly been recognised as the 3rd best conference in computer science by Google Scholar, ranking above other well-known conferences such as NeurIPS and ICML. ICLR 2023 will take place between 1-5 May 2023 at the Kigali Convention Centre in Rwanda. Virtual and in-person registration is still open at https://iclr.cc/Register.


(Don’t forget to apply for a place at our exclusive Cape to Carthage preview screening!)


If you are interested in working on exciting initiatives like this, InstaDeep is hiring for its research team across many topics and tracks at its offices in Africa, Europe and North America. All open opportunities are available at www.instadeep.com/careers.