Multitask Reinforcement Learning with Python

Environments and Baselines for Multitask Reinforcement Learning

Shagun Sodhani

Deep Learning, Machine Learning

See in schedule: Fri, Jul 30, 15:00-15:45 CEST (45 min)

Reinforcement Learning (RL) has led to several groundbreaking innovations, from defeating the world champion of Go to synthesizing molecules and drugs. RL has benefited from a mature ecosystem of open-source frameworks that has enabled more and more people to apply RL to their use cases. A subdomain of RL is multi-task RL, where a single agent must learn to perform multiple tasks. However, there are far fewer resources for getting started with multi-task RL, and very often people have to implement all the components from scratch.

The two key components in a multi-task RL codebase are (i) multi-task RL algorithms and (ii) multi-task RL environments. We have developed open-source libraries for both: [MTRL](https://github.com/facebookresearch/mtrl) provides components for implementing multi-task RL algorithms, and [MTEnv](https://github.com/facebookresearch/mtenv) is a library for interfacing with existing multi-task RL environments and creating new ones. These libraries are used in several ongoing and published works at Facebook AI.
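
As a rough sketch of what working with these libraries looks like, the snippet below steps through a multi-task environment using MTEnv's Gym-style interface. It mirrors the quick-start examples in the MTEnv repository, but the environment id and observation keys are assumptions on my part and may differ from the current release, so check the README for the exact API.

```python
# Illustrative sketch, not an official example: stepping a multi-task
# environment through MTEnv's Gym-style interface. The environment id and
# observation keys are assumptions based on the MTEnv README and may differ
# in the current release.
import mtenv

# Assumed id for the MetaWorld MT10 benchmark registered by MTEnv.
env = mtenv.make("MT-MetaWorld-MT10-v0")

obs = env.reset()
# MTEnv observations separate the per-step observation from the task
# descriptor, e.g. {"env_obs": ..., "task_obs": ...}, so a single policy
# can condition on which task it is currently solving.
print(obs.keys())

for _ in range(100):
    action = env.action_space.sample()  # random policy, just to drive the loop
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```

The design point worth noting is that the task descriptor travels with every observation, which is what lets a single task-conditioned policy handle many tasks; this is the setting that MTRL's algorithm components are built around.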

The poster gives an overview of the two libraries, shows how to use them to get started with multi-task RL, and highlights the use cases they enable. We are happy to gather feedback on how to make these libraries more useful.

Type: Poster session (45 mins); Python level: Beginner; Domain level: Intermediate


Shagun Sodhani

Facebook

Shagun Sodhani is a Research Engineer in the Facebook AI Research group. He is primarily interested in lifelong reinforcement learning: training AI systems that can interact with and learn from the physical world and consistently improve as they do so, without forgetting previous knowledge.