Multi Task RL Algorithms
MTRL is a library of multi-task reinforcement learning algorithms. It has two main components:
- Building blocks and agents that implement the multi-task RL algorithms.
- Experiment setups that enable training/evaluation on different setups.
Together, these two components enable use of MTRL across different environments and setups.
List of publications & submissions using MTRL (please create a pull request to add the missing entries):
- Learning Robust State Abstractions for Hidden-Parameter Block MDPs
- Multi-Task Reinforcement Learning with Context-based Representations
- We use the `af8417bfc82a3e249b4b02156518d775f29eb289` commit for the MetaWorld environments in our experiments.
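To reproduce those experiments, it helps to pin MetaWorld to exactly that commit. The sketch below only assumes MetaWorld is installed from its GitHub repository (`rlworkgroup/metaworld`) via pip's standard git support; it is not an MTRL-specific command.

```shell
# Commit hash taken from the note above; a quick sanity check that it
# looks like a full 40-character git SHA before using it in an install.
METAWORLD_COMMIT=af8417bfc82a3e249b4b02156518d775f29eb289

case "$METAWORLD_COMMIT" in
  *[!0-9a-f]*) echo "not a hex SHA" ;;
  *) [ ${#METAWORLD_COMMIT} -eq 40 ] && echo "pin OK: $METAWORLD_COMMIT" ;;
esac

# Install MetaWorld at that exact commit (requires git and network access):
# pip install "git+https://github.com/rlworkgroup/metaworld.git@$METAWORLD_COMMIT"
```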
MTRL is released under the MIT License.
If you use MTRL in your research, please use the following BibTeX entry:
@Misc{Sodhani2021MTRL,
author = {Shagun Sodhani and Amy Zhang},
title = {MTRL - Multi Task RL Algorithms},
howpublished = {Github},
year = {2021},
url = {https://github.com/facebookresearch/mtrl}
}
- Clone the repository: `git clone [email protected]:facebookresearch/mtrl.git`.
- Install dependencies: `pip install -r requirements/dev.txt`.
- MTRL supports 8 different multi-task RL algorithms, as described in the documentation.
- MTRL supports multi-task environments via MTEnv. These environments include MetaWorld and multi-task variants of the DMControl Suite.
- Refer to the tutorial at https://mtrl.readthedocs.io to get started with MTRL.
There are several ways to contribute to MTRL.
- Use MTRL in your research.
- Contribute a new algorithm. We currently support 8 multi-task RL algorithms and are looking forward to adding more.
- Check out the good-first-issues on GitHub and contribute to fixing those issues.
- Check out additional details here.
Ask questions in the chat or in GitHub issues.