The second project of the Udacity Deep Reinforcement Learning Nanodegree: a PyTorch implementation of PPO to solve the Unity Reacher environment with a continuous action space.

Project 2: Continuous Control

Introduction

For this project, I implemented a PPO reinforcement learning agent to solve the Unity Reacher continuous-control environment. In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location, so the goal of the agent is to keep its hand at the target location for as many time steps as possible. The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocities of the arm. Each action is a vector of four numbers, corresponding to the torque applicable to the two joints. Every entry in the action vector must be a number between -1 and 1.
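To make the state and action layout concrete, here is a minimal interaction sketch using the unityagents package from the Udacity DRLND course; the environment file path is an assumption (it depends on where and how you downloaded the Reacher build), and the agent simply takes random actions clipped to [-1, 1].

# Minimal interaction sketch. The file_name path below is an assumption,
# not something provided by this repository.
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Reacher_Multi_Linux/Reacher.x86_64")
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=False)[brain_name]
num_agents = len(env_info.agents)                                # 20 in the multi-agent version
state_size = env_info.vector_observations.shape[1]               # 33 variables per agent
action_size = env.brains[brain_name].vector_action_space_size    # 4 torque values

scores = np.zeros(num_agents)
while True:
    # Random actions, clipped to the valid [-1, 1] range
    actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
    env_info = env.step(actions)[brain_name]
    scores += env_info.rewards                                   # +0.1 per step in the goal location
    if np.any(env_info.local_done):
        break
env.close()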

Distributed Training

For this project, two separate versions of the Unity environment are available:

  • The first version contains a single agent.
  • The second version contains 20 identical agents, each with its own copy of the environment.

I trained my agent in the second version of the environment to take advantage of the parallelizable nature of PPO, which can consume the rollouts of all 20 agents in a single update.
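To illustrate why the 20 parallel agents help, below is a rough sketch of generalized advantage estimation (GAE) computed over a batch of per-agent rollouts, vectorized across agents. The shapes, names, and hyperparameters are assumptions for illustration, not necessarily the implementation used in this repository.

# Sketch of GAE over parallel rollouts, vectorized across the 20 agents.
# Each update step can then consume T x 20 transitions at once.
import numpy as np

def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """rewards, dones: arrays of shape (T, num_agents); values: (T + 1, num_agents)."""
    T, num_agents = rewards.shape
    advantages = np.zeros((T, num_agents))
    gae = np.zeros(num_agents)
    for t in reversed(range(T)):
        mask = 1.0 - dones[t]                               # stop bootstrapping at episode ends
        delta = rewards[t] + gamma * values[t + 1] * mask - values[t]
        gae = delta + gamma * lam * mask * gae
        advantages[t] = gae
    returns = advantages + values[:-1]                       # targets for the value function
    return advantages, returns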

Solving the Environment

Option 1: Solve the First Version

The task is episodic, and in order to solve the environment, your agent must get an average score of +30 over 100 consecutive episodes.

Option 2: Solve the Second Version

The barrier for solving the second version of the environment is slightly different, to take into account the presence of many agents. In particular, your agents must get an average score of +30 (over 100 consecutive episodes, and over all agents). Specifically,

  • After each episode, we add up the rewards that each agent received (without discounting), to get a score for each agent. This yields 20 (potentially different) scores. We then take the average of these 20 scores.
  • This yields an average score for each episode (where the average is over all 20 agents).

The environment is considered solved when the average (over 100 consecutive episodes) of those average scores is at least +30.
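As a concrete illustration, here is a minimal sketch of that scoring rule; the function name and data layout are assumptions for illustration, not code from this repository.

# Sketch of the solving criterion for the 20-agent version.
from collections import deque
import numpy as np

def is_solved(per_episode_agent_returns, target=30.0, window=100):
    """per_episode_agent_returns: one array of 20 undiscounted returns per episode."""
    scores_window = deque(maxlen=window)
    for agent_returns in per_episode_agent_returns:
        scores_window.append(np.mean(agent_returns))         # average over the 20 agents
    return len(scores_window) == window and np.mean(scores_window) >= target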

Prerequisites

  1. First set up a Python 3 Anaconda environment.
  2. Clone the repo:
git clone git@github.com:ulamaca/DRLND_P2_Continuous_Control.git
  3. Install the requirements for the project:
pip install -r requirement.txt
  4. Follow the instructions in the Getting Started section of the Udacity DRLND repo to download the multi-agent (20-agent) version of the environment.
  5. Place the environment directory in the root of the project and rename it to "Reacher_Multi_Linux" (a quick sanity check is sketched after this list).
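
A quick way to confirm that the environment is placed and renamed correctly is to load it once and check the number of agents. The binary file name inside the folder is an assumption and depends on the build you downloaded (e.g. Reacher.x86_64 on 64-bit Linux).

# Sanity check that the renamed environment directory is found and usable.
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Reacher_Multi_Linux/Reacher.x86_64")
brain_name = env.brain_names[0]
env_info = env.reset(train_mode=True)[brain_name]
print("agents:", len(env_info.agents))                       # expect 20
print("state size:", env_info.vector_observations.shape[1])  # expect 33
env.close()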

Instructions

  1. To train a PPO agent from scratch, execute in the command line:
python run.py  

After training, two files will be saved in ./data/ppo_gae: progress.txt and checkpoint.pth. progress.txt stores the training score trace, and checkpoint.pth holds the model parameters of the trained agent (a sketch of how these outputs can be inspected by hand follows these steps). More detailed options can be found via the script's command-line help.

  2. To get statistics plots after training, execute:
python plot.py -l ppo_gae
  3. To see how your favorite agent plays, use:
python play.py -p path/to/model-params

If you have not trained an agent of your own yet, try the saved checkpoint:

python play.py -p ./data/saved/checkpoint.pth
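
For reference, here is a rough sketch of inspecting those training outputs by hand, assuming progress.txt holds one episode score per line and checkpoint.pth is a PyTorch state_dict; plot.py and play.py in this repository may do this differently.

# Inspect the training outputs manually (format assumptions noted above).
import numpy as np
import matplotlib.pyplot as plt
import torch

scores = np.loadtxt("data/ppo_gae/progress.txt")                  # training score trace
rolling = np.convolve(scores, np.ones(100) / 100, mode="valid")   # 100-episode average

plt.plot(scores, label="episode score")
plt.plot(np.arange(99, len(scores)), rolling, label="100-episode average")
plt.axhline(30.0, linestyle="--", label="solved threshold")
plt.xlabel("episode"); plt.ylabel("score"); plt.legend()
plt.show()

state_dict = torch.load("data/ppo_gae/checkpoint.pth", map_location="cpu")
print(list(state_dict.keys()))                                    # inspect saved parameters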
