udacity_project_tennis

Project Details

This document is the report of the third Udacity deep reinforcement learning project (Collaboration and Competition).

For this project, I have trained a single DDPG meta-agent which controls both agents, using the Tennis environment.

[Example of the Tennis environment]

In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.

The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Each agent receives its own, local observation. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.

The task is episodic, and in order to solve the environment, the agents must get an average score of +0.5 (over 100 consecutive episodes, after taking the maximum over both agents). Specifically,

  • After each episode, we add up the rewards that each agent received (without discounting) to get a score for each agent. This yields 2 (potentially different) scores. We then take the maximum of these 2 scores.
  • This yields a single score for each episode. The environment is considered solved when the average (over 100 episodes) of those scores is at least +0.5 (see the scoring sketch after this list).
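
The following is a minimal sketch of that scoring rule; the names `episode_score`, `episode_rewards`, and `scores_window` are illustrative and not taken from the repository.

```python
import numpy as np
from collections import deque

def episode_score(episode_rewards):
    """Sum the undiscounted rewards per agent over one episode, then take the max.

    `episode_rewards` is a list of per-step reward pairs, one entry per agent.
    """
    per_agent_totals = np.sum(np.array(episode_rewards), axis=0)  # shape: (2,)
    return float(np.max(per_agent_totals))

# Rolling window of the most recent 100 episode scores.
scores_window = deque(maxlen=100)

def is_solved(scores_window):
    """The environment counts as solved once the 100-episode average reaches +0.5."""
    return len(scores_window) == 100 and np.mean(scores_window) >= 0.5
```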

Implementation

The single DDPG meta-agent is inspired by the bipedal DDPG solution, with the following adaptations:

  • The OU noise has an additional noise decay factor (analogous to epsilon-greedy), to reduce the influence of the noise on the agent's actions over time (see the sketch after this list)
  • Both agents are controlled by one meta-agent by concatenating their observation and action vectors
  • Only informed immediate rewards (i.e., rewards not equal to 0.0) are stored in the replay buffer
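
A minimal sketch of the first and third adaptation, assuming a replay buffer with an `add(state, action, reward, next_state, done)` interface; the class and function names here are illustrative, not the repository's own.

```python
import numpy as np

class DecayedOUNoise:
    """Ornstein-Uhlenbeck noise whose output is scaled by a factor that decays
    over training, so exploration gradually fades (analogous to epsilon-greedy)."""

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2,
                 noise_scale=1.0, noise_decay=0.999):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.noise_scale = noise_scale   # current multiplier on the noise
        self.noise_decay = noise_decay   # multiplicative decay applied per episode
        self.reset()

    def reset(self):
        self.state = self.mu.copy()

    def decay(self):
        self.noise_scale *= self.noise_decay

    def sample(self):
        dx = self.theta * (self.mu - self.state) \
             + self.sigma * np.random.standard_normal(len(self.state))
        self.state = self.state + dx
        return self.noise_scale * self.state


def store_if_informed(buffer, states, actions, rewards, next_states, dones):
    """Store a transition only if its reward carries information (is non-zero).

    Observations and actions of both agents are concatenated so that the
    meta-agent sees the joint state and produces the joint action.
    """
    if any(r != 0.0 for r in rewards):
        buffer.add(np.concatenate(states), np.concatenate(actions),
                   rewards, np.concatenate(next_states), dones)
```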

Getting Started

Install Instructions

  • Please follow the instructions in the DRLND GitHub repository to set up a Python environment.
  • By following these instructions, you will install PyTorch, the ML-Agents toolkit, and a few more Python packages required to complete the project.
  • Put all files of this repository into the p3_collab-compet/ folder

Run the Report

Run Report.ipynb, which implements the training in an Actor-Critic DDPG setup.
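
Below is a minimal sketch of how the Tennis environment is driven from the notebook, assuming the unityagents package from the DRLND setup. The binary name ("Tennis.app") depends on your platform, and the random actions stand in for the DDPG meta-agent's (noisy) policy used in Report.ipynb.

```python
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Tennis.app")   # adjust the file name for your OS
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
num_agents = len(env_info.agents)                # 2 rackets
action_size = brain.vector_action_space_size
states = env_info.vector_observations            # one observation row per agent
scores = np.zeros(num_agents)

while True:
    # Random actions as a placeholder; Report.ipynb uses the meta-agent's policy here.
    actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
    env_info = env.step(actions)[brain_name]
    scores += env_info.rewards
    states = env_info.vector_observations
    if np.any(env_info.local_done):
        break

print("Episode score (max over both agents):", np.max(scores))
env.close()
```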

Results

The meta DDPG agent was able to solve the task (i.e., reach an average score of 0.5) after ca. 3900 iterations. Moreover, after 4834 iterations, the agent achieved an average score of 1.0, resulting in noticeably more proficient gameplay. Both results can be viewed on YouTube: VIDEO OF meta-DDPG AGENT
