This is a project to accompany the course "Implementing Artificial Neural Networks with TensorFlow".
We implement a simple snake game and different playing strategies:
- Human controlled: `python human_snake.py [--store subjectID]` runs the snake to be controlled with the arrow keys. Supply a subject ID to store the score in `participant[subjectID].csv`. If `--store` is supplied without an ID, the score is stored in `participant_unspecified.csv` so it is not lost.
- Systematic: `python systematic_snake.py [--store]` runs the systematic snake. Supply `--store` to store the result in `systematic.csv`.
- Q-Learning: `python q_snake.py train TARGET_CHECKPOINT_FOLDER` starts training and `python q_snake.py test CHECKPOINT_PATH` tests a saved checkpoint, e.g. `python q_snake.py test ckpts/q_snake-20170218-001217-100000`. (A generic Q-learning update is sketched below for background.)
- Evolve: Use `python evolve_snake.py train` if you want to start a new training session with the parameters defined in the class, or use `python evolve_snake.py play <file>.np` to replay a given snake network with the correct number of weights.
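For background, the Q-Learning mode is built around the standard Q-learning update. The tabular sketch below is generic and illustrative only; `q_snake.py` presumably approximates the Q-function with a TensorFlow network rather than a table.

```python
# Generic tabular Q-learning with an epsilon-greedy policy.
# Background only; this is not the project's code.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate
ACTIONS = ["up", "down", "left", "right"]  # snake movement actions
Q = defaultdict(float)                     # (state, action) -> estimated value

def choose_action(state):
    """Pick a random action with probability EPSILON, otherwise the greedy one."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) towards reward + GAMMA * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```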
For the accompanying project report, check the documentation folder.
You can add `--video [name.mp4]` to save a movie of the run. If you do not supply a name, it is stored as `pysnake.mp4`. This is not perfect yet, but it is there. Note that if you use it in conjunction with `--store subjectID`, you should fully specify all but the last argument:
python human_snake.py --video human.mp4 --store
python human_snake.py --store 123 --video
python human_snake.py --video human.mp4 --store 123
Similarly, for the systematic snake you should either use `--store` before `--video` or supply a file name:
python systematic_snake.py --store --video
python systematic_snake.py --video systematic.mp4 --store
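Both flags accept an optional value, which is why the ordering matters. For reference, the sketch below shows a common argparse pattern for such flags; it is illustrative only and not necessarily how the pysnake scripts actually parse their arguments.

```python
# Illustrative only: a typical argparse pattern for flags with optional values.
# The actual parsers in human_snake.py / systematic_snake.py may differ.
import argparse

parser = argparse.ArgumentParser()
# nargs='?' means the flag may appear with or without a value:
#   --store 123  -> args.store == "123"
#   --store      -> args.store == "unspecified"  (const)
#   (omitted)    -> args.store is None           (default)
parser.add_argument("--store", nargs="?", const="unspecified", default=None)
parser.add_argument("--video", nargs="?", const="pysnake.mp4", default=None)

args = parser.parse_args(["--video", "human.mp4", "--store", "123"])
print(args.store, args.video)  # 123 human.mp4
```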
To aggregate data from n runs of the systematic snake, run:
make sys n=100
make sys n=1000
Similarly, for collecting Q snake data, run:
make q n=100 p=ckpts/q_snake-20170218-001217-100000
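These targets presumably just invoke the corresponding script repeatedly so that every run stores its result via `--store`. A rough Python equivalent of `make sys n=100` (illustrative only; check the Makefile for the actual recipe):

```python
# Rough equivalent of `make sys n=100`: run the systematic snake repeatedly
# with --store so each run's result ends up in systematic.csv.
# Illustrative only; the Makefile defines the actual recipe.
import subprocess

N_RUNS = 100
for _ in range(N_RUNS):
    subprocess.run(["python", "systematic_snake.py", "--store"], check=True)
```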
To run a participant, just run:
make id=1
The `awstf.py` script looks for the cheapest AWS region in which to launch a `p2.xlarge` spot instance and provides you with a `docker-machine` command to launch such an instance, prepared with an AMI optimized for TensorFlow GPU computing using `nvidia-docker` and Google's official TensorFlow container image.
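For illustration, this kind of cheapest-region spot-price lookup can be done with boto3 roughly as sketched below; this is a minimal sketch, not `awstf.py`'s actual implementation, and the region list is an arbitrary subset.

```python
# Minimal sketch of a cheapest-region spot-price lookup with boto3.
# Not awstf.py's actual implementation; the regions listed are an arbitrary subset.
from datetime import datetime, timezone
import boto3

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]

def cheapest_region(instance_type="p2.xlarge"):
    now = datetime.now(timezone.utc)
    best_region, best_price = None, float("inf")
    for region in REGIONS:
        ec2 = boto3.client("ec2", region_name=region)
        # StartTime == EndTime == now returns only the current price per availability zone.
        history = ec2.describe_spot_price_history(
            InstanceTypes=[instance_type],
            ProductDescriptions=["Linux/UNIX"],
            StartTime=now,
            EndTime=now,
        )["SpotPriceHistory"]
        for entry in history:
            price = float(entry["SpotPrice"])
            if price < best_price:
                best_region, best_price = region, price
    return best_region, best_price

print(cheapest_region())
```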
After launching the instance, copy your files over with `docker-machine scp` (or use git from an SSH session on the instance) and start training:
localhost$ eval $(docker-machine env machinename)
localhost$ docker-machine scp ./yourscript.py machinename:/home/ubuntu/
localhost$ docker-machine ssh machinename
awsremote$ sudo nvidia-docker run -it -v /home/ubuntu:/workdir tensorflow/tensorflow:latest-gpu-py3 python3 /workdir/yourscript.py
Or, if you want to run `evolve_snake.py`, use the following command after copying over the relevant source files:
sudo docker run -it -v /home/ubuntu:/workdir tensorflow/tensorflow:latest-py3 python3 /workdir/evolve_snake.py