The Flatland Challenge is a competition to foster progress in multi-agent reinforcement learning for any re-scheduling problem (RSP) 🔗 https://www.aicrowd.com/challenges/flatland-challenge.
This repository contains our solutions to the problem presented in the challenge, together with the related code.
The provided solution has been developed using a Reinforcement Learning approach, in particular a Dueling Double DQN. The basic ideas come from the following two papers: 📜 http://papers.nips.cc/paper/3964-double-q-learning.pdf 📜 https://arxiv.org/abs/1511.06581.
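For readers unfamiliar with the two techniques, below is a minimal PyTorch sketch of the ingredients they describe: the dueling value/advantage decomposition and the Double DQN target, where the online network selects the greedy action and the target network evaluates it. It assumes a flattened observation vector and a discrete action space; the class and function names are illustrative and are not the repository's exact code.

```python
# Minimal Dueling Double DQN sketch (illustrative, not the repository's exact code).
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, state_size: int, action_size: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_size, hidden), nn.ReLU())
        # Two separate streams: state value V(s) and action advantages A(s, a)
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, action_size))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        x = self.feature(state)
        v = self.value(x)       # shape: (batch, 1)
        a = self.advantage(x)   # shape: (batch, action_size)
        # Combine the streams; subtracting the mean advantage keeps Q identifiable
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online network picks the greedy next action,
    the target network evaluates it. `rewards` and `dones` are float tensors
    of shape (batch,)."""
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * next_q * (1.0 - dones)
```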
The first interesting solutions were developed on the Single Agent case, in which a single train has to learn how to reach its target in a simple environment. Training, tests and results obtained with different techniques are included in the repository and discussed in depth in the Complete Paper.
Different methods have been used to train and test from 3 up to 10 agents in different environments, including malfunctions and different train velocities. For further details, refer to the Complete Paper.
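As a rough illustration of how the multi-agent case can be handled with a single policy shared by all trains, the sketch below performs epsilon-greedy action selection for every agent and returns a Flatland-style `{agent_handle: action}` dictionary (Flatland uses 5 discrete actions). The `q_network`, `obs_dict` and observation preprocessing are assumptions made for the example, not the repository's exact implementation.

```python
# Shared-policy epsilon-greedy action selection for several trains
# (illustrative sketch; obs_dict maps agent handles to flattened observations).
import random
import torch

def select_actions(q_network, obs_dict, epsilon: float, action_size: int = 5):
    """Return a Flatland-style {agent_handle: action} dictionary."""
    actions = {}
    q_network.eval()
    with torch.no_grad():
        for handle, obs in obs_dict.items():
            if random.random() < epsilon:
                actions[handle] = random.randrange(action_size)  # explore
            else:
                q_values = q_network(torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0))
                actions[handle] = int(q_values.argmax(dim=1).item())  # exploit
    q_network.train()
    return actions
```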
A summary of the final results obtained during the project is provided in the tables below:
The medium-size and big-size maps look like the following:
Further details on the dimensions and complexity of the maps, as well as on the metrics involved, can be found in the Complete Paper.
The project has been developed by Giovanni Montanari, Lorenzo Sarti and Alessandro Sitta (me) in the context of the course "Deep Learning" at the University of Bologna.
If you have any questions, feel free to ask: