This Python app uses Q-learning to find the optimal path through a maze. It is built with the Plotly and Dash frameworks.
- Maze editor: create your own custom maze to run the algorithm on.
- Q-learning algorithm: uses reinforcement learning to find the optimal path through the maze.
- Visualization: view the maze and the optimal path through it.
- Animated simulation: watch the Q-learning algorithm converge on the optimal path in real time.
- Obstacle editor: add or remove obstacles along the follower's path.
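The Q-learning core behind an app like this can be sketched as a tabular update over maze cells. The code below is a minimal, self-contained illustration, not the app's actual implementation: the function name `train_q`, the reward values, and the default grid size are all assumptions made for the example. The three UI parameters mentioned later map onto `gamma` (discount factor), `epsilon` (greedy policy), and `alpha` (learning rate).

```python
import random

def train_q(grid_size=4, walls=frozenset(), goal=(3, 3),
            alpha=0.5, gamma=0.9, epsilon=0.1, episodes=1000):
    """Tabular Q-learning on a small grid maze (illustrative sketch only).

    States are (row, col) cells; walls are blocked cells the agent bounces off.
    """
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    q = {}  # (state, action) -> learned value

    def step(state, action):
        r, c = state[0] + action[0], state[1] + action[1]
        nxt = (r, c)
        if not (0 <= r < grid_size and 0 <= c < grid_size) or nxt in walls:
            return state, -1.0   # bump into a wall: stay put, small penalty
        if nxt == goal:
            return nxt, 10.0     # reached the destination
        return nxt, -0.1         # step cost encourages short paths

    for _ in range(episodes):
        state = (0, 0)
        while state != goal:
            if random.random() < epsilon:   # explore a random action
                action = random.choice(actions)
            else:                           # exploit the best known action
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in actions)
            old = q.get((state, action), 0.0)
            # Q-learning update: move the estimate toward
            # (immediate reward + discounted best future value)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q
```

After training, the optimal path is read off by repeatedly taking the highest-valued action from each cell, which is what the static visualization would display.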
- Clone the repository:

```shell
git clone https://github.com/DinithHeshan/q-learning-maze-follower-app.git
```

- Install the required packages:

```shell
pip install -r requirements.txt
```

- Run the app:

```shell
python Maze_Follower_App.py
```
- Select the maze size by using the "Grid Size" input.
- Add or remove walls by clicking on the maze squares.
- Select the destination square by clicking on a square and then clicking the "Set Destination" button.
- Choose the algorithm parameters (discount factor, greedy policy, and learning rate) using the respective inputs.
- Train the algorithm by clicking the "Train" button, choosing the number of episodes, and clicking "Start Training".
- Watch the algorithm train in real time, with the number of completed episodes displayed.
- Select the initial point for the follower by clicking on a square and then clicking the "Set Initial Point" button.
- Add or remove obstacles by clicking on the squares in the "Obstacle Editor" section. (Important: obstacles can only be placed on the path shown in the static simulation. Once an obstacle is added, the next one can only be placed on the new path taken to avoid it.)
- Watch the follower navigate the maze in real-time by clicking the "Simulation" button.
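The obstacle-avoidance behavior described above can be pictured as a greedy rollout over the learned Q-table, where a blocked cell forces the follower onto its next-best action. This is a hypothetical sketch of that behavior, not the app's actual code; the function `follow` and its parameters are invented for illustration.

```python
def follow(q, start, goal, obstacles, grid_size=4, max_steps=50):
    """Greedy rollout over a learned Q-table (illustrative sketch only).

    At each cell the follower ranks actions by learned value and takes the
    best one whose target cell is in bounds and not an obstacle, so a newly
    added obstacle diverts it onto its next-best route.
    """
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    path = [start]
    state = start
    for _ in range(max_steps):
        if state == goal:
            break
        # Rank actions by learned value, best first.
        ranked = sorted(actions, key=lambda a: q.get((state, a), 0.0),
                        reverse=True)
        for a in ranked:
            nxt = (state[0] + a[0], state[1] + a[1])
            if (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size
                    and nxt not in obstacles):
                state = nxt
                break
        path.append(state)
    return path
```

In the app, each new obstacle is placed on the currently displayed path, and the follower's detour around it becomes the path the next obstacle can be placed on, matching the rule noted in the usage steps.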
This app was created by Dinith Heshan as the final-year project of his engineering degree.
This project is licensed under the MIT License - see the LICENSE file for details.