Welcome to Neural Nitro, a Unity-based learning environment designed for training and evaluating reinforcement learning agents in the exciting domain of racing car simulations. Whether you're a student, researcher, or developer, Neural Nitro provides a dynamic platform to explore and experiment with state-of-the-art reinforcement learning algorithms for autonomous driving.
- Racing Simulation: Neural Nitro offers a realistic racing environment, complete with diverse tracks, challenging curves, and varying road conditions for a comprehensive learning experience.
- Customizable Environments: Modify the learning environment by creating new tracks, changing physics settings, and more. Experiment with different scenarios to test the adaptability of your reinforcement learning models.
- Reinforcement Learning Integration: The environment is designed to integrate seamlessly with Unity's ML-Agents toolkit, allowing you to easily train and evaluate your AI agents using algorithms like Proximal Policy Optimization (PPO) and more.
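For reference, PPO training with ML-Agents is driven by a trainer configuration file. Below is a minimal sketch; the behavior name `NeuralNitro` and all hyperparameter values are illustrative assumptions, not this project's actual settings:

```yaml
behaviors:
  NeuralNitro:            # illustrative behavior name, not necessarily the project's
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    max_steps: 500000
```

Training is then typically launched with `mlagents-learn <config>.yaml --run-id=<run-name>` from the ML-Agents Python package, with the Unity editor or a built environment attached.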
Follow these steps to get started with Neural Nitro:
- Clone the Repository: run `git clone https://github.com/Sookeyy-12/NeuralNitro-AI.git`
- Install Dependencies: Ensure that you have Unity installed on your machine. Open the project in Unity and install any additional packages or dependencies as specified in the documentation. NOTE: You will get an error message when you first open the project in Unity. To fix this error, open `Packages\manifest.json` and `Packages\packages-lock.json` and change the path of `com.unity.ml-agents` to point to where you have cloned the ml-agents repository.
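For illustration, a local ML-Agents package is referenced in `Packages\manifest.json` with a `file:` path. The snippet below is a hypothetical sketch; the path is a placeholder you must replace with your own clone location:

```json
{
  "dependencies": {
    "com.unity.ml-agents": "file:C:/path/to/ml-agents/com.unity.ml-agents"
  }
}
```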
- Train and Evaluate: Train your reinforcement learning agents in the Neural Nitro environment and evaluate their performance. Use the visualization tools to analyze the results and iterate on your models. You can also load pretrained models from `Assets\Models`.
The agent's observation space consists of:

- 1 set of Ray Perception Sensors to detect the walls placed around the track.
- 1 set of Ray Perception Sensors to detect the checkpoints placed at regular intervals along the track.
- The speed of the agent.
The action space comprises:

- Throttle: continuous action with a range of 0 to 1.
- Steer: continuous action with a range of -1 to 1.
- Brake: discrete action.
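The action ranges above can be sketched as a small decoding step applied to raw policy outputs. This is a hypothetical Python illustration (the function and field names are assumptions, not code from the project):

```python
from dataclasses import dataclass


@dataclass
class CarAction:
    throttle: float  # continuous, clamped to [0, 1]
    steer: float     # continuous, clamped to [-1, 1]
    brake: bool      # discrete on/off


def clamp(value: float, lo: float, hi: float) -> float:
    """Restrict value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))


def decode_action(raw_throttle: float, raw_steer: float, raw_brake: int) -> CarAction:
    """Map raw policy outputs onto the action ranges described above."""
    return CarAction(
        throttle=clamp(raw_throttle, 0.0, 1.0),
        steer=clamp(raw_steer, -1.0, 1.0),
        brake=bool(raw_brake),
    )
```

Clamping keeps out-of-range network outputs inside the valid control envelope, which mirrors how continuous actions are usually bounded before being applied to the car controller.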
The reward function combines:

- Contact with a checkpoint: a positive reward for touching a checkpoint.
- Contact with a wall: a negative reward for touching a wall.
- Step Reward: a small negative reward for every step taken by the agent.
- Speed Reward: a small coefficient multiplied by the speed of the agent.
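Put together, the per-step reward could be sketched as follows. The coefficient values below are placeholder assumptions for illustration, not the project's tuned numbers:

```python
# Placeholder reward coefficients -- illustrative assumptions only.
CHECKPOINT_REWARD = 1.0    # positive reward for reaching a checkpoint
WALL_PENALTY = -1.0        # negative reward for hitting a wall
STEP_PENALTY = -0.001      # small negative reward per step
SPEED_COEFFICIENT = 0.01   # small coefficient multiplied by speed


def step_reward(hit_checkpoint: bool, hit_wall: bool, speed: float) -> float:
    """Combine the per-step reward terms described above."""
    reward = STEP_PENALTY + SPEED_COEFFICIENT * speed
    if hit_checkpoint:
        reward += CHECKPOINT_REWARD
    if hit_wall:
        reward += WALL_PENALTY
    return reward
```

The small step penalty discourages idling, while the speed term nudges the agent toward faster laps; in practice these coefficients are tuned so that checkpoint progress dominates.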
We welcome contributions to Neural Nitro! If you have ideas for improvements, new features, or bug fixes, please open an issue or submit a pull request.
Neural Nitro is licensed under the MIT License. Feel free to use, modify, and distribute this learning environment for your educational and research purposes.
Happy racing and happy learning with Neural Nitro! 🏎️🚀