A comparative analysis of two deep reinforcement learning algorithms: Soft Actor-Critic (SAC) and Double Deep Q-Network with Prioritized Experience Replay (DDQN-PER).
Our primary goal is to examine how the choice of observation space influences the performance of these algorithms, offering an alternative to end-to-end deep learning studies built on raw sensor data and showing that, for reinforcement learning in an autonomous driving setting, processed data is considerably more effective than raw data.
This is a work in progress, so there are still things to do:
- Modularize the code
- Implement additional DRL algorithms
- Extend the project with a collision-avoidance task (this requires a new reward function)
- Switch to a more advanced experiment-tracking tool (Weights & Biases instead of TensorBoard)
- Built on the Highway-Env simulator.
- The simulated environment mimics a racetrack scenario.
- The vehicle is tasked with lane-keeping and maintaining a target speed on the racetrack.
- Two deep reinforcement learning algorithms (SAC and DDQN-PER) are tested with two observation types (Kinematics and Birdview images); a minimal environment setup is sketched below.
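A minimal setup sketch, assuming Highway-Env's `racetrack-v0` environment and its standard `configure`/`reset` API; the config values shown are illustrative, not the repository's exact settings:

```python
import gymnasium as gym
import highway_env  # noqa: F401 -- importing registers the Highway-Env environments

env = gym.make("racetrack-v0")
env.unwrapped.configure({
    # Both steering and throttle are controlled (the "two_acts" setting below).
    "action": {
        "type": "ContinuousAction",
        "longitudinal": True,
        "lateral": True,
    },
    "duration": 300,  # illustrative episode length
})
obs, info = env.reset()
```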
Both steering and throttle can be controlled. Specifically, the "one_act" file contains the code for agents that control steering only, while the "two_acts" file contains the code for agents that control both steering and throttle. This document focuses on "two_acts".
Action spaces are continuous over [-1, 1]. SAC supports continuous action spaces natively; for DDQN-PER, we discretize the action space into 55 discrete actions (see the sketch below).
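Since DDQN-PER outputs a single discrete index, the 2-D continuous action space must be mapped onto a fixed grid. The factorization below (11 steering values × 5 throttle values = 55 actions) is a hypothetical example; the repository's exact grid may differ:

```python
import itertools
import numpy as np

# Hypothetical grid: 11 steering bins x 5 throttle bins = 55 discrete actions.
STEERING_BINS = np.linspace(-1.0, 1.0, 11)
THROTTLE_BINS = np.linspace(-1.0, 1.0, 5)
DISCRETE_ACTIONS = list(itertools.product(STEERING_BINS, THROTTLE_BINS))

def index_to_action(idx: int) -> np.ndarray:
    """Map a DDQN output index (0..54) to a continuous [steering, throttle] pair."""
    return np.asarray(DISCRETE_ACTIONS[idx], dtype=np.float32)
```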
Two different observation types are tested (see the configuration sketch below):
- Kinematics
- Birdview Images
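The two observation configurations might look as follows, assuming Highway-Env's built-in `Kinematics` and `GrayscaleObservation` types (the latter renders a top-down, birdview-style image); the feature list, image shape, and stack size are illustrative:

```python
KINEMATICS_OBS = {
    "type": "Kinematics",
    "features": ["x", "y", "vx", "vy", "heading"],  # illustrative feature set
}

BIRDVIEW_OBS = {
    "type": "GrayscaleObservation",        # top-down rendering in Highway-Env
    "observation_shape": (128, 64),        # illustrative resolution
    "stack_size": 4,                       # stacked frames capture motion
    "weights": [0.2989, 0.5870, 0.1140],   # RGB-to-grayscale weights
}

# Applied via, e.g., env.unwrapped.configure({"observation": KINEMATICS_OBS})
```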
The reward function is designed to promote:
- Staying on the road
- Proximity to the lane center
- Maintaining the target speed

**Note:** for target-speed maintenance, the reward uses a Gaussian function of the deviation from the target speed (see the sketch below).
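A minimal sketch of such a Gaussian speed reward; `sigma` (the tolerance around the target speed) is a hypothetical parameter, as the repository's exact value is not stated here:

```python
import numpy as np

def speed_reward(speed: float, target_speed: float, sigma: float = 2.0) -> float:
    """Gaussian-shaped reward: 1.0 at the target speed, decaying smoothly away from it."""
    return float(np.exp(-((speed - target_speed) ** 2) / (2.0 * sigma ** 2)))
```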
Terminal conditions (checked as sketched below):
- The agent goes off-road
- The agent reaches the maximum number of steps
- The agent reaches the maximum running time
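A sketch of how these conditions could be checked each step; `vehicle.on_road` is the attribute Highway-Env exposes on the ego vehicle, while the step/time bookkeeping here is an illustrative assumption:

```python
import time

def is_terminal(env, step: int, start_time: float,
                max_steps: int, max_seconds: float) -> bool:
    """Return True if the episode should end under any of the three conditions."""
    off_road = not env.unwrapped.vehicle.on_road           # agent left the road
    out_of_steps = step >= max_steps                       # step budget exhausted
    out_of_time = time.time() - start_time >= max_seconds  # wall-clock budget exhausted
    return off_road or out_of_steps or out_of_time
```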