Autonomous Driving W/ Deep Reinforcement Learning in Lane Keeping

A comparative analysis of two deep reinforcement learning algorithms: Soft Actor-Critic (SAC) and Double Deep Q-Network with Prioritized Experience Replay (DDQN with PER).

Our primary goal was to determine how the choice of observation space influences the performance of these algorithms, to offer an alternative to end-to-end deep learning studies carried out on raw sensor data, and to show that, for reinforcement learning algorithms in autonomous driving, processed observations are considerably more successful than raw data.

NOTE!

This is a work in progress, so there are things to do:

  • Modularize the code
  • Implement other DRL algorithms
  • Expand the project with a collision-avoidance task (this needs a new reward function)
  • Implement a more advanced experiment-tracking tool (W&B instead of TensorBoard)

Simulation Environment

(Screenshot: racetrack simulation environment in Highway-Env)

  • Using Highway-Env simulation.
  • The simulated environment was designed to mimic a racetrack scenario.
  • Vehicle tasked with lane-keeping and maintaining target speed on a racetrack.
  • Tests two deep reinforcement learning algorithms (SAC and DDQN-PER) with two different observation types (Kinematics and Birdview images); a minimal environment-setup sketch follows this list.
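
A minimal setup sketch, assuming the standard Highway-Env "racetrack-v0" scenario and its usual configuration keys; the exact settings used in this repository may differ:

```python
# Hedged sketch (not the repo's exact setup): create the Highway-Env racetrack
# scenario with a continuous-control ego vehicle.
import gymnasium as gym
import highway_env  # noqa: F401  (registers "racetrack-v0")

env = gym.make("racetrack-v0", render_mode="rgb_array")
env.unwrapped.configure({
    "action": {
        "type": "ContinuousAction",   # steering and throttle in [-1, 1]
        "longitudinal": True,
        "lateral": True,
    },
    "observation": {"type": "Kinematics"},
    "duration": 300,                  # assumed episode duration
})
obs, info = env.reset()               # reset applies the new configuration
```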

Action Space

Both steering and throttle can be controlled. The "one_act" file contains code for the case where agents control steering only, and the "two_acts" file contains code for the case where agents control both steering and throttle. This document focuses on "two_acts".

The action space is continuous over [-1, 1]. SAC supports a continuous action space directly; for DDQN-PER, we discretize the action space into 55 different actions (one possible discretization is sketched below).
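
One possible way to build the 55 discrete actions; the exact grid used in the repository is an assumption here, an 11 × 5 split over steering and throttle simply gives 55 combinations:

```python
# Hedged sketch: discretize the continuous [-1, 1] steering/throttle space
# into 55 actions for DDQN-PER. The 11 x 5 grid is an assumption.
from itertools import product

import numpy as np

STEERING_BINS = np.linspace(-1.0, 1.0, 11)   # 11 steering levels
THROTTLE_BINS = np.linspace(-1.0, 1.0, 5)    # 5 throttle levels
DISCRETE_ACTIONS = list(product(STEERING_BINS, THROTTLE_BINS))  # 55 pairs

def to_continuous(action_index: int) -> np.ndarray:
    """Map a DDQN action index (0..54) to a (steering, throttle) pair."""
    return np.asarray(DISCRETE_ACTIONS[action_index], dtype=np.float32)
```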

Observation Spaces

Two different observation types are tested (a configuration sketch follows the figures below):

  1. Kinematics

(Screenshot: Kinematics observation)

  2. Birdview Images

(Screenshot: Birdview image observation)
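
A hedged sketch of the two observation configurations, using standard highway-env observation types ("GrayscaleObservation" stands in for the birdview images); the repository's exact keys and values may differ:

```python
# Assumed kinematics observation: an array of ego/nearby vehicle features.
KINEMATICS_OBS = {
    "type": "Kinematics",
    "features": ["x", "y", "vx", "vy", "heading"],  # assumed feature set
}

# Assumed birdview observation: stacked top-down grayscale frames.
BIRDVIEW_OBS = {
    "type": "GrayscaleObservation",
    "observation_shape": (128, 64),       # assumed image size
    "stack_size": 4,                      # stack frames to capture motion
    "weights": [0.2989, 0.5870, 0.1140],  # RGB -> grayscale weights
}
```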

Reward Function

Designed to Promote:

  • On-road behavior
  • Lane centering (small distance to the lane center)
  • Target-speed maintenance

(Screenshot: reward function)

Note: for target-speed maintenance we use a Gaussian function of the deviation from the target speed (a hedged sketch of such a reward follows).
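
A hedged sketch of a reward in this spirit; the repository's exact formula and coefficients are the ones shown in the screenshot above, and the weights and parameters below are placeholders:

```python
import numpy as np

def reward(on_road: bool, lane_offset: float, speed: float,
           target_speed: float = 25.0, sigma: float = 5.0,
           half_lane_width: float = 2.0) -> float:
    """Encourage staying on the road, centering in the lane, and holding speed."""
    if not on_road:
        return 0.0                                        # no reward off the road
    centering = 1.0 - min(abs(lane_offset) / half_lane_width, 1.0)
    speed_term = np.exp(-((speed - target_speed) ** 2) / (2 * sigma ** 2))
    return 0.5 * centering + 0.5 * speed_term             # assumed equal weighting
```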

Terminal conditions (a sketch follows this list):

  • The agent goes off the road
  • The agent reaches the maximum number of steps
  • The agent reaches the maximum run time
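
A minimal sketch of the termination check implied by these conditions (the limits are assumed values):

```python
def is_terminal(on_road: bool, step: int, elapsed_time: float,
                max_steps: int = 1000, max_time: float = 60.0) -> bool:
    """Episode ends off-road, after max_steps steps, or after max_time seconds."""
    return (not on_road) or step >= max_steps or elapsed_time >= max_time
```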

Deep Networks for Algorithms

For Kinematics Input

(Screenshot: network architecture for the Kinematics input)
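
A hypothetical fully connected network for the kinematics observation, shown here as a DDQN head over the 55 discrete actions; the actual layer sizes are the ones in the screenshot above, and the shapes below are placeholders:

```python
import torch.nn as nn

kinematics_q_net = nn.Sequential(
    nn.Flatten(),             # flatten the (vehicles x features) kinematics array
    nn.Linear(5 * 5, 256),    # assumed input: 5 vehicles x 5 features
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 55),       # one Q-value per discrete action
)
```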

For Birdview Input

(Screenshot: network architecture for the Birdview input)
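
A hypothetical convolutional encoder for the birdview image observation (again, placeholder sizes; the repository's architecture is the one in the screenshot):

```python
import torch.nn as nn

birdview_q_net = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 4 stacked grayscale frames
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(512),                          # infer the flattened size lazily
    nn.ReLU(),
    nn.Linear(512, 55),                          # Q-values for the 55 actions
)
```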

RESULTS

Performance Graphs

(Plots: average reward over the last 100 episodes, episode reward, and episode length)

SAC with KINEMATICS INPUT TRAINING RESULTS

SAC-KINEMATICS.mp4

DDQN-PER with KINEMATICS INPUT TRAINING RESULTS

DDQN-PER-KINAMATICS.mp4
