Behavioural Cloning (End to End Learning) for Self Driving Cars


Overview



Everyone in the world differs in a lot of aspects. When it comes to driving a car, many of us like to drive very smoothly, but some of us love driving dangerously ;). This project is all about cloning someone's driving style.



Task



The specific task in this project is to predict steering angles to drive the car, but we can also use throttle and brake to perfectly clone someone's driving.

Here three images from the respective cameras can be seen.

Input

• Images and Steering Angles

Output

• Steering Angles

For data augmentation I have flipped the images. As can be seen from the horizontal flips, this makes sense because it generalizes the model by changing a right turn into a left turn; it also entirely changes the environment view. One thing to note is that when flipping images horizontally, the steering angles also need to be negated, i.e. from +45° to -45°. The vertical flip is shown for visualization purposes only and is not used in the actual code.
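As a minimal sketch of this step (NumPy only; the function name is illustrative):

    import numpy as np

    def flip_sample(image, steering_angle):
        # Mirror the image left/right and negate the label,
        # e.g. a +45 degree turn becomes a -45 degree turn.
        flipped_image = np.fliplr(image)
        flipped_angle = -steering_angle
        return flipped_image, flipped_angle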



Data Collection



The simulator contains two tracks, and each track can be run in two modes:

  1. Training Mode

In training mode we can collect data from the simulator. Data is recorded in the following format (see the loading sketch after the list):

• Center Camera Images

• Right Camera Images

• Left Camera Images

• Steering Angles

• Throttle

• Brake
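As a sketch, the recorded log can be loaded like this (assuming the simulator's usual driving_log.csv layout; the column names are an assumption):

    import pandas as pd

    # Assumed column layout of the CSV written by the simulator in training mode.
    columns = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']
    log = pd.read_csv('driving_log.csv', names=columns)

    center_images = log['center']   # paths to center camera frames
    steering = log['steering']      # steering angle labels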

Here are some control commands for the simulator:

  1. Steering is controlled via the mouse position instead of the keyboard. This creates better angles for training. Note that the angle is based on the mouse distance. To steer, hold the left mouse button and move left or right. To reset the angle to 0, simply lift your finger off the left mouse button.

  2. You can toggle recording by pressing R; previously you had to click the record button (you can still do that).

  3. When recording is finished, the simulator saves all the captured images to disk at the same time instead of trying to save them periodically while the car is still driving. You can see a save status and a playback of the captured data.

  4. You can take over in autonomous mode. While W or S is held down, you can control the car the same way you would in training mode. This can be helpful for debugging. As soon as W or S is released, autonomous mode takes over again.

  5. Pressing the spacebar in training mode toggles cruise control on and off (effectively presses W for you).

  6. A control screen has been added.

  7. Track 2 was changed from a mountain theme to a jungle theme built with free assets. Note that this track is challenging.

  8. You can use brake input in drive.py by issuing negative throttle values.

  2. Autonomous Mode

In autonomous mode the car waits for steering angles to be predicted. Running drive.py predicts the steering angles, which are fed to the simulator over WebSockets, and then the car drives.
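A minimal sketch of that loop, modeled on the usual socketio/Flask drive.py setup (the event names, payload fields, and port are assumptions):

    import base64
    from io import BytesIO

    import eventlet
    import numpy as np
    import socketio
    from flask import Flask
    from keras.models import load_model
    from PIL import Image

    sio = socketio.Server()
    app = Flask(__name__)
    model = load_model('model.h5')

    @sio.on('telemetry')
    def telemetry(sid, data):
        # Decode the current camera frame sent by the simulator.
        image = np.asarray(Image.open(BytesIO(base64.b64decode(data['image']))))
        steering_angle = float(model.predict(image[None, ...], batch_size=1))
        # A negative throttle here would act as a brake input (see point 8 above).
        sio.emit('steer', data={'steering_angle': str(steering_angle),
                                'throttle': '0.2'})

    # Serve the app over WebSockets on the simulator's port.
    app = socketio.Middleware(sio, app)
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)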

In this project camera images are given as input and steering angles are predicted. Later on I also used the images to predict throttle, so the end result is predicting both steering angles and throttle.



Approach



The following steps were taken to complete the project:

• Collecting the data from Simulator

• Data Augmentation

• Data Preprocessing

• Designing a model

I have trained three different models and compared the results:

• nVidia end to end learning model

• Comma.AI

• Mobile Net

I trained all three models. I was able to test nVidia and Comma.AI, but not Mobile Net: after training that model for a whole night, I was not able to test it due to limited resources (my PC was exhausted!). I am uploading the model here (hoping someone might test it and inform me :) ). In my opinion the Comma.AI model gave the most promising results; it was more accurate and trained in a very short time. Here are the videos for track 1 and track 2.

As can be seen in the videos above, the model correctly predicts the steering angle, but the car drives only at a constant speed and cannot reverse if it gets stuck somewhere. So here comes the fun part! What I did:

  1. Trained a model for steering angles with Comma.AI

  2. Trained another model for throttle values, again with Comma.AI

Now I tested these models with new_drive.py. The video can be found here.
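The core of this two-model setup is just two predictions per frame; a sketch (the model file names follow the run command shown later):

    from keras.models import load_model

    steering_model = load_model('steering_angel_model_.h5')
    throttle_model = load_model('throttle_model.h5')

    def predict_controls(image_batch):
        # Feed the same camera frame to both networks.
        steering_angle = float(steering_model.predict(image_batch, batch_size=1))
        throttle = float(throttle_model.predict(image_batch, batch_size=1))
        return steering_angle, throttle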

The video is a bit shaky but still seems OK. Two improvements can be made here:

• Increasing dropout or using some other regularization technique (see the sketch after this list)

• Recording some more fine-grained data
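For the first point, a hedged sketch of what extra regularization could look like in Keras (the rates and placement are illustrative, not the trained configuration):

    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.regularizers import l2

    head = Sequential()
    head.add(Flatten(input_shape=(2, 16, 64)))          # e.g. the last conv output of the Comma.AI-style model below
    head.add(Dense(100, kernel_regularizer=l2(1e-4)))   # L2 weight decay on the dense layer
    head.add(Dropout(0.5))                              # dropout between the dense layers
    head.add(Dense(1))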

*The main reason for the shaky video is that throttle was not taken care of during the data collection phase. It was hard for me to balance between throttle and angle, and I was more concerned about steering angles during data collection.





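Model summary (nVidia end to end learning model):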
Layer (type) Output Shape Param #
=================================================================
lambda_19 (Lambda) (None, 160, 320, 3) 0
cropping2d_19 (Cropping2D) (None, 90, 320, 3) 0
conv2d_351 (Conv2D) (None, 43, 158, 24) 1824
conv2d_352 (Conv2D) (None, 20, 77, 36) 21636
conv2d_353 (Conv2D) (None, 8, 37, 48) 43248
conv2d_354 (Conv2D) (None, 6, 35, 64) 27712
conv2d_355 (Conv2D) (None, 4, 33, 64) 36928
flatten_9 (Flatten) (None, 8448) 0
dense_35 (Dense) (None, 100) 844900
dense_36 (Dense) (None, 50) 5050
dense_37 (Dense) (None, 10) 510
dense_38 (Dense) (None, 1) 11
=================================================================
Total params: 981,819
Trainable params: 981,819
Non-trainable params: 0
=================================================================
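For reference, a Keras sketch that reproduces the summary above; the layer sizes match the parameter counts, while the normalization Lambda and the activations are assumptions:

    from keras.models import Sequential
    from keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense

    model = Sequential()
    model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
    model.add(Cropping2D(cropping=((50, 20), (0, 0))))  # 160 -> 90 rows
    model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(100))
    model.add(Dense(50))
    model.add(Dense(10))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')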





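Model summary (Comma.AI model):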
Layer (type) Output Shape Param #
=================================================================
lambda_20 (Lambda) (None, 160, 320, 3) 0
cropping2d_20 (Cropping2D) (None, 90, 320, 3) 0
conv2d_356 (Conv2D) (None, 23, 80, 16) 3088
elu_9 (ELU) (None, 23, 80, 16) 0
conv2d_357 (Conv2D) (None, 12, 40, 32) 12832
elu_10 (ELU) (None, 12, 40, 32) 0
conv2d_358 (Conv2D) (None, 6, 20, 48) 38448
elu_11 (ELU) (None, 6, 20, 48) 0
conv2d_359 (Conv2D) (None, 4, 18, 64) 27712
elu_12 (ELU) (None, 4, 18, 64) 0
conv2d_360 (Conv2D) (None, 2, 16, 64) 36928
elu_13 (ELU) (None, 2, 16, 64) 0
flatten_10 (Flatten) (None, 2048) 0
dense_39 (Dense) (None, 100) 204900
elu_14 (ELU) (None, 100) 0
dense_40 (Dense) (None, 50) 5050
elu_15 (ELU) (None, 50) 0
dense_41 (Dense) (None, 10) 510
elu_16 (ELU) (None, 10) 0
dense_42 (Dense) (None, 1) 11
=================================================================
Total params: 329,479
Trainable params: 329,479
Non-trainable params: 0
=================================================================
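Similarly, a Keras sketch matching this summary (the padding and strides are inferred from the output shapes; the normalization is an assumption):

    from keras.models import Sequential
    from keras.layers import Lambda, Cropping2D, Conv2D, ELU, Flatten, Dense

    model = Sequential()
    model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(160, 320, 3)))
    model.add(Cropping2D(cropping=((50, 20), (0, 0))))
    model.add(Conv2D(16, (8, 8), strides=(4, 4), padding='same'))
    model.add(ELU())
    model.add(Conv2D(32, (5, 5), strides=(2, 2), padding='same'))
    model.add(ELU())
    model.add(Conv2D(48, (5, 5), strides=(2, 2), padding='same'))
    model.add(ELU())
    model.add(Conv2D(64, (3, 3)))
    model.add(ELU())
    model.add(Conv2D(64, (3, 3)))
    model.add(ELU())
    model.add(Flatten())
    model.add(Dense(100))
    model.add(ELU())
    model.add(Dense(50))
    model.add(ELU())
    model.add(Dense(10))
    model.add(ELU())
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')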






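Model summary (Mobile Net model):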
Layer (type) Output Shape Param #
=================================================================
lambda_21 (Lambda) (None, 160, 320, 3) 0
cropping2d_21 (Cropping2D) (None, 90, 320, 3) 0
conv2d_361 (Conv2D) (None, 44, 159, 32) 896
batch_normalization_325 (Bat (None, 44, 159, 32) 128
activation_328 (Activation) (None, 44, 159, 32) 0
conv2d_362 (Conv2D) (None, 44, 159, 32) 9248
batch_normalization_326 (Bat (None, 44, 159, 32) 128
activation_329 (Activation) (None, 44, 159, 32) 0
conv2d_363 (Conv2D) (None, 44, 159, 64) 2112
batch_normalization_327 (Bat (None, 44, 159, 64) 256
activation_330 (Activation) (None, 44, 159, 64) 0
conv2d_364 (Conv2D) (None, 21, 79, 64) 36928
batch_normalization_328 (Bat (None, 21, 79, 64) 256
activation_331 (Activation) (None, 21, 79, 64) 0
conv2d_365 (Conv2D) (None, 21, 79, 128) 8320
batch_normalization_329 (Bat (None, 21, 79, 128) 512
activation_332 (Activation) (None, 21, 79, 128) 0
conv2d_366 (Conv2D) (None, 21, 79, 128) 147584
batch_normalization_330 (Bat (None, 21, 79, 128) 512
activation_333 (Activation) (None, 21, 79, 128) 0
conv2d_367 (Conv2D) (None, 21, 79, 128) 16512
batch_normalization_331 (Bat (None, 21, 79, 128) 512
activation_334 (Activation) (None, 21, 79, 128) 0
conv2d_368 (Conv2D) (None, 11, 40, 128) 147584
batch_normalization_332 (Bat (None, 11, 40, 128) 512
activation_335 (Activation) (None, 11, 40, 128) 0
conv2d_369 (Conv2D) (None, 11, 40, 256) 33024
batch_normalization_333 (Bat (None, 11, 40, 256) 1024
activation_336 (Activation) (None, 11, 40, 256) 0
conv2d_370 (Conv2D) (None, 11, 40, 256) 590080
batch_normalization_334 (Bat (None, 11, 40, 256) 1024
activation_337 (Activation) (None, 11, 40, 256) 0
conv2d_371 (Conv2D) (None, 11, 40, 256) 65792
batch_normalization_335 (Bat (None, 11, 40, 256) 1024
activation_338 (Activation) (None, 11, 40, 256) 0
conv2d_372 (Conv2D) (None, 5, 19, 256) 590080
batch_normalization_336 (Bat (None, 5, 19, 256) 1024
activation_339 (Activation) (None, 5, 19, 256) 0
conv2d_373 (Conv2D) (None, 5, 19, 512) 131584
batch_normalization_337 (Bat (None, 5, 19, 512) 2048
activation_340 (Activation) (None, 5, 19, 512) 0
conv2d_374 (Conv2D) (None, 5, 19, 512) 2359808
batch_normalization_338 (Bat (None, 5, 19, 512) 2048
activation_341 (Activation) (None, 5, 19, 512) 0
conv2d_375 (Conv2D) (None, 5, 19, 512) 262656
batch_normalization_339 (Bat (None, 5, 19, 512) 2048
activation_342 (Activation) (None, 5, 19, 512) 0
conv2d_376 (Conv2D) (None, 5, 19, 512) 2359808
batch_normalization_340 (Bat (None, 5, 19, 512) 2048
activation_343 (Activation) (None, 5, 19, 512) 0
conv2d_377 (Conv2D) (None, 5, 19, 512) 262656
batch_normalization_341 (Bat (None, 5, 19, 512) 2048
activation_344 (Activation) (None, 5, 19, 512) 0
conv2d_378 (Conv2D) (None, 5, 19, 512) 2359808
batch_normalization_342 (Bat (None, 5, 19, 512) 2048
activation_345 (Activation) (None, 5, 19, 512) 0
conv2d_379 (Conv2D) (None, 5, 19, 512) 262656
batch_normalization_343 (Bat (None, 5, 19, 512) 2048
activation_346 (Activation) (None, 5, 19, 512) 0
conv2d_380 (Conv2D) (None, 5, 19, 512) 2359808
batch_normalization_344 (Bat (None, 5, 19, 512) 2048
activation_347 (Activation) (None, 5, 19, 512) 0
conv2d_381 (Conv2D) (None, 5, 19, 512) 262656
batch_normalization_345 (Bat (None, 5, 19, 512) 2048
activation_348 (Activation) (None, 5, 19, 512) 0
conv2d_382 (Conv2D) (None, 5, 19, 512) 2359808
batch_normalization_346 (Bat (None, 5, 19, 512) 2048
activation_349 (Activation) (None, 5, 19, 512) 0
conv2d_383 (Conv2D) (None, 5, 19, 512) 262656
batch_normalization_347 (Bat (None, 5, 19, 512) 2048
activation_350 (Activation) (None, 5, 19, 512) 0
conv2d_384 (Conv2D) (None, 3, 10, 512) 2359808
batch_normalization_348 (Bat (None, 3, 10, 512) 2048
activation_351 (Activation) (None, 3, 10, 512) 0
conv2d_385 (Conv2D) (None, 1, 8, 1024) 4719616
batch_normalization_349 (Bat (None, 1, 8, 1024) 4096
activation_352 (Activation) (None, 1, 8, 1024) 0
conv2d_386 (Conv2D) (None, 1, 4, 1024) 9438208
batch_normalization_350 (Bat (None, 1, 4, 1024) 4096
activation_353 (Activation) (None, 1, 4, 1024) 0
conv2d_387 (Conv2D) (None, 1, 4, 1024) 1049600
batch_normalization_351 (Bat (None, 1, 4, 1024) 4096
activation_354 (Activation) (None, 1, 4, 1024) 0
average_pooling2d_12 (Averag (None, 1, 2, 1024) 0
flatten_11 (Flatten) (None, 2048) 0
dropout_21 (Dropout) (None, 2048) 0
dense_43 (Dense) (None, 1) 2049
=================================================================
Total params: 32,505,121
Trainable params: 32,483,233
Non-trainable params: 21,888
=================================================================



Dependencies



This project requires Python 3.5 and the following Python libraries installed:

• Keras

• NumPy

• SciPy

• TensorFlow

• Pandas

• OpenCV

• Matplotlib



How to Run the Model



This repository comes with a trained model which you can test directly using the following command (steering angle only):

• python drive.py model.h5 folder_name*

To run the model with both steering angle and throttle:

• python new_drive.py steering_angel_model_.h5 throttle_model.h5 folder_name*

*Folder where the output images are saved, in order to create a movie.



Conclusion and Future Work


The model works fine, but it needs to be tested on other self-driving simulators and datasets. Also, collecting some finer data for throttle, brake, and steering angle will help the model work in a more realistic way.

*Note: currently the model works with throttle and steering angle, and it can also reverse depending on the situation.
