Vishal1711/Behavioural_cloning

Behavioural Cloning: End to End Learning for Self-driving Cars

The goal of this project was to train an end-to-end deep learning model that lets a car drive itself around a track in a driving simulator.

Project Structure

| File | Description |
| --- | --- |
| IMG | Training data collected on Track 1 using the left, centre and right cameras |
| Drive.py | Flask & Socket.IO server establishing bi-directional client-server communication with the simulator |
| behavioural_cloning.ipynb | Code without data augmentation |
| behavioural_cloning_Final.ipynb | Final code |
| driving_log.csv | Collected data: 'steering', 'throttle', 'reverse', 'speed' |
| model.h5 | Saved model after training |

Data Collection and Balancing:

The provided driving simulator had two different tracks. One of them was used for collecting training data, and the other one — never seen by the model — served as a substitute for a test set.

The driving simulator saves frames from three front-facing "cameras", recording data from the car's point of view, as well as driving statistics such as throttle, speed and steering angle. We use the camera frames as model input and train the model to predict the steering angle in the [-1, 1] range.

I collected a dataset by driving in both directions around track 1. Driving in both directions reduces the directional bias in the collected dataset.
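Why driving both ways helps can be seen with a toy example (synthetic angles, not the real log): a one-way lap of a circuit is biased toward one turning direction, while driving the opposite way mirrors every turn and negates each angle, cancelling the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy steering angles for one direction around a circuit (left-turn biased).
one_way = np.clip(rng.normal(-0.15, 0.2, size=1000), -1, 1)
# Driving the track the opposite way mirrors every turn, negating each angle.
both_ways = np.concatenate([one_way, -one_way])
print(abs(one_way.mean()), abs(both_ways.mean()))
```

The mean steering angle of the combined set is (numerically) zero, whereas the one-way set keeps a clear offset.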

To counter the strong bias toward near-zero (straight-ahead) steering angles, the number of samples per histogram bin was capped at 400.

```python
import numpy as np
import matplotlib.pyplot as plt

num_bins = 25
samples_per_bin = 400
hist, bins = np.histogram(data['steering'], num_bins)
print(bins)
center = (bins[:-1] + bins[1:]) * 0.5
plt.bar(center, hist, width=0.05)
# Horizontal line marking the per-bin cap
plt.plot((np.min(data['steering']), np.max(data['steering'])),
         (samples_per_bin, samples_per_bin))
```
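The plot above only visualises the cap; the balancing itself drops the excess samples per bin. A minimal sketch of that step, using synthetic angles in place of the real `data['steering']` column (the notebook may shuffle and truncate slightly differently):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic steering angles standing in for data['steering'] (an assumption;
# the notebook operates on the real driving_log.csv column).
steering = np.clip(rng.normal(0.0, 0.25, size=6825), -1, 1)

num_bins = 25
samples_per_bin = 400
_, bins = np.histogram(steering, num_bins)

# For each bin, keep at most samples_per_bin randomly chosen samples.
remove_list = []
for j in range(num_bins):
    in_bin = np.where((steering >= bins[j]) & (steering <= bins[j + 1]))[0]
    rng.shuffle(in_bin)
    remove_list.extend(in_bin[samples_per_bin:].tolist())

balanced = np.delete(steering, remove_list)
hist, _ = np.histogram(balanced, bins)
```

After deletion, no bin exceeds the 400-sample cap, flattening the peak around zero steering.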

*(Steering-angle histogram before and after applying the per-bin cap)*

Data Augmentation and Preprocessing

After six laps of driving data we ended up with 6,825 samples, which most likely wouldn't be enough for the model to generalise well. However, as many have pointed out, there are a couple of augmentation tricks that can extend the dataset significantly:

1. Zoomed image
2. Panned image
3. Brightness-changed image
4. Flipped image
5. Augmented image (transforms combined)

Finally, each frame is converted to the YUV colour space, as in the NVIDIA pipeline.
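Two of these transforms can be sketched with plain NumPy (the notebook likely uses OpenCV/imgaug for the full set; the synthetic frame below is only for illustration). Flipping is the interesting one, because the steering label must be negated along with the image:

```python
import numpy as np

def flip_sample(image, steering_angle):
    # Mirroring the frame left-right means the correct steering is negated.
    return image[:, ::-1], -steering_angle

def change_brightness(image, factor):
    # Scale intensities and clip back to the valid 8-bit range.
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

# A synthetic 66x200 frame: left half white, right half black.
frame = np.zeros((66, 200, 3), dtype=np.uint8)
frame[:, :100] = 255

flipped, angle = flip_sample(frame, 0.3)
dimmed = change_brightness(frame, 0.5)
```

Flipping every sample also doubles the dataset at zero collection cost and, like driving both directions, removes left/right bias.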

Model

I started with the model described in the NVIDIA paper and kept simplifying and optimising it while making sure it performed well on both tracks.

*(NVIDIA network architecture diagram)*

This model can be expressed very briefly in Keras.

```python
from keras.models import Sequential
# Convolution2D is an alias for Conv2D in Keras 2
from keras.layers import Convolution2D, Flatten, Dense
from keras.optimizers import Adam

def nvidia_model():
  model = Sequential()
  # Input: 66x200 YUV frames, as in the NVIDIA pipeline
  model.add(Convolution2D(24, kernel_size=(5,5), strides=(2,2), input_shape=(66,200,3), activation='elu'))
  model.add(Convolution2D(36, kernel_size=(5, 5), strides=(2,2), activation='relu'))
  model.add(Convolution2D(48, kernel_size=(5, 5), strides=(2,2), activation='relu'))
  model.add(Convolution2D(64, kernel_size=(3, 3), activation='relu'))
  model.add(Convolution2D(64, kernel_size=(3, 3), activation='relu'))
  model.add(Flatten())
  model.add(Dense(100, activation='relu'))
  model.add(Dense(50, activation='relu'))
  model.add(Dense(10, activation='relu'))
  model.add(Dense(1))  # single output: the steering angle
  optimizer = Adam(learning_rate=0.001)
  model.compile(loss='mse', optimizer=optimizer)
  return model
```
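As a sanity check on the 66x200x3 input shape, we can trace the feature-map sizes through the convolutional stack by hand (valid padding, so `out = (in - kernel) // stride + 1`):

```python
# Feature-map sizes through the conv stack (valid padding):
# out = (in - kernel) // stride + 1
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

h, w = 66, 200
for kernel, stride in [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]:
    h = conv_out(h, kernel, stride)
    w = conv_out(w, kernel, stride)

flattened = h * w * 64  # 64 channels in the last conv layer
print(h, w, flattened)  # 1 18 1152
```

The last convolution thus yields a 1x18x64 volume, so the `Flatten` layer feeds 1,152 features into the dense head.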

Result

The training data was recorded with arrow-key steering in the simulator on track 1, so the predicted driving is not as smooth as it could be. Still, the car manages to drive just fine on both tracks, including track 2, which the model never saw during training.

Track 1:

Self.Driving.Car.Nanodegree.Program.2022-01-03.13-28-58_Trim.3.online-video-cutter.com.1.mp4

Track 2:

Self.Driving.Car.Nanodegree.Program.2022-01-03.13-33-33_Trim.online-video-cutter.com.1.mp4

Clearly this is a very basic example of end-to-end learning for self-driving cars; nevertheless, it should give a rough idea of what these models are capable of, even considering the limitations of training and validating solely in a virtual driving simulator.
