# Behavioral Cloning

## Behavioral Cloning Project
The goals / steps of this project are the following:
- Use the simulator to collect data of good driving behavior
- Build a convolutional neural network in Keras that predicts steering angles from images
- Train and validate the model with a training and validation set
- Test that the model successfully drives around track one without leaving the road
- Summarize the results with a written report
My project includes the following files:
- model.py containing the script to create and train the model
- drive.py for driving the car in autonomous mode
- model.h5 containing a trained convolutional neural network
- writeup_report.md summarizing the results
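Using the Udacity simulator and my drive.py file, the car can be driven autonomously around the track by executing `python drive.py model.h5`.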
The model.py file contains the code for training and saving the convolutional neural network. It shows the pipeline I used for training and validating the model, and it contains comments explaining how the code works.
### Model Architecture and Training Strategy
#### 1. An appropriate model architecture has been employed
I used the NVIDIA model and added dropout layers to avoid overfitting. The architecture is listed below, followed by a minimal Keras sketch:
- Image normalization
- Convolution: 5x5, filters: 24, strides: 2x2, activation: ReLU
- Convolution: 5x5, filters: 36, strides: 2x2, activation: ReLU
- Convolution: 5x5, filters: 48, strides: 2x2, activation: ReLU
- Convolution: 3x3, filters: 64, strides: 1x1, activation: ReLU
- Convolution: 3x3, filters: 64, strides: 1x1, activation: ReLU
- Dropout (0.5)
- Flatten
- Dropout (0.5)
- Fully connected: neurons: 100, activation: ReLU
- Dropout (0.5)
- Fully connected: neurons: 50, activation: ReLU
- Dropout (0.5)
- Fully connected: neurons: 10, activation: ReLU
- Fully connected: neurons: 1 (output)
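The following is a minimal sketch of this architecture in Keras. It uses the Keras 1.x API, matching the layer names in the summary below; the exact normalization formula and the 50/20-row crop split are assumptions inferred from the 160x320x3 input and the 90x320x3 cropped shape.

```python
from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Convolution2D, Dropout, Flatten, Dense

model = Sequential()
# Normalize pixel values to roughly [-0.5, 0.5] (assumed formula).
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
# Crop rows from the top (sky) and bottom (hood); a 50/20 split yields
# the 90-row output shown in the summary below.
model.add(Cropping2D(cropping=((50, 20), (0, 0))))
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(50, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))  # single steering-angle output
```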
Keras reports the following layer shapes and parameter counts for the model:
Layer (type) | Output Shape | Params | Connected to |
---|---|---|---|
lambda_1 (Lambda) | (None, 160, 320, 3) | 0 | lambda_input_1[0][0] |
cropping2d_1 (Cropping2D) | (None, 90, 320, 3) | 0 | lambda_1[0][0] |
convolution2d_1 (Convolution2D) | (None, 43, 158, 24) | 1824 | cropping2d_1[0][0] |
convolution2d_2 (Convolution2D) | (None, 20, 77, 36) | 21636 | convolution2d_1[0][0] |
convolution2d_3 (Convolution2D) | (None, 8, 37, 48) | 43248 | convolution2d_2[0][0] |
convolution2d_4 (Convolution2D) | (None, 6, 35, 64) | 27712 | convolution2d_3[0][0] |
convolution2d_5 (Convolution2D) | (None, 4, 33, 64) | 36928 | convolution2d_4[0][0] |
dropout_1 (Dropout) | (None, 4, 33, 64) | 0 | convolution2d_5[0][0] |
flatten_1 (Flatten) | (None, 8448) | 0 | dropout_1[0][0] |
dropout_2 (Dropout) | (None, 8448) | 0 | flatten_1[0][0] |
dense_1 (Dense) | (None, 100) | 844900 | dropout_2[0][0] |
dropout_3 (Dropout) | (None, 100) | 0 | dense_1[0][0] |
dense_2 (Dense) | (None, 50) | 5050 | dropout_3[0][0] |
dropout_4 (Dropout) | (None, 50) | 0 | dense_2[0][0] |
dense_3 (Dense) | (None, 10) | 510 | dropout_4[0][0] |
dense_4 (Dense) | (None, 1) | 11 | dense_3[0][0] |
Total params: 981,819
Trainable params: 981,819
Non-trainable params: 0
#### 2. Attempts to reduce overfitting in the model
The model contains dropout layers in order to reduce overfitting (model.py lines 126, 128, 130, 132). The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.
#### 3. Model parameter tuning
The model uses an Adam optimizer. From experimentation, I chose a learning rate of 1e-4 (model.py line 144) and a batch size of 64; among 128, 64, and 32, a batch size of 64 gave the lowest validation loss.
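A minimal sketch of the corresponding compile step, assuming the mean squared error loss discussed in the next section:

```python
from keras.optimizers import Adam

# Adam with the tuned learning rate; MSE as the regression loss
# on the steering angle.
model.compile(loss='mse', optimizer=Adam(lr=1e-4))
```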
#### 4. Appropriate training data
Training data was chosen to keep the vehicle driving on the road. I used a combination of center-lane driving and recovery maneuvers from the left and right sides of the road, recorded over two laps of track one and two laps of track two.
For details about how I created the training data, see the next section.
### Model Architecture and Training Strategy
#### 1. Solution Design Approach
My first step was to use a convolutional neural network similar to NVIDIA's model. I thought this model might be appropriate because it is well documented.
In order to gauge how well the model was working, I split my image and steering angle data into a training set and a validation set (20% held out for validation). I found that my first model had a low mean squared error on the training set but a high mean squared error on the validation set, which implied that the model was overfitting.
To combat the overfitting, I initially reduced the number of epochs from 10 to 3, but the overall loss remained high and the car could not drive properly on the track. I then tried a single dropout layer (rate 0.5) between the last convolutional layer and the flatten layer, but the model still overfit. I finally used four dropout layers with a rate of 0.5 each; with this setup, the validation loss stayed below the training loss for all 10 epochs.
The final step was to run the simulator to see how well the car was driving around track one. There were a few spots where the vehicle fell off the track; to improve the driving behavior in these cases, I recorded additional data containing more examples of steering back to the center when the car drifts toward the left or right edge.
In addition, to get better results, I randomly used the left and right camera images and corrected the steering angle according to which camera the image came from. I also augmented the data by flipping images (model.py, line 82) and cropping them (model.py, line 120).
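Here is a minimal sketch of that camera selection and flipping logic. The helper names and the 0.2 steering correction are assumptions for illustration; the report does not state the exact correction value.

```python
import random
import cv2

STEERING_CORRECTION = 0.2  # assumed offset; the exact value is not stated here

def choose_image(center_path, left_path, right_path, steering):
    """Randomly pick the center, left, or right camera image and
    correct the steering angle for the off-center cameras."""
    choice = random.randint(0, 2)
    if choice == 1:
        return cv2.imread(left_path), steering + STEERING_CORRECTION
    if choice == 2:
        return cv2.imread(right_path), steering - STEERING_CORRECTION
    return cv2.imread(center_path), steering

def random_flip(image, steering):
    """Flip the image horizontally half the time, negating the angle."""
    if random.random() < 0.5:
        return cv2.flip(image, 1), -steering
    return image, steering
```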
I also used a generator to produce data one batch at a time instead of loading the whole dataset into memory (model.py, line 47).
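A minimal generator sketch, reusing the hypothetical helpers from the previous sketch; each sample is assumed to be a (center, left, right, steering) tuple:

```python
import random
import numpy as np

def generator(samples, batch_size=64):
    """Yield batches of images and angles indefinitely, so the full
    dataset never has to sit in memory at once."""
    while True:
        random.shuffle(samples)
        for offset in range(0, len(samples), batch_size):
            images, angles = [], []
            for center_path, left_path, right_path, steering in samples[offset:offset + batch_size]:
                image, angle = choose_image(center_path, left_path, right_path, steering)
                image, angle = random_flip(image, angle)
                images.append(image)
                angles.append(angle)
            yield np.array(images), np.array(angles)
```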
At the end of the process, the vehicle is able to drive autonomously around the track without leaving the road.
#### 2. Final Model Architecture
The final model architecture (model.py lines 114-136) consists of a convolutional neural network with the following layers and layer sizes:
- Image normalization
- Convolution: 5x5, filters: 24, strides: 2x2, activation: ReLU
- Convolution: 5x5, filters: 36, strides: 2x2, activation: ReLU
- Convolution: 5x5, filters: 48, strides: 2x2, activation: ReLU
- Convolution: 3x3, filters: 64, strides: 1x1, activation: ReLU
- Convolution: 3x3, filters: 64, strides: 1x1, activation: ReLU
- Dropout (0.5)
- Flatten
- Dropout (0.5)
- Fully connected: neurons: 100, activation: ReLU
- Dropout (0.5)
- Fully connected: neurons: 50, activation: ReLU
- Dropout (0.5)
- Fully connected: neurons: 10, activation: ReLU
- Fully connected: neurons: 1 (output)
#### 3. Creation of the Training Set & Training Process
To capture good driving behavior, I first recorded two laps on track one using center lane driving. Here is an example image of center lane driving:
I then recorded the vehicle recovering from the left and right sides of the road back to the center, so that the vehicle would learn to steer back to the middle when it drifts away.
Then I repeated this process on track two in order to get more data points.
To augment the data set, I also flipped images and negated the corresponding angles, reasoning that this would generate more data and help the model generalize. After the collection process, I had 20,186 data points.
I finally randomly shuffled the data set and put 20% of the data into a validation set.
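A minimal sketch of the split and training loop, assuming the model and generator sketched earlier; the batch size (64) and epoch count (10) come from the sections above.

```python
from sklearn.model_selection import train_test_split

# Shuffle and hold out 20% of the samples for validation.
train_samples, validation_samples = train_test_split(samples, test_size=0.2)

train_gen = generator(train_samples, batch_size=64)
valid_gen = generator(validation_samples, batch_size=64)

# Keras 1.x training loop: one pass over the training samples per epoch.
model.fit_generator(train_gen,
                    samples_per_epoch=len(train_samples),
                    validation_data=valid_gen,
                    nb_val_samples=len(validation_samples),
                    nb_epoch=10)
model.save('model.h5')
```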
I used this training data to train the model. The validation set helped determine whether the model was over- or underfitting. The ideal number of epochs was 10. Here is the chart of training and validation loss:
For track two, I increased the dataset to around 30,000 samples, focusing more on the corners of the track, but the car was still unable to make two of the corners. That model is saved as model1.h5, and here is its loss chart: