Deep-PI-Car

Techs: Raspberry Pi 3B+, Google Coral Edge TPU USB accelerator, SunFounder PiCar kit, USB camera, lithium batteries / Python, NOOBS (Raspbian)
Department: Electrical & Computer Engineering

Deep-Pi-Car is an autonomous prototype car that performs tasks such as lane navigation and object detection to help make roads safer. Using deep learning models running on a Raspberry Pi single-board computer with a Coral Edge TPU accelerator, the car has been trained to stay between the lanes of a road while detecting oncoming objects and reacting to them accordingly.

To make the car steer between the lanes, we wrote a lane-detection pipeline in OpenCV (sketches of the main stages follow below). First, each frame captured by the camera is converted from the RGB to the HSV color space, which gives a more uniform representation of color. Since the lanes were blue, we isolated blue in the image with a color mask, then applied the Canny edge detector to find the edges of the lane markings. Because only the lane lines matter, we cropped the frame to a region of interest. Finally, a Hough transform detected the individual line segments, which we averaged into two definite lane lines, and steering algorithms kept the car between them.

The car could now steer between the lane lines while recording video of how it needs to behave. The videos are saved to a shared directory we created on the Raspberry Pi, which can be accessed from our computer. Frames were then extracted from the videos to build a dataset with the images as features and the corresponding steering angles as labels. Since the dataset was not large enough, we performed image augmentation, zooming, panning, and flipping the images to enlarge it.

We then trained on this dataset in Google Colab, using the Nvidia model's architecture, a convolutional neural network with a total of 252,419 trainable parameters. Training took around 4 hours and finished with an R-squared value of 99.08 percent. The trained model was loaded onto the Raspberry Pi and tested, confirming that the car stayed between the lanes.

Next, image classification was added for object detection, covering traffic lights and traffic signs. For this we used a quantized MobileNet v2 SSD model trained on COCO, a dataset of 80 common object classes that includes the ones we required on the road. Since the Raspberry Pi has limited processing power, we connected the Google Edge TPU accelerator, which let real-time inference run at a much higher rate. The model was adapted through transfer learning: rather than training from scratch, we started from the already-trained network and retrained only its final layers for our classes, which took around 6 hours. Training produced a checkpoint, which was converted into a tflite file through the Edge TPU web compiler; this was necessary because the accelerator only runs compiled tflite models. Loaded onto the Raspberry Pi, the model detected the objects accurately.

Finally, the lane-navigation and object-detection models were integrated, so that the prototype car follows the lane lines, detects objects, and reacts to them accordingly, all at the same time.
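The lane-detection stage can be summarized in a short OpenCV sketch. The HSV bounds, Canny thresholds, and Hough parameters below are illustrative values for blue tape lanes, not the exact numbers used in the project:

```python
import cv2
import numpy as np

def detect_lane_segments(frame):
    # Convert from BGR (OpenCV's default) to HSV for uniform color thresholding
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Isolate the blue lane tape; exact bounds depend on lighting and camera
    lower_blue = np.array([90, 120, 0])
    upper_blue = np.array([150, 255, 255])
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # Detect edges of the masked lane markings
    edges = cv2.Canny(mask, 200, 400)

    # Keep only the bottom half of the frame as the region of interest,
    # since that is where the lane lines appear
    height, width = edges.shape
    roi_mask = np.zeros_like(edges)
    polygon = np.array([[(0, height), (0, height // 2),
                         (width, height // 2), (width, height)]], np.int32)
    cv2.fillPoly(roi_mask, polygon, 255)
    cropped = cv2.bitwise_and(edges, roi_mask)

    # Probabilistic Hough transform returns candidate line segments,
    # which are then averaged into left and right lane lines
    return cv2.HoughLinesP(cropped, rho=1, theta=np.pi / 180,
                           threshold=10, minLineLength=8, maxLineGap=4)
```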
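Frame extraction is a simple loop over each recorded video. Here we assume the steering angle for every frame was logged alongside the recording; the filename scheme that embeds the label is purely illustrative:

```python
import cv2

def extract_frames(video_path, angles, out_dir):
    """Split a recorded driving video into labeled training images."""
    cap = cv2.VideoCapture(video_path)
    i = 0
    while cap.isOpened() and i < len(angles):
        ok, frame = cap.read()
        if not ok:
            break
        # Encode the steering angle in the filename so each image carries
        # its own label (feature = frame, label = angle)
        cv2.imwrite(f"{out_dir}/frame_{i:05d}_{angles[i]:03d}.png", frame)
        i += 1
    cap.release()
```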
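The three augmentations might look like the following minimal sketch. The zoom factor, pan offsets, and the 45-135 degree steering convention (90 meaning straight ahead) are assumptions, not values from the project:

```python
import cv2
import numpy as np

def zoom(image, factor=1.2):
    # Crop the center and scale back up, simulating a closer view
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return cv2.resize(image[y0:y0 + ch, x0:x0 + cw], (w, h))

def pan(image, dx=20, dy=10):
    # Shift the image with an affine transform to mimic a camera offset
    h, w = image.shape[:2]
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(image, m, (w, h))

def flip(image, angle):
    # Mirroring the road flips the required steering direction; here we
    # assume angles run 45-135 degrees with 90 meaning straight ahead
    return cv2.flip(image, 1), 180 - angle
```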
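The Nvidia architecture can be reproduced in a few lines of Keras. The layer sizes and the 66x200x3 input follow Nvidia's published end-to-end driving network; the dropout rate and learning rate below are common defaults rather than values taken from this project:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam

def nvidia_model():
    # Five convolutional layers followed by four fully connected layers,
    # regressing a single steering angle from one camera frame
    model = Sequential([
        Conv2D(24, (5, 5), strides=(2, 2), activation='elu',
               input_shape=(66, 200, 3)),
        Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Dropout(0.2),
        Flatten(),
        Dense(100, activation='elu'),
        Dense(50, activation='elu'),
        Dense(10, activation='elu'),
        Dense(1),  # predicted steering angle
    ])
    # Mean squared error, since steering-angle prediction is a regression task
    model.compile(optimizer=Adam(learning_rate=1e-3), loss='mse')
    return model
```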
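Running the compiled model on the Coral accelerator uses the TensorFlow Lite runtime with the Edge TPU delegate. The model filename below is illustrative; the output ordering follows the standard SSD post-processing convention of boxes, class ids, and scores:

```python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the Edge TPU-compiled tflite model through the Coral delegate
interpreter = Interpreter(
    model_path='mobilenet_v2_ssd_coco_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect_objects(frame):
    # The quantized SSD expects a fixed-size uint8 input (300x300 for this model)
    _, h, w, _ = input_details[0]['shape']
    resized = cv2.resize(frame, (w, h))
    interpreter.set_tensor(input_details[0]['index'],
                           np.expand_dims(resized, axis=0))
    interpreter.invoke()
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]
    classes = interpreter.get_tensor(output_details[1]['index'])[0]
    scores = interpreter.get_tensor(output_details[2]['index'])[0]
    return boxes, classes, scores
```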
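Finally, the integrated per-frame loop might look like the sketch below. Here detect_objects comes from the previous sketch, while compute_steering_angle, react_to_objects, and steer are hypothetical stand-ins for the project's own lane-following, object-handling, and motor-control routines:

```python
import cv2

camera = cv2.VideoCapture(0)  # USB camera attached to the Pi
while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    angle = compute_steering_angle(frame)            # CNN lane-navigation model
    boxes, classes, scores = detect_objects(frame)   # Edge TPU SSD (sketch above)
    react_to_objects(classes, scores)                # e.g. stop at a red light
    steer(angle)                                     # hypothetical motor control
camera.release()
```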

Project Team Members

Registration#   Name                       Email
FA17-BEE-139    Taimour Taj Shami          shami.taimour@gmail.com
FA17-BEE-180    Muhammad Abdullah Rizwan   blue.army.349@gmail.com
FA17-BEE-070    Laiba Iftikhar             Laibaiftikhar163@gmail.com
FA17-BEE-184    Filza Tahir                dup
