Real-Time Traffic Sign Detection and Recognition System for Assistive Driving

Date of Award

2020

Document Type

Thesis

Degree Name

Master of Science in Electronics Engineering

Department

Electronics, Computer, and Communications Engineering

First Advisor

Patricia Angela R. Abu, PhD

Second Advisor

Mr. Carlos M. Oppus

Third Advisor

Rosula SJ. Reyes, PhD

Abstract

The technology behind Advanced Driver Assistance Systems has been continuously advancing in recent years, made possible by artificial intelligence and computer vision. In an Automatic Traffic Sign Detection and Recognition System, accurate detection and recognition of traffic signs amid complex traffic environments and varying weather and lighting conditions remain a major challenge. This study implements a traffic sign detection and recognition system that automatically detects and recognizes traffic signs, then provides a voice alert conveying the meaning of the sign. Four pre-processing and segmentation methods are evaluated considering the trade-off between accuracy and processing speed: the Bilateral Filtering Processing Method, Color Constancy Algorithm, Relative RGB Segmentation, and Shadow and Highlight Invariant Method. Hough transform is used for segmentation of the region of interest. In the recognition stage, nine machine learning algorithms and one deep learning algorithm are evaluated, namely K-Nearest Neighbor, Support Vector Machine, Gaussian Process, Decision Tree, Random Forest, Multilayer Perceptron, AdaBoost, Gaussian Naive Bayes, Quadratic Discriminant, and Convolutional Neural Network. For the machine learning algorithms, the Histogram of Oriented Gradients is extracted from candidate traffic signs as the key feature for classification.

This study determined that the Shadow and Highlight Invariant Method provided the best trade-off between detection success rate and processing speed in the pre-processing and segmentation stage. The Convolutional Neural Network not only provided the best trade-off between classification accuracy and processing speed in the recognition stage but also performed best even with a smaller amount of training data.
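To illustrate the feature-extraction step the abstract describes, the following is a minimal sketch of a Histogram of Oriented Gradients computation in plain NumPy. It is not the thesis's implementation (which uses OpenCV and scikit-learn); the cell size, bin count, and normalization scheme here are assumptions chosen for brevity.

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Illustrative HOG sketch: one unsigned-orientation histogram per
    cell, gradient-magnitude weighted, flattened and L2-normalized.
    `gray` is a 2-D float array whose sides are multiples of `cell`."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # central differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation

    feats = []
    h, w = gray.shape
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    n = np.linalg.norm(f)
    return f / n if n > 0 else f
```

The resulting vector (here 9 bins x 16 cells = 144 values for a 32x32 patch) would then be fed to any of the nine classifiers the study compares.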
The embedded system implementation used an Nvidia Jetson Nano interfaced with a Waveshare IMX219-77 camera, an Nvidia 7" LCD, and a generic speaker, and was programmed in Python with the OpenCV, scikit-learn, and PyTorch libraries. It runs at an adaptive frame rate of 8-12 frames per second when no sign is detected, dropping to approximately 1 frame per second when a traffic sign is detected.
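The adaptive frame-rate behavior described above could be paced with a rule like the following sketch. The function name, parameter names, and the specific representative rates are assumptions; the thesis only reports the observed 8-12 FPS (no detection) and roughly 1 FPS (sign detected) ranges.

```python
def frame_delay(sign_detected, scan_fps=10.0, alert_fps=1.0):
    """Hypothetical pacing rule: scan the video stream near 10 FPS while
    no sign is present, and slow to ~1 FPS once a sign is detected so the
    heavier recognition and voice-alert stage can run on each frame."""
    fps = alert_fps if sign_detected else scan_fps
    return 1.0 / fps  # seconds to wait before grabbing the next frame
```

In a capture loop, the returned delay would be passed to a sleep or timer between frame grabs, switching modes whenever the detector's output changes.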
