VMV COMMERCE, JMT ARTS &
JJP SCIENCE COLLEGE
(Institute of Computer Studies and Research)
Wardhaman Nagar, Nagpur - 08.
(MCA Institute of Computer Studies and Research)
Documentation for Final project Report
1) * Cover Page (Paste on Bound Record) and Front Page (First page of report).
2) *College Certificate
3) Company Certificate
4) *Declaration
5) Acknowledgement
6) Company Profile (Collect from Company)
7) Index Page
*Contents ——————————
VMV COMMERCE, JMT ARTS &
JJP SCIENCE COLLEGE
(Institute of Computer Studies and Research)
Wardhaman Nagar, Nagpur-08.
(MCA Institute of Computer Studies and Research)
A
Project Report
On
“Traffic Sign Recognition Using Machine Learning Techniques”
Submitted to
Rashtrasant Tukadoji Maharaj Nagpur University,
Nagpur
In partial fulfillment of the requirement of
Master of Computer Applications
M.C.A-II SEM-IV
Developed & Submitted by
Rajshree Rajkumar Hande
Under the Guidance of
Prof. M. D. Manmode
2022-23
VMV COMMERCE, JMT ARTS &
JJP SCIENCE COLLEGE
(Institute of Computer Studies and Research)
Wardhaman Nagar, Nagpur-08.
(MCA Institute of Computer Studies and Research)
DECLARATION
We hereby declare that the project entitled
“Traffic Sign Recognition Using Machine Learning Techniques”
has been completed by us in partial fulfillment of the M.C.A. - II (Master
of Computer Applications), Sem.-IV degree examination as
prescribed by Rashtrasant Tukadoji Maharaj Nagpur University,
Nagpur, and has not been submitted for any other examination and
does not form part of any other course undergone by us.
Name of Student(s) Student Signature(s)
Rajshree R. Hande
Place: Nagpur
Date:
Academic Year 2022 - 23
VMV COMMERCE, JMT ARTS &
JJP SCIENCE COLLEGE
(Institute of Computer Studies and Research)
Wardhaman Nagar, Nagpur-08.
(MCA Institute of Computer Studies and Research)
CERTIFICATE
This is to certify that the project entitled “Traffic Sign Recognition
Using Machine Learning Techniques” by Rajshree Rajkumar Hande,
in partial fulfillment of the M.C.A. - II (Master of Computer
Applications), Sem.-IV degree examination, has not been submitted
for any other examination and does not form part of any other
course undergone by the candidate.
It is further certified that he/she/they have completed
his/her/their project as prescribed by Rashtrasant Tukadoji
Maharaj Nagpur University, Nagpur.
Prof. M. D. Manmode Dr. V. R. Bhedi
Guide In-charge & HOD
Internal Examiner External Examiner
Place: Nagpur
Date:
Academic Year 2022 - 23
ACKNOWLEDGEMENT
With immense pride and a sense of gratitude, we take this golden
opportunity to express our sincere regards to the honorable In-
charge and HOD, Dr. V. R. Bhedi, of the institute for providing us
the facilities and inspiration to gather professional knowledge and
material, without which it would have been impossible to
complete this hard task.
We are extremely thankful to our Project Guide, Prof. M. D.
Manmode, for her guidance throughout the project. Our
sincere regards to her for giving us her outstanding guidance,
enthusiastic suggestions and invaluable encouragement, which
helped us to complete the project.
We would be failing in our duty if we did not thank the non-
teaching staff of the college for their cooperation.
We would like to thank all who helped us in making this project
a complete and successful one.
--Name of Projectees--
(MCA)
INDEX
SR.NO  CONTENTS                                              PAGE NO.
1.     INTRODUCTION
       1.1 EXISTING SYSTEM AND NEED FOR NEW SYSTEM
       1.2 PROPOSED SYSTEM
       1.3 SCOPE OF WORK
2.     PROBLEM DEFINITION
       2.1 REVIEW OF RELATED WORK
       2.2 PROBLEM DEFINITION
3.     ANALYSIS & DESIGN
       3.1 USER REQUIREMENTS
       3.2 FRONT END & BACK END
       3.3 SYSTEM FLOW
       3.4 MODULE DESCRIPTION AND FLOW
       3.5 DATA FLOW DIAGRAM (DFD)
       3.6 ENTITY RELATIONSHIP DIAGRAM (ERD)
       3.7 TABLE DESIGN
4.     IMPLEMENTATION & RESULTS
       4.1 INPUT FORMS WITH DATA
       4.2 OUTPUT REPORTS WITH DATA
       4.3 SAMPLE CODE
5.     TESTING AND MAINTENANCE
       5.1 TESTING
       5.2 VALIDATION
       5.3 MAINTENANCE
6.     USER MANUAL
       6.1 USER MANUAL
7.     CONCLUSION AND FUTURE SCOPE
       7.1 CONCLUSION
       7.2 LIMITATIONS AND FUTURE SCOPE
8.     REFERENCES
1 Blank Page at the end.
1. INTRODUCTION
In this era of Artificial Intelligence, humans are becoming more dependent on
technology. With the enhanced technology, multinational companies like
Google, Tesla, Uber, Ford, Audi, Toyota, Mercedes-Benz, and many more are
working on automating vehicles. They are trying to make more accurate
autonomous or driver less vehicles. We all might know about self-driving
cars, where the vehicle itself behaves like a driver and does not need any
human guidance to run on the road. This is not wrong to think about the
safety aspects—a chance of significant accidents from machines. But no
machines are more accurate than humans. Researchers are running many
algorithms to ensure 100% road safety and accuracy. One such algorithm is
Traffic Sign Recognition that we talk about in this blog.
When we go on the road, we see various traffic signs like traffic signals, turn
left or right, speed limits, no passing of heavy vehicles, no entry, children
crossing, etc., that we need to follow for a safe drive. Likewise, autonomous
vehicles also have to interpret these signs and make decisions to achieve
accuracy. The task of recognizing which class a traffic sign belongs to
is called traffic sign classification.
In this deep learning project, we build a model that classifies the
traffic sign present in an image into one of many categories using a
convolutional neural network (CNN) and the Keras library.
1.1 EXISTING SYSTEM AND NEED FOR NEW SYSTEM
Traffic signs carry much of the information necessary for successful driving - they
describe the up-to-date traffic situation, define right-of-way, prohibit or permit
certain actions or directions, warn about risk factors, etc. Road signs also help
with routing the vehicle when they are identified using computer vision.
Road conditions in real scenes are very complicated, so it has been very hard
for researchers to make such systems efficient. Existing systems detect and
categorize signs using standard computer vision methods, but they take a long time.
The existing methods used for designing traffic sign recognition models are:
1) K-means clustering
2) Lidar and vision based
3) Video streaming
DISADVANTAGES OF EXISTING METHODS
I. False Detection
II. Redundancy (inter pixel redundancy)
III. Less efficiency (compared to our model)
IV. Cost related issues
1.2 PROPOSED SYSTEM
In this project, we build a CNN block in which predictions are directly
performed across multiple feature levels. We use the publicly available
GTSRB dataset from Kaggle. Our approach to putting
together this traffic sign classification model is discussed in four steps:
Step 1: Explore the dataset. Our ‘train’ folder contains 43 sub-folders, each
representing a different class, numbered from 0 to 42.
With the help of the os module, we iterate over all the classes and
append the images and their respective labels to the data and labels lists.
Step 2: Build a CNN model. To classify the pictures into their respective
categories, we'll build a CNN (Convolutional Neural Network) model. CNNs are
best suited for image classification purposes.
Step 3: Train and validate the model. After building the model architecture,
we train the model using model.fit().
Step 4: Test our model with the test dataset. Our dataset contains a test folder
and a test.csv file that holds the details of the image paths and
their respective class labels. We then build a graphical interface for
our traffic sign classifier with Tkinter. Tkinter is a GUI toolkit in
the standard Python library. Here you upload a picture and classify the image.
1.3 SCOPE OF WORK
This project introduces sign detection and recognition. It describes the
characteristics, requirements, and difficulties of road sign
identification and recognition. It shows the convolutional neural
network technique used for verification and classification of road signs. Our
project introduces a traffic sign detection and recognition system that accurately
estimates the position and exact boundary of traffic signs using a convolutional
neural network (CNN). In this Python project, you'll build a deep neural
network model that will classify traffic signs present in an image into
different categories. With this model, the system can read and understand
traffic signs, which is a very important task for all autonomous vehicles.
2.PROBLEM DEFINITION
2.1 REVIEW OF RELATED WORK
Many scholars have recently conducted studies on the topic of traffic sign
recognition. The authors apply the notion of convolutional neural networks
to recognize and classify the photos, according to the research article. The
image is first pre-processed to highlight the most significant details. The
Hough Transform is then used to detect and locate the areas.
The suggested system detects and recognizes traffic sign images in real time.
A freshly developed database of 24 different traffic signs collected from
random roadsides in Saudi Arabia is also a contribution to this work. The
photographs were taken from various perspectives and included various
characteristics and circumstances. A total of 2718 photos were collected to
create the Saudi Arabian Traffic and Road Signs collection (SATRS-2018). To
get the best recognition rates, the CNN architecture was employed in various
settings. The proposed CNN architecture attained a precision of 100 percent
in experiments, which is higher than that of similar previous studies.
This study provides an intelligent transportation system design based on
current requirements and technology to address the bottleneck issues that
plague intelligent transportation research and to investigate the ITS and
research focus's future prospects as new technologies become accessible.
This research presents a framework for detecting and categorizing various
sorts of traffic signs in photos. Road sign detection and classification and
recognition are the two key elements of the technique. Color-based
segmentation is used in the first stage to determine whether or not a traffic
sign is present. The sign will be highlighted, normalized in size, and
categorized if it is present. For classification, a neural network is used. Stop,
No Entry, Give Way, and Speed Limit signs are among the four types of traffic
indicators used for evaluation. For training purposes, 300 sets of photos are
employed, 75 sets for each kind. Testing is done with 200 photos. The
detection rate is above 90%, and the recognition accuracy is over 88 percent,
according to the test findings.
In this study, a deep learning-based system for recognizing road traffic signs
is created, which has a lot of potential in the development of ADAS and
autonomous vehicles. The system architecture is designed to extract key elements
from photographs of traffic signs in order to categorize them. To conduct the
recognition, the presented method employs a modified LeNet-5 network to
extract a deep representation of traffic signs. It is made up of a Convolutional
Neural Network (CNN) with all convolutional layers' output connected to a
Multilayer Perceptron (MLP). The training uses the German Traffic Sign
Dataset and produces good results in terms of traffic sign recognition.
2.2 PROBLEM DEFINITION
We always come across incidents where drivers' over-speeding or
lack of vision leads to major accidents. In winter, the risk of road accidents
increases by 40-50% because of the reduced visibility of traffic signs. So here in
this report, we will be implementing traffic sign recognition using
a Convolutional Neural Network. It will be very useful in automatic driving
vehicles.
When you go on the road, you see various traffic signs like traffic signals, turn
left or right, speed limits, zebra crossing, u-turn, no passing of heavy vehicles,
no entry, children crossing, etc., that you need to follow for safety purpose.
Likewise, autonomous or self-driving cars must interpret these signboards
and make decisions to achieve maximum accuracy. The task of recognizing
which class a traffic signboard belongs to is called traffic sign recognition.
In the world of Artificial Intelligence and advancement in technologies, many
researchers and big companies like Tesla, Uber, Google, Mercedes-Benz,
Toyota, Ford, Audi, etc. are working on autonomous vehicles and self-driving
cars. So, for achieving accuracy in this technology, the vehicles should be able
to interpret traffic signs and make decisions accordingly.
3.ANALYSIS & DESIGN
3.1 USER REQUIREMENTS
User Requirements Definition: The user requirement for this system is to make
the system fast, flexible, less prone to error, reduce expenses, and save time.
The recognition of traffic signs is an important study area in computer vision. It
can be divided into two types of technologies: traffic-sign detection and
recognition. The accuracy of detection will directly influence the ultimate results
of identification. Traffic signs convey important signals about vehicle safety and
display current traffic conditions, define road rights, prohibit and permit certain
behaviors and driving routes, and display dangerous messages, among other
things. They can also assist drivers in determining the state of the road and thus
the best driving routes.
Humans are becoming more reliant on technology in this age of artificial
intelligence. Multinational corporations such as Google, Tesla, Uber, Ford, Audi,
Toyota, Mercedes-Benz, and others are attempting to automate vehicles using
improved technology. They're attempting to develop more precise autonomous
or driverless automobiles. You may have heard about self-driving cars, in which
the vehicle acts as a driver and does not require human intervention to operate
on the road.
It is reasonable to consider the safety aspects—the possibility of serious machine
mishaps. However, no machine is more precise than humans. Many algorithms
are being developed by researchers to assure complete road safety and accuracy.
When driving on the road, you will encounter numerous traffic signs such as
traffic lights, turn left or right, speed restrictions, no passing of heavy vehicles, no
entering, children crossing, and so on, which you must obey in order to drive
safely. In order to reach accuracy, autonomous vehicles must also analyze these
indicators and make decisions. Traffic signs classification is the process of
determining which class a traffic sign belongs to.
3.2 FRONT END & BACK END
FRONT END
Set up Anaconda, Jupyter Notebook for the front-end
Figure .1 Front -End
Desktop Frameworks for Windows App Development - Python has a massive
range of libraries and tools that give you access to pre-written components and
reduce development time. In addition, there is a wide ecosystem of
solutions such as Pandas and NumPy for analysis.
Figure .2 Frameworks
What is Tkinter ?
Tkinter is one of the most popular programming frameworks for desktop
apps and GUIs. It is Python's standard interface to the Tk GUI toolkit. Because
of its simple UI and UX, development beginners can
easily use it for Python desktop applications. Tkinter has an abundant source
of code samples and reference books, which makes it a popular choice. In addition, it
has various widgets, like labels, buttons, and almost everything that you
might need in your Python desktop development process and GUI
designing. Tkinter is available for Windows, Mac, Linux, and mobile.[9]
Figure .3 Tkinter
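As an illustration, the following minimal Tkinter sketch only opens a window with a label and a button; it is independent of the full classifier GUI listed in Section 4.3.
import tkinter as tk

# Minimal Tkinter sketch: a window with a label and a quit button.
root = tk.Tk()
root.title('Tkinter demo')
root.geometry('300x100')

label = tk.Label(root, text='Hello from Tkinter')
label.pack(pady=10)

button = tk.Button(root, text='Quit', command=root.destroy)
button.pack()

root.mainloop()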
Important Libraries must be Imported:
Pandas – Used to load the data frame in a 2D array format.
NumPy – NumPy arrays are very fast and can perform large computations in a
very short time.
Matplotlib – This library is used to draw visualizations.
OpenCV – This library is mainly focused on image processing and handling.
Tensorflow – It provides a range of functions to achieve complex functionalities
with single lines of code.
BACK END
Install TensorFlow and Keras for the back-end
Figure .4 Back-end
Back-end development is the writing of code and designing of the database
and server. The back-end logic begins with one of two types of applications:
a. Single page application: Involves dynamically rewriting a single page
without reloading the whole page from the server. It only requires APIs to update
the page.
b. Multi-page application: Requires the page to be loaded again and again
from the server at the user’s request.
The back-end needs extensive code and logic but there are a few frameworks
that make it easier to develop. The choice of framework is up to the
developer and also depends on the tech stack required for the project. Some
frameworks are Node.js, Flask, Django, Laravel, Swift, and Flutter.
Because back-end software can run on a vast array of different systems,
there are a wide variety of tools and skills that back-end developers may
learn. Here are the major languages: Python can be used for either front-end
or back-end development. That said, its approachable syntax and
widespread server-side use make Python a core programming language for
back-end development. Front-end Python is not unheard of, it’s just not
usually preferred. Python is open source and works with the Flask and Django
web frameworks. [10]
Datasets and algorithms are also a key component of back-end coding,
so SQL-based database technologies become invaluable for writing
database queries and creating models.
What is Pandas ?
Pandas is a Python library used for working with data sets. It has functions for
analyzing, cleaning, exploring, and manipulating data.[11]
Figure .5 Pandas Mechanism
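As an illustration, the following sketch shows how Pandas could be used to inspect the test annotations, assuming the Test.csv layout (Path and ClassId columns) referred to in Section 5.1.
import pandas as pd

# Load the test annotations (assumes GTSRB Test.csv with 'Path' and 'ClassId' columns).
test_df = pd.read_csv('Test.csv')
print(test_df.shape)                              # number of rows and columns
print(test_df['ClassId'].value_counts().head())   # most frequent classes
print(test_df[['Path', 'ClassId']].head())        # first few image paths and labels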
What is Keras ?
Keras is an open-source, high-level neural network library written in
Python that is capable of running on top of Theano, TensorFlow, or CNTK. It was
developed by a Google engineer, Francois Chollet. It is made user-
friendly, extensible, and modular to facilitate faster experimentation with
deep neural networks. It not only supports Convolutional Networks and
Recurrent Networks individually but also their combination. It does not handle
low-level computations itself; instead, it relies on a backend library.
The backend library acts as a high-level API wrapper for the low-level API,
which lets it run on TensorFlow, CNTK, or Theano.[12]
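As an illustration, the following minimal Keras sketch defines a much smaller CNN than the one in Section 4.3, only to show the Sequential style used throughout this project.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

# Minimal Keras sketch: a tiny CNN for 30x30 RGB traffic-sign images (43 classes).
model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(30, 30, 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(43, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()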
3.3 SYSTEM FLOW
A system flowchart is a valuable presentation aid because it shows how the
system's major components fit together and interact. In effect, it serves as a
system roadmap.
Figure .6 System Flow Diagram
3.4 MODULE DESCRIPTION AND FLOW
This project has only one user module. This module has its own specification.
Main function of the user module: in this module, the user can upload a picture
and, after choosing the image, classify it.
We develop this project using Python. We develop a model
which can detect the traffic sign. We build a deep neural network model which
will identify which traffic sign is present in the image. We also use the PIL
library to open image content into an array. Our dataset contains a train folder,
which carries one folder per class, and a test folder with a file holding
the details of the image paths and their respective class
labels.
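As a small example of the PIL step, the following sketch opens one image (a hypothetical file name), resizes it to 30x30, and converts it to a NumPy array, as done in the training script.
from PIL import Image
import numpy as np

# Open one traffic-sign image (hypothetical path), resize it to 30x30 and
# convert it to a NumPy array.
image = Image.open('sample_sign.png').resize((30, 30))
array = np.array(image)
print(array.shape)   # e.g. (30, 30, 3) for an RGB image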
Figure .7 Flowchart
3.5 DATA FLOW DIAGRAM (DFD)
A data flow diagram (DFD) is a graphical representation of the "flow" of data
through an information system, modeling its process aspects. Often they are a
preliminary step used to create an overview of the system which can later be
elaborated. DFDs can also be used for the visualization of data processing
(structured design).
A DFD shows what kind of information will be input to and output from the
system, where the data will come from and go to, and where the data will be
stored. It does not show information about the timing of processes, or information
about whether processes will operate in sequence or in parallel.
Figure .8 Data Flow Diagram
3.6 ENTITY RELATIONSHIP DIAGRAM (ERD)
In software engineering, an entity–relationship model (ER model) is a data model
for describing the data or information aspects of a business domain or its process
requirements, in an abstract way that lends itself to ultimately being implemented
in a database such as a relational database. The main components of ER models
are entities (things) and the relationships that can exist among them.
An Entity-relationship model (ERM) is an abstract and conceptual representation
of data. ER modeling is a DB modeling method, used to produce a type of
conceptual schema of a system. Diagrams created by this process are called ER
diagrams.
The ER diagram of the project “Traffic Sign Classification Using CNN and Keras
in Python” is shown in Fig. 9.
Figure 9. ER diagram
3.7 TABLE DESIGN
About Database :
The German Traffic Sign Benchmark is a multi-class, single-image classification
challenge held at the International Joint Conference on Neural Networks (IJCNN)
2011. Our benchmark has the following properties:
 Single-image, multi-class classification problem
 More than 40 classes
 More than 50,000 images in total
 Large, lifelike database
The details of the dataset used are as follows:
In this project, we used Python and TensorFlow to classify traffic signs. Dataset
used: German Traffic Sign Dataset. This dataset has more than 50,000 images of
43 classes. The model achieved 96.06% testing accuracy.
4. IMPLEMENTATION & RESULTS
4.1 INPUT FORMS WITH DATA
4.2 OUTPUT REPORTS WITH DATA
4.3 SAMPLE CODE
gui.py
import tkinter as tk
from tkinter import filedialog
from tkinter import *
from PIL import ImageTk, Image
import numpy
#load the trained model to classify sign
from keras.models import load_model
model = load_model('traffic_classifier.h5')
#dictionary to label all traffic signs class.
classes = { 1:'Speed limit (20km/h)',
2:'Speed limit (30km/h)',
3:'Speed limit (50km/h)',
4:'Speed limit (60km/h)',
5:'Speed limit (70km/h)',
6:'Speed limit (80km/h)',
7:'End of speed limit (80km/h)',
8:'Speed limit (100km/h)',
9:'Speed limit (120km/h)',
10:'No passing',
11:'No passing veh over 3.5 tons',
12:'Right-of-way at intersection',
13:'Priority road',
14:'Yield',
15:'Stop',
16:'No vehicles',
17:'Veh > 3.5 tons prohibited',
18:'No entry',
19:'General caution',
20:'Dangerous curve left',
21:'Dangerous curve right',
22:'Double curve',
23:'Bumpy road',
24:'Slippery road',
25:'Road narrows on the right',
26:'Road work',
27:'Traffic signals',
28:'Pedestrians',
29:'Children crossing',
30:'Bicycles crossing',
31:'Beware of ice/snow',
32:'Wild animals crossing',
33:'End speed + passing limits',
34:'Turn right ahead',
35:'Turn left ahead',
36:'Ahead only',
37:'Go straight or right',
38:'Go straight or left',
39:'Keep right',
40:'Keep left',
41:'Roundabout mandatory',
42:'End of no passing',
43:'End no passing veh > 3.5 tons' }
#initialise GUI
top=tk.Tk()
top.geometry('800x600')
top.title('Traffic sign classification')
top.configure(background='#CDCDCD')
label=Label(top,background='#CDCDCD', font=('arial',15,'bold'))
sign_image = Label(top)
def classify(file_path):
    global label_packed
    image = Image.open(file_path)
    image = image.resize((30,30))
    image = numpy.array(image)
    image = numpy.expand_dims(image, axis=0)
    print(image.shape)
    pred = model.predict_classes([image])[0]
    sign = classes[pred+1]
    print(sign)
    label.configure(foreground='#011638', text=sign)

def show_classify_button(file_path):
    classify_b = Button(top, text="Classify Image", command=lambda: classify(file_path), padx=10, pady=5)
    classify_b.configure(background='#364156', foreground='white', font=('arial',10,'bold'))
    classify_b.place(relx=0.79, rely=0.46)

def upload_image():
    try:
        file_path = filedialog.askopenfilename()
        uploaded = Image.open(file_path)
        uploaded.thumbnail(((top.winfo_width()/2.25), (top.winfo_height()/2.25)))
        im = ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image = im
        label.configure(text='')
        show_classify_button(file_path)
    except:
        pass

upload = Button(top, text="Upload an image", command=upload_image, padx=10, pady=5)
upload.configure(background='#364156', foreground='white', font=('arial',10,'bold'))
upload.pack(side=BOTTOM, pady=50)
sign_image.pack(side=BOTTOM, expand=True)
label.pack(side=BOTTOM, expand=True)
heading = Label(top, text="Know Your Traffic Sign", pady=20, font=('arial',20,'bold'))
heading.configure(background='#CDCDCD', foreground='#364156')
heading.pack()
top.mainloop()
traffic_sign.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import tensorflow as tf
from PIL import Image
import os
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout
data = []
labels = []
classes = 43
cur_path = os.getcwd()
#Retrieving the images and their labels
for i in range(classes):
    path = os.path.join(cur_path, 'gtsrb-german-traffic-sign/Train', str(i))
    images = os.listdir(path)
    for a in images:
        try:
            image = Image.open(os.path.join(path, a))
            image = image.resize((30,30))
            image = np.array(image)
            #sim = Image.fromarray(image)
            data.append(image)
            labels.append(i)
        except:
            print("Error loading image")
#Converting lists into numpy arrays
data = np.array(data)
labels = np.array(labels)
print(data.shape, labels.shape)
#Splitting training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(data, labels,
test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
#Converting the labels into one hot encoding
y_train = to_categorical(y_train, 43)
y_test = to_categorical(y_test, 43)
#Building the model
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu',
input_shape=X_train.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))
#Compilation of the model
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
epochs = 15
history = model.fit(X_train, y_train, batch_size=32, epochs=epochs,
validation_data=(X_test, y_test))
model.save("my_model.h5")
#plotting graphs for accuracy
plt.figure(0)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()
plt.figure(1)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
#testing accuracy on test dataset
from sklearn.metrics import accuracy_score
y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values
data = []
for img in imgs:
    image = Image.open(img)
    image = image.resize((30,30))
    data.append(np.array(image))
X_test = np.array(data)
pred = model.predict_classes(X_test)
#Accuracy with the test data
print(accuracy_score(labels, pred))
ts.py
import os
import pandas as pd
from imageio import imread  # scipy.misc.imread was removed in newer SciPy; imageio.imread is used instead
import math
import numpy as np
import cv2
import keras
import seaborn as sns
from keras.layers import Dense, Dropout, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import BatchNormalization
from keras.optimizers import Adam
from keras.models import Sequential
### LOADING DATASET
data_dir = os.path.abspath('gtsrb-german-traffic-sign/Train')
os.path.exists(data_dir)
### Function to resize the images using open cv
def resize_cv(im):
    return cv2.resize(im, (64, 64), interpolation=cv2.INTER_LINEAR)
### Loading datset
list_images = []
output = []
for dir in os.listdir(data_dir):
    if dir == '.DS_Store':
        continue
    inner_dir = os.path.join(data_dir, dir)
    csv_file = pd.read_csv(os.path.join(inner_dir, "GT-" + dir + '.csv'), sep=';')
    for row in csv_file.iterrows():
        img_path = os.path.join(inner_dir, row[1].Filename)
        img = imread(img_path)
        img = img[row[1]['Roi.X1']:row[1]['Roi.X2'], row[1]['Roi.Y1']:row[1]['Roi.Y2'], :]
        img = resize_cv(img)
        list_images.append(img)
        output.append(row[1].ClassId)
### Plotting the dataset
fig = sns.distplot(output, kde=False, bins = 43, hist = True,
hist_kws=dict(edgecolor="black", linewidth=2))
fig.set(title = "Traffic signs frequency graph",
xlabel = "ClassId",
ylabel = "Frequency")
input_array = np.stack(list_images)
train_y = keras.utils.np_utils.to_categorical(output)
### Randomizing the dataset
randomize = np.arange(len(input_array))
np.random.shuffle(randomize)
x = input_array[randomize]
y = train_y[randomize]
### Splitting the dataset in train, validation, test set
split_size = int(x.shape[0]*0.6)
train_x, val_x = x[:split_size], x[split_size:]
train1_y, val_y = y[:split_size], y[split_size:]
split_size = int(val_x.shape[0]*0.5)
val_x, test_x = val_x[:split_size], val_x[split_size:]
val_y, test_y = val_y[:split_size], val_y[split_size:]
### Building the model
hidden_num_units = 2048
hidden_num_units1 = 1024
hidden_num_units2 = 128
output_num_units = 43
epochs = 10
batch_size = 16
pool_size = (2, 2)
input_shape = Input(shape=(32, 32,3))
model = Sequential([
Conv2D(16, (3, 3), activation='relu', input_shape=(64,64,3),
padding='same'),
BatchNormalization(),
Conv2D(16, (3, 3), activation='relu', padding='same'),
BatchNormalization(),
MaxPooling2D(pool_size=pool_size),
Dropout(0.2),
Conv2D(32, (3, 3), activation='relu', padding='same'),
BatchNormalization(),
Conv2D(32, (3, 3), activation='relu', padding='same'),
BatchNormalization(),
MaxPooling2D(pool_size=pool_size),
Dropout(0.2),
Conv2D(64, (3, 3), activation='relu', padding='same'),
BatchNormalization(),
Conv2D(64, (3, 3), activation='relu', padding='same'),
BatchNormalization(),
MaxPooling2D(pool_size=pool_size),
Dropout(0.2),
Flatten(),
Dense(units=hidden_num_units, activation='relu'),
Dropout(0.3),
Dense(units=hidden_num_units1, activation='relu'),
Dropout(0.3),
Dense(units=hidden_num_units2, activation='relu'),
Dropout(0.3),
Dense(units=output_num_units, input_dim=hidden_num_units,
activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4), metrics=['accuracy'])
### Training the model
trained_model_conv = model.fit(train_x.reshape(-1,64,64,3), train1_y,
epochs=epochs, batch_size=batch_size, validation_data=(val_x,
val_y))
### Predicting the class
pred = model.predict_classes(test_x)
### Evaluating the model
model.evaluate(test_x, test_y)
5. TESTING AND MAINTENANCE
5.1 TESTING
Model testing
A folder named "test" is available in our dataset; inside that, we have the main
working comma-separated file called "test.csv". It comprises two things: the
image paths and their respective class labels. We can use the pandas Python
library to extract the image paths with their corresponding labels. Next, we
resize our images to 30×30 pixels and create a numpy
array filled with image data. To understand how the model predicts the
actual labels, we import accuracy_score from sklearn.metrics. At
last, we call the Keras model.save() method to keep our trained model.
#testing accuracy on test dataset
from sklearn.metrics import accuracy_score
y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values
data = []
for img in imgs:
    image = Image.open(img)
    image = image.resize((30,30))
    data.append(np.array(image))
X_test=np.array(data)
pred = model.predict_classes(X_test)
#Accuracy with the test data
from sklearn.metrics import accuracy_score
print(accuracy_score(labels, pred))
Output
0.9532066508313539
model.save('traffic_classifier.h5')  # to save the trained model
5.2 VALIDATION
The final step involves validation, which determines whether the software functions
as the user expected.
What is Validation Testing?
Validation testing is the process of ensuring if the tested and developed software
satisfies the client /user needs. The business requirement logic or scenarios have
to be tested in detail. All the critical functionalities of an application must be
tested here.
As a tester, it is always important to know how to verify the business logic or
scenarios that are given to you. One method that helps in the detailed evaluation
of the functionalities is the validation process.
Whenever you are asked to perform a validation test, it carries great
responsibility, as you need to test all the critical business requirements based on
the user's needs. There should not be even a single miss on the requirements
asked by the user. Hence a sound knowledge of validation testing is very
important.
Validation
To train our model, we use the model.fit() method after the model
architecture has been successfully built. With a batch size of 32,
we got 95% accuracy on the training set, and the accuracy stabilized after 15 epochs.
epochs = 15
history = model.fit(X_train, y_train, batch_size=32, epochs=epochs,
                    validation_data=(X_test, y_test))
5.3 MAINTENANCE
As the number of computer-based systems grew, libraries of computer software
began to expand. In-house developed projects produced tens of thousands of
program source statements. Software products purchased from outside
added hundreds of thousands of new statements. A dark cloud appeared on the
horizon: all of these programs, all of those source statements, had to be
corrected when faults were detected, modified as user requirements changed, or
adapted to new hardware that was purchased. These activities were collectively
called software maintenance.
The maintenance phase focuses on change that is associated with error
correction, adaptations required as the software's environment evolves, and
changes due to enhancements brought about by changing customer
requirements. Four types of changes are encountered during the maintenance
phase.
 Correction
 Adaptation
 Enhancement
 Prevention
Correction:
Even with the best quality assurance activities, it is likely that the customer will
uncover defects in the software. Corrective maintenance changes the software
to correct defects.
Maintenance is a set of software Engineering activities that occur after software
has been delivered to the customer and put into operation. Software
configuration management is a set of tracking and control activities that began
when a software project begins and terminates only when the software is taken
out of the operation.
Only about 20 percent of all maintenance work is spent "fixing mistakes". The
remaining 80 percent is spent adapting existing systems to changes in their
external environment, making enhancements requested by users, and
reengineering an application for use.
ADAPTATION:
Over time, the original environment (e.g., CPU, operating system, business
rules, external product characteristics) for which the software was developed is
likely to change. Adaptive maintenance results in modification to the software to
accommodate changes to its external environment.
ENHANCEMENT:
As software is used, the customer/user will recognize additional functions that
will provide benefit. Perfective maintenance extends the software beyond its
original functional requirements.
PREVENTION :
Computer software deteriorates due to change, and because of this,
preventive maintenance, often called software reengineering, must be
conducted to enable the software to serve the needs of its end users. In
essence, preventive maintenance makes changes to computer programs so
that they can be more easily corrected, adapted, and enhanced. Software
configuration management (SCM) is an umbrella activity that is applied
throughout the software process.
6. USER MANUAL
6.1 USER MANUAL
1. Load The Data.
2. Dataset Summary & Exploration
3. Data Preprocessing.
a) Shuffling.
b) Gray scaling.
c) Local Histogram Equalization.
d) Normalization.
4. Design Model Architecture.
5. Model Training and Evaluation.
6. Testing the Model Using the Test Set.
7. Testing the Model on New Images.
Step 1: Load The Data
Download the dataset from here: https://github.com/rahulsonone1234/Traffic-
Sign-Recognition. This is a pickled dataset in which we've already resized the
images to 32x32, giving three .p files of 32x32 resized images:
train.p: The training set.
test.p: The testing set.
valid.p: The validation set.
Use Python pickle to load the data.
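A minimal loading sketch is given below, assuming each .p file is a dictionary with the 'features' and 'labels' keys described in Step 2.
import pickle

# Load the three pickled subsets (train.p, valid.p, test.p).
with open('train.p', 'rb') as f:
    train = pickle.load(f)
with open('valid.p', 'rb') as f:
    valid = pickle.load(f)
with open('test.p', 'rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test,  y_test  = test['features'],  test['labels']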
Step 2: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
 'features' is a 4D array containing raw pixel data of the traffic sign images,
(num examples, width, height, channels).
 'labels' is a 1D array containing the label/class id of the traffic sign. The file
signnames.csv contains id -> name mappings for each id.
 'sizes' is a list containing tuples, (width, height), representing the original
width and height of the image.
 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of
a bounding box around the sign in the image.
First, you have to use numpy to report the number of images in each subset, in
addition to the image size and the number of unique classes:
Number of training examples: 34799
Number of testing examples: 12630
Number of validation examples: 4410
Image data shape = (32, 32, 3)
Number of classes = 43
Then, you use matplotlib to plot sample images from each subset. And finally, you
use numpy to plot a histogram of the count of images in each unique class.
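A short sketch of this summary step is given below, assuming the arrays loaded in Step 1.
import numpy as np
import matplotlib.pyplot as plt

# Basic summary of the loaded subsets.
print("Training examples:", X_train.shape[0])
print("Testing examples:", X_test.shape[0])
print("Validation examples:", X_valid.shape[0])
print("Image data shape:", X_train.shape[1:])
n_classes = len(np.unique(y_train))
print("Number of classes:", n_classes)

# Histogram of the number of images per class in the training set.
plt.hist(y_train, bins=n_classes)
plt.xlabel('Class id')
plt.ylabel('Number of images')
plt.show()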
Step 3: Data Preprocessing
In this step, you will apply several preprocessing steps to the input images to
achieve the best possible results. You will use the following preprocessing
techniques (a short sketch follows the list):
 Shuffling.
 Grayscaling.
 Local Histogram Equalization.
 Normalization.
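A possible sketch of these four steps is given below; it assumes the X_train and y_train arrays from Step 1 and uses OpenCV's CLAHE as one way of doing local histogram equalization.
import cv2
import numpy as np
from sklearn.utils import shuffle

# 1) Shuffling: remove any ordering in the data.
X_train, y_train = shuffle(X_train, y_train, random_state=0)

# 2) Grayscaling: drop colour information.
gray = np.array([cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) for img in X_train])

# 3) Local histogram equalization (CLAHE): improve local contrast.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4, 4))
equalized = np.array([clahe.apply(img) for img in gray])

# 4) Normalization: scale pixel values to roughly [-1, 1] and restore the channel axis.
normalized = (equalized.astype(np.float32) - 128.0) / 128.0
normalized = normalized[..., np.newaxis]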
Step 4: Design A Model Architecture
In this step, you have to design and implement a deep learning model that
learns to recognize traffic signs from our dataset German Traffic Sign Dataset.
You have to use Convolutional Neural Networks to classify the images in this
dataset. The reason behind choosing ConvNets is that they are designed to
recognize visual patterns directly from pixel images with minimal preprocessing.
They automatically learn hierarchies of invariant features at every level from
data. You will implement two of the most famous ConvNets. Our goal is to reach
an accuracy of +95% on the validation set.
Step 5: Model Training and Evaluation
In this step, we will train our model using normalized images, then we'll compute
softmax cross entropy between logits and labels to measure the model's error
probability.
Now, we'll run the training data through the training pipeline to train the model.
Before each epoch, we'll shuffle the training set.
After each epoch, we measure the loss and accuracy of the validation set.
And after training, we will save the model.
A low accuracy on the training and validation sets implies underfitting. A high
accuracy on the training set but low accuracy on the validation set implies
overfitting.
Step 6: Testing the Model using the Test Set
Now, you have to use the testing set to measure the accuracy of the model over
unknown examples. We've been able to reach a Test accuracy of 95%. A
remarkable performance.
Now we'll plot the confusion matrix to see where the model actually fails.
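One way to compute and plot it is sketched below, assuming y_true holds the true test labels and y_pred the predicted class ids.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Confusion matrix sketch: rows are true classes, columns are predicted classes.
cm = confusion_matrix(y_true, y_pred)
# Normalize per true class (assumes every class appears at least once in the test set).
cm = cm.astype(np.float32) / cm.sum(axis=1, keepdims=True)

plt.imshow(cm, cmap='viridis')
plt.colorbar()
plt.xlabel('Predicted class')
plt.ylabel('True class')
plt.title('Normalized confusion matrix')
plt.show()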
We observe some clusters in the confusion matrix above. It turns out that the
various speed limits are sometimes misclassified among themselves. Similarly,
traffic signs with a triangular shape are misclassified among themselves. We can
further improve the model using hierarchical CNNs to first identify broader
groups (like speed signs) and then have CNNs classify finer features (such as
the actual speed limit).
Step 7: Testing the Model on New Images
In this step, you can use the model to predict the traffic sign type of 5 random
images of German traffic signs from the web and observe our model's performance
on these images. Number of new testing examples: 5
For instance, we have easy-to-predict signs like the "Stop" and the "No entry"
signs. The two signs are clear and belong to classes where the model can predict
with high accuracy.
On the other hand, we have signs belonging to classes where the model has poorer
accuracy, like the "Speed limit" sign, because, as stated above, the various
speed limits are sometimes misclassified among themselves, and the
"Pedestrians" sign, because traffic signs with a triangular shape are misclassified
among themselves.
Notice from the top 5 softmax probabilities that the model has very high confidence
(100%) when it comes to predicting simple signs, like the "Stop" and the "No entry"
signs, and even high confidence when predicting simple triangular signs in a very
clear image, like the "Yield" sign.
On the other hand, the model's confidence is slightly reduced with a more complex
triangular sign in a "pretty noisy" image. In the "Pedestrians" sign image, we have
a triangular sign with a shape inside it, and the copyright watermark in the image
adds some noise; the model was able to predict the true class, but with 80%
confidence.
And for the "Speed limit" sign, we can observe that the model accurately
predicted that it is a "Speed limit" sign, but was somewhat confused between the
different speed limits. However, it predicted the true class in the end.
The VGGNet model was able to predict the right class for each of the 5 new test
images. Test Accuracy = 100.0%. In all cases, the model was very certain (80% -
100%).
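A sketch of how the top 5 softmax probabilities can be obtained is given below, assuming new_images is the preprocessed batch of 5 web images and model is the trained network.
import numpy as np

# Top-5 softmax probabilities for each new image.
probabilities = model.predict(new_images)                       # softmax output, shape (5, 43)
top5_ids = np.argsort(probabilities, axis=1)[:, -5:][:, ::-1]   # best 5 class ids per image
for i, ids in enumerate(top5_ids):
    print("Image", i)
    for class_id in ids:
        print("  class", class_id, "probability",
              round(float(probabilities[i, class_id]), 3))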
7. CONCLUSION AND FUTURE SCOPE
7.1 CONCLUSION
In this report, a traffic sign recognition method based on deep learning
with the help of a convolutional neural network (CNN) and Keras is proposed,
which mainly targets different traffic signs. By using image pre-processing on
the dataset from Kaggle, followed by traffic sign detection, recognition, and
classification, this method can effectively detect and identify traffic signs.
With the help of these results, we can identify the traffic sign. It helps the user in
two ways: while the car is in manual mode it displays the result on the
dashboard screen, and while the car is set to automatic mode it helps the car
to drive safely by identifying the traffic signs. The test results show that the
accuracy of this method is very high.
7.2 LIMITATIONS AND FUTURE SCOPE
Traffic Sign Recognition (TSR) aims to detect the location of traffic signs in
digital images or video frames and assign a specific classification. TSR
methods basically make use of visual information such as the shape and color of
traffic signs. However, conventional TSR algorithms face drawbacks
in real-time tests, such as being easily restricted by driving conditions,
including lighting, camera angle, obstruction, driving speed, and so on. It is
also very difficult to achieve multi-target detection, and slow recognition makes
it easy to miss visual objects.
Our algorithm detects signs continuously, which can lead to detections
even when there are no signs in the area and hence a continuous flow of
output. This results in false or unnecessary detections. This could be
improved by increasing the threshold value for detecting a sign. The overall
performance can also be improved and customized with the help of more
datasets from different countries.
8. REFERENCES
[1] Thakur Pankaj and D. Manoj E. Patil “Recognition Of Traffic Symbols Using
K-Means And Shape Analysis” International Journal of Engineering Research
& Technology (IJERT) Vol. 2 Issue 5, May - 2013 ISSN: 2278-0181.
[2] Zhou, L., & Deng, Z. (2014). “LIDAR and vision-based real-time traffic sign
detection and recognition algorithm for intelligent vehicle”. 17th
International IEEE Conference on Intelligent Transportation Systems (ITSC).
doi:10.1109/itsc.2014.6957752.
[3] Zakir, U., Edirishinghe, E. A., & Hussain, A. (2012). Road Sign Detection
and Recognition from Video Stream Using HSV, Contour let Transform and
Local Energy Based Shape Histogram. Lecture Notes in Computer Science,
411– 419. doi:10.1007/978- 3-642-3156.
[4] Ying Sun, Pingshu Ge, Dequan Liu, "Traffic Sign Detection and Recognition
Based on Convolutional Neural Network", IEEE, 2019.
[5] Canyong Wang, "Research and Application of Traffic Sign Detection and
Recognition Based on Deep Learning", IEEE, 2018.
[6] Md. Abdul Alim Sheikh, Alok Kole, Tanmoy Maity, "Traffic Sign Detection and
Classification Using Color Feature and Neural Network", IEEE, 2018.
[7] Danyah A. Alghmgham, Ghazanfar Latif, Jaafar Alghazo, Loay Alzubaidi,
"Autonomous Traffic Sign (ATSR) Detection and Recognition Using Deep CNN",
ScienceDirect, 2019.
[8] Saad Albawi, Tareq Abed Mohammed, Saad Al-Zawi, "Understanding of a
Convolutional Neural Network", IEEE, 2018.
[9] https://externlabs.com/blogs/python-frameworks-for-desktop-applications/
[10] https://andrejgajdos.com/single-page-application-vs-multiple-page-application/
[11] https://www.w3schools.com/python/pandas/default.asp
[12] https://www.javatpoint.com/keras
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized Group
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 

Rajshree1.pdf

INDEX

1. INTRODUCTION
   1.1 EXISTING SYSTEM AND NEED FOR NEW SYSTEM
   1.2 PROPOSED SYSTEM
   1.3 SCOPE OF WORK
2. PROBLEM DEFINITION
   2.1 REVIEW OF RELATED WORK
   2.2 PROBLEM DEFINITION
3. ANALYSIS & DESIGN
   3.1 USER REQUIREMENTS
   3.2 FRONT END & BACK END
   3.3 SYSTEM FLOW
   3.4 MODULE DESCRIPTION AND FLOW
   3.5 DATA FLOW DIAGRAM (DFD)
   3.6 ENTITY RELATIONSHIP DIAGRAM (ERD)
   3.7 TABLE DESIGN
4. IMPLEMENTATION & RESULTS
   4.1 INPUT FORMS WITH DATA
   4.2 OUTPUT REPORTS WITH DATA
   4.3 SAMPLE CODE
5. TESTING AND MAINTENANCE
   5.1 TESTING
   5.2 VALIDATION
   5.3 MAINTENANCE
6. USER MANUAL
   6.1 USER MANUAL
7. CONCLUSION AND FUTURE SCOPE
   7.1 CONCLUSION
   7.2 LIMITATIONS AND FUTURE SCOPE
8. REFERENCES

1 Blank Page at the end.
1. INTRODUCTION

In this era of Artificial Intelligence, humans are becoming increasingly dependent on technology. Multinational companies such as Google, Tesla, Uber, Ford, Audi, Toyota and Mercedes-Benz are working on automating vehicles and are trying to build more accurate autonomous, or driverless, vehicles. In a self-driving car the vehicle itself behaves like the driver and needs no human guidance to run on the road. It is reasonable to be concerned about safety, since a machine failure can cause a serious accident, and no machine is yet more reliable than an attentive human driver. Researchers are therefore developing algorithms that push road safety and recognition accuracy as close to 100% as possible. One such task is Traffic Sign Recognition, which is the subject of this report.

On the road we see various traffic signs, such as traffic signals, turn left or right, speed limits, no passing of heavy vehicles, no entry, children crossing and so on, that must be followed for a safe drive. Likewise, an autonomous vehicle has to interpret these signs and make the corresponding decisions. The task of recognizing which class a traffic sign belongs to is called traffic sign classification. In this deep learning project, we build a model that classifies the traffic sign present in an image into one of many categories using a convolutional neural network (CNN) and the Keras library.
1.1 EXISTING SYSTEM AND NEED FOR NEW SYSTEM

Traffic signs carry much of the information necessary for successful driving: they describe the current traffic situation, define right-of-way, prohibit or permit certain actions or directions, warn about risk factors, and so on. Identifying road signs with computer vision also helps drivers route the vehicle. Road conditions in real scenes are very complicated, so it has been hard for researchers to make such systems efficient. In existing systems, signs were detected and categorized using standard computer vision methods, but these methods were slow. The existing approaches used for designing traffic sign recognition models are:

1) K-means clustering
2) LIDAR- and vision-based detection
3) Video-stream based detection

DISADVANTAGES OF EXISTING METHODS
I. False detection
II. Redundancy (inter-pixel redundancy)
III. Lower efficiency (compared to our model)
IV. Cost-related issues
1.2 PROPOSED SYSTEM

In this project, we build a CNN model in which predictions are performed directly across multiple feature levels. We use the publicly available GTSRB dataset from Kaggle. Our approach to building the traffic sign classification model is discussed in four steps:

Step 1: Explore the dataset. The 'Train' folder contains 43 sub-folders, each representing a different class, numbered 0 to 42. With the help of the OS module, we iterate over all the classes and append the images and their respective labels to the data and labels lists (a minimal loading sketch appears at the end of this chapter).

Step 2: Build a CNN model. To classify the images into their respective categories, we build a Convolutional Neural Network (CNN), which is well suited to image classification.

Step 3: Train and validate the model. After building the model architecture, we train the model using model.fit().

Step 4: Test the model with the test dataset. The dataset also contains a test folder and a test.csv file that lists each test image path together with its class label.

Finally, we build a graphical interface for the traffic sign classifier with Tkinter, a GUI toolkit in the standard Python library, through which the user uploads a picture and classifies it.

1.3 SCOPE OF WORK

This project introduces traffic sign detection and recognition. It describes the characteristics, requirements and difficulties of road sign identification and recognition, and shows how the convolutional neural network technique is used for verification and classification of road signs. The proposed system estimates the location and boundary of traffic signs using a convolutional neural network (CNN). In this Python project, a deep neural network model is built that classifies the traffic signs present in an image into different categories. With this model, a vehicle can read and understand traffic signs, which is a very important task for all autonomous vehicles.
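Before moving to the problem definition, the following is a minimal sketch of Step 1 of the proposed system (Section 1.2), assuming the Kaggle GTSRB layout in which a Train directory holds one sub-folder per class (0 to 42); the folder path and the 30x30 resize mirror the sample code in Section 4.3 and are not fixed requirements.

import os
import numpy as np
from PIL import Image

data, labels = [], []
train_dir = 'gtsrb-german-traffic-sign/Train'   # assumed dataset location

for class_id in range(43):                      # one sub-folder per class: 0 .. 42
    class_dir = os.path.join(train_dir, str(class_id))
    for file_name in os.listdir(class_dir):
        img = Image.open(os.path.join(class_dir, file_name)).resize((30, 30))
        data.append(np.array(img))
        labels.append(class_id)

data = np.array(data)       # shape: (num_images, 30, 30, 3)
labels = np.array(labels)
print(data.shape, labels.shape)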
2. PROBLEM DEFINITION

2.1 REVIEW OF RELATED WORK

Many scholars have recently studied traffic sign recognition. In one research article, the authors apply convolutional neural networks to recognize and classify sign images. The image is first pre-processed to highlight the most significant details, and the Hough Transform is then used to detect and locate candidate regions. The proposed system detects and recognizes traffic sign images in real time. A newly developed database of 24 different traffic signs collected from random roadsides in Saudi Arabia is also a contribution of that work: a total of 2718 photographs, taken from various perspectives and under various conditions, make up the Saudi Arabian Traffic and Road Signs collection (SATRS-2018). To obtain the best recognition rates, the CNN architecture was evaluated in various settings, and it attained a reported precision of 100 percent, higher than that of similar previous studies.

Another study provides an intelligent transportation system (ITS) design based on current requirements and technology, aiming to address the bottleneck issues that hold back intelligent transportation research and to investigate the future prospects of ITS as new technologies become available.

A further work presents a framework for detecting and categorizing various types of traffic signs in photographs. The technique has two key elements: road sign detection, and classification and recognition. Color-based segmentation is used in the first stage to determine whether a traffic sign is present; if it is, the sign is highlighted, normalized in size and classified by a neural network. Four types of traffic signs are used for evaluation: Stop, No Entry, Give Way and Speed Limit. For training, 300 sets of photographs are employed, 75 sets for each kind, and testing is done with 200 photographs. The reported detection rate is above 90% and the recognition accuracy is over 88 percent.
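As a generic illustration of the color-based segmentation stage mentioned above, and not taken from any of the cited papers, the following minimal sketch uses OpenCV to extract a red mask in HSV space; the threshold values are assumptions chosen for typical red-bordered signs.

import cv2
import numpy as np

def red_sign_mask(bgr_image):
    # Convert to HSV so that hue thresholds are less sensitive to lighting
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined (assumed thresholds)
    lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    # Remove small noise before looking for sign-shaped regions
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Example usage: img = cv2.imread('frame.jpg'); mask = red_sign_mask(img)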
In another study, a deep learning-based system for recognizing road traffic signs is developed, with considerable potential for ADAS and autonomous vehicles. The system architecture is designed to extract key features from photographs of traffic signs in order to categorize them. The presented method employs a modified LeNet-5 network to extract a deep representation of traffic signs: a Convolutional Neural Network (CNN) whose convolutional layer outputs are all connected to a Multilayer Perceptron (MLP). Training on the German Traffic Sign Dataset produces good traffic sign recognition results.

2.2 PROBLEM DEFINITION

We regularly come across accidents in which over-speeding or poor visibility leads to major collisions. In winter, the risk of road accidents increases by 40 to 50% because traffic signs become harder to see. In this project we therefore implement traffic sign recognition using a Convolutional Neural Network, which is very useful for automatic driving vehicles. On the road you see various traffic signs such as traffic signals, turn left or right, speed limits, zebra crossings, U-turns, no passing of heavy vehicles, no entry, children crossing and so on, which must be followed for safety. Likewise, autonomous or self-driving cars must interpret these signboards and make decisions with maximum accuracy. The task of recognizing which class a traffic signboard belongs to is called traffic sign recognition. In the world of Artificial Intelligence, many researchers and large companies such as Tesla, Uber, Google, Mercedes-Benz, Toyota, Ford and Audi are working on autonomous vehicles and self-driving cars, and to achieve accuracy in this technology the vehicles should be able to interpret traffic signs and make decisions accordingly.
3. ANALYSIS & DESIGN

3.1 USER REQUIREMENTS

User requirements definition: the user requires the system to be fast, flexible, less prone to error, and to reduce expense and save time.

Traffic sign recognition is an important study area in computer vision. It can be divided into two technologies, traffic sign detection and traffic sign recognition, and the accuracy of detection directly influences the final recognition results. Traffic signs convey important signals about vehicle safety: they display current traffic conditions, define road rights, prohibit and permit certain behaviors and driving routes, and display danger messages, among other things. They can also assist drivers in judging the state of the road and choosing the best driving routes.

Humans are becoming more reliant on technology in this age of artificial intelligence, and multinational corporations such as Google, Tesla, Uber, Ford, Audi, Toyota and Mercedes-Benz are attempting to develop more precise autonomous or driverless automobiles, in which the vehicle acts as the driver and does not require human intervention to operate on the road. It is reasonable to consider the safety aspects and the possibility of serious machine mishaps, since no machine is yet more precise than a human driver, so researchers are developing many algorithms to assure road safety and accuracy. When driving, a vehicle encounters numerous traffic signs such as traffic lights, turn left or right, speed restrictions, no passing of heavy vehicles, no entry and children crossing, which must be obeyed in order to drive safely; to reach this accuracy, autonomous vehicles must also analyze these signs and make decisions. Traffic sign classification is the process of determining which class a traffic sign belongs to.
3.2 FRONT END & BACK END

FRONT END

Set up Anaconda and Jupyter Notebook for the front end.

Figure 1. Front end

Desktop frameworks for Windows app development: Python has a massive range of libraries and tools, which gives access to pre-written components and reduces development time. In addition, there is a wide ecosystem of development solutions such as Pandas and NumPy for analysis.

Figure 2. Frameworks
What is Tkinter?

Tkinter is one of the most popular frameworks for Python desktop apps and GUIs. It is Python's standard interface to the Tk GUI toolkit (the name comes from "Tk interface"), and its simple API means that beginners can easily use it for Python desktop applications. Tkinter has an abundance of sample code and reference books, which makes it a popular choice. In addition, it has various widgets, such as labels and buttons, and almost everything that you might need for Python desktop development and GUI design. Tkinter is available for Windows, macOS and Linux.[9]

Figure 3. Tkinter

Important libraries that must be imported:

Pandas – used to load the data frame in a 2D array format.
NumPy – NumPy arrays are very fast and can perform large computations in a very short time.
Matplotlib – used to draw visualizations.
OpenCV – mainly focused on image processing and handling.
TensorFlow – provides a range of functions that achieve complex functionality with single lines of code.

BACK END

Install TensorFlow and Keras for the back end.

Figure 4. Back end
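As a minimal, hypothetical sketch of the Tkinter widgets mentioned above (label and button), separate from the project's own GUI in Section 4.3:

import tkinter as tk

root = tk.Tk()
root.title('Tkinter demo')                                    # window title
label = tk.Label(root, text='Hello, Tkinter')                 # a simple text label
button = tk.Button(root, text='Quit', command=root.destroy)   # button closes the window
label.pack(pady=10)
button.pack(pady=10)
root.mainloop()                                               # start the event loop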
Back-end development is the writing of code and the design of the database and server. The back-end logic begins with one of two types of application:

a. Single-page application: dynamically rewrites a single page without reloading it from the server; only API calls are needed to update the page.
b. Multi-page application: requires the page to be loaded again from the server at each user request.

The back end needs extensive code and logic, but there are frameworks that make it easier to develop. The choice of framework is up to the developer and also depends on the tech stack required for the project; examples include Node.js, Flask, Django, Laravel, Swift and Flutter. Because back-end software can run on a vast array of different systems, there is a wide variety of tools and skills that back-end developers may learn.

Python can be used for either front-end or back-end development. Its approachable syntax and widespread server-side use make Python a core programming language for back-end development; front-end Python is not unheard of, it is just not usually preferred. Python is open source and works with the Flask and Django web frameworks.[10] Datasets and algorithms are also a key component of back-end coding, so SQL database technologies become invaluable for writing database queries and creating models.
What is Pandas?

Pandas is a Python library used for working with data sets. It has functions for analyzing, cleaning, exploring and manipulating data.[11]

Figure 5. Pandas mechanism

What is Keras?

Keras is an open-source, high-level neural network library written in Python that can run on top of Theano, TensorFlow or CNTK. It was developed by a Google engineer, Francois Chollet. It is user-friendly, extensible and modular, which facilitates faster experimentation with deep neural networks. It supports convolutional networks and recurrent networks individually as well as in combination. Keras does not handle low-level computations itself; instead, it relies on a backend library that acts as a high-level API wrapper over the low-level API, which lets it run on TensorFlow, CNTK or Theano.[12]
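A minimal sketch of the Keras workflow described above, assuming TensorFlow as the backend; the layer sizes here are illustrative only and much smaller than the project's actual model in Section 4.3.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

# A tiny CNN for 30x30 RGB images and 43 sign classes (illustrative sizes)
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(30, 30, 3)),
    MaxPool2D(pool_size=(2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(43, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()   # prints the layer-by-layer architecture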
3.3 SYSTEM FLOW

A system flowchart is a valuable presentation aid because it shows how the system's major components fit together and interact. In effect, it serves as a roadmap of the system.

Figure 6. System flow diagram
3.4 MODULE DESCRIPTION AND FLOW

This project has only one user module, with its own specification.

Main function of the user module: the user chooses an image, uploads it into the module, and classifies it. The project is developed in Python, and we build a deep neural network model that identifies which traffic sign is present in the image. The PIL library is used to open the image content into an array. The dataset contains a train folder, in which each sub-folder represents a different class, and a test folder with the details of each image path and its respective class label.

Figure 7. Flowchart
3.5 DATA FLOW DIAGRAM (DFD)

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system, modeling its process aspects. Often a DFD is a preliminary step used to create an overview of the system which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design). A DFD shows what kind of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. It does not show information about the timing of processes, or about whether processes will operate in sequence or in parallel.

Figure 8. Data flow diagram
3.6 ENTITY RELATIONSHIP DIAGRAM (ERD)

In software engineering, an entity-relationship model (ER model) is a data model for describing the data or information aspects of a business domain or its process requirements, in an abstract way that lends itself to ultimately being implemented in a database such as a relational database. The main components of ER models are entities (things) and the relationships that can exist among them. An entity-relationship model (ERM) is an abstract and conceptual representation of data; ER modeling is a database modeling method used to produce a type of conceptual schema of a system, and diagrams created by this process are called ER diagrams. The ER diagram of the project "Traffic Sign Classification Using CNN and Keras in Python" is shown in Figure 9.

Figure 9. ER diagram
3.7 TABLE DESIGN

About the database: the German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. The benchmark has the following properties:

- Single-image, multi-class classification problem
- More than 40 classes
- More than 50,000 images in total
- Large, lifelike database

The tables used in the database follow. In this project we used Python and TensorFlow to classify traffic signs. Dataset used: German Traffic Sign Dataset, which contains more than 50,000 images across 43 classes; the trained model reaches 96.06% testing accuracy.
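As a minimal sketch of exploring this table structure, assuming the Kaggle GTSRB folder layout used elsewhere in this report (the path is an assumption), the per-class image counts could be summarized with pandas as follows.

import os
import pandas as pd

train_dir = 'gtsrb-german-traffic-sign/Train'    # assumed dataset location
counts = {c: len(os.listdir(os.path.join(train_dir, c)))
          for c in sorted(os.listdir(train_dir), key=int)}   # folders are named 0 .. 42
summary = pd.Series(counts, name='images_per_class')
print(summary)                        # images available for each of the 43 classes
print('total images:', summary.sum())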
4. IMPLEMENTATION & RESULTS

4.1 INPUT FORMS WITH DATA
4.2 OUTPUT REPORTS WITH DATA
4.3 SAMPLE CODE

gui.py

import tkinter as tk
from tkinter import filedialog
from tkinter import *
from PIL import ImageTk, Image
import numpy

#load the trained model to classify sign
from keras.models import load_model
model = load_model('traffic_classifier.h5')

#dictionary to label all traffic signs class.
classes = { 1:'Speed limit (20km/h)',
            2:'Speed limit (30km/h)',
            3:'Speed limit (50km/h)',
            4:'Speed limit (60km/h)',
            5:'Speed limit (70km/h)',
            6:'Speed limit (80km/h)',
            7:'End of speed limit (80km/h)',
            8:'Speed limit (100km/h)',
            9:'Speed limit (120km/h)',
            10:'No passing',
            11:'No passing veh over 3.5 tons',
            12:'Right-of-way at intersection',
            13:'Priority road',
            14:'Yield',
            15:'Stop',
            16:'No vehicles',
            17:'Veh > 3.5 tons prohibited',
            18:'No entry',
            19:'General caution',
            20:'Dangerous curve left',
            21:'Dangerous curve right',
            22:'Double curve',
            23:'Bumpy road',
            24:'Slippery road',
            25:'Road narrows on the right',
            26:'Road work',
            27:'Traffic signals',
            28:'Pedestrians',
            29:'Children crossing',
            30:'Bicycles crossing',
            31:'Beware of ice/snow',
            32:'Wild animals crossing',
            33:'End speed + passing limits',
            34:'Turn right ahead',
            35:'Turn left ahead',
            36:'Ahead only',
            37:'Go straight or right',
            38:'Go straight or left',
            39:'Keep right',
            40:'Keep left',
            41:'Roundabout mandatory',
            42:'End of no passing',
            43:'End no passing veh > 3.5 tons' }

#initialise GUI
top = tk.Tk()
top.geometry('800x600')
top.title('Traffic sign classification')
top.configure(background='#CDCDCD')
label = Label(top, background='#CDCDCD', font=('arial', 15, 'bold'))
sign_image = Label(top)

def classify(file_path):
    global label_packed
    image = Image.open(file_path)
    image = image.resize((30, 30))
    image = numpy.expand_dims(image, axis=0)
    image = numpy.array(image)
    print(image.shape)
    # predict_classes is provided by the standalone Keras version used in this project
    pred = model.predict_classes([image])[0]
    sign = classes[pred + 1]
    print(sign)
    label.configure(foreground='#011638', text=sign)

def show_classify_button(file_path):
    classify_b = Button(top, text="Classify Image", command=lambda: classify(file_path), padx=10, pady=5)
    classify_b.configure(background='#364156', foreground='white', font=('arial', 10, 'bold'))
    classify_b.place(relx=0.79, rely=0.46)

def upload_image():
    try:
        file_path = filedialog.askopenfilename()
        uploaded = Image.open(file_path)
        uploaded.thumbnail(((top.winfo_width() / 2.25), (top.winfo_height() / 2.25)))
        im = ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image = im
        label.configure(text='')
        show_classify_button(file_path)
    except:
        pass

upload = Button(top, text="Upload an image", command=upload_image, padx=10, pady=5)
upload.configure(background='#364156', foreground='white', font=('arial', 10, 'bold'))
upload.pack(side=BOTTOM, pady=50)
sign_image.pack(side=BOTTOM, expand=True)
label.pack(side=BOTTOM, expand=True)
heading = Label(top, text="Know Your Traffic Sign", pady=20, font=('arial', 20, 'bold'))
heading.configure(background='#CDCDCD', foreground='#364156')
heading.pack()
top.mainloop()

traffic_sign.py

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import tensorflow as tf
from PIL import Image
import os
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from keras.utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout

data = []
labels = []
classes = 43
cur_path = os.getcwd()

#Retrieving the images and their labels
for i in range(classes):
    path = os.path.join(cur_path, 'gtsrb-german-traffic-sign/Train', str(i))
    images = os.listdir(path)
    for a in images:
        try:
            image = Image.open(os.path.join(path, a))
            image = image.resize((30, 30))
            image = np.array(image)
            #sim = Image.fromarray(image)
            data.append(image)
            labels.append(i)
        except:
            print("Error loading image")

#Converting lists into numpy arrays
data = np.array(data)
labels = np.array(labels)
print(data.shape, labels.shape)

#Splitting training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

#Converting the labels into one hot encoding
y_train = to_categorical(y_train, 43)
y_test = to_categorical(y_test, 43)

#Building the model
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu', input_shape=X_train.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))

#Compilation of the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 15
history = model.fit(X_train, y_train, batch_size=32, epochs=epochs, validation_data=(X_test, y_test))
model.save("my_model.h5")

#plotting graphs for accuracy
plt.figure(0)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()

plt.figure(1)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()

#testing accuracy on test dataset
y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values
data = []
for img in imgs:
    image = Image.open(img)
    image = image.resize((30, 30))
    data.append(np.array(image))
X_test = np.array(data)
pred = model.predict_classes(X_test)

#Accuracy with the test data
print(accuracy_score(labels, pred))

ts.py

import os
import math
import numpy as np
import pandas as pd
import cv2
import keras
import seaborn as sns
from scipy.misc import imread   # scipy.misc.imread requires an older SciPy release
from keras.layers import Dense, Dropout, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import BatchNormalization
from keras.optimizers import Adam
from keras.models import Sequential

### LOADING DATASET
data_dir = os.path.abspath('gtsrb-german-traffic-sign/Train')
os.path.exists(data_dir)

### Function to resize the images using OpenCV
def resize_cv(im):
    return cv2.resize(im, (64, 64), interpolation=cv2.INTER_LINEAR)

### Loading dataset
list_images = []
output = []
for dir in os.listdir(data_dir):
    if dir == '.DS_Store':
        continue
    inner_dir = os.path.join(data_dir, dir)
    csv_file = pd.read_csv(os.path.join(inner_dir, "GT-" + dir + '.csv'), sep=';')
    for row in csv_file.iterrows():
        img_path = os.path.join(inner_dir, row[1].Filename)
        img = imread(img_path)
        img = img[row[1]['Roi.X1']:row[1]['Roi.X2'], row[1]['Roi.Y1']:row[1]['Roi.Y2'], :]
        img = resize_cv(img)
        list_images.append(img)
        output.append(row[1].ClassId)

### Plotting the dataset
fig = sns.distplot(output, kde=False, bins=43, hist=True,
                   hist_kws=dict(edgecolor="black", linewidth=2))
fig.set(title="Traffic signs frequency graph", xlabel="ClassId", ylabel="Frequency")

input_array = np.stack(list_images)
train_y = keras.utils.np_utils.to_categorical(output)

### Randomizing the dataset
randomize = np.arange(len(input_array))
np.random.shuffle(randomize)
x = input_array[randomize]
y = train_y[randomize]

### Splitting the dataset into train, validation and test sets
split_size = int(x.shape[0] * 0.6)
train_x, val_x = x[:split_size], x[split_size:]
train1_y, val_y = y[:split_size], y[split_size:]
split_size = int(val_x.shape[0] * 0.5)
val_x, test_x = val_x[:split_size], val_x[split_size:]
val_y, test_y = val_y[:split_size], val_y[split_size:]

### Building the model
hidden_num_units = 2048
hidden_num_units1 = 1024
hidden_num_units2 = 128
output_num_units = 43
epochs = 10
batch_size = 16
pool_size = (2, 2)
input_shape = Input(shape=(32, 32, 3))

model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3), padding='same'),
    BatchNormalization(),
    Conv2D(16, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D(pool_size=pool_size),
    Dropout(0.2),

    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D(pool_size=pool_size),
    Dropout(0.2),

    Conv2D(64, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D(pool_size=pool_size),
    Dropout(0.2),

    Flatten(),
    Dense(units=hidden_num_units, activation='relu'),
    Dropout(0.3),
    Dense(units=hidden_num_units1, activation='relu'),
    Dropout(0.3),
    Dense(units=hidden_num_units2, activation='relu'),
    Dropout(0.3),
    Dense(units=output_num_units, input_dim=hidden_num_units, activation='softmax'),
])

model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4), metrics=['accuracy'])

### Training the model
trained_model_conv = model.fit(train_x.reshape(-1, 64, 64, 3), train1_y, epochs=epochs,
                               batch_size=batch_size, validation_data=(val_x, val_y))

### Predicting the class
pred = model.predict_classes(test_x)

### Evaluating the model
model.evaluate(test_x, test_y)
5. TESTING AND MAINTENANCE

5.1 TESTING

Model testing

A folder named "test" is available in our dataset; inside it is the main comma-separated file, "test.csv", which contains two things: the image paths and their respective class labels. We use the pandas Python library to extract each image path with its corresponding label. Next, we resize the images to 30x30 pixels and create a numpy array filled with the image data so the model can predict on them. To measure how well the model predicts the actual labels, we import accuracy_score from sklearn.metrics. Finally, we call the Keras model.save() method to keep our trained model.

#testing accuracy on test dataset
from sklearn.metrics import accuracy_score
y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values

data = []
for img in imgs:
    image = Image.open(img)
    image = image.resize((30, 30))
    data.append(np.array(image))
X_test = np.array(data)

pred = model.predict_classes(X_test)

#Accuracy with the test data
print(accuracy_score(labels, pred))

Output
0.9532066508313539

model.save('traffic_classifier.h5')   #to save the trained model
5.2 VALIDATION

The final step involves validation, which determines whether the software functions as the user expected.

What is validation testing?

Validation testing is the process of ensuring that the tested and developed software satisfies the client's or user's needs. The business requirement logic and scenarios have to be tested in detail, and all the critical functionalities of an application must be tested here. As a tester, it is always important to know how to verify the business logic or scenarios that are given to you; one method that helps in the detailed evaluation of functionality is the validation process. Whenever you are asked to perform a validation test, it carries great responsibility, as you need to test all the critical business requirements based on the user's needs, and there should not be a single miss on the requirements asked by the user. Hence a keen knowledge of validation testing is very important.

Validation

To train our model, we use the model.fit() method after the model architecture has been built successfully. With a batch size of 64 we obtained about 95% accuracy on the training set, and the accuracy stabilized after 15 epochs.

eps = 15
anc = model.fit(X_t1, y_t1, batch_size=32, epochs=eps, validation_data=(X_t2, y_t2))
5.3 MAINTENANCE

As the number of computer-based systems grew, libraries of computer software began to expand. In-house developed projects produced tens of thousands of program source statements, and software products purchased from outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon: all of these programs, all of those source statements, had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance.

The maintenance phase focuses on change that is associated with error correction, adaptations required as the software's environment evolves, and changes due to enhancements brought about by changing customer requirements. Four types of changes are encountered during the maintenance phase:

- Correction
- Adaptation
- Enhancement
- Prevention

CORRECTION: Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software. Corrective maintenance changes the software to correct defects. Maintenance is a set of software engineering activities that occur after software has been delivered to the customer and put into operation. Software configuration management is a set of tracking and control activities that begins when a software project begins and terminates only when the software is taken out of operation. Only about 20 percent of all maintenance work is spent "fixing mistakes"; the remaining 80 percent is spent adapting existing systems to changes in their external environment, making enhancements requested by users, and reengineering applications for reuse.

ADAPTATION: Over time, the original environment (e.g., CPU, operating system, business rules, external product characteristics) for which the software was developed is likely to change. Adaptive maintenance modifies the software to accommodate changes to its external environment.
ENHANCEMENT: As software is used, the customer/user will recognize additional functions that would provide benefit. Perfective maintenance extends the software beyond its original functional requirements.

PREVENTION: Computer software deteriorates due to change, and because of this, preventive maintenance, often called software reengineering, must be conducted to enable the software to serve the needs of its end users. In essence, preventive maintenance changes computer programs so that they can be more easily corrected, adapted, and enhanced. Software configuration management (SCM) is an umbrella activity that is applied throughout the software process.
6. USER MANUAL

6.1 USER MANUAL

1. Load the data.
2. Dataset summary & exploration.
3. Data preprocessing (see the sketch below):
   a) Shuffling
   b) Grayscaling
   c) Local histogram equalization
   d) Normalization
4. Design the model architecture.
5. Model training and evaluation.
6. Testing the model using the test set.
7. Testing the model on new images.
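As a rough illustration of the preprocessing techniques listed in item 3 (they are described further under Step 3 below), here is a minimal sketch assuming the input is an array of RGB uint8 images; CLAHE is used as one common way to perform local histogram equalization, and all parameter values are illustrative rather than the project's.

import cv2
import numpy as np
from sklearn.utils import shuffle

def preprocess(images, labels):
    # Shuffling: remove any ordering bias in the data
    images, labels = shuffle(images, labels, random_state=0)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4, 4))   # local histogram equalization
    processed = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)              # grayscaling
        eq = clahe.apply(gray)                                    # equalize contrast locally
        processed.append(eq.astype(np.float32) / 255.0)           # normalization to [0, 1]
    return np.array(processed)[..., np.newaxis], labels           # restore a channel axis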
Step 1: Load the Data

Download the dataset from https://github.com/rahulsonone1234/Traffic-Sign-Recognition. This is a pickled dataset in which the images have already been resized to 32x32, giving three .p files:

train.p: the training set.
test.p: the testing set.
valid.p: the validation set.

Use Python's pickle module to load the data.

Step 2: Dataset Summary & Exploration

The pickled data is a dictionary with four key/value pairs:

- 'features' is a 4D array containing the raw pixel data of the traffic sign images (num examples, width, height, channels).
- 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
- 'sizes' is a list of tuples (width, height) representing the original width and height of each image.
- 'coords' is a list of tuples (x1, y1, x2, y2) representing the coordinates of a bounding box around the sign in each image.

First, use numpy to report the number of images in each subset, together with the image size and the number of unique classes:

Number of training examples: 34799
Number of testing examples: 12630
Number of validation examples: 4410
Image data shape = (32, 32, 3)
Number of classes = 43

Then use matplotlib to plot sample images from each subset, and finally use numpy to plot a histogram of the number of images in each unique class.
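A minimal sketch of Steps 1 and 2, assuming the three pickle files sit in the working directory as described above:

import pickle
import numpy as np

def load_split(path):
    with open(path, 'rb') as f:
        d = pickle.load(f)
    return d['features'], d['labels']   # 4-D image array and 1-D class ids

X_train, y_train = load_split('train.p')
X_valid, y_valid = load_split('valid.p')
X_test, y_test = load_split('test.p')

print('Number of training examples:', X_train.shape[0])
print('Number of testing examples:', X_test.shape[0])
print('Number of validation examples:', X_valid.shape[0])
print('Image data shape =', X_train.shape[1:])
print('Number of classes =', len(np.unique(y_train)))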
Step 3: Data Preprocessing

In this step, several preprocessing steps are applied to the input images to achieve the best possible results. The following techniques are used (a brief sketch appears after the step overview in Section 6.1):

- Shuffling
- Grayscaling
- Local histogram equalization
- Normalization

Step 4: Design a Model Architecture

In this step, a deep learning model is designed and implemented that learns to recognize traffic signs from the German Traffic Sign Dataset. Convolutional Neural Networks are used to classify the images in this dataset. The reason for choosing ConvNets is that they are designed to recognize visual patterns directly from pixel images with minimal preprocessing, and they automatically learn hierarchies of invariant features at every level from the data. Two of the most famous ConvNets are implemented. The goal is to reach an accuracy of more than 95% on the validation set.

Step 5: Model Training and Evaluation

In this step, the model is trained using the normalized images, and the softmax cross entropy between logits and labels is computed to measure the model's error. The training data are run through the training pipeline to train the model; before each epoch the training set is shuffled, and after each epoch the loss and accuracy on the validation set are measured. After training, the model is saved. Low accuracy on both the training and validation sets implies underfitting, while high accuracy on the training set but low accuracy on the validation set implies overfitting.

Step 6: Testing the Model Using the Test Set

Now the testing set is used to measure the accuracy of the model on unseen examples. We have been able to reach a test accuracy of 95%, a remarkable performance. Next, we plot the confusion matrix to see where the model actually fails.
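A minimal sketch of plotting such a confusion matrix with scikit-learn and matplotlib; the names y_true and y_pred are assumptions standing in for the integer test labels and the model's predicted class ids from the evaluation step.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

def plot_confusion(y_true, y_pred, num_classes=43):
    cm = confusion_matrix(y_true, y_pred, labels=range(num_classes))
    plt.figure(figsize=(10, 10))
    plt.imshow(cm, cmap='Blues')      # darker cells mark frequent (mis)classifications
    plt.colorbar()
    plt.xlabel('Predicted class')
    plt.ylabel('True class')
    plt.title('Traffic sign confusion matrix')
    plt.show()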
We observe some clusters in the confusion matrix. It turns out that the various speed limits are sometimes misclassified among themselves; similarly, traffic signs with a triangular shape are misclassified among themselves. The model could be improved further using hierarchical CNNs that first identify broader groups (such as speed signs) and then use further CNNs to classify finer features (such as the actual speed limit).

Step 7: Testing the Model on New Images

In this step, the model is used to predict the sign type of 5 random images of German traffic signs taken from the web, and its performance on these images is examined.

Number of new testing examples: 5

For instance, there are easy-to-predict signs like "Stop" and "No entry"; the two signs are clear and belong to classes where the model predicts with high accuracy. On the other hand, there are signs belonging to classes where the model has poorer accuracy, like the "Speed limit" sign, because, as stated above, the various speed limits are sometimes misclassified among themselves, and the "Pedestrians" sign, because triangular signs are misclassified among themselves. From the top-5 softmax probabilities, the model has very high confidence (100%) when predicting simple signs like "Stop" and "No entry", and remains highly confident when predicting simple triangular signs in very clear images, like the "Yield" sign. On the other hand, the model's confidence drops slightly for more complex triangular signs in noisy images: in the "Pedestrians" sign image there is a triangular sign with a shape inside it, and the image's copyright watermark adds some noise, so the model predicts the true class with only 80% confidence. For the "Speed limit" sign, the model accurately recognizes that it is a speed limit sign but is somewhat confused between the different speed limits; nevertheless it predicts the true class in the end. The VGGNet model was able to predict the right class for each of the 5 new test images, giving a test accuracy of 100.0% on them, and in all cases the model was very certain (80% to 100%).
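A minimal sketch of the Step 7 prediction with top-5 softmax probabilities; the trained model, the image paths, the 32x32 resize and the simple scaling are assumptions that should be made to match whatever preprocessing the trained model expects.

import numpy as np
from PIL import Image

def top5_predictions(model, image_paths, image_size=(32, 32)):
    # Build a batch from the downloaded images (assumed RGB files)
    batch = np.array([np.array(Image.open(p).resize(image_size)) / 255.0
                      for p in image_paths])
    probs = model.predict(batch)                    # softmax output, shape (n, 43)
    for path, p in zip(image_paths, probs):
        top5 = np.argsort(p)[::-1][:5]              # indices of the 5 most likely classes
        print(path, [(int(c), float(p[c])) for c in top5])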
7. CONCLUSION AND FUTURE SCOPE

7.1 CONCLUSION

In this project, a traffic sign recognition method based on deep learning, using a convolutional neural network (CNN) and Keras, is proposed for a range of different traffic signs. By combining image pre-processing, the dataset from Kaggle, and traffic sign detection, recognition and classification, this method can effectively detect and identify traffic signs. The results can be used in two ways: while the vehicle is in manual mode the recognized sign is displayed on the dashboard screen, and while the vehicle is in automatic mode the recognition helps the car drive safely by identifying the traffic signs. The test results show that the accuracy of this method is very high.
7.2 LIMITATIONS AND FUTURE SCOPE

Traffic Sign Recognition (TSR) aims to detect the location of traffic signs in digital images or video frames and assign them a specific classification. TSR methods basically make use of visual information such as the shape and color of traffic signs. However, conventional TSR algorithms face drawbacks in real-time tests: they are easily restricted by driving conditions, including lighting, camera angle, obstruction and driving speed, and it is difficult to achieve multi-target detection, so visual objects are easily missed because of slow recognition. Our algorithm detects signs continuously, which leads to detections even when there are no signs in the area and thus to a continuous flow of output; this results in false or unnecessary detections. This could be improved by increasing the threshold value for detecting a sign. The overall performance could also be improved and customized with the help of more datasets from different countries.
8. REFERENCES

[1] Thakur Pankaj and D. Manoj E. Patil, "Recognition of Traffic Symbols Using K-Means and Shape Analysis", International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 5, May 2013, ISSN: 2278-0181.
[2] Zhou, L., and Deng, Z. (2014), "LIDAR and vision-based real-time traffic sign detection and recognition algorithm for intelligent vehicle", 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), doi:10.1109/itsc.2014.6957752.
[3] Zakir, U., Edirishinghe, E. A., and Hussain, A. (2012), "Road Sign Detection and Recognition from Video Stream Using HSV, Contourlet Transform and Local Energy Based Shape Histogram", Lecture Notes in Computer Science, 411-419, doi:10.1007/978-3-642-3156.
[4] Ying Sun, Pingshu Ge and Dequan Liu, "Traffic Sign Detection and Recognition Based on Convolutional Neural Network", 2019, IEEE.
[5] Canyong Wang, "Research and application of traffic sign detection and recognition based on deep learning", 2018, IEEE.
[6] Md. Abdul Alim Sheikh, Alok Kole and Tanmoy Maity, "Traffic sign detection and classification using color feature and neural network", 2018, IEEE.
[7] Danyah A. Alghmgham, Ghazanfar Latif, Jaafar Alghazo and Loay Alzubaidi, "Autonomous Traffic Sign (ATSR) Detection and Recognition using Deep CNN", 2019, ScienceDirect.
[8] Saad Albawi, Tareq Abed Mohammed and Saad Al-Zawi, "Understanding of a convolutional neural network", 2018, IEEE.
[9] https://externlabs.com/blogs/python-frameworks-for-desktop-applications/
[10] https://andrejgajdos.com/single-page-application-vs-multiple-page-application/
[11] https://www.w3schools.com/python/pandas/default.asp
[12] https://www.javatpoint.com/keras