Automatic Number Plate Recognition System in Bangla using Deep
Learning model.
Most. Jannat-Ul-Ferdoush (200101068), Mst. Habiba Hena Sumi (200101070),
Md. Talath Un Nabi (200101076)
Course Code: CSE 4132, Course Title: Artificial Neural Networks and Fuzzy Systems Sessional
Semester: Winter 2023
*Department of Computer Science and Engineering, Bangladesh Army University of Science
and Technology (BAUST)
Abstract
Traffic control and vehicle owner identification have become major problems in Bangladesh. It is often difficult to identify the driver or owner of a vehicle that violates traffic rules or is involved in an accident on the road. This work therefore addresses Bangla number plate detection. We use three different models for number plate detection and EasyOCR for Bangla character recognition. The trained models are (i) YOLOv5 (You Only Look Once), (ii) VGG16, and (iii) Inception-ResNet-v2. The first model performs automatic number plate recognition (ANPR) with YOLOv5 and text detection with OCR, achieving a detection confidence of 89%. The second model performs number plate recognition with VGG16 and text detection with OCR, achieving a detection confidence of 79.5%. The third model performs automatic number plate recognition with Inception-ResNet-v2 and text detection with OCR, achieving a detection confidence of 64.66%. YOLOv5 is the better model for this dataset. For speed, we tested our models on Google Colaboratory's free GPU and attained a speed of 7 frames per second while detecting and recognizing the license plate numbers.
Keywords—Automatic Number Plate Recognition (ANPR), Optical Character Recognition (OCR), License Plate (LP), YOLOv5, Inception-ResNet-v2, VGG16, Deep Neural Network (DNN), Convolutional Neural Network (CNN).
1. Introduction
Automatic Number Plate Recognition is a vital part of controlling the traffic system intelligently and efficiently. Automated parking management systems are becoming increasingly popular in Bangladesh. The detected number can be used for toll collection, parking lot management, enterprise entrance management, border surveillance, effective traffic control, and security applications such as access control to restricted areas and digital tracking of wanted vehicles.[1] ANPR systems contain three core steps: number plate area detection, segmentation of characters, and Optical Character Recognition (OCR).[2] Different methodologies have been used for ANPR systems, individually or in combination, including Artificial Neural Networks, Probabilistic Neural Networks, Optical Character Recognition, MATLAB, configurable methods, sliding concentrating windows, Back-Propagation Neural Networks, Support Vector Machines, and inductive learning.[2] OCR is a widely used tool for the mechanical or electronic conversion of images of typed, handwritten, or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo, or subtitle text superimposed on an image. OCR software pre-processes images to enhance the chances of successful recognition. Two non-intersecting image data sets were used to mimic the real-world cases to which the neural network will be subjected.[2]
In this work, three models are applied to the same dataset. The first is YOLO version 5. YOLOv5 is one of the state-of-the-art algorithms for real-time object detection and classification.[3] Using YOLOv5 as our CNN model, we achieved confidence of up to 89% on our dataset, which is the best of the three models. YOLOv5 is a real-time object detection model that uses a single convolutional neural network to predict the class and bounding box coordinates of multiple objects in an image; it is smaller and faster than previous YOLO models while maintaining high confidence. The second model is Inception-ResNet-v2. The Inception model is a deep neural network architecture used for image and video classification tasks; it combines convolutional layers with various filter sizes to extract features from images at different scales. ResNet-v2 is a deep neural network architecture with skip connections, enabling the training of very deep networks of up to hundreds of layers; it uses residual learning to address the vanishing gradient problem and improve model performance. Inception-ResNet-v2 combines these two designs, and we use it for object detection. The third model is VGG16, a deep convolutional neural network architecture used for image classification tasks. It consists of 13 convolutional layers with small filter sizes and max pooling layers, followed by 3 fully connected layers for classification. We use OCR for character recognition with all models and compare the three models on how accurately they extract characters from Bangla number plates. Unlike the single-line license plate format that is most popular throughout the world, the Bangla license plate follows the Bangladesh Road Transport Authority (BRTA) standard and is composed of two lines.[3] The vehicle category is indicated by the vehicle class letter (খ, গ, ঘ, এ). This kind of digital plate contains two rows: the upper row contains letters and the lower row contains numbers, with the lower row split into two separate parts containing six digits.[8]
2. Literature Review
Automatic License Plate Recognition (ALPR) is a technology that enables computer systems to read the registration number (license number) of vehicles automatically from digital pictures. Much work has been done in this area; CNN, ANN, and other deep learning models can detect number plates readily, although some limitations remain. In [1], the authors proposed a Bangla automatic number plate recognition system using an artificial neural network for detection under various climate conditions; the accuracy rate is 95% with an average processing time of 0.75 seconds, and a robust feature extraction technique, invariant to rotation and scaling, is applied to extract features from each character. In [2], the authors proposed an English automatic number plate recognition system using a Back-Propagation Neural Network and Support Vector Machine with an accuracy rate of 82.5%, using OCR for character recognition. In [3], the authors proposed a Bangla automatic number plate recognition system using a convolutional neural network (CNN) based YOLO model with OCR for text detection, with a detection accuracy of 99.5%. In [4], the authors proposed an automatic number plate recognition system using an ANN with a feature extraction model and OCR for text detection, with a detection accuracy of 94.45%. In [5], the authors proposed a Bangla automatic number plate recognition system using a YOLOv3 model with a CNN for text detection, with a detection accuracy of 97.5%. In [6], the authors proposed a Bangla automatic number plate recognition system using a Support Vector Machine for classification and OCR for text detection, with a detection accuracy of 92.5%. In [7], the authors proposed a Bangla automatic number plate recognition system using a multilayer feed-forward network, with an MLP network to recognize each character and word on the number plate, with a detection accuracy of 75.51%. In [8], the authors proposed a Bangla automatic number plate recognition system using a YOLO model with a CNN for text detection, with a detection accuracy of 81%. In [9], the authors proposed a Bangla automatic number plate recognition system using YOLOv3 with OCR for text detection, with a detection accuracy of 88.89%. In [10], the authors proposed a Bangla automatic number plate recognition system using an SSD model with a CNN for text detection, with a detection accuracy of 97.5%. In [11], the authors proposed a Bangla automatic number plate recognition system using a Deep Convolutional Neural Network (DCNN) model, a single-shot detector, with a detection accuracy of 99%. In [12], the authors proposed a Bangla automatic number plate recognition system using a CNN with a feature extraction model, with a detection accuracy of 89%.
3. Why we chose these models
                           YOLOv5                      VGG16                          Inception-ResNet v2
Architecture               Object detection            Convolutional neural network   Convolutional neural network
Single-shot detector       Yes                         No                             No
Number of parameters       Varies with model size      138 million                    55.8 million
Object detection AP        High                        Moderate                       High
Performance                State-of-the-art in         Well-established in            High performance in
                           object detection            image classification           various tasks
Table 1. Model details
4. Device we used
5. Dataset Description
Total images: 20,000
Annotations: 20,000
Unique images: 4,000
Augmentations: random brightness, horizontal flip, vertical flip, rotation, grayscale
The same license plate dataset was used to train all of our models. We use 20,000 images produced with five types of augmentation: random brightness, grayscale, horizontal flip, vertical flip, and rotation. We separate 80% of the data for training (90% for one of the models) and 20% for validation, with 5 selected images held out from the overall data for testing. Every license plate has two lines, with letters in the first line and six digits in the second line, resulting in 9,450 words and characters (alphanumeric symbols) annotated with bounding boxes. Our dataset has also been manually augmented to avoid overfitting; for that, we randomly translated and scaled images by up to 20% of the captured image sizes.
Fig. 1. Our dataset
Fig. 2. Annotation code for a bounding box
6. Data Augmentation
Fig. 3. After performing data augmentation
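The report does not include the augmentation code itself; the following is a minimal sketch of how the five listed augmentations could be applied while keeping the plate bounding boxes consistent, assuming the Albumentations library (the file name, box values, and label names are illustrative, not the authors' exact pipeline).

```python
import albumentations as A
import cv2

# One possible composition of the five augmentations listed above.
augment = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=0.2, p=0.5),  # random brightness
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.ToGray(p=0.3),                                          # grayscale
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

image = cv2.imread("car.jpg")                        # hypothetical input image
boxes = [[120, 340, 410, 470]]                       # [xmin, ymin, xmax, ymax] of the plate
out = augment(image=image, bboxes=boxes, labels=["plate"])
aug_image, aug_boxes = out["image"], out["bboxes"]   # augmented image and transformed box
```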
7. Methodology
The complete process of detecting characters from the license plate was split into three stages. First, the image is loaded. The image is then cropped to the bounding box given by the predicted coordinates, augmented to improve the predicted values, processed to match our models' input parameters, and normalized. We separate 80% of the data for training and the remaining 20% for validation, then train each model on the training and validation data. We then test the data and predict the output, create a pipeline with thresholding, and apply EasyOCR for character recognition. A flow diagram of the proposed license plate recognition system is shown in Fig. 4.
Data Preprocessing: The data are loaded and cleaned. Annotation was then completed for the number plates of all vehicles in the first dataset; to annotate the dataset, we used a labeling tool. After annotation, we apply contrast normalization for license plate recognition.
Fig. 1. Normalized image
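The preprocessing itself is shown only as a figure; the sketch below illustrates one way the loading, resizing, pixel normalization, and bounding-box normalization could be done, assuming OpenCV/NumPy and a 224x224 model input (IMAGE_SIZE and the variable names are assumptions, not the authors' exact code).

```python
import cv2
import numpy as np

IMAGE_SIZE = 224  # assumed model input size

def preprocess(image_path, box):
    """Load an image, resize it, scale pixels to [0, 1], and normalize the box to [0, 1]."""
    image = cv2.imread(image_path)
    h, w = image.shape[:2]
    resized = cv2.resize(image, (IMAGE_SIZE, IMAGE_SIZE)) / 255.0    # pixel normalization
    xmin, ymin, xmax, ymax = box
    norm_box = np.array([xmin / w, ymin / h, xmax / w, ymax / h])    # matches a 4-unit sigmoid output
    return resized.astype(np.float32), norm_box
```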
Augmentation: The real images are transformed with grayscale conversion, horizontal flip, vertical flip, rotation, and random brightness, as shown in Fig. 3.
Apply Models: YOLOv5, Inception-ResNet-v2, and VGG16 are applied to the training dataset. Objectness is used by the models for bounding box prediction and cost function measurement: for each bounding box, the models predict an object score using logistic regression, and the cost function is calculated differently in each model. These models, trained on the first dataset after annotation, were used for detecting the number plates.
License Plate Detection and Localization: After training YOLOv5 (10 epochs), Inception-ResNet-v2 (180 epochs), and VGG16 (180 epochs), the best-predicting model was used. The experiment saved the best epoch of training; after training, the targeted detection model was obtained.
Fig. 4. Flow diagram of our experiment
Crop License Plate using Predicted Bounding Box: The saved model was used to detect the license plate, and the predicted bounding box coordinates were then used to crop license plates from the images.
Apply Threshold for Character Segmentation: The next step was segmentation. A thresholding algorithm was used; thresholding is an old but effective method for the segmentation process.
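The thresholding step is not listed as code in the report; the following is a minimal sketch using OpenCV, with Otsu's method as one common choice of threshold (the file names are illustrative, and the exact threshold the authors used may differ).

```python
import cv2

plate = cv2.imread("cropped_plate.jpg")                  # hypothetical cropped plate image
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (3, 3), 0)                 # light denoising before thresholding
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("plate_binary.jpg", binary)                  # binarized plate passed on to EasyOCR
```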
Apply EasyOCR Model for Character Recognition: EasyOCR supports the Bangla language for optical character recognition (OCR) and can recognize printed and handwritten Bangla text from images or videos. It uses a deep learning model trained on Bangla characters for high confidence.
8. Models
Hyperparameters:
                           YOLOv5                             VGG16         Inception-ResNet v2
Learning rate              0.01                               0.001         1e-4
Optimizer                  SGD (stochastic gradient descent)  Adam          Adam
Number of parameters       7,022,326                          17,099,140    73,663,490
Metrics                    mAP@50 (mean average precision)    Accuracy      Accuracy
Table 2. Hyperparameters
Train/test split
                           YOLOv5    VGG16    Inception-ResNet v2
Training                   80%       90%      80%
Testing                    20%       10%      20%
Random state               0         1        0
Table 3. Train/test data split
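A minimal sketch of the split in Table 3, assuming scikit-learn and preprocessed arrays saved to disk (the file names are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

images = np.load("images.npy")   # preprocessed images (hypothetical file)
boxes = np.load("boxes.npy")     # normalized bounding boxes (hypothetical file)

# 80/20 split with random_state=0 for YOLOv5 and Inception-ResNet v2;
# VGG16 would instead use test_size=0.10 and random_state=1.
x_train, x_val, y_train, y_val = train_test_split(
    images, boxes, test_size=0.20, random_state=0
)
```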
Object detection using YOLOv5
Our model has a total of 214 layers, of which 127 are convolutional layers. For detecting the license plate, we trained our model for 10 epochs, and for segmenting and recognizing the license plate we trained for the same number of epochs. The hyperparameters used during training are given below:
I. Batch size = 20
II. Epochs = 50
III. Momentum = 0.937
IV. Weight decay = 0.00046875 (nominal 0.0005)
V. Learning rate = 0.01
VI. Optimizer = SGD
VII. Loss = MSE (mean squared error)
Model details
The architecture of YOLOv5 is composed of several convolutional layers with various filter
sizes, strides, and activation functions. The model is based on a modified version of the Efficient
Net backbone architecture, which consists of a series of convolutional blocks with varying
depths and widths. Here is an overview of the key layers and activation functions used in
YOLOv5:
Fig. 5. Model summary for YOLOv5
1. Convolutional Layers: The YOLOv5 architecture includes many convolutional
layers, including 1x1, 3x3, and 5x5 filters, as well as dilated and transposed
convolutions. These layers extract features from the input image at different
spatial scales.
2. Activation Functions: YOLOv5 uses the Mish activation function, which is a
smooth and non-monotonic function that has been shown to improve performance
compared to traditional activation functions like ReLU. The Mish function is
defined as
f(x) = x * tanh(softplus(x)).
3. Spatial Pyramid Pooling: YOLOv5 incorporates a Spatial Pyramid Pooling (SPP)
layer, which allows the network to capture features at multiple scales. The SPP
layer pools features from different regions of the feature map at different scales
and concatenates them into a single vector.
4. Backbone Architecture: YOLOv5 uses a modified version of the Efficient Net
backbone architecture, which includes a series of convolutional blocks with
varying depths and widths. This allows the model to efficiently learn features at
different scales while minimizing the number of parameters.
5. Object Detection Head: The final layers of YOLOv5 include a set of
convolutional layers that predict the bounding boxes and class probabilities for
each object in the input image. These layers use anchor boxes to predict the
location and size of each object and are trained using a combination of
classification and regression loss functions.
YOLOv5 is a powerful and efficient object detection model that combines state-
of-the-art convolutional neural network architectures with advanced features like
Spatial Pyramid Pooling and the Mish activation function.
Testing code:
This code performs object detection on an image using the trained YOLO model and then displays the results with a plotting library. It calls a yolo_predictions helper function that is defined elsewhere in the notebook; given that function, the code displays the original image with the detected objects overlaid on it and prints the text recognized from any license plates that were identified. A sketch of this step is given below.
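Since the yolo_predictions helper itself is not reproduced here, the following is a minimal sketch of the same testing step using the public YOLOv5 hub interface, assuming a trained best.pt checkpoint (the weight path and test image name are assumptions):

```python
import cv2
import torch
import matplotlib.pyplot as plt

# Load the custom-trained YOLOv5 detector (hypothetical checkpoint path).
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")

image = cv2.imread("test_car.jpg")                       # hypothetical test image
results = model(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # run inference on the RGB image
detections = results.xyxy[0].cpu().numpy()               # rows: xmin, ymin, xmax, ymax, conf, class

for xmin, ymin, xmax, ymax, conf, cls in detections:
    cv2.rectangle(image, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
    cv2.putText(image, f"plate {conf:.2f}", (int(xmin), int(ymin) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))       # show the image with the detected plate
plt.axis("off")
plt.show()
```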
Output:
Fig. 6. Detected bounding box with 89% confidence
Character recognition code:
This code defines a function called extract_text that extracts text from an image region specified by a bounding box, using the EasyOCR library to perform OCR on that region. The code works as follows:
1. The easyocr library is imported.
2. The extract_text function is defined with two parameters: image, the input image from which the text needs to be extracted, and bounding_box, a tuple specifying the coordinates of the region from which the text needs to be extracted.
3. The bounding box coordinates are used to extract the region of interest (ROI) from the input image.
4. If the ROI is empty (its shape is (0, 0)), the function returns the string 'no number'.
5. If the ROI is not empty, EasyOCR's readtext function is used to extract the text from the ROI.
6. The resulting text is concatenated into a single string and stripped of any leading/trailing white space.
The function returns the extracted text, as sketched below.
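A minimal sketch of this extract_text helper, assuming EasyOCR's Bangla model (the thresholds and return values in the authors' notebook may differ):

```python
import easyocr

reader = easyocr.Reader(["bn"], gpu=False)   # load the Bangla OCR model once

def extract_text(image, bounding_box):
    """Crop the (xmin, ymin, xmax, ymax) region and run EasyOCR on it."""
    xmin, ymin, xmax, ymax = bounding_box
    roi = image[ymin:ymax, xmin:xmax]
    if roi.size == 0:                        # empty crop: no plate region found
        return "no number"
    texts = reader.readtext(roi, detail=0)   # detail=0 returns only the recognized strings
    return " ".join(texts).strip()
```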
Fig. 7. Recognized Bangla characters
Object detection using Inception-ResNet-v2
Our model has 743 convolutional layers. For detecting the license plate, we trained our model for 180 epochs, and for segmenting and recognizing the license plate we trained for the same number of epochs. The hyperparameters used during training are given below:
I. Batch size = 10
II. Epochs = 180
III. Momentum = 0.7
IV. Weight decay = (not specified)
V. Learning rate = 1e-4
VI. Optimizer = Adam
VII. Loss function = MSE (mean squared error)
Model details
This model performs image classification using transfer learning with the pre-trained InceptionResNetV2 model. Here is a breakdown of the key components of the code:
Fig. 8. Model summary for Inception-ResNet-v2
1. InceptionResNetV2 model: This is a pre-trained image classification model included in the Keras library. The InceptionResNetV2 function is used to load the pre-trained weights, and the include_top=False argument excludes the final fully connected layer of the model.
2. Head model: The head model variable defines the fully connected layers that will be
added to the pre-trained model. The output from the InceptionResNetV2 model is passed
as input to the head model.
3. Flatten layer: The Flatten layer is used to flatten the output from the InceptionResNetV2
model into a 1-dimensional vector.
4. Dense layers: The Dense layers define the fully connected layers in the head model. The
first Dense layer has 500 units with ReLU activation, the second has 250 units with
ReLU activation, and the final layer has 4 units with sigmoid activation.
5. Model: The Model function is used to define the final model architecture, which includes
the InceptionResNetV2 model as the base and the head model on top of it. The inputs
argument specifies the input tensor shape, and the outputs argument specifies the output
tensor shape.
6. Compilation: The loss function and optimizer to be used during training are specified; specifically, mean squared error (MSE) as the loss function and the Adam optimizer with a learning rate of 1e-4.
This code creates a transfer learning model that uses the pre-trained InceptionResNetV2 model to extract features from images and then applies a set of fully connected layers to make predictions about the input image. The model is trained using the mean squared error loss function and the Adam optimizer, as sketched below.
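A minimal sketch of this transfer-learning model, assuming Keras from TensorFlow and a 224x224x3 input (the input size is an assumption; the 500/250/4 layer sizes, MSE loss, and Adam learning rate follow the text above):

```python
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

inputs = Input(shape=(224, 224, 3))                       # assumed input shape
base = InceptionResNetV2(weights="imagenet", include_top=False, input_tensor=inputs)

head = Flatten()(base.output)
head = Dense(500, activation="relu")(head)
head = Dense(250, activation="relu")(head)
outputs = Dense(4, activation="sigmoid")(head)            # normalized (xmin, ymin, xmax, ymax)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss="mse", optimizer=Adam(learning_rate=1e-4))
model.summary()
```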
Testing code:
The code defines a function called object_detection that takes an input image, performs object detection using the trained model, draws a bounding box around the detected object, and returns the modified image and the coordinates of the bounding box. The code then displays the modified image and a cropped image of the detected object. This is useful for performing object detection and extracting detected plates from images; a sketch is given below.
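A minimal sketch of such an object_detection helper, assuming the Keras model above and a 224x224 input (the variable names are illustrative):

```python
import cv2
import numpy as np

def object_detection(image_path, model, image_size=224):
    """Predict a normalized box, scale it back to pixels, and draw it on the image."""
    image = cv2.imread(image_path)
    h, w = image.shape[:2]
    inp = cv2.resize(image, (image_size, image_size)) / 255.0
    pred = model.predict(inp[np.newaxis, ...])[0]                 # normalized box from the sigmoid head
    xmin, ymin, xmax, ymax = (pred * np.array([w, h, w, h])).astype(int)
    cv2.rectangle(image, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
    return image, (int(xmin), int(ymin), int(xmax), int(ymax))

# Usage: boxed, (x1, y1, x2, y2) = object_detection("test_car.jpg", model); plate = boxed[y1:y2, x1:x2]
```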
Fig. 9. Cropped bounding box
Output:
Fig. 10. Detected bounding box with 61.95% confidence
Character recognition code
This code uses the EasyOCR library to perform optical character recognition (OCR) on an image. Specifically, it uses the 'bn' (Bangla) language model to read text from the image. A breakdown of the code:
1. The easyocr library is imported.
2. A reader object is created with the 'bn' language model.
3. The readtext function is used to read text from the image.
4. The resulting text is stored in the text variable by looping through the text detection results and concatenating the detected text.
Finally, the detected text is printed to the console; a sketch is given below.
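A minimal sketch of this OCR step, assuming a cropped plate image saved to disk (the file name is illustrative):

```python
import easyocr

reader = easyocr.Reader(["bn"])                  # reader object with the Bangla model
results = reader.readtext("cropped_plate.jpg")   # list of (box, text, confidence) tuples

text = ""
for _, detected, _ in results:                   # concatenate the detected text pieces
    text += detected + " "

print(text.strip())                              # print the recognized plate text
```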
Fig. 11. Recognized Bangla characters
Object detection using VGG16
Our model has 13 convolutional layers and 3 fully connected layers. For detecting the license plate, we trained our model for 180 epochs, and for segmenting and recognizing the license plate we trained for the same number of epochs. The hyperparameters used during training are given below:
I. Batch size = 32
II. Epochs = 180
III. Momentum = 0.7
IV. Weight decay = (not specified)
V. Learning rate = 0.001
VI. Optimizer = Adam
VII. Loss function = MSE (mean squared error)
Model details
The model is a Sequential model with the VGG16 architecture as the base.
Fig. 12. Model summary for VGG16
The VGG16 layers are pre-trained on the ImageNet dataset, and only the fully connected layers are added on top. The fully connected layers have 128, 128, and 64 units respectively with ReLU activation, and the output layer has 4 units with sigmoid activation. The second-to-last layer of the VGG16 model is frozen, so its weights are not updated during training. The model is compiled with the mean squared error (MSE) loss function and the Adam optimizer with a learning rate of 0.001. The input size is set by an IMAGE_SIZE variable defined elsewhere in the code. The pre-trained VGG16 base, taken from the ImageNet dataset, consists of 13 convolutional layers followed by 3 fully connected layers; the layers added on top are:
1. Flatten: This layer flattens the output of the previous layer into a 1D
tensor, which can be fed into the fully connected layers.
2. Dense: This fully connected layer has 128 units and uses the ReLU
activation function.
3. Dense: Another fully connected layer with 128 units and ReLU activation
function.
4. Dense: Yet another fully connected layer with 64 units and ReLU
activation function.
5. Dense (output layer): This is the final fully connected layer with 4 units
and sigmoid activation function, which is used for binary classification.
6. Training setup: The second-to-last layer of the VGG16 base is frozen and its weights are not updated during training. The model is compiled with the mean squared error (MSE) loss function and the Adam optimizer with a learning rate of 0.001; a sketch of the full model is given below.
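A minimal sketch of this Sequential VGG16 model, assuming Keras from TensorFlow and a 224x224x3 IMAGE_SIZE (the report does not state the input size); for simplicity the whole convolutional base is frozen here, whereas the text above freezes the second-to-last VGG16 layer:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

IMAGE_SIZE = 224  # assumed input size

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
base.trainable = False                      # keep the pre-trained convolutional base frozen

model = Sequential([
    base,
    Flatten(),
    Dense(128, activation="relu"),
    Dense(128, activation="relu"),
    Dense(64, activation="relu"),
    Dense(4, activation="sigmoid"),         # normalized bounding-box coordinates
])
model.compile(loss="mse", optimizer=Adam(learning_rate=0.001))
model.summary()
```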
Testing code:
The code defines a function called object_detection that takes an input image, performs object detection using the trained VGG16 model, draws a bounding box around the detected object, and returns the modified image and the coordinates of the bounding box. The code then displays the modified image and a cropped image of the detected object, following the same approach sketched for the Inception-ResNet-v2 model above.
Fig. 13. Cropped bounding box
Fig. 14. Detected bounding box with 77.27% confidence
As with the previous models, this code uses the EasyOCR library with the 'bn' (Bangla) language model to read text from the detected plate region: a reader object is created, readtext is applied to the image, the results are concatenated into a single string, and the detected text is printed to the console.
Fig. 12. Recognized Bangla characters
9. Result and Analysis
                           YOLOv5    VGG16    Inception-ResNet v2
Batch size                 20        32       10
Epochs                     50        180      180
Confidence (%)             89        77.27    61.95
Table 4. Summary of the confidence for the minimum epoch size
As mentioned above, this system was built with three different models for this experiment. For YOLOv5, a total of three phases were developed: in the first step, YOLOv5 was used for license plate detection; in the second, segmentation was completed; and the last phase was the identification of the characters on the license plate. All observations from these three phases are illustrated here, and the same pipeline applies to the Inception-ResNet-v2 and VGG16 models. License plate detection using YOLOv5 had an output confidence of 89% for correct predictions. License plate detection using Inception-ResNet-v2 had an output confidence of 61.95% on the training dataset, and about 64% on some other data. License plate detection using VGG16 had an output confidence of 77.27%. After training with a large amount of data, the test phase showed better results for license plate detection. Images from different angles were used for testing, and our proposed models handled those cases well. Figures 15 to 18 show the detection of a license plate from an image. YOLOv5 can detect the bounding box after a small number of epochs, but Inception-ResNet-v2 and VGG16 need many epochs; when we use few epochs, these models cannot detect the bounding box accurately (see Figs. 19-21).
Fig. 15. Bounding box and detected characters by YOLOv5
Fig. 16. Bounding box and detected characters by Inception-ResNet-v2
Fig. 17. Bounding box and detected characters by VGG16
Fig. 18. Bounding box and detected characters by VGG16 using a close-up image
For VGG16, whether we use low or high epoch counts, the bounding box is not detected perfectly; the boxes detected at low epoch counts are shown in Figs. 19 and 20.
Fig. 19. Bounding box for VGG16 with few epochs (50)
Fig. 20. Bounding box and character detection for VGG16 with few epochs (100)
Fig. 21. Bounding box and character detection for Inception-ResNet-v2 at 180 epochs, but with a low-resolution image
The confidence (recognition) rate is calculated as:
confidence rate = (number of recognized samples / total number of samples for that sign) × 100% [6]
Model                      Character recognition time
YOLOv5                     5.11 s
Inception-ResNet v2        6.09 s
VGG16                      4.647 s
Table 5. Character recognition time for each model
As the table above shows, even though the same character recognition model is used with all three object detection models, the character recognition time differs: VGG16 is faster for character recognition than the other two models.
Fig. 22. Chart of the confidence of the three models
We can see from the chart (Fig. 22) that the YOLOv5 model's confidence is higher than that of the other two models, and it detects the bounding box perfectly; its character recognition time is higher than VGG16's, but it detects the correct characters 100% of the time. VGG16's confidence is 77.27%, higher than the Inception-ResNet-v2 model's, but it cannot detect the bounding box correctly. The Inception-ResNet-v2 model's confidence is lower than the other two models, but it can detect the bounding box correctly; its character recognition time is the highest of the three (Table 5).
So, YOLOv5 is the best model for bounding box detection in our experiment.
10. Stakeholders
Any individuals or groups that are affected, either positively or negatively, by a project,
initiative, policy, or organization are considered stakeholders. They may be internal or
external.
a. The stakeholders of our project are:
• Our Project Supervisor: Hasan Muhammad Kafi (Assistant Professor of
BAUST)
b. Project Developers include:
• Most.Jannat-Ul-Ferdoush
• Mst.Habiba Hena Sumi
• Md. Talath Un Nabi
c. External Stakeholders:
• All the users and testers of our model.
11. Issues Encountered
1. In certain cases, the model may incorrectly recognize text, leading to inaccurate
results.
2. When training the model on a smaller dataset, it may struggle to accurately detect
number plates, as it has less data to learn from.
3. Training the model without a GPU can result in longer training sessions, as GPUs are
optimized for parallel processing and can significantly speed up training time.
12. Conclusion, Limitations and Future Recommendations
Limitations:
1. The model struggles to detect number plates in low-quality or noisy images.
2. There are instances where the model incorrectly recognizes text.
3. The model has difficulty detecting number plates in images taken from
complex angles.
4. When working with a small dataset, the model may have challenges accurately
detecting bounding boxes.
5. If the angle of the number plate is too high, all three models fail to detect the
bounding box during testing.
6. Images with significant noise pose challenges for the model to accurately detect
the bounding box.
7. Inception-ResNetv2 and VGG16 models have difficulty detecting bounding
boxes with low epochs, whereas YOLO performs better in this scenario.
Automatic number plate detection is a complete pipeline that captures the license plate of a vehicle within a bounding box and recognizes it accurately. The purpose of this system is to detect Bengali license plates and identify vehicles properly using three different models. YOLOv5's confidence is 89%, the Inception-ResNet-v2 model's confidence is 61.95%, and the VGG16 model's confidence is 77.27%. If we analyze the three models, our experimental best model is YOLO for bounding box detection. The Inception-ResNet-v2 model is less accurate than the other two models, but it still detects the plate properly. The last model was VGG16; its confidence was good, but it did not properly detect the license plate characters. Every system has limitations and none works 100% of the time, so the system still has a large scope for further development. In the context of Bangladesh, the background scenes are more complex and the weather is always changing, so these experimental models for number plate region detection and plate extraction could be developed further for changing climate conditions. If the limitations can be removed, this project can be applied to develop a traffic violation and control system.
References
[1] Shahed, Md Tanvir, et al. "Automatic Bengali number plate reader." TENCON 2017-2017 IEEE Region 10 Conference. IEEE, 2017.
[2] Kashyap, Abhishek, et al. "Automatic number plate recognition." 2018 international
conference on advances in computing, communication control and networking (ICACCCN).
IEEE, 2018.
[3] Saif, Nazmus, et al. "Automatic license plate recognition system for bangla license plates
using convolutional neural network." TENCON 2019-2019 IEEE Region 10 Conference
(TENCON). IEEE, 2019.
[4] Kakani, Bhavin V., Divyang Gandhi, and Sagar Jani. "Improved OCR based automatic
vehicle number plate recognition using features trained neural network." 2017 8th
international conference on computing, communication and networking technologies
(ICCCNT). IEEE, 2017.
[5] Sarif, Md Mesbah, et al. "Deep learning-based Bangladeshi license plate recognition
system." 2020 4th International Symposium on Multidisciplinary Studies and Innovative
Technologies (ISMSIT). IEEE, 2020.
[6] Al Nasim, Md Abdullah, et al. "An automated approach for the recognition of bengali
license plates." 2021 International Conference on Electronics, Communications and
Information Technology (ICECIT). IEEE, 2021.
[7] Joarder, Md Mahbubul Alam, et al. "Bangla automatic number plate recognition system
using artificial neural network." Asian Transactions on Science & Technology (ATST) 2.1
(2012): 1-10.
[8] Uddin, Md Azher, Joolekha Bibi Joolee, and Shayhan Ameen Chowdhury. "Bangladeshi
vehicle digital license plate recognition for metropolitan cities using support vector
machine." Proc. International Conference on Advanced Information and Communication
Technology. 2016.
[9] Suvon, Md Naimul Islam, Riasat Khan, and Mehebuba Ferdous. "Real time bangla
number plate recognition using computer vision and convolutional neural network." 2020
IEEE 2nd International Conference on Artificial Intelligence in Engineering and Technology
(IICAIET). IEEE, 2020
[10] Rahman, MM Shaifur, et al. "Bangla license plate recognition using convolutional neural
networks (CNN)." 2019 22nd International Conference on Computer and Information
Technology (ICCIT). IEEE, 2019.
[11] Islam, Tariqul, and Risul Islam Rasel. "Real-time bangla license plate recognition
system using faster r-cnn and ssd: A deep learning application." 2019 IEEE International
Conference on Robotics, Automation, Artificial-intelligence and Internet-of-Things
(RAAICON). IEEE, 2019.
[12] Sarif, Md Mesbah, et al. "Deep learning-based Bangladeshi license plate recognition
system." 2020 4th International Symposium on Multidisciplinary Studies and Innovative
Technologies (ISMSIT). IEEE, 2020.
Appendix
Attainment of Complex Engineering Problem (CP)
S.L. CP No. Attainment Remarks
1. P1: Depth of
Knowledge Required
K3 (Engineering Fundamentals):
K4 (Engineering Specialization):
K5 (Design):
K6 (Technology):
K8 (Research):
2. P2: Range of
Conflicting
Requirements
3. P3: Depth of Analysis
Required
4. P4: Familiarity of
Issues
5. P5: Extent of
Applicable Codes
6. P6: Extent of
Stakeholder
Involvement and
Conflicting
Requirements
7. P7: Interdependence
Mapping of Complex Engineering Activities (CA)
S.L. CA No. Attainment Remarks
1. A1: Range of
resources
2. A2: Level of
interaction
3. A3:Innovation
4. A4:Consequences for
Society and the
Environment
5. A5: Familiarity
More Related Content

What's hot

Final Year Projects (Computer Science 2013) - Syed Ubaid Ali Jafri
Final Year Projects (Computer Science 2013) - Syed Ubaid Ali JafriFinal Year Projects (Computer Science 2013) - Syed Ubaid Ali Jafri
Final Year Projects (Computer Science 2013) - Syed Ubaid Ali JafriSyed Ubaid Ali Jafri
 
Passport Automation System
Passport Automation SystemPassport Automation System
Passport Automation SystemMegha Sahu
 
Project report vehicle management system
Project report vehicle management systemProject report vehicle management system
Project report vehicle management systemabdul khan
 
Car rental Project Ppt
Car rental Project PptCar rental Project Ppt
Car rental Project Pptrahul85rkm
 
SRS example
SRS exampleSRS example
SRS examplegentzone
 
Online booking system for car rental companies - Bespoke Car Rental Booking E...
Online booking system for car rental companies - Bespoke Car Rental Booking E...Online booking system for car rental companies - Bespoke Car Rental Booking E...
Online booking system for car rental companies - Bespoke Car Rental Booking E...Orisys Infotech
 
Automatic Road Sign Recognition From Video
Automatic Road Sign Recognition From VideoAutomatic Road Sign Recognition From Video
Automatic Road Sign Recognition From VideoDr Wei Liu
 
Number plate recogition
Number plate recogitionNumber plate recogition
Number plate recogitionhetvi naik
 
Airline reservation system documentation
Airline reservation system documentationAirline reservation system documentation
Airline reservation system documentationSurya Indira
 
Job portal project documentary
Job portal project documentaryJob portal project documentary
Job portal project documentaryUmang_jain
 
Airline Reservation System - Java, Servlet ASP.NET, Oracle, HTML
Airline Reservation System - Java, Servlet ASP.NET, Oracle, HTMLAirline Reservation System - Java, Servlet ASP.NET, Oracle, HTML
Airline Reservation System - Java, Servlet ASP.NET, Oracle, HTMLDeepankar Sandhibigraha
 
Airline ticket reservation system
Airline ticket reservation systemAirline ticket reservation system
Airline ticket reservation systemSH Rajøn
 
Online car parking reservation system ppt 9160262550 dinesh
Online car parking reservation system ppt   9160262550 dineshOnline car parking reservation system ppt   9160262550 dinesh
Online car parking reservation system ppt 9160262550 dineshDinesh Nalluri
 
E-TICKETING ON RAILWAY TICKET RESERVATION
E-TICKETING ON RAILWAY TICKET RESERVATIONE-TICKETING ON RAILWAY TICKET RESERVATION
E-TICKETING ON RAILWAY TICKET RESERVATIONNandana Priyanka Eluri
 
Bus Tracking Application in Android
Bus Tracking Application in AndroidBus Tracking Application in Android
Bus Tracking Application in AndroidAbhishek Singh
 
Driving behavior for ADAS and Autonomous Driving
Driving behavior for ADAS and Autonomous DrivingDriving behavior for ADAS and Autonomous Driving
Driving behavior for ADAS and Autonomous DrivingYu Huang
 
Vehicles Parking Management System project presentation 2020
Vehicles Parking Management System project presentation 2020Vehicles Parking Management System project presentation 2020
Vehicles Parking Management System project presentation 2020Vikram Singh
 
TRAIN TICKETING SYSTEM
TRAIN TICKETING SYSTEMTRAIN TICKETING SYSTEM
TRAIN TICKETING SYSTEMNimRaH NaZaR
 

What's hot (20)

Final Year Projects (Computer Science 2013) - Syed Ubaid Ali Jafri
Final Year Projects (Computer Science 2013) - Syed Ubaid Ali JafriFinal Year Projects (Computer Science 2013) - Syed Ubaid Ali Jafri
Final Year Projects (Computer Science 2013) - Syed Ubaid Ali Jafri
 
Passport Automation System
Passport Automation SystemPassport Automation System
Passport Automation System
 
Project report vehicle management system
Project report vehicle management systemProject report vehicle management system
Project report vehicle management system
 
Car rental Project Ppt
Car rental Project PptCar rental Project Ppt
Car rental Project Ppt
 
SRS example
SRS exampleSRS example
SRS example
 
Online booking system for car rental companies - Bespoke Car Rental Booking E...
Online booking system for car rental companies - Bespoke Car Rental Booking E...Online booking system for car rental companies - Bespoke Car Rental Booking E...
Online booking system for car rental companies - Bespoke Car Rental Booking E...
 
Automatic Road Sign Recognition From Video
Automatic Road Sign Recognition From VideoAutomatic Road Sign Recognition From Video
Automatic Road Sign Recognition From Video
 
project final ppt.pptx
project final ppt.pptxproject final ppt.pptx
project final ppt.pptx
 
Number plate recogition
Number plate recogitionNumber plate recogition
Number plate recogition
 
Airline reservation system documentation
Airline reservation system documentationAirline reservation system documentation
Airline reservation system documentation
 
Job portal project documentary
Job portal project documentaryJob portal project documentary
Job portal project documentary
 
Airline Reservation System - Java, Servlet ASP.NET, Oracle, HTML
Airline Reservation System - Java, Servlet ASP.NET, Oracle, HTMLAirline Reservation System - Java, Servlet ASP.NET, Oracle, HTML
Airline Reservation System - Java, Servlet ASP.NET, Oracle, HTML
 
Age and Gender Detection.docx
Age and Gender Detection.docxAge and Gender Detection.docx
Age and Gender Detection.docx
 
Airline ticket reservation system
Airline ticket reservation systemAirline ticket reservation system
Airline ticket reservation system
 
Online car parking reservation system ppt 9160262550 dinesh
Online car parking reservation system ppt   9160262550 dineshOnline car parking reservation system ppt   9160262550 dinesh
Online car parking reservation system ppt 9160262550 dinesh
 
E-TICKETING ON RAILWAY TICKET RESERVATION
E-TICKETING ON RAILWAY TICKET RESERVATIONE-TICKETING ON RAILWAY TICKET RESERVATION
E-TICKETING ON RAILWAY TICKET RESERVATION
 
Bus Tracking Application in Android
Bus Tracking Application in AndroidBus Tracking Application in Android
Bus Tracking Application in Android
 
Driving behavior for ADAS and Autonomous Driving
Driving behavior for ADAS and Autonomous DrivingDriving behavior for ADAS and Autonomous Driving
Driving behavior for ADAS and Autonomous Driving
 
Vehicles Parking Management System project presentation 2020
Vehicles Parking Management System project presentation 2020Vehicles Parking Management System project presentation 2020
Vehicles Parking Management System project presentation 2020
 
TRAIN TICKETING SYSTEM
TRAIN TICKETING SYSTEMTRAIN TICKETING SYSTEM
TRAIN TICKETING SYSTEM
 

Similar to Automatic Number Plate Recognition System in Bangla using Deep Learning model(Report)

Paper id 25201447
Paper id 25201447Paper id 25201447
Paper id 25201447IJRAT
 
Performance Evaluation of Automatic Number Plate Recognition on Android Smart...
Performance Evaluation of Automatic Number Plate Recognition on Android Smart...Performance Evaluation of Automatic Number Plate Recognition on Android Smart...
Performance Evaluation of Automatic Number Plate Recognition on Android Smart...IJECEIAES
 
Traffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNsTraffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNsIRJET Journal
 
AUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCR
AUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCRAUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCR
AUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCRAngie Miller
 
Automatic License Plate Recognition Using Optical Character Recognition Based...
Automatic License Plate Recognition Using Optical Character Recognition Based...Automatic License Plate Recognition Using Optical Character Recognition Based...
Automatic License Plate Recognition Using Optical Character Recognition Based...IJARIIE JOURNAL
 
Matlab based vehicle number plate identification system using ocr
Matlab based vehicle number plate identification system using ocrMatlab based vehicle number plate identification system using ocr
Matlab based vehicle number plate identification system using ocrGhanshyam Dusane
 
IRJET- Recognition of Indian License Plate Number from Live Stream Videos
IRJET-  	  Recognition of Indian License Plate Number from Live Stream VideosIRJET-  	  Recognition of Indian License Plate Number from Live Stream Videos
IRJET- Recognition of Indian License Plate Number from Live Stream VideosIRJET Journal
 
A Review of Lie Detection Techniques.pdf
A Review of Lie Detection Techniques.pdfA Review of Lie Detection Techniques.pdf
A Review of Lie Detection Techniques.pdfWhitney Anderson
 
A design of license plate recognition system using convolutional neural network
A design of license plate recognition system using convolutional neural networkA design of license plate recognition system using convolutional neural network
A design of license plate recognition system using convolutional neural networkIJECEIAES
 
A Review of Lie Detection Techniques
A Review of Lie Detection TechniquesA Review of Lie Detection Techniques
A Review of Lie Detection TechniquesIRJET Journal
 
Off-line English Character Recognition: A Comparative Survey
Off-line English Character Recognition: A Comparative SurveyOff-line English Character Recognition: A Comparative Survey
Off-line English Character Recognition: A Comparative Surveyidescitation
 
IRJET- Implementation of Gender Detection with Notice Board using Raspberry Pi
IRJET- Implementation of Gender Detection with Notice Board using Raspberry PiIRJET- Implementation of Gender Detection with Notice Board using Raspberry Pi
IRJET- Implementation of Gender Detection with Notice Board using Raspberry PiIRJET Journal
 
License plate recognition for campus auto-gate system
License plate recognition for campus auto-gate systemLicense plate recognition for campus auto-gate system
License plate recognition for campus auto-gate systemnooriasukmaningtyas
 
ENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNING
ENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNINGENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNING
ENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNINGIJNSA Journal
 
License Plate Recognition
License Plate RecognitionLicense Plate Recognition
License Plate RecognitionIRJET Journal
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language DetectionIRJET Journal
 
IRJET- Hand Sign Recognition using Convolutional Neural Network
IRJET- Hand Sign Recognition using Convolutional Neural NetworkIRJET- Hand Sign Recognition using Convolutional Neural Network
IRJET- Hand Sign Recognition using Convolutional Neural NetworkIRJET Journal
 
Automatism System Using Faster R-CNN and SVM
Automatism System Using Faster R-CNN and SVMAutomatism System Using Faster R-CNN and SVM
Automatism System Using Faster R-CNN and SVMIRJET Journal
 
Implementation and Performance Evaluation of Neural Network for English Alpha...
Implementation and Performance Evaluation of Neural Network for English Alpha...Implementation and Performance Evaluation of Neural Network for English Alpha...
Implementation and Performance Evaluation of Neural Network for English Alpha...ijtsrd
 
A Traffic Sign Classifier Model using Sage Maker
A Traffic Sign Classifier Model using Sage MakerA Traffic Sign Classifier Model using Sage Maker
A Traffic Sign Classifier Model using Sage Makerijtsrd
 

Similar to Automatic Number Plate Recognition System in Bangla using Deep Learning model(Report) (20)

Paper id 25201447
Paper id 25201447Paper id 25201447
Paper id 25201447
 
Performance Evaluation of Automatic Number Plate Recognition on Android Smart...
Performance Evaluation of Automatic Number Plate Recognition on Android Smart...Performance Evaluation of Automatic Number Plate Recognition on Android Smart...
Performance Evaluation of Automatic Number Plate Recognition on Android Smart...
 
Traffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNsTraffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNs
 
AUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCR
AUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCRAUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCR
AUTOMATIC LICENSE PLATE RECOGNITION USING YOLOV4 AND TESSERACT OCR
 
Automatic License Plate Recognition Using Optical Character Recognition Based...
Automatic License Plate Recognition Using Optical Character Recognition Based...Automatic License Plate Recognition Using Optical Character Recognition Based...
Automatic License Plate Recognition Using Optical Character Recognition Based...
 
Matlab based vehicle number plate identification system using ocr
Matlab based vehicle number plate identification system using ocrMatlab based vehicle number plate identification system using ocr
Matlab based vehicle number plate identification system using ocr
 
IRJET- Recognition of Indian License Plate Number from Live Stream Videos
IRJET-  	  Recognition of Indian License Plate Number from Live Stream VideosIRJET-  	  Recognition of Indian License Plate Number from Live Stream Videos
IRJET- Recognition of Indian License Plate Number from Live Stream Videos
 
A Review of Lie Detection Techniques.pdf
A Review of Lie Detection Techniques.pdfA Review of Lie Detection Techniques.pdf
A Review of Lie Detection Techniques.pdf
 
A design of license plate recognition system using convolutional neural network
A design of license plate recognition system using convolutional neural networkA design of license plate recognition system using convolutional neural network
A design of license plate recognition system using convolutional neural network
 
A Review of Lie Detection Techniques
A Review of Lie Detection TechniquesA Review of Lie Detection Techniques
A Review of Lie Detection Techniques
 
Off-line English Character Recognition: A Comparative Survey
Off-line English Character Recognition: A Comparative SurveyOff-line English Character Recognition: A Comparative Survey
Off-line English Character Recognition: A Comparative Survey
 
IRJET- Implementation of Gender Detection with Notice Board using Raspberry Pi
IRJET- Implementation of Gender Detection with Notice Board using Raspberry PiIRJET- Implementation of Gender Detection with Notice Board using Raspberry Pi
IRJET- Implementation of Gender Detection with Notice Board using Raspberry Pi
 
License plate recognition for campus auto-gate system
License plate recognition for campus auto-gate systemLicense plate recognition for campus auto-gate system
License plate recognition for campus auto-gate system
 
ENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNING
ENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNINGENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNING
ENCRYPTION MODES IDENTIFICATION OF BLOCK CIPHERS BASED ON MACHINE LEARNING
 
License Plate Recognition
License Plate RecognitionLicense Plate Recognition
License Plate Recognition
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language Detection
 
IRJET- Hand Sign Recognition using Convolutional Neural Network
IRJET- Hand Sign Recognition using Convolutional Neural NetworkIRJET- Hand Sign Recognition using Convolutional Neural Network
IRJET- Hand Sign Recognition using Convolutional Neural Network
 
Automatism System Using Faster R-CNN and SVM
Automatism System Using Faster R-CNN and SVMAutomatism System Using Faster R-CNN and SVM
Automatism System Using Faster R-CNN and SVM
 
Implementation and Performance Evaluation of Neural Network for English Alpha...
Implementation and Performance Evaluation of Neural Network for English Alpha...Implementation and Performance Evaluation of Neural Network for English Alpha...
Implementation and Performance Evaluation of Neural Network for English Alpha...
 
A Traffic Sign Classifier Model using Sage Maker
A Traffic Sign Classifier Model using Sage MakerA Traffic Sign Classifier Model using Sage Maker
A Traffic Sign Classifier Model using Sage Maker
 

Recently uploaded

Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Automatic Number Plate Recognition System in Bangla using Deep Learning model (Report)

  • 1. Automatic Number Plate Recognition System in Bangla using Deep Learning model. Most.Jannat-Ul-Ferdoush(200101068),Mst.Habiba Hena Sumi (200101070), Md. Talath Un Nabi (200101076) Course Code: CSE 4132Course Title: Artificial Neural Networks and Fuzzy Systems Sessional Semester: Winter 2023 *Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology (BAUST) Abstract Traffic control and vehicle owner identification become major problems in Bangladesh. Most of the time it is difficult to identify the driver or the owner of the vehicles who violate the traffic rules or do any accidental work on the road. So, this work for Bangla number plate detection. We use 3 different model for number plate detection and easy OCR for Bangla character recognition. The trained model are i.YOLOv5(You Only Look Once),ii.VGG16 and iii.Inception_ResNet-V2 .1st model for automatic number plate recognition with (ANPR) YOLOv5 system, and text detect with OCR , here detection confidence rate is 89% . 2nd paper model for number plate recognition with (ANPR) VGG16 model system, and text detect with OCR , here detection confidence rate is 79.5%. 3rd model for automatic number plate recognition with (ANPR) ANN using Inceptiop- resnetV2 model ,and text detect with OCR , here detection confidence rate is 64.66%.Better model is YOLOv5 for this dataset. For speed, we tested our model on Google Colaboratory’s free GPU and attained a speed of 7 frames per second while detecting and recognizing the license plate numbers. Keywords—Automatic Number Plate Recognition (ANPR), Optical Character Recognition (OCR), License Plate (LP), YOlOv5, Inception-ResNetv2, VGG16, deep neural network(DNN), Convolutional Neural Network(CNN). 1. Introduction Automatic Number Plate Recognition is a vital part of controlling the traffic system intelligently and efficiently. The use of the automated parking management system is increasingly becoming popular in Bangladesh. Use the detect number for toll collection, parking lot management, enterprise entrance management, border surveillance, effective traffic control and security applications such as access control to restricted areas and tracking of wanted vehicles digitally.[1] ANPR systems contain three core steps: number plate area detection, breakdown of characters, and Optical Character Recognition (OCR).[2] Different methodologies have been used for ANPR systems, including Artificial Neural Network, Probabilistic neural network, Optical Character Recognition, MATLAB, Configurable method, Sliding Concentrating window, Back-Propagation Neural Network, and Support Vector Machine. ANPR are born on joint methodologies such as Artificial Neural Network, Probabilistic neural network, Optical Character
  • 2. Recognition, MATLAB, Configurable method, Sliding Concentrating window, Back- Propagation Neural Network, Support Vector Machine, Inductive Learning.[2] Optical Character Recognition method, which is a widely used tool for mechanical or electronic conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo or from subtitle text superimposed on an image. OCR software pre-processes the images to enhance the chances of successful recognition. The two non-intersecting images data sets were used to copy the actual-world cases where the neural network will be subjected to.[2] Here using 3 model for same dataset. One is YOLO version 5. YOLOv5 is one of the ‘state of the art’ algorithms for real time object detection and classification.[3] Using YOLOv5 as our CNN model, we achieved confidence up to 89% in our dataset which is best from other 2 model confidence. Here we use 3 different type model for different dataset and get detect number plate.YOLOv5 is a real-time object detection model that uses a single convolutional neural network to predict the class and bounding box coordinates of multiple objects in an image. It is a smaller and faster version of previous YOLO models while maintaining high confidence. 2nd model is Inception_ResNetv2.Inception model is a deep neural network architecture used for image and video classification tasks. It utilizes a combination of convolutional layers with various filter sizes to extract features from images at different scales. ResNetv2 is a deep neural network architecture that includes skip connections, enabling training of very deep neural networks up to hundreds of layers. It uses residual learning to address the vanishing gradient problem and improve model performance. We combine this 2 model for object detection. Another model is VGG16. VGG16 is a deep convolutional neural network architecture used for image classification tasks. It consists of 16 convolutional layers with small filter sizes and max pooling layers, followed by fully connected layers for classification. We use OCR for Character recognition in all model. We compare our three model for get character from bangla number plate perfectly and perform work. The most popular style of one line license plate format throughout the world, Bangla license plate is composed of two License (BRTA) standard number plates).[3] The vehicle category is indicated by the vehicle class letter (খ,গ,ঘ,এ,) .This kind of digital plate contains two rows. At lower row, there are numbers and at the upper row, there are alphabets. In the lower row, there remain two separate parts containing six digits.[8] 2. Literature Review Automatic License Plate Recognition (ALPR) is a type of technology that enables computer systems to study automatically the registration number (license number) of vehicles from digital pictures. So many works doing in this side. CNN, ANN, Deep learning models can detect easily the number plate. Although here some limitation has present. In [1], authors proposed Bangla automatic number plate recognition system using artificial neural network to detection for various climate condition and accuracy rate is 95% with an average processing time of 0.75 seconds. A robust feature extraction technique is applied to extract the feature from each characters which is invariant to the rotation and scaling. 
In [2], the authors proposed an English automatic number plate recognition system using a Back-Propagation Neural Network and a Support Vector Machine; the detection accuracy rate
  • 3. is 82.5%, and OCR is used for character recognition. In [3], the authors proposed a Bangla automatic number plate recognition system using a convolutional neural network (CNN) based YOLO model with OCR for text recognition; the detection accuracy rate is 99.5%. In [4], the authors proposed a Bangla automatic number plate recognition system using an ANN with a feature-extraction model and OCR for text recognition; the detection accuracy rate is 94.45%. In [5], the authors proposed a Bangla automatic number plate recognition system using a YOLOv3 model with a CNN for text recognition; the detection accuracy rate is 97.5%. In [6], the authors proposed a Bangla automatic number plate recognition system in which a Support Vector Machine is used for classification and OCR for text recognition; the detection accuracy rate is 92.5%. In [7], the authors proposed a Bangla automatic number plate recognition system using a multilayer feed-forward network and an MLP network to recognize each character and word on the number plate; the detection accuracy rate is 75.51%. In [8], the authors proposed a Bangla automatic number plate recognition system using a YOLO model with a CNN for text recognition; the detection accuracy rate is 81%. In [9], the authors proposed a Bangla automatic number plate recognition system using a YOLOv3 model with OCR for text recognition; the detection accuracy rate is 88.89%. In [10], the authors proposed a Bangla automatic number plate recognition system using an SSD model with a CNN for text recognition; the detection accuracy rate is 97.5%. In [11], the authors proposed a Bangla automatic number plate recognition system using a Deep Convolutional Neural Network (DCNN), which is a single-shot detector; the detection accuracy rate is 99%. In [12], the authors proposed a Bangla automatic number plate recognition system using a CNN with feature extraction; the detection accuracy rate is 89%.

3. Why we choose these models

Feature                                  | YOLOv5                               | VGG16                                    | Inception-ResNet v2
Architecture                             | Object detection                     | Convolutional neural network             | Convolutional neural network
Single-shot detector                     | Yes                                  | No                                       | No
Number of parameters                     | Varies with model size               | 138 million                              | 55.8 million
Object detection average precision (AP)  | High                                 | Moderate                                 | High
Performance                              | State of the art in object detection | Well established in image classification | High performance in various tasks

Table 1: Model details
  • 4. 4. Device we used

5. Dataset Description
Total images: 20,000; Annotations: 20,000; Unique images: 4,000; Augmentations: random_brightness, horizontal_flip, vertical_flip, rotation, grayscale.
The same license plate dataset was used to train all of our models. We used 20,000 images with five types of augmentation: random brightness, grayscale, horizontal flip, vertical flip and rotation. We set aside 80% of the data for training (two models use this split; the other uses 90% for training), 20% for validation, and 5 selected images for testing. Every license plate has two lines, with text in the first line and six digits in the second, resulting in 9,450 words and characters (alphanumeric symbols) annotated with bounding boxes. The dataset was also manually augmented to avoid overfitting; for that, we randomly translated and scaled up to 20% of the captured image sizes. A sketch of the train/validation split is given below.
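The splitting code itself is not shown in the report; the following is a minimal sketch, assuming hypothetical placeholder lists for image paths and bounding boxes, of the 80/20 split with a fixed random state described above (see also Table 3).

```python
from sklearn.model_selection import train_test_split

# Hypothetical placeholders: image file names and 4-value bounding boxes.
image_paths = [f"plate_{i}.jpg" for i in range(20000)]
boxes = [[0.1, 0.2, 0.8, 0.4] for _ in range(20000)]

# 80% training / 20% validation with a fixed random_state for reproducibility.
train_x, val_x, train_y, val_y = train_test_split(
    image_paths, boxes, test_size=0.2, random_state=0)
print(len(train_x), len(val_x))  # 16000 4000
```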
  • 6. 6. Data Augmentation
Fig. 3. After performing data augmentation

7. Methodology
The complete process of detecting characters from the license plate is split into three stages. Initially, the image is loaded. The image is then cropped to the predicted bounding-box coordinates, augmented to improve the predicted values, processed to match the model's input parameters, and normalized. 80% of the data is separated for training and the remaining 20% for validation. Our models are then applied to the training and validation data, the test data are predicted, and finally a pipeline with thresholding is created and EasyOCR is applied for character recognition. The flow diagram of the proposed license plate recognition system is shown in Fig. 4.
Data preprocessing: the data are loaded and unusable samples are deleted. Then annotation was completed for the number plates of all vehicles in the first dataset; to annotate the dataset, we utilize a labeling tool. After annotation, we apply contrast normalization for license plate recognition.
Fig. 1. Normalized image
Augmentation: the real image is converted to grayscale and transformed with horizontal flip, vertical flip, rotation and random brightness, as shown in Fig. 3. An illustrative augmentation pipeline is sketched below.
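The report does not state which augmentation library was used; the sketch below illustrates the five listed transformations with the Albumentations library (an assumption), applied so that the bounding box is transformed together with the image. The example image and box are hypothetical.

```python
import numpy as np
import albumentations as A

# Hypothetical example image (H x W x 3) and one box in pascal_voc format
# (x_min, y_min, x_max, y_max) with its class label.
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
bboxes = [(120, 340, 420, 430)]
labels = ["plate"]

transform = A.Compose(
    [
        A.RandomBrightnessContrast(p=0.5),  # random brightness
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.ToGray(p=0.2),                    # grayscale
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

augmented = transform(image=image, bboxes=bboxes, labels=labels)
print(augmented["bboxes"])  # boxes are transformed together with the image
```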
  • 7. Apply models and train on the dataset: YOLOv5, Inception-ResNetv2 and VGG16. Objectness is used by each model for bounding-box prediction and cost-function measurement: for each bounding box, the models predict an object score using logistic regression. The cost function is calculated differently in each model. These models, trained on the first dataset after annotation, were used for detecting the number plates.
License plate detection and localization: following training of YOLOv5 (10 epochs), Inception-ResNetv2 (180 epochs) and VGG16 (180 epochs), the best-performing checkpoint was used; the experiment saves the best epoch of training, and after training the targeted detection model is obtained.
Fig. 4. Flow diagram of our experiment
Crop license plate using the predicted bounding box: the saved model was used to detect the license plate, and the predicted bounding-box coordinates were then used to crop license plates from the images.
Apply threshold for character segmentation: the next step is segmentation. A thresholding algorithm was used; thresholding is an old but effective technique for segmentation. A cropping-and-thresholding sketch is given below.
Apply the EasyOCR model for character recognition: EasyOCR supports the Bangla language for optical character recognition (OCR) and can recognize printed and handwritten Bangla text from images or videos. It uses a deep learning model trained on Bangla characters for high confidence.
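As an illustration of the cropping and thresholding steps just described, here is a minimal OpenCV sketch; the image and the predicted box coordinates are hypothetical placeholders, not the authors' code.

```python
import cv2
import numpy as np

# In practice the test image would be loaded with cv2.imread(...); a synthetic
# image and a hypothetical predicted box are used here so the sketch runs
# standalone. Box format: (x_min, y_min, x_max, y_max) in pixels.
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
x_min, y_min, x_max, y_max = 120, 340, 420, 430

plate = image[y_min:y_max, x_min:x_max]            # crop the plate region

# Simple global (Otsu) threshold as the character-segmentation step.
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("plate_binary.png", binary)
```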
  • 8. 8. Models

Hyperparameters      | YOLOv5                            | VGG16       | Inception-ResNet v2
Learning rate        | 0.01                              | 0.001       | 1e-4
Optimizer            | SGD (Stochastic Gradient Descent) | Adam        | Adam
Number of parameters | 7,022,326                         | 17,099,140  | 73,663,490
Metrics              | mAP50 (mean average precision)    | accuracy    | accuracy

Table 2: Hyperparameters

Train/Test Split | YOLOv5 | VGG16 | Inception-ResNet v2
Training         | 80%    | 90%   | 80%
Testing          | 20%    | 10%   | 20%
Random state     | 0      | 1     | 0

Table 3: Train/test data split

Object detection using YOLOv5
Our model has a total of 214 layers, of which 127 are convolution layers. For detecting the license plate, we trained our model for 10 epochs, and for segmenting and recognizing the license plate we trained it for 10 epochs. The hyperparameters used during training are given below; an equivalent training command is sketched after the list.
I. Batch size = 20
II. Epochs = 50
III. Momentum = 0.937
IV. Weight decay = 0.0005 (effective 0.00046875)
V. Learning rate = 0.01
VI. Optimizer = SGD
VII. Loss = MSE (mean squared error)
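The exact training invocation is not included in the report; assuming the public ultralytics/yolov5 repository was used, an equivalent run with the listed batch size and epoch count might look like the following. The dataset file name is hypothetical, and lr0=0.01, momentum=0.937 and weight_decay=0.0005 correspond to the repository's default hyperparameter file, so no extra overrides are passed.

```python
# Illustrative only: launching YOLOv5 training from Python with the settings
# listed above. Assumes the ultralytics/yolov5 repository is cloned locally and
# that "bangla_plates.yaml" (hypothetical) points at the annotated dataset.
import subprocess

subprocess.run([
    "python", "train.py",
    "--img", "640",
    "--batch", "20",
    "--epochs", "50",
    "--data", "bangla_plates.yaml",
    "--weights", "yolov5s.pt",
], check=True)
```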
  • 9. Model details
The architecture of YOLOv5 is composed of several convolutional layers with various filter sizes, strides and activation functions. The model is based on a modified EfficientNet-style backbone, which consists of a series of convolutional blocks with varying depths and widths. An overview of the key layers and activation functions used in YOLOv5 is given below:
Fig. 5. Model summary for YOLOv5
1. Convolutional layers: the YOLOv5 architecture includes many convolutional layers, including 1x1, 3x3 and 5x5 filters, as well as dilated and transposed convolutions. These layers extract features from the input image at different spatial scales.
2. Activation functions: YOLOv5 uses the Mish activation function, a smooth, non-monotonic function that has been shown to improve performance over traditional activations such as ReLU. Mish is defined as f(x) = x * tanh(softplus(x)); a small sketch follows this list.
3. Spatial pyramid pooling: YOLOv5 incorporates a Spatial Pyramid Pooling (SPP) layer, which allows the network to capture features at multiple scales. The SPP layer pools features from different regions of the feature map at different scales and concatenates them into a single vector.
4. Backbone architecture: YOLOv5 uses a modified EfficientNet-style backbone composed of convolutional blocks with varying depths and widths. This allows the model to learn features at different scales efficiently while minimizing the number of parameters.
5. Object detection head: the final layers of YOLOv5 are convolutional layers that predict the bounding boxes and class probabilities for each object in the input image. These layers use anchor boxes to predict the location and size of each object and are trained with a combination of classification and regression loss functions.
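A tiny sketch of the Mish activation exactly as defined in point 2 above; this only illustrates the formula quoted in the text, not YOLOv5's internal implementation.

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    """Mish activation as quoted above: f(x) = x * tanh(softplus(x))."""
    return x * torch.tanh(F.softplus(x))

x = torch.linspace(-3, 3, 7)
print(mish(x))  # smooth and non-monotonic around zero
```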
  • 10. YOLOv5 is a powerful and efficient object detection model that combines state-of-the-art convolutional neural network architectures with advanced features such as Spatial Pyramid Pooling and the Mish activation function.
Testing code: the testing code performs object detection on an image with the trained YOLO model and then displays the result with a plot. It relies on a 'YOLO predictions' helper function that is defined elsewhere in the notebook; assuming that helper is implemented correctly, the code displays the original image with the detected objects overlaid on it and prints the text detected in any license plates that were identified. A hedged stand-in for this step is sketched below; the output is shown in Fig. 6.
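Since the 'YOLO predictions' helper is not reproduced in the report, the following is a hedged stand-in that loads custom weights through torch.hub and overlays the predicted boxes; the weight and image paths are hypothetical.

```python
import torch
import matplotlib.pyplot as plt

# "best.pt" (trained weights) and "car.jpg" (test image) are hypothetical paths.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
results = model("car.jpg")       # run detection on the test image
print(results.xyxy[0])           # [x1, y1, x2, y2, confidence, class] per box

plt.imshow(results.render()[0])  # image with the predicted boxes drawn on it
plt.axis("off")
plt.show()
```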
  • 11. Fig. 6. Detected bounding box with 89% confidence
Character recognition code: this code defines a function called 'extract_text' that extracts text from an image region specified by a bounding box, using the EasyOCR library to perform OCR on that region. Here is how the code works:
1. The 'easyocr' library is imported.
2. The 'extract_text' function is defined with two parameters, 'image' and 'bounding_box'. 'image' is the input image from which the text needs to be
  • 12. extracted; 'bounding_box' is a tuple that specifies the bounding-box coordinates of the region from which the text needs to be extracted.
3. The bounding-box coordinates are used to extract the region of interest (ROI) from the input image.
4. If the ROI has a shape of (0,0), meaning it is empty, the function returns the string 'no number'.
5. If the ROI is not empty, the EasyOCR 'readtext' function is used to extract the text from the ROI.
6. The resulting text is concatenated into a single string and stripped of any leading/trailing whitespace. The function returns the extracted text. A minimal reconstruction of this helper is sketched below.
Fig. 7. Recognized Bangla characters
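A minimal reconstruction of the described helper, assuming the function is named extract_text and that the bounding box is given in pixel coordinates (the exact coordinate ordering used in the original code is not shown, so it is assumed here):

```python
import easyocr

reader = easyocr.Reader(['bn'])  # Bangla language model

def extract_text(image, bounding_box):
    # Assumed ordering: (x_min, x_max, y_min, y_max) in pixels.
    x_min, x_max, y_min, y_max = bounding_box
    roi = image[y_min:y_max, x_min:x_max]        # region of interest
    if roi.shape[0] == 0 or roi.shape[1] == 0:   # empty crop
        return 'no number'
    results = reader.readtext(roi, detail=0)     # list of recognized strings
    return ' '.join(results).strip()
```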
Object detection using Inception-ResNetv2
Our model has 743 convolutional layers. For detecting the license plate, we trained our model for 180 epochs, and for segmenting and recognizing the license plate we trained it for 180 epochs. The hyperparameters used during training are given below:
I. Batch size = 10
II. Epochs = 180
III. Momentum = 0.7
IV. Weight decay =
V. Learning rate = 1e-4
VI. Optimizer = Adam
VII. Loss function = MSE (mean squared error)
Model details
This model performs image classification using transfer learning with the pre-trained InceptionResNetV2 model. Here is a breakdown of the key components of the code:
  • 13. Fig. 8. Model for Inception-ResNetv2
1. InceptionResNetV2 model: this is a pre-trained image classification model included in the Keras library. The InceptionResNetV2 function is used to load the pre-trained weights, and the include_top=False argument excludes the final fully connected layer of the model.
2. Head model: the head model variable defines the fully connected layers that are added on top of the pre-trained model. The output of the InceptionResNetV2 model is passed as input to the head model.
3. Flatten layer: the Flatten layer flattens the output of the InceptionResNetV2 model into a 1-dimensional vector.
4. Dense layers: the Dense layers define the fully connected layers in the head model. The first Dense layer has 500 units with ReLU activation, the second has 250 units with ReLU activation, and the final layer has 4 units with sigmoid activation.
5. Model: the Model function defines the final architecture, which includes the InceptionResNetV2 model as the base and the head model on top of it. The inputs argument specifies the input tensor and the outputs argument specifies the output tensor.
6. Compilation: the loss function and optimizer used during training are mean squared error (MSE) and the Adam optimizer with a learning rate of 1e-4.
This code creates a transfer-learning model that uses the pre-trained InceptionResNetV2 model to extract features from images and then applies a set of fully connected layers to predict the bounding box for the input image. The model is trained with the MSE loss function and the Adam optimizer. A sketch of this architecture is given below.
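A sketch of the transfer-learning architecture described in points 1-6, assuming 224x224 RGB inputs (the input size is not stated in the report):

```python
import tensorflow as tf
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(224, 224, 3))  # assumed input size
base = InceptionResNetV2(weights="imagenet", include_top=False,
                         input_tensor=inputs)

# Head: fully connected layers ending in 4 sigmoid units (normalized box values).
head = Flatten()(base.output)
head = Dense(500, activation="relu")(head)
head = Dense(250, activation="relu")(head)
outputs = Dense(4, activation="sigmoid")(head)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss="mse",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4))
model.summary()
```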
  • 14. Testing code: the code defines a function called 'object detection' that takes an input image, performs object detection using the trained model, draws a bounding box around the detected object, and returns the modified image along with the coordinates of the bounding box. The code then displays the modified image and a cropped image of the detected object. This is useful for performing object detection and extracting detected objects from images.
Fig. 9. Cropped bounding box
  • 15. Output:
Fig. 10. Detected bounding box with 61.95% confidence
Character recognition code: this code uses the EasyOCR library to perform optical character recognition (OCR) on an image. Specifically, it uses the 'bn' (Bangla) language model to read text from the image. Here is a breakdown of the code:
  • 16. 1. The 'easyocr' library is imported.
2. A reader object is created with the 'bn' language model.
3. The 'readtext' function is used to read text from the image.
4. The resulting text is stored in the 'text' variable by looping through the text-detection results and concatenating the detected text.
Finally, the detected text is printed to the console.
Fig. 11. Recognized Bangla characters
Object detection using VGG16
Our model has 13 convolutional layers and 3 fully connected layers. For detecting the license plate, we trained our model for 180 epochs, and for segmenting and recognizing the license plate we trained it for 180 epochs. The hyperparameters used during training are given below:
I. Batch size = 32
II. Epochs = 180
III. Momentum = 0.7
IV. Weight decay =
V. Learning rate = 0.001
VI. Optimizer = Adam
VII. Loss function = MSE (mean squared error)
Model details
The model is a Sequential model with the VGG16 architecture as the base.
  • 17. Fig. 12. Model summary for VGG16
The VGG16 layers are pre-trained on the ImageNet dataset, and only the fully connected layers are added on top. The fully connected layers have 128, 128 and 64 units respectively with ReLU activation, and the output layer has 4 units with sigmoid activation. The second-to-last layer of the VGG16 model is frozen, so its weights are not updated during training. The model is compiled with the mean squared error (MSE) loss function and the Adam optimizer with a learning rate of 0.001. The IMAGE_SIZE variable is not defined in the code snippet, so the exact input image size is not stated. The base is the pre-trained VGG16 model from the ImageNet dataset, which consists of 13 convolutional layers, followed by our fully connected layers:
1. Flatten: this layer flattens the output of the previous layer into a 1D tensor that can be fed into the fully connected layers.
2. Dense: a fully connected layer with 128 units and ReLU activation.
3. Dense: another fully connected layer with 128 units and ReLU activation.
4. Dense: yet another fully connected layer with 64 units and ReLU activation.
5. Dense (output layer): the final fully connected layer with 4 units and sigmoid activation, used to predict the four bounding-box values.
6. The second-to-last layer of the VGG16 base is frozen and its weights are not updated during training. The model is compiled with the mean squared
  • 18. error (MSE) loss function and the Adam optimizer with a learning rate of 0.001. A sketch of this architecture is given below the figure.
Testing code: the code defines a function called 'object detection' that takes an input image, performs object detection using the trained model, draws a bounding box around the detected object, and returns the modified image along with the coordinates of the bounding box. The code then displays the modified image and a cropped image of the detected object. This is useful for performing object detection and extracting detected objects from images.
Fig. 13. Cropped bounding box
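A sketch of the VGG16-based model described above; IMAGE_SIZE is assumed to be 224 since it is not defined in the report, and the partial-freezing detail mentioned in the text is simplified here to freezing the whole convolutional base.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Sequential

IMAGE_SIZE = 224  # assumption: not defined in the report
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = Sequential([
    base,
    Flatten(),
    Dense(128, activation="relu"),
    Dense(128, activation="relu"),
    Dense(64, activation="relu"),
    Dense(4, activation="sigmoid"),  # normalized bounding-box values
])
model.compile(loss="mse",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
model.summary()
```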
  • 19. Fig. 14. Detected bounding box with 77.27% confidence
Character recognition code: this code uses the EasyOCR library to perform optical character recognition (OCR) on an image, again with the 'bn' (Bangla) language model. Here is a breakdown of the code:
1. The 'easyocr' library is imported.
2. A reader object is created with the 'bn' language model.
3. The 'readtext' function is used to read text from the image.
4. The resulting text is stored in the 'text' variable by looping through the text-detection results and concatenating the detected text.
Finally, the detected text is printed to the console.
Fig. 15. Recognized Bangla characters
  • 20. 9. Result and Analysis

                | YOLOv5 | VGG16  | Inception-ResNet v2
Batch size      | 20     | 32     | 10
Epochs          | 50     | 180    | 180
Confidence (%)  | 89     | 77.27  | 61.95

Table 4: Summary of the confidence at the minimum epoch size

As mentioned above, this system was built with 3 different models for this experiment. For YOLOv5, a total of 3 phases were developed: license plate detection in the first step, segmentation in the second, and identification of the license plate characters in the last phase. All the observations of these three phases are illustrated here; the same applies to the Inception-ResNetv2 and VGG16 models. License plate detection using YOLOv5 produced correct predictions with an output confidence of 89%. License plate detection using Inception-ResNetv2 produced correct predictions with an output confidence of 61.95% on the training dataset, and at times around 64% on other images. License plate detection using VGG16 produced correct predictions with an output confidence of 77.27%. After training with a lot of data, the test phase showed better results for license plate detection. Images from different angles were used for testing, and our proposed models handled those cases much better. Figs. 15-17 show the detection of a license plate from an image. YOLOv5 can detect the bounding box with few epochs, but Inception-ResNetv2 and VGG16 need many epochs; when we use too few epochs, these models cannot detect the bounding box properly. For Inception-ResNetv2 trained with 50 epochs, the resulting detection is shown in Fig. 14.
Fig. 15. Bounding box and detected characters by YOLOv5
  • 21. Fig. 16. Bounding box and detected characters by Inception-ResNetv2
Fig. 17. Bounding box and detected characters by VGG16
  • 22. Fig. 18. Bounding box and detected characters by VGG16 using a close-up image
For VGG16, the bounding box cannot be detected perfectly whether we use low or high epoch counts; for Inception-ResNetv2 trained with 50 epochs, the resulting detection is shown in Fig. 12.
Fig. 19. Bounding box for VGG16 with a small number of epochs (50)
  • 23. Fig. 20. Bounding box and character detection for VGG16 with a small number of epochs (100)
Fig. 21. Bounding box and character detection for Inception-ResNetv2 at 180 epochs, but on a low-resolution image
The confidence (recognition rate, RR) is calculated as:
confidence = (no. of recognized samples / no. of total samples of that sign) × 100% [6]
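A trivial illustration of this formula (the sample counts below are made up for the example):

```python
def recognition_rate(recognized: int, total: int) -> float:
    """Recognition rate as a percentage, per the formula above."""
    return recognized / total * 100.0

print(recognition_rate(89, 100))  # 89.0, e.g. 89 of 100 plates recognized
```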
  • 24.
Model              | Character recognition time
YOLOv5             | 5.11 s
Inception-ResNetv2 | 6.09 s
VGG16              | 4.647 s

Table 5: Character recognition time per model

As the table above shows, even though the same character recognition model is used with all three object detection models, the character recognition times differ; VGG16 is faster for character recognition than the other two models.
Fig. 22. Chart of the confidence of the 3 models
From the chart (Fig. 22) we can see that the YOLOv5 model's confidence is higher than that of the other 2 models, and it detects the bounding box perfectly. Its character recognition time is higher than VGG16's, but it detects the correct characters 100% of the time. VGG16's confidence is 77.27%, which is higher than the Inception-ResNetv2 model's, but it cannot detect the bounding box correctly, although its character recognition time is the lowest. The Inception-ResNetv2 model's confidence is lower than that of the other two models and its character recognition time is the highest, but it can detect the bounding box correctly. So, YOLOv5 is the best model for bounding-box detection in our experiment.

10. Stakeholders
Any individuals or groups that are affected, either positively or negatively, by a project, initiative, policy, or organization are considered stakeholders. They may be internal or external.
a. The stakeholders of our project are:
• Our Project Supervisor: Hasan Muhammad Kafi (Assistant Professor of BAUST)
  • 25. b. Project Developers include:
• Most. Jannat-Ul-Ferdoush
• Mst. Habiba Hena Sumi
• Md. Talath Un Nabi
c. External Stakeholders:
• All the users and testers of our model.

11. Issues Encountered
1. In certain cases, the model may incorrectly recognize text, leading to inaccurate results.
2. When training the model on a smaller dataset, it may struggle to accurately detect number plates, as it has less data to learn from.
3. Training the model without a GPU can result in longer training sessions, as GPUs are optimized for parallel processing and can significantly speed up training time.

12. Conclusion, Limitations and Future Recommendations
Limitations:
1. The model struggles to detect number plates in low-quality or noisy images.
2. There are instances where the model incorrectly recognizes text.
3. The model has difficulty detecting number plates in images taken from complex angles.
4. When working with a small dataset, the model may have challenges accurately detecting bounding boxes.
5. If the angle of the number plate is too high, all three models fail to detect the bounding box during testing.
6. Images with significant noise pose challenges for the model to accurately detect the bounding box.
7. The Inception-ResNetv2 and VGG16 models have difficulty detecting bounding boxes with few epochs, whereas YOLO performs better in this scenario.

Automatic number plate detection is a complete package: capturing the license plate of the vehicle and recognizing it accurately within a bounding box. The purpose of this system is to detect Bengali license plates to identify vehicles properly using 3 different models. The YOLOv5 confidence is 89%, the Inception-ResNetv2 model's confidence is 61.95%, and the VGG16 model's confidence is 77.27%. If we analyze the 3 models, the best model in our experiment for bounding-box detection is YOLO. Inception-ResNetv2 is less accurate than the other two models, but it also detects the plate properly. The last model was VGG16: its confidence was good, but it did not properly detect the license plate characters. Every system has limitations; none works 100% of the time, so this system still has a large scope for further development. In experimental models for number plate region detection and plate extraction in the context of Bangladesh,
  • 26. the background scenes are more complex and the weather of the country is always changing, so these models could be further developed for changing climate conditions. If these limitations are removed, the project can be applied to build a traffic violation detection and control system.

References
[1] Shahed, Md Tanvir, et al. "Automatic Bengali number plate reader." TENCON 2017 - 2017 IEEE Region 10 Conference. IEEE, 2017.
[2] Kashyap, Abhishek, et al. "Automatic number plate recognition." 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). IEEE, 2018.
[3] Saif, Nazmus, et al. "Automatic license plate recognition system for Bangla license plates using convolutional neural network." TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). IEEE, 2019.
[4] Kakani, Bhavin V., Divyang Gandhi, and Sagar Jani. "Improved OCR based automatic vehicle number plate recognition using features trained neural network." 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, 2017.
[5] Sarif, Md Mesbah, et al. "Deep learning-based Bangladeshi license plate recognition system." 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, 2020.
[6] Al Nasim, Md Abdullah, et al. "An automated approach for the recognition of Bengali license plates." 2021 International Conference on Electronics, Communications and Information Technology (ICECIT). IEEE, 2021.
[7] Joarder, Md Mahbubul Alam, et al. "Bangla automatic number plate recognition system using artificial neural network." Asian Transactions on Science & Technology (ATST) 2.1 (2012): 1-10.
[8] Uddin, Md Azher, Joolekha Bibi Joolee, and Shayhan Ameen Chowdhury. "Bangladeshi vehicle digital license plate recognition for metropolitan cities using support vector machine." Proc. International Conference on Advanced Information and Communication Technology. 2016.
[9] Suvon, Md Naimul Islam, Riasat Khan, and Mehebuba Ferdous. "Real time Bangla number plate recognition using computer vision and convolutional neural network." 2020 IEEE 2nd International Conference on Artificial Intelligence in Engineering and Technology (IICAIET). IEEE, 2020.
[10] Rahman, MM Shaifur, et al. "Bangla license plate recognition using convolutional neural networks (CNN)." 2019 22nd International Conference on Computer and Information Technology (ICCIT). IEEE, 2019.
  • 27. [11] Islam, Tariqul, and Risul Islam Rasel. "Real-time Bangla license plate recognition system using Faster R-CNN and SSD: A deep learning application." 2019 IEEE International Conference on Robotics, Automation, Artificial-Intelligence and Internet-of-Things (RAAICON). IEEE, 2019.
[12] Sarif, Md Mesbah, et al. "Deep learning-based Bangladeshi license plate recognition system." 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, 2020.

Appendix

Attainment of Complex Engineering Problem (CP)
S.L. | CP No.                                                                                                                              | Attainment | Remarks
1.   | P1: Depth of Knowledge Required (K3 Engineering Fundamentals, K4 Engineering Specialization, K5 Design, K6 Technology, K8 Research) |            |
2.   | P2: Range of Conflicting Requirements                                                                                               |            |
3.   | P3: Depth of Analysis Required                                                                                                      |            |
4.   | P4: Familiarity of Issues                                                                                                           |            |
5.   | P5: Extent of Applicable Codes                                                                                                      |            |
6.   | P6: Extent of Stakeholder Involvement and Conflicting Requirements                                                                  |            |
7.   | P7: Interdependence                                                                                                                 |            |

Mapping of Complex Engineering Activities (CA)
S.L. | CA No.                                            | Attainment | Remarks
1.   | A1: Range of resources                            |            |
  • 28. 2. | A2: Level of interaction                       |            |
3.   | A3: Innovation                                    |            |
4.   | A4: Consequences for Society and the Environment  |            |
5.   | A5: Familiarity                                   |            |