International Journal of Power Electronics and Drive System (IJPEDS)
Vol. 11, No. 4, December 2020, pp. 2091∼2098
ISSN: 2088-8694, DOI: 10.11591/ijpeds.v11.i4.pp2091-2098
A real-time system for vehicle detection with shadow
removal and vehicle classification based on vehicle features
at urban roads
Issam Atouf, Wahban Al Okaishi, Abdemoghit Zaarane, Ibtissam Slimani, Mohamed Benrabh
LTI Lab. Faculty of sciences Ben M’sik, Hassan II University of Casablanca, Morocco
Article Info
Article history:
Received Feb 2, 2020
Revised Apr 24, 2020
Accepted May 19, 2020
Keywords:
Background subtraction
Image processing
Shadow removal
Vehicle classification
Vehicle detection
ABSTRACT
Monitoring traffic in urban areas is an important task for intelligent transport applications, helping to alleviate problems such as traffic jams and long trip times. Traffic flow in urban areas is more complicated than on highways, due to the slow movement of vehicles and crowded traffic. In this paper, a vehicle detection and classification system for intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the detection phase, background subtraction with the mixture of Gaussians (MoGs) algorithm is used to detect the moving vehicles, and a shadow removal algorithm is developed to improve the detection phase by eliminating the undesired detected regions (shadows). After the detection phase, the vehicles are tracked until they reach the classification line, where the vehicle dimensions are used to classify them into three classes (cars, bikes, and trucks). The system maintains one counter per class; when a vehicle is classified into a specific class, that class counter is incremented by one. The counting results can be used to estimate the traffic density at intersections and to adjust the traffic light timing for the next light cycle. The system is applied to videos obtained from stationary cameras, and the results obtained demonstrate the robustness and accuracy of the proposed system.
This is an open access article under the CC BY-SA license.
Corresponding Author:
Issam Atouf,
Faculty of Sciences Ben M'sik, Hassan II University of Casablanca,
Casablanca, Morocco.
Email: issamatouf@yahoo.fr
1. INTRODUCTION
Traffic problems are among the most serious problems faced by residents of large cities, and traffic management agencies have therefore paid great attention to solving them. The first step in traffic analysis is the collection of traffic information. Several techniques have been developed for traffic data collection; detectors such as loop, radar, infrared, and microwave sensors are used for this task, helping traffic flow management by providing information about the level of traffic density on the roads. However, these detectors have drawbacks that limit their use: their installation requires pavement cuts, and their detection zone is small. In recent years, vision-based systems have been widely used in traffic management because of their advantages over electronic sensors: they extract useful data over a wide detection area, with the ability to define the shape of the detection zone. The
Journal homepage: http://ijpeds.iaescore.com
first phase in analyzing traffic parameters is vehicle detection. Several methods have been developed for vehicle detection, and they can be grouped into two main approaches: texture-based and motion-based. The texture-based approach uses vehicle features such as edges, corners, and colors, and is well suited to detecting stationary vehicles. The motion-based approach depends on the movement of vehicles and is widely used in intelligent transportation systems. There are two main methods for detecting moving objects: optical flow and background subtraction. Optical flow is accurate in detecting object motion and provides additional information such as velocity and direction of motion; however, it is computationally expensive and not suitable for real-time applications [1]. Background subtraction is the most common method in the literature for detecting moving vehicles. In this work, background subtraction is used to separate the moving vehicles from the background model. However, the shadows of moving vehicles are also detected and thus counted as part of the vehicle dimensions, which leads to misclassification. To cope with this issue, we propose an algorithm that removes the shadow region based on the edges of the detected regions; this algorithm outperforms other shadow removal algorithms. Once the vehicles are detected without shadows, they are tracked until they reach the classification line, where they are classified into a number of predefined types. Classification methods can be grouped into two categories: those that measure the vehicle dimensions [2] and those that use machine learning techniques [3].
Various systems have been developed to detect and classify vehicles. A real-time system for vehicle detection and classification is described in [4]. Its first phase is vehicle detection, performed with background subtraction using an adaptive background update method; the vehicles are then tracked using an association graph between consecutive frames, and finally the vehicle dimensions are used to classify them into two categories: small vehicles (cars) and big vehicles (vans, trucks, and buses). In [3], a vehicle classification system is developed. Moving objects (vehicles) are separated from static objects (the background) using a GMM method, and the detected vehicles are then classified with a two-level support vector machine (SVM) approach, where the second level handles the occlusion problem. The vehicles are classified into four classes (bus, car, minibus, and truck). In [5], an image processing system is proposed to detect and classify vehicles from their rear view. It uses a temporal median filter to establish the background model, and the scene frame is subtracted from the background model to detect the moving vehicles. Classification is performed with an SVM after extracting vehicle features with a deep convolutional neural network, yielding two types: passenger vehicles and a class of other vehicles. In [2], a real-time traffic surveillance system measures traffic flow by detecting and counting vehicles. The background model is built from the temporal mean and standard deviation of the gray-level distribution in consecutive frames for each pixel. After segmentation, each detected object is bounded by a rectangular box, and object features (height, width, and aspect ratio) are computed to achieve a robust and accurate classification into two classes (cars and bikes).
In this paper, a vehicle detection and classification system for intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the detection phase, background subtraction with the mixture of Gaussians (MoGs) algorithm is used to detect the moving vehicles, and a shadow removal algorithm is developed to improve detection by eliminating the undesired detected regions (shadows). The vehicles are then tracked until they reach the classification line, where the vehicle dimensions are used to classify them into three classes (cars, bikes, and trucks). After classification, the vehicles in each class are counted; the counting results can be used to estimate the traffic density at intersections and to adjust the traffic light timing for the next light cycle. The system is applied to videos obtained from stationary cameras. The rest of the paper is organized as follows: section 2 describes the vehicle detection and the shadow removal algorithm; the vehicle classification method is presented in section 3; experimental results are presented in section 4; finally, the conclusion is given in section 5.
2. VEHICLE DETECTION
Methods for detecting moving objects, including humans [6, 7] and vehicles [8, 9], have been developed by several researchers. The most common method in the literature is background subtraction. In the past decade, numerous background subtraction algorithms have been proposed to extract and update the background
model. These algorithms can be non-recursive or recursive: a non-recursive algorithm uses a buffer of video frames to obtain the background, while a recursive algorithm updates the background model recursively from each input frame. Elgammal et al. [10] proposed a non-recursive method called the non-parametric model, which uses the entire history of a pixel to estimate the background by computing a density function for each pixel; a pixel is considered background if this function exceeds a predefined threshold. The drawbacks of this method are that it is time consuming and requires large memory storage. Karmann and Brandt [11] developed a recursive technique for estimating the background model based on a Kalman filter, modeling the intensity and its temporal derivative. The background is updated recursively using three matrices: the background dynamics matrix, the measurement matrix, and the Kalman gain matrix. However, this method is affected by foreground pixels even at a slow adaptation rate. Kim et al. [12] proposed a multimodal background method called the codebook method, in which each pixel is summarized by a number of codewords stored in a codebook, each codeword containing a set of parameters. An input pixel is considered background if its brightness falls within the brightness range of some codeword and the color distortion of that codeword is smaller than the detection threshold. However, this method cannot correctly detect dark-grey and white moving objects, because it treats dark-grey objects as shadow and white objects as a sudden increase of illumination [13].
In this paper, the MoGs method [14] is used to perform background subtraction. It is the most widely used method in the literature, owing to its robustness against environmental changes and its ability to handle multimodal background distributions. The idea is as follows: each pixel is represented by a number of Gaussian distributions (components). The number of components typically falls between three and five, depending on storage limitations and the feasibility of real-time operation; three components are sufficient for our system. A component is considered matched if the difference between the component mean and the pixel value is less than a predefined threshold, and its parameters are then updated as follows: the weight increases, the standard deviation decreases, and the mean moves closer to the pixel value. For an unmatched component, only the weight is updated, decreasing exponentially. If the pixel has no matched component, the component with the least weight is replaced by a new component whose mean equals the pixel value, with a large initial variance and a small weight. The components are then ranked by a confidence metric (weight/standard deviation), and a predefined threshold is applied to the component weights. The background model consists of the first-ranked components whose weights are higher than the threshold, while the foreground pixels (moving-object pixels) are those with no matched component in the background model. Figure 1 shows the result of applying the MoGs method to two different traffic scenes.
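The matching and update rules above can be sketched for a single pixel as follows. This is an illustrative pure-Python implementation, not the paper's code: the learning rate, match threshold, initial variance and weight, and background weight threshold (ALPHA, MATCH_SIGMAS, INIT_VAR, INIT_WEIGHT, BG_THRESHOLD) are assumed values chosen for the example.

```python
# Minimal single-pixel mixture-of-Gaussians update (a sketch; parameter
# values are illustrative assumptions, not taken from the paper).
import math

K = 3               # number of Gaussian components per pixel (paper uses three)
ALPHA = 0.01        # learning rate (assumed)
MATCH_SIGMAS = 2.5  # a pixel matches a component within 2.5 standard deviations
INIT_VAR = 900.0    # large initial variance for a replacement component
INIT_WEIGHT = 0.05  # small initial weight for a replacement component
BG_THRESHOLD = 0.7  # cumulative weight threshold selecting background components

def update_pixel(components, value):
    """components: list of [weight, mean, var]; returns True if value is background."""
    matched = None
    for c in components:
        if abs(value - c[1]) < MATCH_SIGMAS * math.sqrt(c[2]):
            matched = c
            break
    if matched is not None:
        for c in components:
            c[0] *= (1.0 - ALPHA)                      # unmatched weights decay
        matched[0] += ALPHA                            # matched weight grows
        diff = value - matched[1]
        matched[1] += ALPHA * diff                     # mean moves toward the pixel value
        matched[2] += ALPHA * (diff * diff - matched[2])  # standard deviation shrinks
    else:
        # no match: replace the least-weighted component, centred on the pixel
        components.sort(key=lambda c: c[0])
        components[0] = [INIT_WEIGHT, float(value), INIT_VAR]
    total = sum(c[0] for c in components)
    for c in components:
        c[0] /= total                                  # renormalise the weights
    # rank by confidence (weight / standard deviation); top-ranked components
    # whose cumulative weight exceeds the threshold form the background model
    components.sort(key=lambda c: c[0] / math.sqrt(c[2]), reverse=True)
    acc, background = 0.0, []
    for c in components:
        background.append(c)
        acc += c[0]
        if acc > BG_THRESHOLD:
            break
    return matched is not None and matched in background
```

After many frames with a stable intensity, that intensity is absorbed into the background model, while a sudden new value is reported as foreground.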
Figure 1. Vehicle detection in different scenes, (a) Scene1, frame 1938, (b) Scene1, frame 2015,
(c) Scene2, frame 1056, (d) Scene2, frame 1733, (e) MoG result, frame 1938, (f) MoG result, frame 2015,
(g) MoG result, frame 1056, (h) MoG result, frame 1733
2.1. Shadow removal
In this section, an algorithm that removes vehicle shadows is developed to improve the performance of the system in two respects. First, it reduces occlusion problems between vehicles. Second, if the shadow region is not removed, it is counted as part of the vehicle dimensions and the vehicle is classified into the wrong class; removing the shadow therefore improves the classification process and increases system performance. Many methods have been developed to detect and remove vehicle shadows [15-19]. Some use color information to distinguish vehicle pixels from shadow pixels, exploiting the difference in chromaticity and luminance between shadow and vehicle [20, 21]; however, these methods fail to separate shadow pixels from dark vehicle pixels. Other methods use texture information to identify the vehicle region [22]: since shadow has little texture, it is simple to separate the vehicle region from the shadow region. In this paper, we propose an algorithm that removes the shadow region based on the edges of the detected regions. The proposed algorithm involves the following steps:
− When the background subtraction method is applied, the moving objects (vehicles) are detected together with their shadows (the detected regions). The shadow removal algorithm is then applied to these detected regions in both the foreground image produced by background subtraction and the gray-scale source image. Figure 2 (a) and (b) show the detected region in the gray-scale image and the foreground image, respectively.
− The Canny edge detector is applied to these two images. The results of edge detection are shown in Figure 2 (c) and (d), respectively.
− An XOR operation is applied to the two images resulting from edge detection. Figure 2 (e) shows the image obtained from this operation.
− Because edge detection is applied to the detected region of the gray-scale image, undesired background edges may appear, such as edges of white road marks, damaged marks, or tree and building shadows. Therefore, the Canny edge detector is also applied to the detected region in the background model (the image when no vehicles are present).
− The background-object edges are subtracted from the edges of the image obtained in the third step, leaving only the vehicle edges. A closing operation is then applied to the resulting image to fill the spaces between the edges and preserve the vehicle details. The image obtained from this step is shown in Figure 2 (f).
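The edge-combination steps above can be sketched as follows. This sketch assumes the Canny edge maps are already available as binary arrays, and stands in a tiny 3x3 dilate/erode pair for the morphological closing; the array contents in the usage example are illustrative.

```python
# A sketch of the edge-based shadow removal (steps 3-5), operating on
# binary edge maps such as those produced by a Canny detector.
import numpy as np

def dilate3x3(mask):
    """3x3 binary dilation via shifted copies of the mask."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def close3x3(mask):
    """Binary closing: dilation followed by erosion, filling small gaps."""
    dilated = dilate3x3(mask)
    return ~dilate3x3(~dilated)           # erosion as dual of dilation

def vehicle_edges(gray_edges, fg_edges, bg_edges):
    """Combine the edge maps: XOR, remove background edges, then close."""
    xored = gray_edges ^ fg_edges          # XOR of the two edge images
    without_bg = xored & ~bg_edges         # subtract static background edges
    return close3x3(without_bg)            # fill gaps to preserve vehicle detail
```

An edge present only in the gray-scale image (a vehicle edge) survives, while an edge that also appears in the background model (e.g. a road marking) is removed.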
Figure 2. Shadow removal
2.2. Vehicle tracking
There are several methods to track moving objects through image sequences. They fall into two categories: feature-based tracking methods and model-based tracking methods. Feature-based methods are the most widely used in the literature because of their robustness [2]. In this work, after the vehicles are detected without shadows, a feature-based method is used to track them through the image sequence within the detection zone (designated by two blue horizontal lines, as shown in Figure 3). When a vehicle appears in the detection area, its centroid is calculated, and this feature is used to track the detected vehicle across consecutive frames. First, an empty vector is initialized to hold the vehicle centroid positions. When moving vehicles are detected in the current frame, their centroid positions are recorded in this vector. In adjacent frames, the spatially closest moving objects are associated, so measuring the distance between consecutive frames is sufficient to track the moving objects. The Euclidean distance (ED) is used to measure the distance between the position of a vehicle centroid in the current frame and in the previous frame: for each moving vehicle in the current frame, the vehicle with the minimum distance is searched for in the previous frame, and its record is updated to the new position. When a vehicle leaves the detection area, its record is removed from the vector.
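The centroid-matching described above can be sketched as a small tracker: each track keeps its last centroid, and each detection in the new frame is associated with the nearest existing track by Euclidean distance. The track ids and the maximum-distance gate are illustrative assumptions, not values from the paper, and the greedy nearest-neighbor matching here is a simplification.

```python
# A sketch of centroid-based tracking across consecutive frames.
import math

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.tracks = {}          # track id -> last known centroid (x, y)
        self.next_id = 0
        self.max_distance = max_distance  # assumed association gate, in pixels

    def update(self, centroids):
        """Associate new-frame centroids with existing tracks, greedily."""
        assigned = {}
        unmatched = list(self.tracks.items())
        for c in centroids:
            # find the nearest existing track within the distance gate
            best = None
            for i, (tid, prev) in enumerate(unmatched):
                d = math.dist(c, prev)
                if d < self.max_distance and (best is None or d < best[1]):
                    best = (i, d)
            if best is not None:
                tid, _ = unmatched.pop(best[0])
                assigned[tid] = c             # move the record to the new position
            else:
                assigned[self.next_id] = c    # a new vehicle entering the zone
                self.next_id += 1
        self.tracks = assigned                # vehicles that left the zone are dropped
        return assigned
```

A vehicle keeps its id as long as its centroid moves only a short distance between frames, and its record is dropped once it no longer appears among the detections.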
Figure 3. The results of the detection and classification process, (a) The result of the frame 438,
(b) The result of the frame 350, (c) The result of the frame 1005, (d) The result of the frame 1572
3. VEHICLE CLASSIFICATION
After the vehicles are detected without shadows, the tracking process is applied. When a vehicle reaches the classification line (designated by a red horizontal line, as shown in Figure 3), the classification process is carried out based on the vehicle dimensions. In this work, three parameters of each vehicle (aspect ratio (AR) = height/width, height, and width) are used to classify it into one of three classes (cars, bikes, and trucks). The aspect ratio is calculated from the dimensions of different types of vehicles, taken from [23]; the transformation from 3D to 2D was taken into account. The aspect ratios of the different vehicle types are as follows:
− AR of cars is between [1.17-1.4].
− AR of trucks is between [1.3-1.9].
− AR of bikes is between [1.8-2.4].
The classification process involves the following steps:
− When the vehicle reaches the classification line, its height and width are calculated.
− If the vehicle width is greater than a specific threshold, a horizontal occlusion has occurred (when the distance between two adjacent vehicles is very small, they are detected as one object; this is called a horizontal occlusion). The detected region is treated as two adjacent vehicles and is separated into two regions by dividing the width by two.
− If the vehicle height is greater than a specific threshold, a vertical occlusion has occurred (when the distance between two consecutive vehicles is very small, they are detected as one object; this is called a vertical occlusion). The detected region is treated as two consecutive vehicles and is separated into two regions by dividing the height by two.
− The aspect ratio of each vehicle is calculated. If AR is between 1.17 and 1.3, the vehicle is a car. If AR is between 1.41 and 1.79, the vehicle is a truck. If AR is between 1.91 and 2.4, the vehicle is a bike. If AR falls in the overlapping interval [1.3-1.4], the vehicle height is used to decide between car and truck: if the height is greater than a predefined threshold, the vehicle is a truck; otherwise it is a car. If AR falls in the overlapping interval [1.8-1.9], the vehicle width is used to decide between bike and truck: if the width is greater than a predefined threshold, the vehicle is a truck; otherwise it is a bike.
− Three counters, C-car, C-bike, and C-truck, count the number of vehicles in each class. They are initialized to zero, and when a detected vehicle is classified into one of the three classes, the counter of that class is incremented by one.
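The decision rules above can be sketched as follows. The aspect-ratio intervals are the ones given in the paper; the occlusion-splitting and tie-break thresholds (WIDTH_SPLIT, HEIGHT_SPLIT, CAR_TRUCK_HEIGHT, BIKE_TRUCK_WIDTH) are illustrative assumptions, since the paper does not state their values.

```python
# A sketch of dimension-based classification and counting.
WIDTH_SPLIT = 120       # assumed horizontal-occlusion threshold (pixels)
HEIGHT_SPLIT = 160      # assumed vertical-occlusion threshold (pixels)
CAR_TRUCK_HEIGHT = 90   # assumed tie-break height for AR in [1.3, 1.4]
BIKE_TRUCK_WIDTH = 60   # assumed tie-break width for AR in [1.8, 1.9]

counters = {"car": 0, "bike": 0, "truck": 0}   # C-car, C-bike, C-truck

def split_occlusions(w, h):
    """Split a region that is too wide or too tall into two vehicles."""
    boxes = [(w / 2, h), (w / 2, h)] if w > WIDTH_SPLIT else [(w, h)]
    out = []
    for bw, bh in boxes:
        out += [(bw, bh / 2), (bw, bh / 2)] if bh > HEIGHT_SPLIT else [(bw, bh)]
    return out

def classify(w, h):
    """Apply the aspect-ratio rules, with dimension tie-breaks in overlaps."""
    ar = h / w
    if 1.17 <= ar < 1.3:
        return "car"
    if 1.41 <= ar <= 1.79:
        return "truck"
    if 1.91 <= ar <= 2.4:
        return "bike"
    if 1.3 <= ar <= 1.4:                        # car/truck overlap: use height
        return "truck" if h > CAR_TRUCK_HEIGHT else "car"
    if 1.8 <= ar <= 1.9:                        # bike/truck overlap: use width
        return "truck" if w > BIKE_TRUCK_WIDTH else "bike"
    return None                                 # AR outside all intervals

def count_vehicle(w, h):
    """Classify a detected region (splitting occlusions) and update counters."""
    for bw, bh in split_occlusions(w, h):
        label = classify(bw, bh)
        if label is not None:
            counters[label] += 1                # one counter per class
```

For example, a 50x60 region (AR 1.2) counts as a car, while a 70x129.5 region (AR 1.85) falls in the bike/truck overlap and, with the assumed width threshold, counts as a truck.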
4. EXPERIMENTAL RESULTS
In this paper, a traffic surveillance system is proposed to detect and classify different types of vehicles. To confirm that the proposed method performs this task effectively, we used a database taken from stationary traffic cameras in the city of Casablanca. The database contains two traffic videos: the first contains 3580 frames at a resolution of 240x320, and the other contains 2030 frames at the same resolution. Two different traffic scenes are used: one video is taken in the area just after a traffic light (Scene 1) and the other on an urban road (Scene 2). The output of the running system includes the vehicle type and the vehicle number, which is displayed above the bounding box. Each vehicle is labelled with a black rectangular box until it reaches the classification line, after which the box color changes according to the classification result: blue for a car, yellow for a bike, and red for a truck. Figure 3 (a) shows the result for frame 438, in which four cars are in the detection area, three of which have crossed the classification line; they are classified as cars and labelled with blue rectangular boxes. Figure 3 (b) shows the result for frame 350, in which one truck has been counted and labelled with a red rectangular box. Figure 3 (c) shows the result for frame 1005, in which one bike has been counted and labelled with a yellow rectangular box and one car has been counted and labelled with a blue rectangular box. Figure 3 (d) shows the result for frame 1572 of video sequence 2, in which two cars have been detected; the first has crossed the classification line, so it has been counted and labelled with a blue rectangular box. According to these results, the proposed system can classify and count the different types of vehicles correctly. Table 1 shows the results of vehicle counting for the two traffic scenes; the average accuracy over the two scenes is 96.78%.
Table 1. Results of vehicle counting

Scene 1
Vehicle Type   Count   Error   Accuracy
Car            230     6       97.45%
Bike           26      1       96.2%
Truck          18      1       94.73%
Average                        96.13%

Scene 2
Vehicle Type   Count   Error   Accuracy
Car            147     3       97.59%
Bike           19      1       94.7%
Truck          10      0       100%
Average                        97.43%
To evaluate the efficiency of the proposed system, a comparative study was carried out against other surveillance systems. A fair quantitative comparison is difficult because the systems use different databases; therefore, we conducted a qualitative comparative study instead. Table 2 shows
the results of the comparative study. As the table shows, the counting accuracy of the proposed algorithm, 96.78% on average, indicates that it is more efficient than the other compared algorithms.
Table 2. Results of the comparative study

[24]: cars only; detection by optical flow; no classification; counting accuracy 94.04%
[2]: cars and bikes; detection by background subtraction; classification by feature extraction; counting accuracy 96.9%
[3]: bus, minibus, car, and truck; detection by background subtraction; classification by hierarchical multi-SVMs; counting accuracy Scene 1: 93%, Scene 2: 88%, Scene 3: 90%, Scene 4: 95%
[25]: cars only; detection by frame differencing; no classification; counting accuracy 96.04%
Our method: cars, bikes, and trucks; detection by background subtraction; classification by feature extraction; counting accuracy Scene 1: 96.13%, Scene 2: 97.43%
5. CONCLUSION
A system for vehicle detection and classification has been introduced in this paper. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the detection phase, background subtraction with the mixture of Gaussians (MoGs) algorithm is used to detect the moving vehicles, and a shadow removal algorithm is developed to improve detection by eliminating the undesired detected regions. The vehicles are then tracked until they reach the classification line, after which the vehicle dimensions are used to classify them into three classes (cars, bikes, and trucks). After classification, the vehicles in each class are counted. The system has been applied to various traffic scenes under different weather and lighting conditions, and the experimental results confirm that it can detect and classify vehicles accurately and efficiently in real time.
REFERENCES
[1] I. Kajo, A. S. Malik, and N. Kamel, ”Motion estimation of crowd flow using optical flow techniques:
A review,” 9th International Conference on Signal Processing and Communication Systems (ICSPCS),
pp. 1-9, 2015.
[2] D.-Y. Huang, et al., ”Feature-based vehicle flow analysis and measurement for a real-time traf-
fic surveillance system,” Journal of Information Hiding and Multimedia Signal Processing, vol. 3,
pp. 279-294, 2012.
[3] H. Fu, H. Ma, Y. Liu and D. Lu, ”A vehicle classification system based on hierarchical multi-SVMs in
crowded traffic scenes,” Neurocomputing, pp. 182-190, 2016.
[4] S. Gupte, O. Masoud, R. F. Martin, and N. P. Papanikolopoulos, ”Detection and classification of vehicles,”
IEEE Transactions on intelligent transportation systems, vol. 3, no. 1 pp. 37-47, 2002.
[5] Y. Zhou and C. Ngai-Man, "Vehicle classification using transferable deep neural network features,"
arXiv preprint arXiv:1601.01145, 2016.
[6] S. Ojha, and S. Sachin, ”Image processing techniques for object tracking in video surveillance-A survey,”
IEEE International Conference on Pervasive Computing (ICPC), 2015.
[7] J. Liu, Y. Liu, G. Zhang, P. Zhu, and YQ. Chen, ”Detecting and tracking people in real time with RGB-D
camera,” Pattern Recognition Letters, vol. 53, pp. 16-23, 2015.
[8] G. Chavez, O. Ricardo, and A. Olivier, ”Multiple sensor fusion and classification for moving ob-
ject detection and tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 2,
pp. 525-534, 2016.
[9] S. Kamkar, and S. Reza., ”Vehicle detection, counting and classification in various conditions,”
IET Intelligent Transport Systems, vol. 10, no. 6, pp. 406-413, 2016.
[10] A. Elgammal, D. Harwood, and L. Davis., ”Non-parametric model for background subtraction,” Euro-
pean conference on computer vision. Springer, pp. 751-767, 2000.
[11] K.-P. Karmann and A. Brandt, "Moving object recognition using an adaptive background memory,"
Time-Varying Image Processing and Moving Object Recognition, vol. 2, pp. 289-307, 1990.
[12] K. Kim, et al., ”Real-time foreground–background segmentation using codebook model,” Real-time imag-
ing, vol. 11, no. 3, pp. 172-185, 2005.
[13] Y. Benezeth, et al., ”Comparative study of background subtraction algorithms,” Journal of Electronic
Imaging, vol. 19, no. 3, 2010.
[14] C. Stauffer, and W. Grimson., ”Learning patterns of activity using real-time tracking,” IEEE Trans. on
Pattern Analysis and Machine Intelligence, vol. 22, pp. 747-757, Aug 2000.
[15] S. P. Mohammed, N. Arunkumar and A. Enas., ”Automated multimodal background detection and shadow
removal process using robust principal fuzzy gradient partial equation methods in intelligent transporta-
tion systems,” International Journal of Heavy Vehicle Systems, vol. 25, pp. 271-285, 2018.
[16] S. Karim, Y. Zhang, S. Ali, and M. R. Asif, "An improvement of vehicle detection under shadow
regions in satellite imagery," Ninth International Conference on Graphic and Image Processing (ICGIP),
vol. 10615, 2018.
[17] Kumar, Cheruku Sandesh et al., ”Segmentation on moving shadow detection and removal by symlet
transform for vehicle detection,” 3rd International Conference on Computing for Sustainable Global De-
velopment (INDIACom), pp. 259-264, 2016.
[18] Seenouvong, Nilakorn and Watchareeruetai, Ukrit and Nuthong, Chaiwat and Khongsomboon, Kham-
phong and Ohnishi, Noboru., ”A computer vision based vehicle detection and counting system,”
8th International Conference on Knowledge and Smart Technology (KST), pp. 224-227, 2016.
[19] Hanif, Muhammad and Hussain, Fawad and Yousaf, Muhammad Haroon and Velastin, Sergio A and
Chen, Zezhi, ”Shadow Detection for Vehicle Classification in Urban Environments,” International Con-
ference Image Analysis and Recognition, pp. 352-362, 2017.
[20] A. Tiwari, K. S. Pradeep, and A. Sobia., ”A survey on Shadow Detection and Removal in images and
video sequences,” IEEE sixth International Conference-Cloud System and Big Data Engineering, 2016.
[21] Z. Zhu, and X. Lu., ”An accurate shadow removal method for vehicle tracking,” IEEE International Con-
ference on Artificial Intelligence and Computational Intelligence, vol. 2, pp. 59-62, 2010.
[22] R. P. Avery, et al., ”Investigation into shadow removal from traffic images,” Transportation Research
Record 2000, vol. 1, pp. 70-77, 2007.
[23] Transport for NSW, Vehicle standards information, no. 5, Rev. 5, Published 9 November 2012.
[24] HS. Mohana, M. Ashwathakumar, and G. Shivakumar., ”Vehicle detection and counting by using real time
traffic flux through differential technique and performance evaluation,” IEEE International Conference
on Advanced Computer Control, pp. 791-795, 2009.
[25] N. Seenouvong, U. Watchareeruetai, C. Nuthong, K. Khongsomboon, and N. Ohnishi., ”A computer
vision based vehicle detection and counting system,” IEEE 8th International Conference on Knowledge
and Smart Technology (KST), pp. 224-227, 2016.
Int J Pow Elec & Dri Syst, Vol. 11, No. 4, December 2020 : 2091 – 2098

A real-time system for vehicle detection with shadow removal and vehicle classification based on vehicle features at urban roads

International Journal of Power Electronics and Drive System (IJPEDS)
Vol. 11, No. 4, December 2020, pp. 2091-2098
ISSN: 2088-8694, DOI: 10.11591/ijpeds.v11.i4.pp2091-2098

A real-time system for vehicle detection with shadow removal and vehicle classification based on vehicle features at urban roads

Issam Atouf, Wahban Al Okaishi, Abdemoghit Zaarane, Ibtissam Slimani, Mohamed Benrabh
LTI Lab., Faculty of Sciences Ben M'sik, Hassan II University of Casablanca, Morocco

Article history: Received Feb 2, 2020; Revised Apr 24, 2020; Accepted May 19, 2020

Keywords: Background subtraction; Image processing; Shadow removal; Vehicle classification; Vehicle detection

ABSTRACT
Monitoring traffic in urban areas is an important task for intelligent transport applications, helping to alleviate problems such as traffic jams and long trip times. Traffic flow in urban areas is more complicated than on highways because of the slow movement of vehicles and the crowded traffic there. In this paper, a vehicle detection and classification system for intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the detection phase, background subtraction based on the mixture of Gaussians (MoG) algorithm is used to detect moving vehicles, and a shadow removal algorithm is developed to improve detection by eliminating the undesired detected regions (shadows). After the detection phase, the vehicles are tracked until they reach the classification line, where their dimensions are used to classify them into three classes (cars, bikes, and trucks). The system maintains three counters, one per class; when a vehicle is classified into a specific class, that class's counter is incremented by one.
The counting results can be used to estimate the traffic density at intersections and adjust the timing of the traffic light for the next light cycle. The system is applied to videos obtained by stationary cameras. The results obtained demonstrate the robustness and accuracy of the proposed system. This is an open access article under the CC BY-SA license.

Corresponding Author:
Issam Atouf
Faculty of Sciences Ben M’sik, Hassan II University of Casablanca
Casablanca, Morocco
Email: issamatouf@yahoo.fr

Journal homepage: http://ijpeds.iaescore.com

1. INTRODUCTION
Traffic problems are among the most serious problems encountered by the residents of large cities, so traffic management companies have paid great attention to solving them. The first step in traffic analysis is the collection of traffic information. Several techniques have been developed for traffic data collection; many detectors (such as loop, radar, infrared, and microwave detectors) are used for this task. These detectors help traffic flow management by providing information about the level of traffic density on the roads. However, they have drawbacks that limit their use: their installation requires pavement cuts, and their detection zone is small. In recent years, vision-based systems have been widely used in traffic management because of their advantages over electronic sensors: they extract useful data by covering a wide detection area, with the ability to determine the shape of the detection zone. The
first phase in analyzing traffic parameters is vehicle detection. Several methods have been developed for vehicle detection, and they can be grouped into two main approaches: texture-based and motion-based. The first utilizes vehicle features such as edges, corners, and colors; this approach is good at detecting stationary vehicles. The second depends on the movement of vehicles and is widely used in intelligent transportation systems. There are two main methods to detect moving objects: optical flow and background subtraction. Optical flow is accurate in detecting object motion and provides additional information about the motion, such as velocity and direction; however, it has a high computational cost and is not suitable for real-time applications [1]. Background subtraction is the most common method used in the literature for detecting moving vehicles. In this work, background subtraction is utilized to separate the moving vehicles from the background model. However, the shadows of moving vehicles are also detected; they are then counted as part of the vehicle dimensions, which leads to misclassification of the vehicles. To cope with this issue, we propose an algorithm that removes the shadow region based on the edges of the detected regions; this algorithm outperforms the other shadow removal algorithms. Once the vehicles are detected without shadows, they are tracked until they reach the classification line, and classification is then performed to categorize them into a number of predefined types. Classification methods can be grouped into two groups: one classifies vehicles by measuring their dimensions [2], while the other utilizes machine learning techniques [3].
Various systems have been developed to detect and classify vehicles. A real-time system for vehicle detection and classification is described in [4].
The first phase of their work is vehicle detection: they used background subtraction with an adaptive background update method. The vehicles were then tracked using an association graph between consecutive frames. Finally, the vehicle dimensions were used to classify the vehicles into two categories, small vehicles (cars) and big vehicles (vans, trucks, and buses). In [3], a system for vehicle classification was developed. The moving objects (vehicles) are separated from the static objects (background) using the GMM method, and an approach based on two levels of support vector machines (SVMs) is then used to classify the detected vehicles; the second level is employed to solve the occlusion problem. The vehicles were classified into four classes (bus, cars, minibus, and trucks). In [5], an image processing system is proposed to detect and classify vehicles based on the rear view. This system utilizes a temporal median filter to establish the background model, and the scene frame is then subtracted from the background model to detect the moving vehicles. Classification is performed with the SVM method after extracting vehicle features using a deep convolutional neural network; the vehicles are classified into two types, passenger vehicles and an "other vehicle" class. In [2], a real-time traffic surveillance system is proposed to measure traffic flow by detecting and counting vehicles. The background model is built from the temporal information of the mean and standard deviation of the grey-level distribution at each point over consecutive frames. After the segmentation process, each detected object is bounded by a rectangular box, and its features (height, width, and aspect ratio) are calculated to achieve a robust and accurate classification into two classes (cars and bikes).
In this paper, a vehicle detection and classification system at intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the vehicle detection phase, background subtraction based on the mixture of Gaussians (MoGs) algorithm is used to detect the moving vehicles, and a shadow removal algorithm is developed to improve detection by eliminating the undesired detected regions (shadows). After the detection phase, the vehicles are tracked until they reach the classification line. The vehicle dimensions are then used to classify the vehicles into three classes (cars, bikes, trucks), and after the classification phase the vehicles in each class are counted. The counting results can be used to estimate the traffic density at intersections and adjust the timing of the traffic light for the next light cycle. The system is applied to videos obtained by stationary cameras. The rest of the paper is organized as follows: section 2 describes vehicle detection and the shadow removal algorithm; the vehicle classification method is presented in section 3; experimental results are presented in section 4; finally, the conclusion is given in section 5.

2. VEHICLE DETECTION
Methods for detecting moving objects, including humans [6, 7] and vehicles [8, 9], have been developed by several researchers. The most common method used in the literature is background subtraction. In the past decade, numerous background subtraction algorithms have been proposed to extract and update the background
model. These algorithms can be non-recursive or recursive: a non-recursive algorithm utilizes a buffer of video frames to obtain the background, while a recursive algorithm updates the background model recursively based on each input frame. Elgammal et al. [10] proposed a non-recursive method called the non-parametric model. They used the entire history of pixels to estimate the background by calculating a density function for each pixel; a pixel is considered background if this function is greater than a predefined threshold. The drawbacks of this method are that it is time consuming and requires large memory storage. Karmann and Brandt [11] developed a recursive technique for estimating the background model based on the Kalman filter, using the intensity and its temporal derivative. The background is recursively updated using three matrices: the background dynamics matrix, the measurement matrix, and the Kalman gain matrix. However, this method is affected by foreground pixels even when it operates with a slow adaptation rate. Kim et al. [12] proposed a multimodal background method called the codebook method. Each pixel is summarized by a number of codewords stored in a codebook, and each codeword contains a set of parameters. An input pixel is considered a background pixel if its brightness falls within the brightness range of some codeword and its color distortion from that codeword is smaller than the detection threshold. However, this method cannot correctly detect dark grey and white moving objects, because it considers dark grey objects as shadows and white objects as a sudden increase in illumination [13]. In this paper, the MoGs method [14] is used to perform the background subtraction process.
It is the most widely used method in the literature, owing to its robustness against environmental changes and its capability to handle multimodal background distributions. The idea of this method is as follows: each pixel is represented by a number of Gaussian distributions (components). The number of components is between three and five, depending on the storage limitation and the possibility of realizing the system in real time; three components are sufficient for our system. A component is considered matched if the difference between the component mean and the pixel value is less than a predefined threshold, and its parameters are then updated as follows: the weight increases, the standard deviation decreases, and the mean moves closer to the pixel value. If a component is not matched, only its weight is updated, and it decreases exponentially. If the pixel has no matched component, the component with the least weight is replaced by a new component whose mean equals the pixel value, with a large initial variance and a small weight. After that, the components are ranked according to a confidence metric (weight/standard deviation), and a predefined threshold is applied to the component weights. The background model consists of the first components whose weights are higher than the threshold, while the foreground pixels (moving-object pixels) are those that have no component in the background model. Figure 1 shows the result of applying the MoGs method to two different traffic scenes.

Figure 1. Vehicle detection in different scenes: (a) Scene 1, frame 1938; (b) Scene 1, frame 2015; (c) Scene 2, frame 1056; (d) Scene 2, frame 1733; (e) MoG result, frame 1938; (f) MoG result, frame 2015; (g) MoG result, frame 1056; (h) MoG result, frame 1733
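The update rules described above can be sketched for a single pixel in a few lines. This is a minimal illustration, not the paper's implementation: the learning rate ALPHA, the 2.5-standard-deviation match test, and the cumulative weight threshold T are assumed values.

```python
# Per-pixel sketch of the MoGs update rules (K = 3 components).
# ALPHA, the 2.5-sigma match test, and T are illustrative assumptions.

K = 3          # number of Gaussian components per pixel
ALPHA = 0.05   # learning rate
T = 0.7        # cumulative-weight threshold for the background model

def make_pixel_model(init_value):
    # each component is [weight, mean, standard deviation]
    return [[1.0 / K, float(init_value), 30.0] for _ in range(K)]

def update_pixel(comps, value):
    """Update one pixel's mixture with a new observation.
    Returns True if the pixel is classified as foreground."""
    matched = None
    for c in comps:
        if abs(value - c[1]) < 2.5 * c[2]:   # match test
            matched = c
            break
    if matched is not None:
        for c in comps:
            c[0] = (1 - ALPHA) * c[0]        # all weights decay...
        matched[0] += ALPHA                  # ...the matched weight increases
        matched[1] += ALPHA * (value - matched[1])  # mean moves toward the pixel
        var = (1 - ALPHA) * matched[2] ** 2 + ALPHA * (value - matched[1]) ** 2
        matched[2] = max(var ** 0.5, 1.0)    # std shrinks toward the data
    else:
        # no match: replace the least-weight component with a new one
        comps.sort(key=lambda c: c[0])
        comps[0] = [0.05, float(value), 30.0]
    # renormalize weights and rank by confidence (weight / std)
    total = sum(c[0] for c in comps)
    for c in comps:
        c[0] /= total
    comps.sort(key=lambda c: c[0] / c[2], reverse=True)
    # background model: first components until cumulative weight exceeds T
    background, cum = [], 0.0
    for c in comps:
        background.append(c)
        cum += c[0]
        if cum > T:
            break
    # foreground if the observation matches no background component
    return not any(abs(value - c[1]) < 2.5 * c[2] for c in background)
```

Feeding the model a stable intensity for a while and then a very different one marks the new value as foreground, since it matches no background component.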
2.1. Shadow removal
In this section, an algorithm that removes vehicle shadows is developed to improve the performance of the system in two respects. First, it reduces the occurrence of occlusion between vehicles. Second, if the shadow region is not removed, it is counted as part of the vehicle dimensions and the vehicle is classified into the wrong class; removing the shadow therefore improves the classification process and increases system performance. Many methods have been developed to detect and remove vehicle shadows [15-19]. Some utilize color information to distinguish vehicle pixels from shadow pixels, exploiting the difference in chromaticity and luminance between shadow and vehicle [20, 21]; however, these methods fail to separate shadow pixels from dark vehicle pixels. Other methods employ texture information to identify the vehicle region [22]: since a shadow has little texture, it is simple to separate the vehicle region from the shadow region. In this paper, we propose an algorithm that removes the shadow region based on the edges of the detected regions. The proposed algorithm involves the following steps:
− When the background subtraction method is applied, the moving objects (vehicles) are detected together with their shadows (the detected regions). The shadow removal algorithm is then applied to these detected regions in both the image resulting from background subtraction and the grey-scale source image. Figure 2 (a) and (b) show the detected region in the grey-scale image and the foreground image, respectively.
− The Canny edge detector is applied to these two images. The results of edge detection are shown in Figure 2 (c) and (d), respectively.
− An XOR operation is applied to the two images resulting from the edge detection operation. Figure 2 (e) shows the image obtained from this operation.
− Because edge detection is applied to the detected region in the grey-scale image, undesired background edges may remain, such as the edges of white road marks, damaged marks, and the shadows of trees or buildings. Therefore, the Canny edge detector is also applied to the detected region in the background model (the image when there are no vehicles).
− The edges of the background objects are subtracted from the edges of the image obtained in step 3, leaving only the vehicle edges. A morphological close operation is then applied to the resulting image to fill the spaces between the edges and preserve the vehicle details. The image obtained from this step is shown in Figure 2 (f).

Figure 2. Shadow removal
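On binary edge masks, steps 3-5 reduce to a few Boolean operations. The sketch below assumes the three edge maps have already been computed; a naive 3x3 dilate/erode pair stands in for the close operation, and the function names are illustrative, not the paper's.

```python
# Boolean-mask sketch of the edge-based shadow removal steps above.
# edges_gray, edges_fg, edges_bg stand in for real Canny edge maps.
import numpy as np

def remove_shadow_edges(edges_gray, edges_fg, edges_bg):
    """edges_gray: edges of the detected region in the grey-scale frame,
    edges_fg: edges of the same region in the foreground mask,
    edges_bg: edges of the region in the background model."""
    combined = np.logical_xor(edges_gray, edges_fg)   # step 3: XOR
    vehicle = np.logical_and(combined, ~edges_bg)     # step 5: drop background edges
    return binary_close(vehicle)                      # fill gaps between edges

def binary_close(mask):
    # naive morphological close: 3x3 dilation followed by 3x3 erosion
    return erode(dilate(mask))

def dilate(mask):
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask):
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out
```

An edge that also appears in the background model (a lane mark, for instance) is removed from the result, while vehicle texture edges survive the close operation.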
2.2. Vehicle tracking
There are several methods to track moving objects through image sequences. They can be grouped into two categories, feature-based tracking methods and model-based tracking methods; feature-based methods are the most widely used in the literature because of their robustness [2]. In this work, after the vehicles are detected without shadows, a feature-based method is employed to track the moving vehicles through the image sequences within the detection zone (designated by two blue horizontal lines, as shown in Figure 3). When a vehicle appears in the detection area, its centroid is calculated, and this feature is used to track the detected vehicle between consecutive frames. First, an empty vector is initialized to hold the positions of the vehicle centroids. When moving vehicles are detected in the current frame, their centroid positions are recorded in this vector. In adjacent frames, the moving objects that are spatially closest are associated with each other, so measuring the distance between consecutive frames is sufficient to track the moving objects. The Euclidean distance (ED) is used to measure the distance between the position of a vehicle centroid in the current frame and its position in the previous frame. For each moving vehicle in the current frame, the vehicle with the minimum distance in the previous frame is found, and its record is updated to the new position. When a vehicle leaves the detection area, its record is removed from the vector.

Figure 3. The results of the detection and classification process, (a) the result of frame 438, (b) the result of frame 350, (c) the result of frame 1005, (d) the result of frame 1572

3. VEHICLE CLASSIFICATION
After the detection of vehicles without shadows, the tracking process is implemented.
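The centroid-matching scheme of section 2.2 can be sketched as a small tracker. This is a hedged illustration: the class name and the MAX_DIST gate that rejects implausibly long jumps are assumptions, not the paper's exact design.

```python
# Sketch of feature-based tracking by nearest-centroid matching:
# centroids in the current frame are matched to the closest recorded
# centroid from the previous frame by Euclidean distance.
import math

class CentroidTracker:
    MAX_DIST = 50.0           # ignore matches farther than this (pixels); assumed

    def __init__(self):
        self.tracks = {}      # vehicle id -> last known centroid
        self.next_id = 0

    def update(self, centroids):
        assigned = {}
        unmatched = dict(self.tracks)
        for c in centroids:
            # find the nearest previously recorded centroid
            best_id, best_d = None, self.MAX_DIST
            for vid, prev in unmatched.items():
                d = math.dist(c, prev)
                if d < best_d:
                    best_id, best_d = vid, d
            if best_id is None:           # new vehicle entering the zone
                best_id = self.next_id
                self.next_id += 1
            else:
                del unmatched[best_id]
            assigned[best_id] = c         # record the new position
        # vehicles that left the detection area are dropped
        self.tracks = assigned
        return assigned
```

Matching greedily in detection order is a simplification; a crowded scene may require a globally optimal assignment instead of first-come nearest-neighbour matching.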
When the vehicle reaches the classification line (designated by a red horizontal line, as shown in Figure 3), the classification process is carried out based on the vehicle dimensions. In this work, three parameters (aspect ratio (AR) = height/width, height, and width) are used to classify the vehicles into three classes (cars, bikes, and trucks). The aspect ratio is calculated from the dimensions of the different vehicle types, taken from [23], taking into consideration the transformation from 3D to 2D. The aspect ratios of the different vehicle types are as follows:
− AR of cars is between [1.17-1.4].
− AR of trucks is between [1.3-1.9].
− AR of bikes is between [1.8-2.4].
The classification process involves the following steps:
− When the vehicle reaches the classification line, its height and width are calculated.
− If the vehicle width is greater than a specific threshold, there is a horizontal occlusion (when the distance between two adjacent vehicles is very small, they are detected as one object; this is called a horizontal occlusion). The detected region is treated as two adjacent vehicles and is separated into two regions by dividing the width by two.
− If the vehicle height is greater than a specific threshold, there is a vertical occlusion (when the distance between two consecutive vehicles is very small, they are detected as one object; this is called a vertical occlusion). The detected region is treated as two consecutive vehicles and is separated into two regions by dividing the height by two.
− The aspect ratio of each vehicle is calculated. If AR is between 1.17 and 1.3, the vehicle is a car; if AR is between 1.41 and 1.79, it is a truck; if AR is between 1.91 and 2.4, it is a bike. If AR falls in the overlapping interval [1.3-1.4], the vehicle height is used to decide between car and truck: if the height is greater than a predefined threshold, the vehicle is a truck; otherwise, it is a car. If AR falls in the overlapping interval [1.8-1.9], the vehicle width is used to decide between bike and truck: if the width is greater than a predefined threshold, the vehicle is a truck; otherwise, it is a bike.
− Three counters, called C-car, C-bike, and C-truck, count the number of vehicles in each class. They are initialized to zero, and when a detected vehicle is classified into one of the three classes, the counter of that class is incremented by one.
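The steps above can be sketched directly in code. The AR intervals are the ones given in the text; the occlusion-split thresholds MAX_W/MAX_H and the tie-breaker thresholds H_TRUCK/W_TRUCK are illustrative assumptions, since the paper does not publish its pixel values.

```python
# Sketch of dimension-based classification and per-class counting.
# MAX_W, MAX_H, H_TRUCK, W_TRUCK are assumed pixel thresholds.

MAX_W, MAX_H = 120, 160        # occlusion-split thresholds (pixels)
H_TRUCK, W_TRUCK = 90, 70      # tie-breakers for the overlapping AR intervals

counters = {"car": 0, "bike": 0, "truck": 0}   # C-car, C-bike, C-truck

def split_occlusions(w, h):
    """Split an oversized detected region into candidate vehicles."""
    regions = [(w, h)]
    if w > MAX_W:              # horizontal occlusion: halve the width
        regions = [(w / 2, h), (w / 2, h)]
    if h > MAX_H:              # vertical occlusion: halve the height
        regions = [(rw, rh / 2) for rw, rh in regions for _ in range(2)]
    return regions

def classify(w, h):
    ar = h / w                 # aspect ratio = height / width
    if 1.17 <= ar < 1.3:
        return "car"
    if 1.41 <= ar <= 1.79:
        return "truck"
    if 1.91 <= ar <= 2.4:
        return "bike"
    if 1.3 <= ar <= 1.4:       # car/truck overlap: decide by height
        return "truck" if h > H_TRUCK else "car"
    if 1.8 <= ar <= 1.9:       # bike/truck overlap: decide by width
        return "truck" if w > W_TRUCK else "bike"
    return None                # outside all intervals

def count_vehicle(w, h):
    for rw, rh in split_occlusions(w, h):
        cls = classify(rw, rh)
        if cls is not None:
            counters[cls] += 1  # increment the class counter
```

For example, a 40x48 region has AR 1.2 and is counted as a car, while a 160-pixel-wide region is split as a horizontal occlusion and counted twice.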
4. EXPERIMENTAL RESULTS
In this paper, a traffic surveillance system is proposed to detect and classify the different types of vehicles. To confirm that the proposed method performs this task effectively, we used a database taken from stationary traffic cameras in the city of Casablanca. This database contains two traffic videos: the first contains 3580 frames at a resolution of 240x320, and the other contains 2030 frames at the same resolution. Two different traffic scenes are used: one video is taken in the area just after a traffic light (Scene 1), and the other is taken on an urban road (Scene 2). The results produced during system operation contain the vehicle type and the vehicle number; the vehicle number is displayed above the bounding box. Each vehicle is labelled with a black rectangular box until it reaches the classification line, after which the box color changes according to the classification result: blue if the vehicle is a car, yellow if it is a bike, and red if it is a truck. Figure 3 (a) shows the result for frame 438, in which four cars are in the detection area; three of them have crossed the classification line, so they are classified as cars and labelled with blue rectangular boxes. Figure 3 (b) shows the result for frame 350, in which one truck has been counted and labelled with a red rectangular box. Figure 3 (c) shows the result for frame 1005, in which one bike has been counted and labelled with a yellow rectangular box, and one car has been counted and labelled with a blue rectangular box. Figure 3 (d) shows the result for frame 1572 from video sequence 2, in which two cars have been detected; the first one crossed the classification line, so it has been counted and labelled with a blue rectangular box.
According to these results, the different types of vehicles are classified and counted correctly by the proposed system. Table 1 shows the results of vehicle counting for the two traffic scenes; the average accuracy over the two scenes is 96.78%.

Table 1. Results of vehicle counting

Scene 1:
Vehicle type   Count   Error   Accuracy
Car            230     6       97.45%
Bike           26      1       96.2%
Truck          18      1       94.73%
Average                        96.13%

Scene 2:
Vehicle type   Count   Error   Accuracy
Car            147     3       97.59%
Bike           19      1       94.7%
Truck          10      0       100%
Average                        97.43%

To evaluate the efficiency of the proposed system, a comparative study against different surveillance systems was conducted. A fair quantitative comparison is difficult because the systems were evaluated on different databases; therefore, a qualitative comparative study was conducted instead. Table 2 shows
the results of the comparative study. As the table shows, the counting accuracy of the proposed algorithm, 96.78% on average, indicates that it is more efficient than the other compared algorithms.

Table 2. Results of the comparative study
− [24]: cars only; detection by optical flow; no classification; counting accuracy 94.04%.
− [2]: cars and bikes; detection by background subtraction; classification by feature extraction; counting accuracy 96.9%.
− [3]: bus, minibus, car, and truck; detection by background subtraction; classification by hierarchical multi-SVMs; counting accuracy: Scene 1: 93%, Scene 2: 88%, Scene 3: 90%, Scene 4: 95%.
− [25]: cars only; detection by frame differencing; no classification; counting accuracy 96.04%.
− Our method: cars, bikes, and trucks; detection by background subtraction; classification by feature extraction; counting accuracy: Scene 1: 96.13%, Scene 2: 97.43%.

5. CONCLUSION
A system for vehicle detection and classification has been introduced in this paper. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the vehicle detection phase, background subtraction based on the mixture of Gaussians (MoGs) algorithm is used to detect the moving vehicles, and a shadow removal algorithm is developed to improve detection by eliminating the undesired detected regions. The vehicles are then tracked until they reach the classification line, after which their dimensions are used to classify them into three classes (cars, bikes, and trucks) and the vehicles in each class are counted. The system has been applied to various traffic scenes under different weather and lighting conditions. The experimental results confirm that the proposed system can detect and classify vehicles accurately and efficiently in real time.

REFERENCES
[1] I. Kajo, A. S. Malik, and N.
Kamel, "Motion estimation of crowd flow using optical flow techniques: A review," 9th International Conference on Signal Processing and Communication Systems (ICSPCS), pp. 1-9, 2015.
[2] D.-Y. Huang, et al., "Feature-based vehicle flow analysis and measurement for a real-time traffic surveillance system," Journal of Information Hiding and Multimedia Signal Processing, vol. 3, pp. 279-294, 2012.
[3] H. Fu, H. Ma, Y. Liu, and D. Lu, "A vehicle classification system based on hierarchical multi-SVMs in crowded traffic scenes," Neurocomputing, pp. 182-190, 2016.
[4] S. Gupte, O. Masoud, R. F. Martin, and N. P. Papanikolopoulos, "Detection and classification of vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37-47, 2002.
[5] Y. Zhou and C. Ngai-Man, "Vehicle classification using transferable deep neural network features," arXiv preprint arXiv:1601.01145, 2016.
[6] S. Ojha and S. Sachin, "Image processing techniques for object tracking in video surveillance: A survey," IEEE International Conference on Pervasive Computing (ICPC), 2015.
[7] J. Liu, Y. Liu, G. Zhang, P. Zhu, and Y. Q. Chen, "Detecting and tracking people in real time with RGB-D camera," Pattern Recognition Letters, vol. 53, pp. 16-23, 2015.
[8] G. Chavez, O. Ricardo, and A. Olivier, "Multiple sensor fusion and classification for moving object detection and tracking," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 2, pp. 525-534, 2016.
[9] S. Kamkar and S. Reza, "Vehicle detection, counting and classification in various conditions," IET Intelligent Transport Systems, vol. 10, no. 6, pp. 406-413, 2016.
[10] A. Elgammal, D. Harwood, and L. Davis, "Non-parametric model for background subtraction," European Conference on Computer Vision, Springer, pp. 751-767, 2000.
[11] K.-P. Karmann and A. Brandt, "Moving object recognition using an adaptive background memory," Time-Varying Image Processing and Moving Object Recognition, vol. 2, pp. 289-307, 1990.
[12] K. Kim, et al., "Real-time foreground-background segmentation using codebook model," Real-Time Imaging, vol. 11, no. 3, pp. 172-185, 2005.
[13] Y. Benezeth, et al., "Comparative study of background subtraction algorithms," Journal of Electronic Imaging, vol. 19, no. 3, 2010.
[14] C. Stauffer and W. Grimson, "Learning patterns of activity using real-time tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 747-757, Aug. 2000.
[15] S. P. Mohammed, N. Arunkumar, and A. Enas, "Automated multimodal background detection and shadow removal process using robust principal fuzzy gradient partial equation methods in intelligent transportation systems," International Journal of Heavy Vehicle Systems, vol. 25, pp. 271-285, 2018.
[16] S. Karim, Y. Zhang, S. Ali, and M. R. Asif, "An improvement of vehicle detection under shadow regions in satellite imagery," Ninth International Conference on Graphic and Image Processing (ICGIP), vol. 10615, 2018.
[17] C. S. Kumar, et al., "Segmentation on moving shadow detection and removal by symlet transform for vehicle detection," 3rd International Conference on Computing for Sustainable Global Development (INDIACom), pp. 259-264, 2016.
[18] N. Seenouvong, U. Watchareeruetai, C. Nuthong, K. Khongsomboon, and N. Ohnishi, "A computer vision based vehicle detection and counting system," 8th International Conference on Knowledge and Smart Technology (KST), pp. 224-227, 2016.
[19] M. Hanif, F. Hussain, M. H. Yousaf, S. A. Velastin, and Z. Chen, "Shadow detection for vehicle classification in urban environments," International Conference on Image Analysis and Recognition, pp.
352-362, 2017.
[20] A. Tiwari, K. S. Pradeep, and A. Sobia, "A survey on shadow detection and removal in images and video sequences," IEEE Sixth International Conference on Cloud System and Big Data Engineering, 2016.
[21] Z. Zhu and X. Lu, "An accurate shadow removal method for vehicle tracking," IEEE International Conference on Artificial Intelligence and Computational Intelligence, vol. 2, pp. 59-62, 2010.
[22] R. P. Avery, et al., "Investigation into shadow removal from traffic images," Transportation Research Record, vol. 1, pp. 70-77, 2007.
[23] Transport for NSW, Vehicle Standards Information, no. 5, rev. 5, 9 November 2012.
[24] H. S. Mohana, M. Ashwathakumar, and G. Shivakumar, "Vehicle detection and counting by using real time traffic flux through differential technique and performance evaluation," IEEE International Conference on Advanced Computer Control, pp. 791-795, 2009.
[25] N. Seenouvong, U. Watchareeruetai, C. Nuthong, K. Khongsomboon, and N. Ohnishi, "A computer vision based vehicle detection and counting system," IEEE 8th International Conference on Knowledge and Smart Technology (KST), pp. 224-227, 2016.