BIG DATA MINING AND ANALYTICS
ISSN 2096-0654 04/06 pp137–145
Volume 1, Number 2, June 2018
DOI: 10.26599/BDMA.2018.9020013
Big Data Oriented Novel Background Subtraction Algorithm for
Urban Surveillance Systems
Ling Hu, Qiang Ni, and Feng Yuan
Abstract: Due to the tremendous volume of data generated by urban surveillance systems, big data oriented low-
complexity automatic background subtraction techniques are in great demand. In this paper, we propose a novel
automatic background subtraction algorithm for urban surveillance systems in which the computer can automatically
renew an image as the new background image when no object is detected. This method is both simple and robust
with respect to changes in light conditions.
Key words: big data; background subtraction; urban surveillance systems
1 Introduction
Big data research is attracting a great amount of
attention due to the seemingly infinite generation of
huge data worldwide. As one example, big data-
oriented techniques have emerged as an important
research topic for smart cities and urban surveillance
systems[1,2].
Urban surveillance systems are important
applications for smart cities[3,4]. These systems
use automated object detection methods whereby
cameras are installed to automatically detect vehicles
and other objects. Because of the tremendous volume
of data generated by urban surveillance systems, to
obtain useful information from the huge numbers of
images and videos, low-complexity techniques that can
automatically identify objects from various sources
are in high demand. Hence, automated or so-called
• Ling Hu and Qiang Ni are with the School of Computing and
Communications, Lancaster University, InfoLab21, Lancaster
LA1 4WA, UK. E-mail: {l.hu, q.ni}@lancaster.ac.uk.
• Feng Yuan is with the Chinese Academy of Sciences Smart
City Software Co. Ltd., China, and also with the Institute of
Software Application Technology, Guangzhou & Chinese
Academy of Sciences, Guangzhou 511458, China. E-mail:
yf@gz.iscas.ac.cn; yf@iot-cas.com.
* To whom correspondence should be addressed.
Manuscript received: 2017-12-30; accepted: 2018-01-03
automatic object detection algorithms have become
an important research topic. Currently, the algorithms
under investigation can be classified into three main
groups: frame difference, background subtraction, and
optical flow calculation methods.
The frame difference algorithm is used to analyze
the image sequences of two or more adjacent frames to
identify moving targets by calculating the differences in
continuous frames. For every pixel, if the difference is
larger than a set threshold, the result is labelled 1, else
0. The larger the threshold value, the less noise there
is[5,6]. To detect and track moving targets, some authors
have proposed combining the frame difference method
with the particle filter algorithm[7]. The shortcoming
of the frame difference method is that it is not able to
detect nonmoving objects, since there is no difference
between adjacent frames.
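As a concrete illustration of the thresholding step described above, the following minimal sketch (the function name and the threshold value are illustrative, not taken from the paper) labels each pixel 1 when adjacent frames differ by more than a set threshold:

```python
import numpy as np

def frame_difference(prev, curr, threshold=30):
    """Label a pixel 1 where adjacent frames differ by more than the
    threshold, else 0; larger thresholds suppress more noise."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# A stationary scene yields no difference, so only changed pixels survive:
prev = np.full((4, 4), 100, dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200                      # one pixel changed between frames
mask = frame_difference(prev, curr)
assert mask.sum() == 1 and mask[1, 1] == 1
```

Because a nonmoving object produces identical adjacent frames, the resulting mask stays empty, which is precisely the shortcoming noted above.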
The second type of automatic object detection
algorithm is the background subtraction method. This
method first builds a scene background model and
then uses the current frame image to subtract the
background model. In this way, targets are identified.
This is a widely used method for retrieving targeted
objects. Background subtraction performance depends
mainly on the background modelling technique used,
and Gaussian Mixture Models (GMMs) are currently
the most popular. In Ref. [8], the authors combined a
GMM with a classified foreground in an interpolated
Red, Green, and Blue (RGB) domain. Since one
problem of GMMs is that they cannot effectively model
noisy backgrounds, to address this shortcoming, the
researchers in Ref. [9] utilized an advanced distance
measure based on support weights and a histogram
of gradients to achieve background suppression.
Current background subtraction methods (such as
GMMs) are also limited by their high computational
complexity[8,9], e.g., the large number of calculations
required limits their practical application in real-time as
reconstructions and background updates take too long.
The third type of automatic object detection
algorithm is based on Optical Flow (OF) calculation[10].
This method estimates motion scenes and combines
similar motion vectors to detect moving objects.
However, the OF method is also very computationally
expensive. Another major shortcoming of the OF
method is that it assumes that an object is always
moving, which is not necessarily true.
In this paper, we propose a novel low-complexity
automatic background subtraction method that is
computationally efficient and does not require that
objects be continuously moving.
2 Our Proposed Method
The basic idea of our proposed algorithm is to detect
whether or not there are objects (e.g., cars, humans)
in each image. If no object is detected, we update
the current image as the background. In this way, the
background is renewed in real time, which is robust
with respect to changing light conditions. Compared to
existing methods, which are computationally expensive,
our method is computationally efficient. Another
advantage of our method is that it does not depend on
object motion; even nonmoving objects are detected
automatically. In Ref. [11], we proposed a simple filter
for detecting vehicles or license plates from captured
images. We found that an object (for example, a
vehicle or human being) is normally the highest energy
frequency aspect of an image and the energy frequency
curves decrease sharply outside object boundaries. In
this paper we improve filter design for detecting and
updating the background. Our proposed filter can
automatically determine whether there are objects in the
images.
2.1 Pre-treatment of the captured image
Taking the example shown in Fig. 1, our objective is for
a computer to automatically determine whether there
are objects on a road based on this captured image.
Fig. 1 Original image.
To do so, the computer determines whether or not
this image can be used as a background. Using the
pre-treatment method proposed in Ref. [11], we first
transfer the original image into a gray-scale image to
reduce the image data that must be calculated.
After generating the grayscale image, we calculate
its gradient using Px to denote differences in the x
direction (horizontal) and Py to denote differences
in the y direction (vertical). As in Ref. [11], after
calculating the gradients of both the horizontal and
vertical directions, for every pixel, we obtain the overall
gradient of each pixel as follows:
P = √(Px² + Py²)                                  (1)
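The gradient computation of Eq. (1) can be sketched as follows (a minimal illustration using forward differences; the exact difference scheme of Ref. [11] is not specified here, so this is an assumption):

```python
import numpy as np

def gradient_magnitude(gray):
    """Per-pixel overall gradient P = sqrt(Px^2 + Py^2), where Px and Py
    are forward differences in the horizontal and vertical directions."""
    px = np.diff(gray, axis=1, append=gray[:, -1:])  # horizontal differences
    py = np.diff(gray, axis=0, append=gray[-1:, :])  # vertical differences
    return np.sqrt(px.astype(float) ** 2 + py.astype(float) ** 2)

# Constant-grayscale regions fade to zero; only edges survive:
gray = np.zeros((5, 5))
gray[:, 2:] = 10.0                    # a vertical step edge
p = gradient_magnitude(gray)
assert p[0, 0] == 0.0 and p[0, 1] == 10.0
```

This matches the observation below that large areas of constant grayscale fade into darkness, leaving only object outlines visible.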
Figure 2 shows the gradient of this grayscale image,
in which we can see that some large areas of constant
grayscale fade into darkness due to the slow gradient
movement. Only the outline of vehicles and some other
background objects are visible.
2.2 Design of our effective filter
In Ref. [11], we designed a smart filter based algorithm
that scans and identifies the object areas in an
image. This method was inspired by the mathematical
definition of intercorrelation and the impulse function
δ(x), which has the following important characteristic:
∫₋∞⁺∞ δ(x − a) φ(x) dx = φ(a)                     (2)
Fig. 2 Gradient of grayscale image.
This means that the function δ(x − a) can be utilized
to filter out another function φ(x) at the x-axis when
x = a. We use this property to filter out high frequency
areas in an image.
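The discrete analogue of this sifting property is easy to verify numerically (an illustrative check, not part of the paper's method): correlating a unit impulse at position a with a signal picks out the signal's value at a.

```python
import numpy as np

# Discrete sifting property: sum_x delta[x - a] * phi[x] = phi[a]
phi = np.array([3.0, 1.0, 4.0, 1.0, 5.0])   # an arbitrary signal
a = 2
delta = np.zeros_like(phi)
delta[a] = 1.0                               # unit impulse at x = a
assert np.dot(delta, phi) == phi[a]          # the impulse filters out phi(a)
```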
We use a matrix array to calculate intercorrelations in
an image. Since the matrix array is centrosymmetric,
the image intercorrelation process can be considered to
be a convolution process as follows:
F(n) = G(n) ∗ h(n) = Σᵢ₌₋∞⁺∞ G(i) h(n − i)        (3)
where G(n) represents the original image and h(n)
is the matrix array. F(n) denotes the image after
convolution. As we know, the results obtained
after convolution reflect the overlapping relationship
between two convolution functions. In mathematics,
convolution can be regarded as a weighted summation.
In the filtered image F(n), since spectrum
information is displayed on the horizontal axis as
brightness, by simply choosing the middle horizontal
array line of the filtered image, we obtain the one-
dimensional brightness values. These values clearly
reflect high frequency areas of objects on the horizontal
axis. Our proposed filter successfully reduces the
two-dimensional image into a one-dimensional curve,
which significantly reduces the data required.
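The filtering and reduction steps above can be sketched as follows (the kernel size and the test image are illustrative assumptions; because an all-ones kernel is centrosymmetric, convolution coincides with correlation, as noted above):

```python
import numpy as np

def filter_and_reduce(grad, kernel):
    """Convolve the gradient image with the centrosymmetric matrix filter,
    then keep only the middle horizontal line of the filtered image,
    reducing the 2-D image to a 1-D brightness curve."""
    gh, gw = grad.shape
    kh, kw = kernel.shape
    out = np.zeros((gh - kh + 1, gw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # centrosymmetric kernel: convolution equals correlation
            out[i, j] = np.sum(grad[i:i + kh, j:j + kw] * kernel)
    return out[out.shape[0] // 2, :]   # middle horizontal array line

grad = np.zeros((9, 20))
grad[:, 8:12] = 1.0                    # one high-frequency band
curve = filter_and_reduce(grad, np.ones((5, 5)))
assert curve.max() == 20.0 and curve[7] == 20.0
```

The 1-D curve peaks exactly where the high-frequency band sits on the horizontal axis.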
Since the matrix filter proposed in Ref. [11] has a bar
shape, in this paper, we refer to it as the bar filter. As we
know, in the bar filter, all the numbers are set as 1’s and
can be considered to be constituted by some line arrays.
The bar filter incurs a lot of calculations. To reduce
the computational load and still obtain satisfactory
convolution results, we propose a new and improved
filter in which we have removed some line arrays from
the original matrix array. As such the new filter looks
like a grate, so we refer to it as the grate filter, a diagram
of which is shown in Fig. 3.
In this study, we set the width of the bar filter as eight,
which means there are eight columns of 1’s integrated
into this filter. In the grate filter, we set the first
Bar filter Grate filter
Fig. 3 Bar filter (left) and our proposed grate filter (right).
and the fourth columns as 1’s, which means there are
three columns of 0’s between the 1’s columns and a
total of only two columns of 1’s remaining. From the
definition of convolution, we know that in this case the
computational load of the grate filter is just one quarter
that of the original bar filter. Therefore, it is clear that
the first advantage of the grate filter is its significant
reduction in the computational load.
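The relative cost of the two filters can be checked by counting nonzero entries, since the multiplications in the convolution scale with them (the filter height below is an illustrative assumption; the paper fixes only the width of eight and which columns hold 1's):

```python
import numpy as np

height = 16                            # illustrative filter height
bar = np.ones((height, 8))             # bar filter: eight columns of 1's
grate = np.zeros((height, 8))
grate[:, [0, 3]] = 1.0                 # grate filter: 1's in the first and
                                       # fourth columns, 0's elsewhere

# Two columns of 1's instead of eight: one quarter of the computational load
assert grate.sum() == bar.sum() / 4
```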
We filtered the pre-treated gradient image in the
horizontal direction using the two different filters,
and then plotted the one-dimensional brightness values
for the straightforward view by choosing the middle
horizontal array line of the filtered image. The resulting
values obtained in the x-axis are shown in Figs. 4 (bar
filter) and 5 (grate filter).
We simply scan the horizontal brightness values from
left to right and choose the threshold at 0.5 for the
horizontal axis. To eliminate the impact of unwanted
brightness, we check the widths between the horizontal
Fig. 4 Horizontal axis’ brightness values and identified
result (with bar filter).
Fig. 5 Horizontal axis’ brightness values and identified
result (with grate filter).
points against a threshold value and discard very narrow
results. If at least one of the detected widths is wider
than the predefined width, the system indicates the
presence of at least one object in this image. In this
case, since at least one object (i.e., a vehicle) is detected,
the computer reaches the decision: “Object Detected”,
which means the original image cannot be used to renew
the background.
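The left-to-right scan and width test can be sketched as follows (the minimum-width value and the function name are illustrative assumptions; the paper gives only the 0.5 brightness threshold):

```python
import numpy as np

def object_detected(brightness, level=0.5, min_width=5):
    """Scan the normalized 1-D brightness curve from left to right; a run
    of values above `level` wider than `min_width` indicates an object,
    while narrower runs are discarded as noise."""
    width = 0
    for value in brightness:
        width = width + 1 if value > level else 0
        if width > min_width:
            return True                # decision: "Object Detected"
    return False                       # decision: "Object not detected"

curve = np.zeros(50)
curve[10:13] = 0.9                     # narrow spike: treated as noise
assert not object_detected(curve)
curve[25:40] = 0.8                     # wide high-brightness run: an object
assert object_detected(curve)
```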
Figures 4 and 5 both indicate that at least one
object was detected. Considering the reduction in
computational complexity, the grate filter outperforms
the bar filter. Moreover, the grate filter obtains more
detail in the brightness values of the horizontal axis.
2.3 Flow chart of the proposed algorithm
Figure 6 shows the flow chart of our proposed method.
The time period for capturing a new image may range
from several seconds to several minutes, as the setting
is dependent on the actual situation in the surveillance
area.
In the process of searching for objects in an
image, sometimes several different objects are detected.
Because our task is to decide whether or not this image
can be used to renew the background, it is not necessary
to show all the objects in a resulting image. We use
a left-to-right rule and choose to show the first object
detected in the image. In this situation, the previous
background image is unchanged, and after a set time
period, a new image will be taken into consideration.
On the other hand, when no object is detected in the
image, we choose this image to update the background
Fig. 6 Flow chart showing the process of our proposed
background subtraction method.
image, and our program shows the original image with
the label “Object not detected”. Then the same action
is repeated, i.e., after the set time period, a new image
will be taken into consideration.
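The overall flow can be sketched by replaying the decision over a sequence of captured frames (capture timing is omitted, and `detect` stands in for the grate-filter test; both names are illustrative):

```python
def run(frames, detect):
    """Replay the flow chart over captured frames: whenever no object is
    detected, the current image renews the background; otherwise the
    previous background is kept unchanged."""
    background = None                  # no background until an empty scene
    for image in frames:
        if not detect(image):
            background = image         # "Object not detected": renew
    return background

# Only the empty-scene frame is promoted to background:
frames = ["cars", "empty", "bus"]
assert run(frames, detect=lambda img: img != "empty") == "empty"
```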
3 Experimental Results
Next, we tested the performance of the two filters using
various images captured on different roads. Figures 7–15
Fig. 7 Cars on the road (1).
Fig. 8 Horizontal axis’ brightness values and selected car
(with bar filter) according to Fig. 7.
Fig. 9 Horizontal axis’ brightness values and selected car
(with grate filter) according to Fig. 7.
Fig. 10 Vehicles on the road.
Fig. 11 Horizontal axis’ brightness values and selected bus
(with bar filter) according to Fig. 10.
Fig. 12 Horizontal axis’ brightness values and selected bus
(with grate filter) according to Fig. 10.
show the original images and identified results.
There are several cars in Fig. 7 and the corresponding
horizontal axis’ brightness values (Figs. 8 and 9) clearly
show several high-frequency areas, which indicate that
there is at least one object in the image. Both filters
identify these cars and show the first detected red
car. The difference between them is that the grate filter
shows more details in the horizontal axis’s brightness
Fig. 13 Cars on the road (2).
Fig. 14 Horizontal axis’ brightness values and selected cars
(with bar filter) according to Fig. 13.
Fig. 15 Horizontal axis’ brightness values and selected cars
(with grate filter) according to Fig. 13.
values. Both images obtain the result “Object
Detected”, which means that the original image
cannot be used as a background image.
There are numerous vehicles (buses and cars)
inside Fig. 10 and the corresponding horizontal axis’
brightness values (Figs. 11 and 12) indicate that there
are several high-frequency areas, from which we can
conclude that there are some objects in this image.
Both filters detect these vehicles properly. Again, the
grate filter shows more details than the bar filter in the
horizontal axis’s brightness values. And again it is clear
that the original image cannot be used as a background
image.
In Fig. 13, we see that there are two cars travelling
very close to each other. The corresponding horizontal
axis’ brightness values (Figs. 14 and 15) show two
high-frequency areas that are very close to each other.
Although both filters detect the fact that there are
objects on the road, the filters achieve different degrees
of precision. Since there are some smoothing effects
associated with the bar filter, the two cars detected are
mixed together in Fig. 14. In contrast, the grate filter
outperforms the bar filter in detecting the details. The
two cars are separate, and the first car is clearly selected
by the grate filter, as shown in Fig. 15.
Next, we used some images of an empty road to
test our proposed algorithm, the original images and
detected results of which are shown in Figs. 16–27.
In Fig. 16, there are no vehicles or people on the road.
However, we see that there are several flower beds and
lamp posts in the image which may affect our search
Fig. 16 Empty road (1).
Fig. 17 Horizontal axis’ brightness values and search result
(with bar filter) according to Fig. 16.
Fig. 18 Horizontal axis’ brightness values and search result
(with grate filter) according to Fig. 16.
Fig. 19 Empty road (2).
Fig. 20 Horizontal axis’ brightness values and search result
(with bar filter) according to Fig. 19.
process. Figures 17 and 18 show that the flower beds
and lamp posts are not wide high-frequency areas. The
high impulses at the right end of each figure are from
the edge of the original image. Although the noise
level does not meet our threshold requirement, we can
Fig. 21 Horizontal axis’ brightness values and search result
(with grate filter) according to Fig. 19.
Fig. 22 Empty road (3).
Fig. 23 Horizontal axis’ brightness values and search result
(with bar filter) according to Fig. 22.
see some differences in the results of the two filters.
With the grate filter, the noise values are lower, not
even reaching 0.5, which is a better result. Lastly, both
results show the label: “Object not detected”. So this
image is properly recognized as a background image.
Fig. 24 Horizontal axis’ brightness values and search result
(with grate filter) according to Fig. 22.
Fig. 25 Empty road (4).
Fig. 26 Horizontal axis’ brightness values and search result
(with bar filter) according to Fig. 25.
Figure 19 shows another image of an empty road in
which the background contains some trees, buildings,
and several lamp posts. Figures 20 and 21 show that
these objects are associated only with some narrow
high-frequency areas and, again, the high impulses
Fig. 27 Horizontal axis’ brightness values and search result
(with grate filter) according to Fig. 25.
at the right end of each figure are from the edge of
the original image. Although the grate filter provides
more detail in the resulting brightness values, the noise
level does not meet our threshold requirement, so
they will not be considered as objects. Lastly, both
filters conclude: “Object not detected”, which properly
indicates that the original image can be used to update
the background.
Figure 22 shows a third empty road image, in which
there are mainly green trees and blue sea, as well as
one high-rise transmission tower. The corresponding
horizontal axis’ brightness values (Figs. 23 and 24)
reflect the presence of this high-rise transmission tower.
We can see that the grate filter yields lower brightness
values than the bar filter, which makes the computer’s
decision more accurate. Lastly, the decisions from both
filters are “Object not detected”, so the original image
can be used to update the background.
Figure 25 shows a fourth empty road, along which
there are several posts in close proximity. From Figs. 26
and 27, we can see several narrow high-frequency areas,
all of which can be individually recognized. Once
again, the grate filter shows more detail in the brightness
values of the posts. None of the objects in the area meets
our threshold requirement. They are all judged to be
noise and the decisions of both filters are correct, i.e.,
“Object not detected”. Since no objects were identified
in this image, it can be used to update the background.
4 Conclusion
In this paper, we proposed a new grate filter that can
identify whether there are objects in images captured
by cameras in urban surveillance systems. With this
technique, background images are updated from time to
time when no object is detected. The proposed method
provides a low-complexity solution that facilitates the
setup of the background image for urban surveillance
systems, and thus makes the foreground detection
process easier. This method requires no complex
computation for background detection. In addition,
it is robust with respect to the impact of changing
light conditions and does not depend on the motion
of objects. We tested and compared the proposed
grate filter with the bar-shaped filter. The results
confirm that the grate filter obtains greater detail on
the intercorrelation results of images with a lower
computational load.
Acknowledgment
This work was supported by projects under Grant
Nos. 2016B050502001 and 2015A050502003.
References
[1] M. V. Moreno, F. Terroso-Sáenz, A. González-Vidal, M.
Valdés-Vela, A. F. Skarmeta, M. A. Zamora, and V.
Chang, Applicability of big data techniques to smart
cities deployments, IEEE Transactions on Industrial
Informatics, vol. 13, no. 2, pp. 800–809, 2017.
[2] M. M. Rathore, A. Ahmad, and A. Paul, IoT-based
smart city development using big data analytical approach,
in 2016 IEEE International Conference on Automatica
(ICA-ACCA), 2016, pp. 1–8.
[3] S. Chen, H. Xu, D. Liu, B. Hu, and H. Wang, A vision
of IoT: Applications, challenges, and opportunities with
China perspective, IEEE Internet of Things Journal, vol.
1, no. 4, pp. 349–359, 2014.
[4] M. Handte, S. Foell, S. Wagner, G. Kortuem, and P. Marrón,
An internet-of-things enabled connected navigation system
for urban bus riders, IEEE Internet of Things Journal, vol.
3, no. 5, pp. 735–744, 2016.
[5] H. Liu and X. Hou, Moving detection research of
background frame difference based on Gaussian model, in
2012 International Conference on Computer Science and
Service System, 2012, pp. 258–261.
[6] X. Han, Y. Gao, and Z. Lu, Research on moving object
detection algorithm based on improved three frame
difference method and optical flow, in Fifth International
Conference on Instrumentation and Measurement,
Computer, Communication and Control, 2015.
[7] Y. Zhuang, C. Wu, Y. Zhang, and S. Feng, Detection and
tracking algorithm based on frame difference method and
particle filter algorithm, in 2017 29th Chinese Control
and Decision Conference (CCDC), 2017, pp. 161–166.
[8] J. Suhr, H. Jung, G. Li, and J. Kim, Mixture of Gaussians-
based background subtraction for Bayer-pattern image
sequences, IEEE Transactions on Circuits and Systems for
Video Technology, vol. 21, no. 3, pp. 365–370, 2011.
[9] D. Mukherjee, Q. Wu, and T. Nguyen, Gaussian mixture
model with advanced distance measure based on support
weights and histogram of gradients for background
suppression, IEEE Transactions on Industrial Informatics,
vol. 10, no. 2, pp. 1086–1096, 2014.
[10] V. Sikri, Proposition and comprehensive efficiency
evaluation of a foreground detection algorithm based
on optical flow and canny edge detection for video
surveillance systems, in 2016 International Conference
on Wireless Communications, Signal Processing and
Networking (WiSPNET), 2016, pp. 1466–1472.
[11] L. Hu and Q. Ni, IoT-driven automated object
detection algorithm for urban surveillance systems
in smart cities, IEEE Internet of Things Journal, DOI:
10.1109/JIOT.2017.2705560.
Ling Hu is a PhD student at School
of Computing and Communications,
Lancaster University, Lancaster, UK.
She got Bachelor of Engineering from
Huazhong University of Science and
Technology, China, Master of Science
from École Supérieure d’Ingénieurs en
Électronique et Électrotechnique, France,
and Master of Engineering from Dublin City University, Ireland
in 1997, 2004, and 2006, respectively. She has work
experience in both industry and academia, as an engineer
and a research assistant.
Qiang Ni is a professor at School
of Computing and Communications,
Lancaster University, Lancaster, UK. He
is also a member of Data Science Institute
at Lancaster University. He is a fellow
of IET and senior member of IEEE. His
main research interests lie in the area of
future generation networks, clouds, big
data analytics, and machine learning. He has published
over 200 papers that have attracted 6000+ citations. He received
the PhD degree in 1999 from Huazhong University of Science
and Technology, Wuhan, China.
Feng Yuan PhD, Associate Research
Fellow, Standing Deputy Director
of Software Application Technology
Research Institute, Chinese Academy
of Sciences, Chief President of Smart
City Industrial Technology Innovation
Alliance Council; Member of Expert
Committee of Internet of Things of China
Communications Industry Association; Expert of Smart City
Industrial Alliance of the Ministry of Industry and Information.
His main research lies in the area of Internet of Things, big
data analysis, and cloud computing. He has published over 20
articles in these areas and has successfully pushed multiple
research findings into the market. He received the PhD degree in
2006 from Institute of Software, Chinese Academy of Science,
Beijing, China.
Development of Human Tracking in Video Surveillance System for Activity Anal...
 
IMAGE RECOGNITION USING MATLAB SIMULINK BLOCKSET
IMAGE RECOGNITION USING MATLAB SIMULINK BLOCKSETIMAGE RECOGNITION USING MATLAB SIMULINK BLOCKSET
IMAGE RECOGNITION USING MATLAB SIMULINK BLOCKSET
 



E-mail: yf@gz.iscas.ac.cn; yf@iot-cas.com.
To whom correspondence should be addressed.
Manuscript received: 2017-12-30; accepted: 2018-01-03

automatic object detection algorithms have become an important research topic. Currently, the algorithms under investigation can be classified into three main groups: frame difference, background subtraction, and optical flow calculation methods.

The frame difference algorithm analyzes the image sequences of two or more adjacent frames to identify moving targets by calculating the differences between continuous frames. For every pixel, if the difference is larger than a set threshold, the result is labelled 1, else 0. The larger the threshold value, the less noise there is[5,6]. To detect and track moving targets, some authors have proposed combining the frame difference method with the particle filter algorithm[7]. The shortcoming of the frame difference method is that it cannot detect nonmoving objects, since there is no difference between adjacent frames.

The second type of automatic object detection algorithm is the background subtraction method. This method first builds a scene background model and then subtracts the background model from the current frame image. In this way, targets are identified. This is a widely used method for retrieving targeted objects. Background subtraction performance depends mainly on the background modelling technique used, and Gaussian Mixture Models (GMMs) are currently the most popular. In Ref. [8], the authors combined a GMM with a classified foreground in an interpolated Red, Green, and Blue (RGB) domain. Since one
problem of GMMs is that they cannot effectively model noisy backgrounds, the researchers in Ref. [9] addressed this shortcoming by utilizing an advanced distance measure based on support weights and a histogram of gradients to achieve background suppression. Current background subtraction methods (such as GMMs) are also limited by their high computational complexity[8,9]; e.g., the large number of calculations required limits their practical application in real time, as reconstructions and background updates take too long.

The third type of automatic object detection algorithm is based on Optical Flow (OF) calculation[10]. This method estimates motion scenes and combines similar motion vectors to detect moving objects. However, the OF method is also very computationally expensive. Another major shortcoming of the OF method is that it assumes that an object is always moving, which is not necessarily true.

In this paper, we propose a novel low-complexity automatic background subtraction method that is computationally efficient and does not require that objects be continuously moving.

2 Our Proposed Method

The basic idea of our proposed algorithm is to detect whether or not there are objects (e.g., cars, humans) in each image. If no object is detected, we update the current image as the background. In this way, the background is renewed in real time, which makes the method robust to changing light conditions. Compared to existing methods, which are computationally expensive, our method is computationally efficient. Another advantage of our method is that it does not depend on object motion; even nonmoving objects are detected automatically. In Ref. [11], we proposed a simple filter for detecting vehicles or license plates from captured images.
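For comparison with the proposed method, the per-pixel rule of the frame difference approach described in Section 1 can be sketched in a few lines of NumPy; the threshold value here is an illustrative choice, not a setting from the paper.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Label a pixel 1 when the absolute grayscale difference between
    two adjacent frames exceeds the threshold, else 0. Raising the
    threshold suppresses more noise (25 is an assumed value)."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (diff > threshold).astype(np.uint8)
```

As the text notes, such a mask stays empty for nonmoving objects, which is exactly the limitation the proposed background-renewal scheme avoids.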
We found that an object (for example, a vehicle or human being) normally corresponds to the highest-energy frequency regions of an image, and that the energy frequency curves decrease sharply outside object boundaries. In this paper, we improve the filter design for detecting and updating the background. Our proposed filter can automatically determine whether there are objects in the images.

2.1 Pre-treatment of the captured image

Taking the example shown in Fig. 1, our objective is for a computer to automatically determine whether there are objects on a road based on this captured image.

Fig. 1 Original image.

To do so, the computer determines whether or not this image can be used as a background. Using the pre-treatment method proposed in Ref. [11], we first transfer the original image into a grayscale image to reduce the image data that must be calculated. After generating the grayscale image, we calculate its gradient, using Px to denote differences in the x direction (horizontal) and Py to denote differences in the y direction (vertical). As in Ref. [11], after calculating the gradients in both the horizontal and vertical directions, we obtain the overall gradient of each pixel as follows:

P = √(Px² + Py²)    (1)

Figure 2 shows the gradient of this grayscale image, in which we can see that some large areas of constant grayscale fade into darkness due to the slow gradient variation. Only the outlines of vehicles and some other background objects are visible.

2.2 Design of our effective filter

In Ref. [11], we designed a smart-filter-based algorithm that scans and identifies the object areas in an image. This method was inspired by the mathematical definition of intercorrelation and the impulse function δ(x), which has the following important characteristic:

∫_{−∞}^{+∞} δ(x − a) φ(x) dx = φ(a)    (2)

Fig. 2 Gradient of grayscale image.
This means that the function δ(x − a) can be utilized to filter out another function φ(x) on the x-axis at x = a. We use this property to filter out high-frequency areas in an image.

We use a matrix array to calculate intercorrelations in an image. Since the matrix array is centrosymmetric, the image intercorrelation process can be considered a convolution process as follows:

F(n) = G(n) * h(n) = Σ_{i=−∞}^{+∞} G(i) h(n − i)    (3)

where G(n) represents the original image, h(n) is the matrix array, and F(n) denotes the image after convolution. As we know, the results obtained after convolution reflect the overlapping relationship between the two convolved functions. In mathematics, convolution can be regarded as a weighted summation.

In the filtered image F(n), spectrum information is displayed along the horizontal axis as brightness, so by simply choosing the middle horizontal array line of the filtered image, we obtain the one-dimensional brightness values. These values clearly reflect the high-frequency areas of objects on the horizontal axis. Our proposed filter thus reduces the two-dimensional image to a one-dimensional curve, which significantly reduces the amount of data required.

Since the matrix filter proposed in Ref. [11] has a bar shape, in this paper we refer to it as the bar filter. In the bar filter, all the entries are set to 1, and the filter can be considered to be constituted by some line arrays. The bar filter incurs a lot of calculations. To reduce the computational load while still obtaining satisfactory convolution results, we propose a new, improved filter in which we have removed some line arrays from the original matrix array. The new filter looks like a grate, so we refer to it as the grate filter; a diagram is shown in Fig. 3.
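Sections 2.1 and 2.2 can be sketched end to end in NumPy: the gradient pre-treatment of Eq. (1), the bar and grate kernels, the convolution of Eq. (3), and the extraction of the middle-row brightness curve. The kernel sizes, the kept columns, and the normalization to [0, 1] follow the description in the text but should be read as an illustrative sketch, not the authors' exact implementation.

```python
import numpy as np

def gradient_magnitude(gray):
    """Pre-treatment of Section 2.1: overall gradient of Eq. (1),
    P = sqrt(Px^2 + Py^2), from horizontal/vertical differences."""
    py, px = np.gradient(gray.astype(float))  # axis 0 vertical, axis 1 horizontal
    return np.sqrt(px ** 2 + py ** 2)

def bar_filter(height=8, width=8):
    """Bar filter of Ref. [11]: a matrix array of all 1's."""
    return np.ones((height, width))

def grate_filter(height=8, width=8, one_cols=(0, 3)):
    """Grate filter: only the first and fourth columns keep 1's,
    leaving three columns of 0's in between (cf. Fig. 3)."""
    h = np.zeros((height, width))
    h[:, list(one_cols)] = 1.0
    return h

def filter_image(image, kernel):
    """'Valid' 2-D convolution implementing Eq. (3); each output sample
    is the weighted summation of the overlapped image patch."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

def brightness_curve(filtered):
    """Middle horizontal line of the filtered image, scaled to [0, 1]:
    the one-dimensional brightness values."""
    row = filtered[filtered.shape[0] // 2].astype(float)
    return row / row.max() if row.max() > 0 else row
```

Counting nonzero kernel entries confirms the claimed cost ratio: the 8 × 8 bar filter needs 64 multiply-accumulates per output sample versus 16 for the grate filter, i.e., one quarter of the load.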
In this study, we set the width of the bar filter to eight, which means eight columns of 1's are integrated into the filter. In the grate filter, we set the first and the fourth columns to 1's, which means there are three columns of 0's between the 1's columns and only two columns of 1's remain in total.

Fig. 3 Bar filter (left) and our proposed grate filter (right).

From the definition of convolution, we know that in this case the computational load of the grate filter is just one quarter that of the original bar filter. Therefore, the first advantage of the grate filter is its significant reduction in computational load.

We filtered the pre-treated gradient image in the horizontal direction using the two different filters, and then plotted the one-dimensional brightness values for a straightforward view by choosing the middle horizontal array line of the filtered image. The resulting values obtained on the x-axis are shown in Figs. 4 (bar filter) and 5 (grate filter).

Fig. 4 Horizontal axis' brightness values and identified result (with bar filter).
Fig. 5 Horizontal axis' brightness values and identified result (with grate filter).

We simply scan the horizontal brightness values from left to right and use 0.5 as the threshold. To eliminate the impact of unwanted brightness, we check the widths between the horizontal
points against a threshold value and discard very narrow results. If at least one of the detected widths is wider than the predefined width, the system indicates the presence of at least one object in the image. In this case, since at least one object (i.e., a vehicle) is detected, the computer reaches the decision "Object Detected", which means the original image cannot be used to renew the background. Figures 4 and 5 both indicate that at least one object was detected. Considering the reduction in computational complexity, the grate filter outperforms the bar filter. Moreover, the grate filter obtains more detail in the brightness values along the horizontal axis.

2.3 Flow chart of the proposed algorithm

Figure 6 shows the flow chart of our proposed method. The time period for capturing a new image may range from several seconds to several minutes, as the setting depends on the actual situation in the surveillance area.

In the process of searching for objects in an image, sometimes several different objects are detected. Because our task is to decide whether or not the image can be used to renew the background, it is not necessary to show all the objects in a resulting image. We use a left-to-right rule and choose to show the first object detected in the image. In this situation, the previous background image is unchanged, and after a set time period, a new image is taken into consideration.

On the other hand, when no object is detected in the image, we choose this image to update the background image, and our program shows the original image with the label "Object not detected". Then the same action is repeated, i.e., after the set time period, a new image is taken into consideration.

Fig. 6 Flow chart showing the process of our proposed background subtraction method.

3 Experimental Results

Next, we tested the performance of our filters on different images.
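The procedure under test, i.e., the left-to-right width-threshold decision of Section 2.2 and the background-renewal step of Fig. 6, can be summarized as follows. The 0.5 level comes from the text, while `min_width` stands in for the paper's unspecified predefined width and is an assumed value.

```python
import numpy as np

def object_detected(curve, level=0.5, min_width=10):
    """Scan the 1-D brightness curve from left to right: a run of samples
    above `level` wider than `min_width` indicates an object; narrower
    runs are discarded as noise (min_width=10 is an assumed value)."""
    run = 0
    for value in curve:
        run = run + 1 if value > level else 0
        if run > min_width:
            return True
    return False

def renew_background(background, new_image, curve, **kwargs):
    """One iteration of the Fig. 6 loop: keep the old background when an
    object is detected, otherwise adopt the new image as background."""
    return background if object_detected(curve, **kwargs) else new_image
```

Repeating `renew_background` at the chosen capture interval keeps the background current, which is what makes the scheme robust to gradual lighting changes.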
We used various images captured on different roads to test the two filters. Figures 7–15 show the original images and identified results.

Fig. 7 Cars on the road (1).
Fig. 8 Horizontal axis' brightness values and selected car (with bar filter) according to Fig. 7.
Fig. 9 Horizontal axis' brightness values and selected car (with grate filter) according to Fig. 7.
Fig. 10 Vehicles on the road.
Fig. 11 Horizontal axis' brightness values and selected bus (with bar filter) according to Fig. 10.
Fig. 12 Horizontal axis' brightness values and selected bus (with grate filter) according to Fig. 10.

There are several cars in Fig. 7, and the corresponding horizontal axis' brightness values (Figs. 8 and 9) clearly show several high-frequency areas, which indicate that there is at least one object in the image. Both filters identify these cars and show the first detected red car. The difference between them is that the grate filter shows more detail in the horizontal axis' brightness values. Both images obtain the result "Object Detected", which means that the original image cannot be used as a background image.

Fig. 13 Cars on the road (2).
Fig. 14 Horizontal axis' brightness values and selected cars (with bar filter) according to Fig. 13.
Fig. 15 Horizontal axis' brightness values and selected cars (with grate filter) according to Fig. 13.

There are numerous vehicles (buses and cars) in Fig. 10, and the corresponding horizontal axis' brightness values (Figs. 11 and 12) indicate several high-frequency areas, from which we can conclude that there are some objects in this image.
Both filters detect these vehicles properly. Again, the grate filter shows more detail than the bar filter in the horizontal axis' brightness values, and again it is clear that the original image cannot be used as a background image.

In Fig. 13, we see two cars travelling very close to each other. The corresponding horizontal axis' brightness values (Figs. 14 and 15) show two high-frequency areas that are very close to each other. Although both filters detect that there are objects on the road, they achieve different degrees of precision. Because of the smoothing effects associated with the bar filter, the two detected cars are mixed together in Fig. 14. In contrast, the grate filter outperforms the bar filter in detecting the details: the two cars are separate, and the first car is clearly selected by the grate filter, as shown in Fig. 15.

Next, we used some images of an empty road to test our proposed algorithm; the original images and detected results are shown in Figs. 16–27.

Fig. 16 Empty road (1).
Fig. 17 Horizontal axis' brightness values and search result (with bar filter) according to Fig. 16.
Fig. 18 Horizontal axis' brightness values and search result (with grate filter) according to Fig. 16.

In Fig. 16, there are no vehicles or people on the road. However, there are several flower beds and lamp posts in the image which may affect our search process. Figures 17 and 18 show that the flower beds and lamp posts are not wide high-frequency areas. The high impulses at the right end of each figure are from the edge of the original image. Although the noise level does not meet our threshold requirement, we can see some differences in the results of the two filters. With the grate filter, the noise values are lower, not even reaching 0.5, which is a better result. Lastly, both results show the label "Object not detected", so this image is properly recognized as a background image.

Fig. 19 Empty road (2).
Fig. 20 Horizontal axis' brightness values and search result (with bar filter) according to Fig. 19.
Fig. 21 Horizontal axis' brightness values and search result (with grate filter) according to Fig. 19.

Figure 19 shows another image of an empty road in which the background contains some trees, buildings, and several lamp posts. Figures 20 and 21 show that these objects are associated only with some narrow high-frequency areas and, again, the high impulses at the right end of each figure are from the edge of the original image. Although the grate filter provides more detail in the resulting brightness values, the noise level does not meet our threshold requirement, so these areas are not considered objects. Lastly, both filters conclude "Object not detected", which properly indicates that the original image can be used to update the background.

Fig. 22 Empty road (3).
Fig. 23 Horizontal axis' brightness values and search result (with bar filter) according to Fig. 22.
Fig. 24 Horizontal axis' brightness values and search result (with grate filter) according to Fig. 22.

Figure 22 shows a third empty road image, in which there are mainly green trees and blue sea, as well as one high-rise transmission tower. The corresponding horizontal axis' brightness values (Figs. 23 and 24) reflect the presence of this tower. We can see that the grate filter yields lower brightness values than the bar filter, which makes the computer's decision more accurate. Lastly, the decisions from both filters are "Object not detected", so the original image can be used to update the background.

Fig. 25 Empty road (4).
Fig. 26 Horizontal axis' brightness values and search result (with bar filter) according to Fig. 25.
Fig. 27 Horizontal axis' brightness values and search result (with grate filter) according to Fig. 25.

Figure 25 shows a fourth empty road, along which there are several posts in close proximity. From Figs. 26 and 27, we can see several narrow high-frequency areas, all of which can be individually recognized. Once again, the grate filter shows more detail in the brightness values of the posts. None of the objects in the area meets our threshold requirement; they are all judged to be noise, and the decisions of both filters are correct, i.e., "Object not detected". Since no objects were identified in this image, it can be used to update the background.

4 Conclusion

In this paper, we proposed a new grate filter that can identify whether there are objects in images captured by cameras in urban surveillance systems. With this technique, background images are updated from time to time when no object is detected.
The proposed method provides a low-complexity solution that facilitates the setup of the background image for urban surveillance systems, and thus makes the foreground detection process easier. It requires no complex computation for background detection. In addition, it is robust to changing light conditions and does not depend on the motion of objects. We tested and compared the proposed grate filter with the bar-shaped filter; the results confirm that the grate filter obtains greater detail in the intercorrelation results of images at a lower computational load.

Acknowledgment

This work was supported by the projects under Grants Nos. 2016B050502001 and 2015A050502003.

References

[1] M. V. Moreno, F. Terroso-Sáenz, A. González-Vidal, M. Valdés-Vela, A. F. Skarmeta, M. A. Zamora, and V. Chang, Applicability of big data techniques to smart cities deployments, IEEE Transactions on Industrial Informatics, vol. 13, no. 2, pp. 800–809, 2017.
[2] M. M. Rathore, A. Ahmad, and A. Paul, IoT-based smart city development using big data analytical approach, in 2016 IEEE International Conference on Automatica (ICA-ACCA), 2016, pp. 1–8.
[3] S. Chen, H. Xu, D. Liu, B. Hu, and H. Wang, A vision of IoT: Applications, challenges, and opportunities with China perspective, IEEE Internet of Things Journal, vol. 1, no. 4, pp. 349–359, 2014.
[4] M. Handte, S. Foell, S. Wagner, G. Kortuem, and P. J. Marrón, An internet-of-things enabled connected navigation system for urban bus riders, IEEE Internet of Things Journal, vol. 3, no. 5, pp. 735–744, 2016.
[5] H. Liu and X. Hou, Moving detection research of background frame difference based on Gaussian model, in 2012 International Conference on Computer Science and Service System, 2012, pp. 258–261.
[6] X. Han, Y. Gao, and Z. Lu, Research on moving object detection algorithm based on improved three frame difference method and optical flow, in Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control, 2015.
[7] Y. Zhuang, C. Wu, Y. Zhang, and S. Feng, Detection and tracking algorithm based on frame difference method and particle filter algorithm, in 2017 29th Chinese Control and Decision Conference (CCDC), 2017, pp. 161–166.
[8] J. Suhr, H. Jung, G. Li, and J. Kim, Mixture of Gaussians-based background subtraction for Bayer-pattern image sequences, IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 3, pp. 365–370, 2011.
[9] D. Mukherjee, Q. Wu, and T. Nguyen, Gaussian mixture model with advanced distance measure based on support weights and histogram of gradients for background suppression, IEEE Transactions on Industrial Informatics, vol. 10, no. 2, pp. 1086–1096, 2014.
[10] V. Sikri, Proposition and comprehensive efficiency evaluation of a foreground detection algorithm based on optical flow and Canny edge detection for video surveillance systems, in 2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), 2016, pp. 1466–1472.
[11] L. Hu and Q. Ni, IoT-driven automated object detection algorithm for urban surveillance systems in smart cities, IEEE Internet of Things Journal, DOI: 10.1109/JIOT.2017.2705560.

Ling Hu is a PhD student at the School of Computing and Communications, Lancaster University, Lancaster, UK. She received the Bachelor of Engineering degree from Huazhong University of Science and Technology, China, the Master of Science degree from École Supérieure d'Ingénieurs en Électronique et Électrotechnique, France, and the Master of Engineering degree from Dublin City University, Ireland, in 1997, 2004, and 2006, respectively. She has both industrial and academic work experience, as an engineer and as a research assistant.

Qiang Ni is a professor at the School of Computing and Communications, Lancaster University, Lancaster, UK. He is also a member of the Data Science Institute at Lancaster University. He is a fellow of the IET and a senior member of the IEEE. His main research interests lie in the area of future generation networks, clouds, big data analytics, and machine learning. He has published over 200 papers, which have attracted more than 6000 citations. He received the PhD degree in 1999 from Huazhong University of Science and Technology, Wuhan, China.
Feng Yuan, PhD, is an associate research fellow and the standing deputy director of the Software Application Technology Research Institute, Chinese Academy of Sciences; the chief president of the Smart City Industrial Technology Innovation Alliance Council; a member of the Expert Committee on the Internet of Things of the China Communications Industry Association; and an expert of the Smart City Industrial Alliance of the Ministry of Industry and Information Technology. His main research lies in the area of Internet of Things, big data analysis, and cloud computing. He has published over 20 articles in these areas and has successfully pushed multiple research findings into the market. He received the PhD degree in 2006 from the Institute of Software, Chinese Academy of Sciences, Beijing, China.